title (stringlengths 28–135) | abstract (stringlengths 0–12k) | introduction (stringlengths 0–12k) |
---|---|---|
de_Silva_Edirimuni_IterativePFN_True_Iterative_Point_Cloud_Filtering_CVPR_2023 | Abstract The quality of point clouds is often limited by noise introduced during their capture process. Consequently, a fundamental 3D vision task is the removal of noise, known as point cloud filtering or denoising. State-of-the-art learning based methods focus on training neural networks to infer filtered displacements and directly shift noisy points onto the underlying clean surfaces. In high noise conditions, they iterate the filtering process. However, this iterative filtering is only done at test time and is less effective at ensuring points converge quickly onto the clean surfaces. We propose IterativePFN (iterative point cloud filtering network), which consists of multiple IterationModules that model the true iterative filtering process internally, within a single network. We train our IterativePFN network using a novel loss function that utilizes an adaptive ground truth target at each iteration to capture the relationship between intermediate filtering results during training. This ensures that the filtered results converge faster to the clean surfaces. Our method is able to obtain better performance compared to state-of-the-art methods. The source code can be found at: https://github.com/ddsediri/IterativePFN. | 1. Introduction Point clouds are a natural representation of 3D geometric information and have a multitude of applications in the field of 3D Computer Vision. These applications range from robotics and autonomous driving to urban planning [14, 19, 35, 38]. They are captured using 3D sensors and comprise unordered points lacking connectivity information. Furthermore, the capturing of point cloud data is error-prone as sensor quality and environmental factors may introduce noisy artifacts. (Figure 1: Histograms of filtered point distances from the clean surface after 1, 4, 8 and 24 test time iterations for ScoreDenoise [22] on the Casting shape with 50K points and 2.5% Gaussian noise. We compare it with our proposed IterativePFN where 1 IterationModule (ItM) corresponds to 1 internal iteration and 4 ItMs equal 1 external iteration (EI). There are 4 ItMs in the proposed network. Note 1 ItM is analogous to 1 test time iteration of ScoreDenoise. Our filtering results converge closer to the surface.) The process of removing noise is a fundamental research problem which motivates the field of point cloud filtering, also known as denoising. Filtering facilitates other tasks such as normal estimation and, by extension, 3D rendering and surface reconstruction. Conventional point cloud filtering methods such as MLS based methods [2, 12], bilateral filtering mechanisms [9] and edge recovery algorithms [18, 34] rely on local information of point sets, i.e., point normals, to filter point clouds. However, such methods are limited by the accuracy of normals. Alternatives include the Locally Optimal Projection (LOP) family of methods [13, 17, 25], which downsample and regularize point clouds but incur the loss of important geometric details. More recently, deep learning based filtering methods have been proposed to alleviate the disadvantages and limitations of conventional methods [22–24, 28, 41].
Early deep learning based filtering methods, such as PointProNets [31], require pre-processed 2D height maps to filter point clouds. However, the advent of PointNet, PointNet++ and DGCNN made direct point set convolution a possibility [26, 27, 37]. Feature encoders based on these architectures were exploited by recent methods to produce richer latent representations of point set inputs and filter noise more effectively [20, 22, 28, 41]. These methods can be broadly characterized as 1) resampling, 2) probability and 3) displacement based methods. Resampling based methods such as DMRDenoise [20] suffer from the loss of geometric details as the method relies on identifying downsampled underlying clean surfaces and upsampling along them. ScoreDenoise, which models the gradient-log of the noise-convolved probability to find a point at a given position, iteratively performs Langevin sampling-inspired gradient ascent [22] to filter points. However, filtered points are slow to converge to the clean surface after many test time iterations of filtering, as illustrated in Fig. 1. By contrast, for an IterativePFN network with 4 IterationModules, where 1 IterationModule (ItM) represents 1 internal iteration of filtering and is analogous to 1 test time iteration of ScoreDenoise, we see that a higher number of filtered points converge closer to the clean surface within the same number of test time iterations. Among displacement based methods, PointCleanNet (PCN) [28] shows sensitivity to high noise while Pointfilter [41] utilizes a bilateral filtering inspired weighted loss function that causes closely separated surfaces to collapse into a single surface during filtering. Moreover, gradient ascent and displacement based methods filter point clouds iteratively at test time and do not consider true iterative filtering during training. Although RePCDNet [7] offers a recurrent neural network inspired alternative to capture this information during training, at each iteration RePCDNet attempts to directly shift points onto the clean surface without considering that, at different iterations, points retain residual noise that decreases with the iteration number. Furthermore, it uses a single network to filter points, increasing the burden on the network to correctly distinguish between noise scales, and it requires multiple test time iterations, which leads to low efficiency. Based on these major limitations, we propose: • a novel neural network architecture of stacked encoder-decoder modules, dubbed IterationModule, to model the true iterative filtering process internally (see Fig. 2). Each IterationModule represents an iteration of filtering and the output of the τ-th IterationModule becomes the input for the (τ+1)-th IterationModule. Thereby, the (τ+1)-th IterationModule represents the filtering iteration t = τ+1. This allows the network to develop an understanding of the filtering relationship across iterations. • a novel loss function that formulates the nearest neighbor loss at each iteration as the L2 norm minimization between the filtered displacements, inferred by the τ-th IterationModule, and the nearest point within a target point cloud at t = τ, of a lower noise scale στ compared to the noise scale στ−1 of the target at t = τ−1. This promotes a gradual filtering process that encourages convergence to the clean surface.
• a generalized patch-stitching method that designs Gaussian weights when determining the best filtered points within overlapping patches. Patch stitching improves efficiency as it facilitates filtering multiple points simultaneously. We conduct comprehensive experiments, in comparison with state-of-the-art methods, which demonstrate our method's advantages on both synthetic and real world data. |
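The adaptive-target loss described in the IterativePFN row above lends itself to a compact illustration. The following is a minimal PyTorch-style sketch, not the authors' implementation: the function and module names, the displacement-prediction interface, and the way per-iteration targets are resampled are all assumptions made for illustration.

```python
import torch

def adaptive_iteration_loss(noisy_patch, clean_patch, iteration_modules, sigmas):
    """Sketch of an adaptive-target iterative filtering loss.

    noisy_patch:       (N, 3) noisy input points
    clean_patch:       (M, 3) clean ground-truth points
    iteration_modules: list of T networks; each maps points -> per-point displacements
    sigmas:            list of T decreasing noise scales (the last one typically 0)
    """
    x = noisy_patch
    total_loss = 0.0
    for module, sigma in zip(iteration_modules, sigmas):
        # adaptive target: clean surface perturbed with a *smaller* noise scale each iteration
        target = clean_patch + sigma * torch.randn_like(clean_patch)
        disp = module(x)                       # predicted filtered displacements
        x_filtered = x + disp
        # nearest point in the current target cloud for every filtered point
        d2 = torch.cdist(x_filtered, target)   # (N, M) pairwise distances
        nn_target = target[d2.argmin(dim=1)]   # (N, 3)
        total_loss = total_loss + ((x_filtered - nn_target) ** 2).sum(dim=1).mean()
        x = x_filtered                         # output of one IterationModule feeds the next
    return total_loss
```

The key design point this sketch tries to capture is that each internal iteration is supervised against a progressively less noisy target, rather than shifting points directly onto the clean surface at every step.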
He_CLIP-S4_Language-Guided_Self-Supervised_Semantic_Segmentation_CVPR_2023 | Abstract Existing semantic segmentation approaches are often limited by costly pixel-wise annotations and predefined classes. In this work, we present CLIP-S4 that leverages self-supervised pixel representation learning and vision-language models to enable various semantic segmentation tasks (e.g., unsupervised, transfer learning, language-driven segmentation) without any human annotations and unknown class information. We first learn pixel embeddings with pixel-segment contrastive learning from different augmented views of images. To further improve the pixel embeddings and enable language-driven semantic segmentation, we design two types of consistency guided by vision-language models: 1) embedding consistency, aligning our pixel embeddings to the joint feature space of a pre-trained vision-language model, CLIP [34]; and 2) semantic consistency, forcing our model to make the same predictions as CLIP over a set of carefully designed target classes with both known and unknown prototypes. Thus, CLIP-S4 enables a new task of class-free semantic segmentation where no unknown class information is needed during training. As a result, our approach shows consistent and substantial performance improvement over four popular benchmarks compared with the state-of-the-art unsupervised and language-driven semantic segmentation methods. More importantly, our method outperforms these methods on unknown class recognition by a large margin. | 1. Introduction Semantic segmentation aims to partition an input image into semantically meaningful regions and assign each region a semantic class label. Recent advances in semantic segmentation [6, 27, 48] heavily rely on pixel-wise human annotations, which have two limitations. First, acquiring pixel-wise annotations is extremely labor intensive and costly, which can take up to 1.5 hours to label one image [31]. Second, human annotations are often limited to a set of predefined semantic classes, with which the learned models lack the ability to recognize unknown classes [25]. (Figure 1: (a) Pixel embeddings from different CLIP-based unsupervised methods: our method, CLIP-S4, generates sharper and more coherent pixel embeddings than MaskCLIP [49] and MaskCLIP+ [49]; (b) Language-driven semantic segmentation by different methods: CLIP-S4 can recognize challenging unknown classes (e.g., moon); (c) The key idea behind CLIP-S4: aligning the pixel embeddings and their semantics with the CLIP feature space.) Various approaches have been proposed to tackle these limitations, among which we are inspired by two lines of recent research in particular. First, for unsupervised semantic segmentation (i.e., without human annotations), self-supervised pixel representation learning approaches [14, 18, 19, 23, 40] have shown promising results on popular unsupervised benchmarks.
The main idea is to extend self-supervised contrastive learning [7, 16] from images to pixels by attracting each pixel's embedding to its positive pairs and repelling it from negative pairs. The prior of pairs can be contours [18, 19], hierarchical groups [23], salience maps [40], and pre-trained models [14]. Although these approaches can group pixels into semantically meaningful clusters, human annotations are still needed to assign class labels to the clusters for semantic segmentation [37]. Second, for unknown classes in semantic segmentation, large-scale vision-language models such as CLIP [34] have shown great potential. This line of research, called language-driven semantic segmentation, aims to segment images with arbitrary classes defined by texts during testing time [25, 37, 46, 49]. Among these methods, most still need training time annotations, such as pixel annotations [25] and captions [46]. Only a few recent works, MaskCLIP and MaskCLIP+ [49], attempt to address this without using additional supervision: MaskCLIP directly extracts pixel embeddings correlated with texts from CLIP, but these pixel embeddings are coarse and noisy (Fig. 1a). To address this issue, MaskCLIP+ [49] trains a segmentation model on the pseudo-labels generated by MaskCLIP for a set of predefined classes. However, the pixel embeddings of MaskCLIP+ are distorted by the predefined classes (Fig. 1a), which limits its ability to recognize unknowns (Fig. 1b). Also, it needs unknown class information during training, which hinders its real-world applications. We propose a language-guided self-supervised semantic segmentation approach, CLIP-S4, which takes advantage of the strengths from both lines of research and addresses their limitations accordingly. The key idea is to learn consistent pixel embeddings with respect to visual and conceptual semantics using self-supervised learning and the guidance of a vision-language model, CLIP. Specifically, we first train pixel embeddings with pixel-segment contrastive learning from different augmented image views [18, 19, 23] such that images can be partitioned into visually meaningful regions. To further improve pixel embedding quality and enable language-driven semantic segmentation, we introduce vision-language model guided consistency to regularize our model (Fig. 1c). The consistency is enforced from two aspects: embedding consistency and semantic consistency. First, embedding consistency aims to align the pixel embeddings generated by our model with the joint feature space of texts and images of CLIP by minimizing the distance between the pixel embeddings generated by our model and CLIP. Second, semantic consistency forces our model to make the same prediction as CLIP over a set of carefully designed target classes with both known and unknown prototypes. Note that unlike the previous methods [25, 49] that use a predefined set of known classes, CLIP-S4 also learns the representation of unknown classes from images during training. In the end, CLIP-S4 also enables a new task, namely class-free semantic segmentation, as shown in Tab. 1. This new task does not need any human annotations and even assumes NO class names are given during training.
This is a more challenging task than the recent work [49] that requires class names of both known and unknown.

Table 1. Comparison of information required for training over different tasks. CLIP-S4 enables a new task called class-free semantic segmentation. Compared with MaskCLIP+ [49], the new task assumes unknown class names are NOT given during training.

| Task | Known Annot. | Known Cls Name | Unknown Annot. | Unknown Cls Name | Add. Info. |
|---|---|---|---|---|---|
| Un/Self-supervised ([19] etc.) | | | | | |
| Fine-Tuning Supervised ([27] etc.) | ✓ | ✓ | N/A | N/A | N/A |
| Zero-shot ([3] etc.) | ✓ | ✓ | | ✓ | Word2Vec, etc. |
| Language-Driven: MaskCLIP+ [49] | | ✓ | | ✓ | CLIP |
| Language-Driven: CLIP-S4 | | ✓ | | | CLIP |

In summary, the contributions of this paper are threefold: • We propose a self-supervised semantic segmentation approach that combines pixel-segment contrastive learning with the guidance of pre-trained vision-language models. Our method can generate high-quality pixel embeddings without any human annotations and be applied to a variety of semantic segmentation tasks. • We open up new research potentials for language-driven semantic segmentation without any human annotations by introducing and addressing a new task of class-free semantic segmentation (Tab. 1). Unlike previous work that assumes all the class names are known during training, our method can discover unknown classes from unlabelled image data without even knowing unknown class names. • Consistent and substantial gains are observed with our approach over the state-of-the-art unsupervised and language-driven semantic segmentation methods on four popular datasets. More importantly, our method significantly outperforms the state-of-the-art on the segmentation of unknown classes. |
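The two consistency terms described in the CLIP-S4 row above can be sketched as simple losses. This is a hedged reconstruction rather than the paper's code: the tensor shapes, the temperature, and the reduction of semantic consistency to pseudo-labels over the known classes are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def consistency_losses(pixel_emb, clip_pixel_emb, prototypes, clip_text_emb, tau=0.07):
    """Sketch of embedding consistency and semantic consistency.

    pixel_emb:      (P, D) pixel embeddings from the segmentation model
    clip_pixel_emb: (P, D) pixel embeddings extracted from the frozen CLIP image encoder
    prototypes:     (K, D) class prototypes, known classes first, then unknown prototypes
    clip_text_emb:  (K_known, D) CLIP text embeddings of the known class names
    """
    pixel_emb = F.normalize(pixel_emb, dim=-1)
    clip_pixel_emb = F.normalize(clip_pixel_emb, dim=-1)
    prototypes = F.normalize(prototypes, dim=-1)

    # 1) embedding consistency: pull our pixel embeddings toward CLIP's joint feature space
    l_embed = (1.0 - (pixel_emb * clip_pixel_emb).sum(dim=-1)).mean()

    # 2) semantic consistency: our predictions over the prototype set should agree with
    #    the (pseudo-)predictions CLIP makes for the same pixels
    logits_ours = pixel_emb @ prototypes.t() / tau                      # (P, K)
    with torch.no_grad():
        k_known = clip_text_emb.shape[0]
        clip_logits = clip_pixel_emb @ F.normalize(clip_text_emb, dim=-1).t() / tau
        pseudo = clip_logits.argmax(dim=-1)                             # CLIP's predicted known class
    l_sem = F.cross_entropy(logits_ours[:, :k_known], pseudo)           # compared on the known part
    return l_embed, l_sem
```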
Jin_Deep_Incomplete_Multi-View_Clustering_With_Cross-View_Partial_Sample_and_Prototype_CVPR_2023 | Abstract The success of existing multi-view clustering relies on the assumption of sample integrity across multiple views. However, in real-world scenarios, multi-view samples are only partially available due to data corruption or sensor failure, which gives rise to the study of incomplete multi-view clustering (IMVC). Although several attempts have been proposed to address IMVC, they suffer from the following drawbacks: i) Existing methods mainly adopt cross-view contrastive learning forcing the representations of each sample across views to be exactly the same, which might ignore view discrepancy and flexibility in representations; ii) Due to the absence of non-observed samples across multiple views, the obtained prototypes of clusters might be unaligned and biased, leading to incorrect fusion. To address the above issues, we propose a Cross-view Partial Sample and Prototype Alignment Network (CPSPAN) for Deep Incomplete Multi-view Clustering. Firstly, unlike existing contrastive-based methods, we adopt pair-observed data alignment as 'proxy supervised signals' to guide instance-to-instance correspondence construction among views. Then, regarding the shifted prototypes in IMVC, we further propose a prototype alignment module to achieve incomplete distribution calibration across views. Extensive experimental results showcase the effectiveness of our proposed modules, attaining noteworthy performance improvements when compared to existing IMVC competitors on benchmark datasets. | 1. Introduction In modern society, data collected for real-world applications usually stems from different domains, sensors or feature extractors, which gives rise to multi-view learning in the literature [2, 44]. For instance, an autonomous car may have diverse sensors, and a movie is typically made up of images and audio. As an important paradigm of multi-view learning, multi-view clustering (MVC) [10, 20, 21, 24, 40, 47] divides data by exploiting the consistent and complementary information across multiple views. The success of existing multi-view clustering methods heavily relies on the fully-available data assumption. However, in practical applications, some views of instances are only partially available due to unstable sensors and damaged storage media. When some views are missing [9], the natural alignment property of the same instances across multiple views is destroyed, which may result in insufficient mining of complementary and consistent information. To handle the incompleteness issue, many incomplete multi-view clustering algorithms (IMVC) [15, 31, 38] with satisfactory performance have been proposed. Typical strategies are mainly based on matrix decomposition, incomplete multiple kernel learning and graph-based methods. Learning more discriminative consensus representations with incomplete view information is crucial to achieve better incomplete multi-view clustering performance. However, conventional IMVC methods are based on raw features and therefore the performance heavily relies on the feature quality. As deep neural networks [5, 8, 16, 17, 25, 43] have demonstrated superior performance in learning high-level representations, deep learning has become prevalent in various fields of computer vision and pattern classification.
To this end, researchers have explored combining deep neural networks [26, 35] and conventional IMVC methods to improve clustering performance, and the resulting clustering method is called Deep Incomplete Multi-View Clustering (DIMVC) [11, 30, 37, 39, 42]. Most existing DIMVC methods adopt the principle of contrastive learning, treating different views of the same sample as positive pairs whose representations should be consistent. Such algorithms ignore the cross-view alignment correlation of samples and force instances of different views into a unified representation, which may destroy the flexibility and variety of representations. We argue that the essence of the IMVC task lies in discovering structural correspondence between different views, rather than rigidly and simply enforcing uniform representations across each view. In fact, IMVC can be regarded as a special case of the 'partially-aligned' multi-view setting, where the pair-observed data provides supervised instance-alignment signals. Moreover, as shown in Fig. 1, the distribution learned from incomplete multi-view data can be biased due to inadequate multi-view data. Specifically, during the clustering task, flexible representations may cause prototypes of each cluster to shift and become biased, which we refer to as the Prototype-Shifted Problem (termed PSP). Such a problem has been demonstrated in the Anchor-Unaligned Problem [32] for complete multi-view data, and undoubtedly has a more essential impact on incomplete multi-view data. At the same time, contrastive-based DIMVC methods neglect this issue and do not explore relationships among different instances within the same view, which may further aggravate PSP. Therefore, it is necessary to match the relationship between the prototypes among views and perform the clustering task accordingly. To address the aforementioned issues, we propose a novel approach termed Cross-view Partial Sample and Prototype Alignment Network (CPSPAN) for Deep Incomplete Multi-view Clustering to perform cross-view partial sample alignment and solve the prototype-shifted problem. The framework of CPSPAN is illustrated in Fig. 3. In detail, different from the contrastive learning mode, the cross-view instance alignment module establishes the view-to-view correspondence of samples through the pair-observed data (Fig. 2) between each pair of views, so as to mine the structural information between views. Afterwards, to address the prototype-shifted problem in the incomplete scenario, the prototype alignment module takes one view's prototype set as anchors and solves the permutation matrix between the two sets of prototypes, thereby establishing prototype-to-prototype correspondence based on optimal transport theory. Since prototypes are obtained based on samples, this module not only calibrates correspondences between cross-view shifted prototypes but also encodes the relationships between within-view samples. Ultimately, since our model is imputation-free upfront, in order to align the embeddings between views before finally performing feature fusion and clustering, we build cross-view structural relationship transfer for missing item imputations.
We summarize the major contributions of our work as follows: • We propose a novel deep network to handle the IMVC task, termed CPSPAN. Different from the existing multi-view contrastive learning manner, we consider IMVC from a novel insight with a partially-aligned setting. To this end, CPSPAN optimally maximizes matching alignment between pair-observed data and constructs cross-view intersection. (Figure 1: An example illustration of shifted prototypes across multiple views caused by the incomplete setting. With different missing statuses, the prototypes learned from incomplete multi-view data may be shifted, which leads to wrong correspondences.) • In order to solve the Prototype-Shifted Problem caused by incomplete information, CPSPAN proposes to further align the prototype sets between different views, so as to mine consistent cross-view structural information. • Extensive experiments have clearly demonstrated the effectiveness of the proposed cross-view partial sample and prototype alignment modules and the superiority over both conventional and deep SOTA methods. |
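The prototype alignment step described in the CPSPAN row above amounts to solving a permutation between two views' prototype sets. The paper casts this as optimal transport; the sketch below substitutes a linear assignment (Hungarian) solver, which yields the permutation in the uniform-marginal case. The function and variable names are assumptions, not the authors' code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_prototypes(protos_anchor, protos_other):
    """Sketch of cross-view prototype alignment.

    protos_anchor, protos_other: (K, D) cluster prototypes of two views.
    Returns the second view's prototypes re-ordered to match the anchors,
    together with the recovered permutation matrix.
    """
    # cost = pairwise squared Euclidean distance between the two prototype sets
    cost = ((protos_anchor[:, None, :] - protos_other[None, :, :]) ** 2).sum(-1)
    row, col = linear_sum_assignment(cost)   # minimal-cost one-to-one matching
    perm = np.zeros_like(cost)
    perm[row, col] = 1.0                     # permutation matrix P
    return protos_other[col], perm
```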
Cao_A_New_Comprehensive_Benchmark_for_Semi-Supervised_Video_Anomaly_Detection_and_CVPR_2023 | Abstract Semi-supervised video anomaly detection (VAD) is a critical task in intelligent surveillance systems. However, an essential type of anomaly in VAD named scene-dependent anomaly has not received the attention of researchers. Moreover, there is no research investigating anomaly anticipation, a more significant task for preventing the occurrence of anomalous events. To this end, we propose a new comprehensive dataset, NWPU Campus, containing 43 scenes, 28 classes of abnormal events, and 16 hours of videos. At present, it is the largest semi-supervised VAD dataset with the largest number of scenes and classes of anomalies, the longest duration, and the only one considering the scene-dependent anomaly. Meanwhile, it is also the first dataset proposed for video anomaly anticipation. We further propose a novel model capable of detecting and anticipating anomalous events simultaneously. Compared with 7 outstanding VAD algorithms in recent years, our method copes well with both scene-dependent anomaly detection and anomaly anticipation, achieving state-of-the-art performance on ShanghaiTech, CUHK Avenue, IITB Corridor and the newly proposed NWPU Campus datasets consistently. Our dataset and code are available at: https://campusvad.github.io. | 1. Introduction Video anomaly detection (VAD) is widely applied in public safety and intelligent surveillance due to its ability to detect unexpected abnormal events in videos. Since anomalous events are characterized by unbounded categories and rare occurrence in practice, VAD is commonly set as a semi-supervised task, that is, there are only normal events without specific labels in the training set [1, 2]. The model trained only on the normal events needs to distinguish anomalous events from normal events in the testing phase. Semi-supervised VAD has been studied for years. Especially in recent years, reconstruction-based and prediction-based methods [3–21] have made leaps and bounds in performance on existing datasets. For example, the frame-level AUCs (area under the curve) on the UCSD Ped1 and Ped2 datasets [22] have reached over 97% [2]. Despite the emergence of a few challenging datasets, researchers still overlook an important type of anomaly, i.e., the scene-dependent anomaly [2]. Scene dependency refers to the situation where an event is normal in one scene but abnormal in another. For example, playing football on the playground is a normal behavior, but playing on the road is abnormal. Note that single-scene datasets cannot contain any scene-dependent anomaly. Nevertheless, the existing multi-scene datasets (e.g., ShanghaiTech [23], UBnormal [24]) also have not taken this type of anomaly into account. As a result, there is currently no algorithm for studying scene-dependent anomaly detection, limiting the comprehensive evaluation of VAD algorithms. In addition to detecting various types of anomalies, we argue that there is another task that also deserves the attention of researchers, which is to anticipate the occurrence of abnormal events in advance. If we can make an early warning before the anomalous event occurs based on the trend of the event, it is of great significance to prevent dangerous accidents and avoid loss of life and property. However, according to our investigation, there is no research on video anomaly anticipation, and no dataset or algorithm has been proposed for this field.
In this paper, we work on semi-supervised video anomaly detection and anticipation. First and foremost, to address the issue that the VAD datasets lack scene-dependent anomalies and are not suitable for anomaly anticipation, we propose a new large-scale dataset, NWPU Campus. Compared with existing datasets, our proposed dataset mainly has the following three advantages. First, to the best of our knowledge, the NWPU Campus is the largest semi-supervised VAD dataset to date. It contains 43 scenes, whose number is 3 times that of ShanghaiTech, the real recorded dataset with the largest number of scenes among the existing datasets. The total video duration of the NWPU Campus is 16 hours, which is more than 3 times that of the existing largest semi-supervised VAD dataset IITB Corridor [29]. The quantitative comparison between the NWPU Campus and other datasets can be seen in Tab. 1.

Table 1. Comparisons of different semi-supervised VAD datasets. There are not any official training and testing splits in UMN. UBnormal has a validation set, which is not shown here. "720p" means that the frame is 720 pixels high and 1280 or 1080 pixels wide. The frame resolutions of NWPU Campus are 1920×1080, 2048×1536, 704×576 and 1280×960 pixels. * represents the animated dataset.

| Dataset | Year | # Frames Total | # Frames Training | # Frames Testing | # Abnormal event classes | Resolution | # Scenes | Scene dependency |
|---|---|---|---|---|---|---|---|---|
| Subway Entrance [25] | 2008 | 86,535 | 18,000 | 68,535 | 5 | 512×384 | 1 | ✗ |
| Subway Exit [25] | 2008 | 38,940 | 4,500 | 34,440 | 3 | 512×384 | 1 | ✗ |
| UMN [26] | 2009 | 7,741 | – | – | 3 | 320×240 | 3 | ✗ |
| UCSD Ped1 [22] | 2010 | 14,000 | 6,800 | 7,200 | 5 | 238×158 | 1 | ✗ |
| UCSD Ped2 [22] | 2010 | 4,560 | 2,550 | 2,010 | 5 | 360×240 | 1 | ✗ |
| CUHK Avenue [27] | 2013 | 30,652 | 15,328 | 15,324 | 5 | 640×360 | 1 | ✗ |
| ShanghaiTech [23] | 2017 | 317,398 | 274,515 | 42,883 | 11 | 856×480 | 13 | ✗ |
| Street Scene [28] | 2020 | 203,257 | 56,847 | 146,410 | 17 | 1280×720 | 1 | ✗ |
| IITB Corridor [29] | 2020 | 483,566 | 301,999 | 181,567 | 10 | 1920×1080 | 1 | ✗ |
| UBnormal [24] * | 2022 | 236,902 | 116,087 | 92,640 | 22 | 720p | 29 | ✗ |
| NWPU Campus (ours) | – | 1,466,073 | 1,082,014 | 384,059 | 28 | multiple | 43 | ✓ |

Second, the NWPU Campus has a variety of abnormal and normal events. In terms of anomalies, it contains 28 classes of anomalous events, which is more than any other dataset. Fig. 1 displays some examples from our dataset. More importantly, the NWPU Campus dataset contains scene-dependent anomalous events, which are missing in other datasets. As an example, the behavior of a vehicle turning left is anomalous in a scene where left turns are prohibited, while it is normal in other unrestricted scenes. Along with the diversity of anomalous events, the normal events in our dataset are diverse as well. Unlike other datasets, we do not only take walking and standing as normal behaviors. In our dataset, regular walking, cycling, driving and other daily behaviors that obey rules are also considered as normal events. Third, in addition to serving as a video anomaly detection benchmark, the NWPU Campus is the first dataset proposed for video anomaly anticipation (VAA). The existing datasets do not deliberately consider anomalous events applicable to anticipation. In contrast, we take into account the complete process of the events in the data collection phase so that the occurrence of abnormal events is predictable.
For instance, before the vehicle turns left (the scene-dependent anomalous event as mentioned be-fore), the movement trend of it can be observed, and hence the algorithm could make an early warning. As a compari-son, it is considered to be abnormal when a vehicle simply appears in the ShanghaiTech dataset, which is unpredictable and therefore not suitable for anomaly anticipation. Besides comprehensive benchmarks, there is currently a lack of algorithms for scene-dependent anomaly detection and video anomaly anticipation. Therefore, in this work, wefurther propose a novel forward-backward frame prediction model that can detect anomalies and simultaneously antici-pate whether an anomalous event is likely to occur in the fu-ture. Moreover, it has the ability to handle scene-dependent anomalies through the proposed scene-conditioned auto-encoder. As a result, our method achieves state-of-the-art performance on ShanghaiTech [23], CUHK Avenue [27], IITB Corridor [29], and our NWPU Campus datasets. In summary, our contribution is threefold: •We propose a new dataset NWPU Campus, which is the largest and most complex semi-supervised video anomaly detection benchmark to date. It makes up for the lack of scene-dependent anomalies in the current research field. •We propose a new video anomaly anticipation task to anticipate the occurrence of anomalous events in ad-vance, and the NWPU Campus is also the first dataset proposed for anomaly anticipation, filling the research gap in this area. •We propose a novel method to detect and anticipate anomalous events simultaneously, and it can cope with scene-dependent anomalies. Comparisons with 7 state-of-the-art V AD methods on the NWPU Cam-pus, ShanghaiTech, CUHK Avenue and IITB Corridor datasets demonstrate the superiority of our method. |
Chen_Revisiting_Multimodal_Representation_in_Contrastive_Learning_From_Patch_and_Token_CVPR_2023 | Abstract Contrastive learning-based vision-language pre-training approaches, such as CLIP, have demonstrated great success in many vision-language tasks. These methods achieve cross-modal alignment by encoding a matched image-text pair with similar feature embeddings, which are generated by aggregating information from visual patches and language tokens. However, directly aligning cross-modal information using such representations is challenging, as visual patches and text tokens differ in semantic levels and granularities. To alleviate this issue, we propose a Finite Discrete Tokens (FDT) based multimodal representation. FDT is a set of learnable tokens representing certain visual-semantic concepts. Both images and texts are embedded using shared FDT by first grounding multimodal inputs to the FDT space and then aggregating the activated FDT representations. The matched visual and semantic concepts are enforced to be represented by the same set of discrete tokens by a sparse activation constraint. As a result, the granularity gap between the two modalities is reduced. Through both quantitative and qualitative analyses, we demonstrate that using FDT representations in CLIP-style models improves cross-modal alignment and performance in visual recognition and vision-language downstream tasks. Furthermore, we show that our method can learn more comprehensive representations, and the learned FDT capture meaningful cross-modal correspondence, ranging from objects to actions and attributes. The source code can be found at https://github.com/yuxiaochen1103/FDT. | 1. Introduction Recently, the Contrastive Language-Image Pre-training (CLIP) framework [16, 27] has demonstrated notable capabilities for learning powerful and transferable feature representations [10, 22, 40–43]. (Figure 1: Comparison of different feature representation learning methods. Left: contrastive vision-language pre-training (CLIP). Right: CLIP with our proposed finite discrete tokens (FDT).) In this framework, models are trained to align text and image information in a two-stream approach where image and text representations are extracted through two separate encoders. The InfoNCE loss [27] is used to train the encoders, which enforces the representations of matched image-text pairs to be closer, while those of unmatched pairs are pushed far apart (as shown in Figure 1 (Left)). However, the fact that the information conveyed in images and text captions is naturally of different levels of granularity [29, 34] is not considered by such models. For example, an image of a dog also portrays various lower-level attributes, such as its breed, fur color, body size, and shape, while the textual description, such as "a smiling dog", is generally more abstract and compact. In CLIP, images and text captions are represented through the aggregation of visual patches and text tokens without explicitly aligning the visual and semantic concepts at the same level of granularity. It can cause challenges in multimodal representation learning, or even potentially result in performance degradation [35]. Additionally, the learned models may overlook certain semantic concepts [14].
Therefore, we argue that unifying the information granularities of images and texts can help generate better multimodal representations. In this paper, we propose a new Finite Discrete Tokens (FDT) based representation. FDT is a set of learnable tokens that encode cross-modal shared semantic concepts. Both image and text are represented as combinations of FDT shared between modalities so that the information granularities are unified (see Figure 1 (Right)). Figure 2 gives an overview of our method. For an image, its patch embeddings are first extracted by an image encoder. The correspondence between the FDT and the image is then measured by max pooling over the attention weights of FDT among all patches. Finally, the FDT-based representation of the image is calculated as the attention-weighted sum of FDT. The FDT-based embeddings for input texts can be constructed in the same way. The encoders and FDT are trained to pull close the FDT-based representations of matched image-text pairs while pushing away those of unmatched pairs by using the InfoNCE loss. The point of leveraging a shared FDT across modalities is to enforce the matched visual and semantic concepts to be represented by the same discrete tokens. For example, the visual patches of a dog and the word "dog" should activate the same subsets of FDT. We empirically demonstrate that this can be achieved by simply enforcing relatively sparse attention weights between FDT and the inputs. We conduct extensive experiments covering a wide range of pre-training settings and downstream tasks to evaluate the proposed method. We conclude with the following key observations: (1) Our approach exhibits consistent performance enhancements across various pre-training dataset scales, CLIP-based pre-training frameworks [20], and encoder architectures. Notably, our method outperforms CLIP by 5.0% on zero-shot image classification when pre-training on a 145M dataset, and by 33.4% in image-text retrieval with a 30M dataset; (2) Our method tends to alleviate the model degradation problem and learns more comprehensive feature representations than CLIP; (3) The learned FDT exhibit better interpretability: we visualize FDT's correspondent patches and language tokens, and the results show that FDT successfully capture and align visual-semantic concepts including objects, attributes, and actions. |
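The FDT-based representation described in the row above (patch-to-token attention, max pooling over patches, attention-weighted sum of the shared tokens) can be sketched in a few lines. This is an illustrative reconstruction, not the released implementation; the softmax temperature and the absence of explicit normalization layers are assumptions.

```python
import torch
import torch.nn.functional as F

def fdt_representation(patch_emb, fdt, tau=1.0):
    """Sketch of building an FDT-based image representation.

    patch_emb: (P, D) visual patch embeddings from the image encoder
    fdt:       (C, D) learnable finite discrete tokens shared by both modalities
    """
    # attention weight of every FDT token on every patch
    att = patch_emb @ fdt.t()               # (P, C) similarity scores
    # image-level correspondence to each token: max pooling over all patches
    att_img = att.max(dim=0).values         # (C,)
    w = F.softmax(att_img / tau, dim=-1)    # normalized (and, per the paper, kept sparse) weights
    # FDT-based representation: attention-weighted sum of the shared tokens
    return w @ fdt                          # (D,)
```

A text would be embedded the same way from its token embeddings, so that matched image-text pairs activate the same subset of FDT before the InfoNCE loss is applied.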
Jing_Deep_Graph_Reprogramming_CVPR_2023 | Abstract In this paper, we explore a novel model reusing task tailored for graph neural networks (GNNs), termed as "deep graph reprogramming". We strive to reprogram a pre-trained GNN, without amending raw node features nor model parameters, to handle a bunch of cross-level downstream tasks in various domains. To this end, we propose an innovative Data Reprogramming paradigm alongside a Model Reprogramming paradigm. The former one aims to address the challenge of diversified graph feature dimensions for various tasks on the input side, while the latter alleviates the dilemma of fixed per-task-per-model behavior on the model side. For data reprogramming, we specifically devise an elaborated Meta-FeatPadding method to deal with heterogeneous input dimensions, and also develop a transductive Edge-Slimming as well as an inductive Meta-GraPadding approach for diverse homogenous samples. Meanwhile, for model reprogramming, we propose a novel task-adaptive Reprogrammable-Aggregator, to endow the frozen model with larger expressive capacities in handling cross-domain tasks. Experiments on fourteen datasets across node/graph classification/regression, 3D object recognition, and distributed action recognition, demonstrate that the proposed methods yield gratifying results, on par with those by re-training from scratch. | 1. Introduction With the explosive growth of graph data, graph neural networks (GNNs) have been deployed across increasingly wider areas [18, 20, 55, 57, 58], such as recommendation systems [48] and autonomous driving [32, 45, 47]. However, the favorable performance for such applications generally comes at the expense of tremendous training efforts and high memory loads, precluding the deployment of GNNs on the edge side. As such, reusing pre-trained GNNs to alleviate training costs has recently emerged as a trending research topic [7, 11, 19, 21, 39, 53, 54, 56, 69]. Pioneered by the work of [56] that generalizes knowledge distillation [14, 31, 34, 59–61] to the non-Euclidean domain, almost all existing approaches on reusing GNNs are achieved by following the distillation pipeline in [56]. (Figure 1: Illustrations of the proposed task of deep graph reprogramming (GARE) that aims to reuse pre-trained GNNs to handle plenty of cross-level tasks with heterogeneous graph feature dimensions, without changing model architectures nor parameters.) Despite the encouraging results, the distilling-based scheme is limited to the per-task-per-distillation setting, where a distilled model can only tackle the same task as the teacher can, leading to considerable storage and computation burdens, especially for the deployment of multiple tasks. Meanwhile, the distillation mechanism rests upon the hypothesis that abundant pre-trained models are available in the target domains, which indeed holds for image-based areas that always take data in the regular RGB form, thereby readily allowing for per-model-multiple-dataset reusing. However, such an assumption is typically not satisfied in the non-Euclidean domain: on the input side, irregular graph samples have heterogeneous feature dimensions, as shown with the color bars in Fig. 1;
on the task side, graph analysis takes various task levels and settings, such as graph-, node-, and edge-level learning, as well as transductive and inductive scenarios. This topological diversity leads to a shortage of pre-trained GNNs that fit the target downstream tasks. In this paper, we strive to take one step towards generalized and resource-efficient GNN reusing, by studying a novel deep graph reprogramming (GARE) task. Our goal is to reuse a single pre-trained GNN across multiple task levels and domains, for example the pre-trained one working on node classification and the downstream ones on graph classification and regression, as shown in Fig. 1. We further impose two constraints on both data and model, where raw features and parameters are frozen in handling downstream tasks. As such, unlike distillation that essentially leverages a pre-trained teacher to guide the re-training of a student, the proposed task of GARE, without re-training nor fine-tuning, can thereby be considered to reprogram a pre-trained GNN to perform formerly unseen tasks. Nonetheless, such an ambitious goal comes with challenges: diversified graph feature dimensions and limited model capacities with a single frozen GNN. Driven by this observation, we accordingly reformulate GARE into two dedicated paradigms on the data and model sides, respectively, termed as data reprogramming (DARE) and model reprogramming (MERE). The goal of DARE is to handle downstream graph samples with both heterogeneous and homogenous dimensions, without amending pre-trained architectures. Meanwhile, MERE aims to strengthen the expressive power of frozen GNNs by dynamically changing model behaviors depending on various tasks. Towards this end, we propose a universal Meta-FeatPadding (MetaFP) approach for heterogeneous-DARE that allows the pre-trained GNN to manipulate heterogeneous-dimension graphs, by accommodating pre-trained feature dimensions via adaptive feature padding in a task-aware manner. The rationale behind the proposed MetaFP, paradoxically, is derived from adversarial reprogramming examples [8] that are conventionally treated as attacks on learning systems, where attackers secretly repurpose the use of a target model without informing model providers, by inserting perturbations into input images. Here we turn the role of the adversarial reprogramming example on its head, by padding graph perturbations around the input for generalized cross-task model reusing. Complementary to the dedicated MetaFP that is tailored for heterogeneous-DARE, we also devise a transductive Edge-Slimming (EdgSlim) and an inductive Meta-GraPadding (MetaGP) method for homogenous-DARE, which handle the downstream graphs with homogenous dimensions under transductive and inductive task settings, respectively, by adaptively eliminating node connections or inserting a tiny task-specific graph, with only, for example, ten vertices, into the raw input sample. Furthermore, we perform a pilot study on MERE, exploring the pre-trained model capacity for various downstream tasks, by only reprogramming the pre-trained aggregation behavior (ReAgg) upon the well-established Gumbel-Max trick.
In sum, our contribution is a novel GNN-based model reusing paradigm that allows for the adaptation of a pre-trained GNN to multiple cross-level downstream tasks, and meanwhile requires no re-training nor fine-tuning. This is typically achieved through a series of complementary approaches entitled MetaFP, EdgSlim, and MetaGP, which tackle the heterogeneous-and homogenous-dimension graphs within the transductive and inductive scenarios, respectively, together with an elaborated ReAgg method to enhance the model capacity. Experimental results on fourteen benchmarks demonstrate that a pre-trained GNN with GARE is competent to handle all sorts of downstream tasks. |
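Of the reprogramming components listed in the row above, Meta-FeatPadding is the easiest to illustrate: downstream node features are padded up to the pre-trained input dimension with a small learnable, task-specific padding while the GNN itself stays frozen. The class below is a sketch under that reading; the name, the plain concatenation scheme, and the usage lines are assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn

class MetaFeatPadding(nn.Module):
    """Sketch of heterogeneous-dimension data reprogramming via feature padding.

    The frozen GNN expects d_pretrain-dimensional node features; the downstream task
    provides d_task-dimensional ones.  Only the padding parameters are trained.
    """
    def __init__(self, d_task: int, d_pretrain: int):
        super().__init__()
        assert d_pretrain >= d_task
        self.pad = nn.Parameter(torch.zeros(d_pretrain - d_task))   # task-specific padding

    def forward(self, x):                    # x: (num_nodes, d_task) raw downstream features
        pad = self.pad.expand(x.size(0), -1)
        return torch.cat([x, pad], dim=-1)   # (num_nodes, d_pretrain); raw features untouched

# usage sketch (hypothetical names): only `padder` is optimized, the GNN stays frozen
# padder = MetaFeatPadding(d_task=7, d_pretrain=32)
# logits = frozen_gnn(padder(node_feats), edge_index)
```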
Cui_Learning_Joint_Latent_Space_EBM_Prior_Model_for_Multi-Layer_Generator_CVPR_2023 | Abstract This paper studies the fundamental problem of learn-ing multi-layer generator models. The multi-layer gener-ator model builds multiple layers of latent variables as a prior model on top of the generator, which benefits learn-ing complex data distribution and hierarchical represen-tations. However, such a prior model usually focuses on modeling inter-layer relations between latent variables by assuming non-informative (conditional) Gaussian distribu-tions, which can be limited in model expressivity. To tackle this issue and learn more expressive prior models, we pro-pose an energy-based model (EBM) on the joint latent space over all layers of latent variables with the multi-layer gen-erator as its backbone. Such joint latent space EBM prior model captures the intra-layer contextual relations at each layer through layer-wise energy terms, and latent variables across different layers are jointly corrected. We develop a joint training scheme via maximum likelihood estima-tion (MLE), which involves Markov Chain Monte Carlo (MCMC) sampling for both prior and posterior distribu-tions of the latent variables from different layers. To ensure efficient inference and learning, we further propose a varia-tional training scheme where an inference model is used to amortize the costly posterior MCMC sampling. Our experi-ments demonstrate that the learned model can be expressive in generating high-quality images and capturing hierarchi-cal features for better outlier detection. | 1. Introduction Deep generative models (a.k.a, generator models ) have made promising progress in learning complex data distri-butions and achieved great successes in image and video synthesis [21, 34, 37, 39] as well as representation learn-ing [5,48]. Such models usually consist of low-dimensional latent variables together with a top-down generation model that maps such latent factors to the observed data. The latent factors can serve as an abstract data representation, but it is often modelled via a single latent vector with non-informative prior distribution which leads to limited model expressivity and fails to capture different levels of abstrac-tions. Learning an informative prior model for hierarchical representations is needed, yet research in this direction is still under-developed. A principled way to learn such a prior model is by learn-ing the generator models with multiple layers of latent vari-ables. However, the learning of multi-layer generator model can be challenging as the inter-layer structural relation (i.e., latent variables across different layers) and the intra-layer contextual relation (i.e., latent units within the same layer) have to be effectively modelled and efficiently learned. Var-ious methods have been proposed [5,28,32,35,40], but they only focused on inter-layer modeling by assuming the con-ditional Gaussian distribution across different layers while ignoring the intra-layer contextual modeling as the latent units are conditional independent within each layer. The energy-based models (EBMs), on the other hand, are shown to be expressive and proved to be powerful in cap-turing contextual and non-structural data regularities. No-tably, [33] considers the EBM in the latent space for the non-hierarchical generator model, where the energy func-tion is considered as a correction of the non-informative Gaussian prior. 
The low dimensionality of the latent space makes the EBM effective in capturing regularities in the data. However, a single latent vector as in [33] is infeasible for capturing the patterns at multiple layers of abstraction, which limits its model capacity. In this paper, we propose to combine the strengths of the latent space EBM and the generator with multiple layers of latent variables for better hierarchical representations and a more expressive prior model. Specifically, we introduce layer-wise energy terms to exponentially tilt the non-informative Gaussian conditional at each layer, and latent variables across different layers are modelled jointly through the EBM with the multi-layer generator model as its backbone. Such a joint EBM prior model seamlessly integrates the intra-layer contextual modeling via layer-wise energy terms and the inter-layer structural modeling with multi-layer latent variables. The joint EBM prior model can be learned by maximum likelihood estimation (MLE). Each learning iteration involves Markov chain Monte Carlo (MCMC) sampling of latent variables in each layer from both the prior and posterior distributions. The prior sampling can be efficiently done due to the low dimensionality of the latent variables and, more importantly, the lightweight networks for energy functions, while the posterior sampling can be less efficient. Therefore, we further develop a variational training scheme where an additional inference model is used for posterior approximation and is jointly trained with the joint EBM prior model. Contributions: 1) We propose a joint latent space EBM prior model for the generator model with multiple layers of latent variables; 2) We develop a maximum likelihood learning algorithm that learns the joint EBM prior model based on MCMC prior and posterior sampling across different layers. We further propose a variational joint training scheme for efficient learning and inference; 3) We provide strong empirical results through extensive experiments. |
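A schematic form of the joint latent space EBM prior described in the row above, written for L layers of latent variables: layer-wise energy terms exponentially tilt the Gaussian conditionals of the multi-layer generator backbone. The exact parameterization belongs to the paper; this LaTeX snippet is a reconstruction from the textual description and the symbols are chosen for illustration.

```latex
% z = (z_1, ..., z_L): latent variables of the L layers
% f_{\alpha_l}: layer-wise energy term,  p_0: Gaussian backbone conditionals,  Z(\alpha): normalizer
p_\alpha(z_1,\dots,z_L)
  = \frac{1}{Z(\alpha)}
    \exp\!\Big(\textstyle\sum_{l=1}^{L} f_{\alpha_l}(z_l)\Big)\,
    p_0(z_L)\prod_{l=1}^{L-1} p_0(z_l \mid z_{l+1})
```

When all energy terms are zero the prior reduces to the usual multi-layer Gaussian prior, which matches the description of the EBM acting as a correction of the non-informative backbone.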
Alexandre_Hierarchical_B-Frame_Video_Coding_Using_Two-Layer_CANF_Without_Motion_Coding_CVPR_2023 | Abstract Typical video compression systems consist of two main modules: motion coding and residual coding. This general architecture is adopted by classical coding schemes (such as international standards H.265 and H.266) and deep learning-based coding schemes. We propose a novel B-frame coding architecture based on two-layer Conditional Augmented Normalization Flows (CANF). It has the strik-ing feature of not transmitting any motion information. Our proposed idea of video compression without motion coding offers a new direction for learned video coding. Our base layer is a low-resolution image compressor that replaces the full-resolution motion compressor. The low-resolution coded image is merged with the warped high-resolution images to generate a high-quality image as a condition-ing signal for the enhancement-layer image coding in full resolution. One advantage of this architecture is signifi-cantly reduced computational complexity due to eliminat-ing the motion information compressor. In addition, we adopt a skip-mode coding technique to reduce the trans-mitted latent samples. The rate-distortion performance of our scheme is slightly lower than that of the state-of-the-art learned B-frame coding scheme, B-CANF , but outperforms other learned B-frame coding schemes. However, compared to B-CANF , our scheme saves 45% of multiply–accumulate operations (MACs) for encoding and 27% of MACs for de-coding. The code is available at https://nycu-clab.github.io. | 1. Introduction Digital video compression has been studied for over 50 years. It is a challenging research topic to exploit both spa-tial and temporal redundancies inside the video data. Theconcept of using motion compensation to reduce tempo-ral correlation for video coding first appeared in 1969 [30]. Since then, motion estimation and coding have become in-dispensable components in a video coding system. Two critical components in a mainstream video codec are mo-tion coding (including motion estimation and compensa-tion) and residual image coding. Motion coding is used to reduce temporal redundancy, and residual coding is used to reduce spatial redundancy. This structure is thus often called hybrid coding . The influential and widespread inter-national video standards in the past three decades, MPEG-2, A VC/H.264, HEVC/H.265, and VVC/H.266 all adopt this basic hybrid coding structure, although the fine de-tails vary in different versions of standards. These stan-dards specify three types of coding frames inside a Group of Pictures (GOP): I-frame (intra-coded), P-frame (predic-tive), and B-frame (bidirectional predictive). The P-frame coding process uses the previously coded frame to predict the target frame, and the B-frame coding uses two refer-ence frames (often previous and future frames) to predict the target frame. In this paper, we focus on learning-based B-frame video coding. In the past few years, deep-learning techniques have been used in video compression. Up to now, most learned codecs adopt the hybrid coding structure of the classical coding systems; that is, it contains two major components: motion coding and residual image coding. It is generally believed that accurate motion compensation is a very effec-tive way to reduce the temporal redundancy in the video. Only the remaining unpredictable (‘new’) pixels are coded using image coding techniques. 
Describing the motion field around arbitrarily shaped objects accurately often needs a large number of bits. For example, the HEVC standard defines a variety of block partitions to specify regions sharing the same motion vectors [31]. Thanks to the advancement of neural networks, more accurate video predictors that do not require transmitting bits are now available. Then, we need only send the unpredictable pixels. Often the locations of unpredictable pixels are sparse. It costs many bits to send the precise location information. Hence, we develop a bootstrap strategy. Instead of transmitting motion or location information or both, we send the unpredictable pixel information in two layers. The base layer sends the downsampled unpredictable information (containing locations and pixel values) to the decoder. This piece of information serves two purposes. It provides a rough, downsampled image of unpredictable pixels and contains information indicating which pixels are unpredictable. With a well-designed neural network, we generate a weighting map that merges predictable and unpredictable pixels to construct a good-quality target frame. Then, at the enhancement layer, we send additional information (bits) to improve the quality of the final coded image. Motivated by the above observations, we propose a learned video compression scheme without a motion coding module. It contains two image coding layers: the base and enhancement layers. The base layer consists of a video frame interpolator, a downsampling network, a neural network-based image compressor, and a super-resolution network (SR-Net). We adopt the efficient Conditional Augmented Normalization Flows (CANF) [15] for the image compressors at the base and enhancement layers. The frame interpolator produces the conditioning image for the base-layer CANF. The SR-Net upsamples the decoded base-layer image to recover a full-resolution image. The enhancement layer consists of a multi-frame merging network, a skip-mask generator, a skip-mode coding module and a CANF compressor. The multi-frame merging network combines all the image information available at both the encoder and the decoder to form a merged image. The merged image serves as the conditioning signal for the enhancement-layer CANF. To this end, we design a merging map (weights) generator, a neural network accepting inputs from the upsampled base-layer image and two motion-warped reference frames. To improve the coding efficiency of the enhancement-layer compressor, we design a skip-mode coding technique. A neural network generates a binary skip mask SMt according to the predicted motion information, the base-layer merged output, and the enhancement-layer hyperprior output. The skip mask specifies the locations of significant and insignificant latent samples. The insignificant samples are skipped from coding; at the decoder, they are replaced by the corresponding mean values predicted by the enhancement-layer hyperprior module. The detailed skip-mode coding operation is described in the supplementary document. Our contributions are summarized as follows. • We propose a two-layer B-frame coding framework that skips motion information from coding.
• We introduce a multi-frame merging network to combine the base-layer and enhancement-layer frames in constructing a high-quality predictor for the enhancement-layer CANF compressor. We implement the above ideas in an end-to-end learned B-frame video compression system. Because the input image to the base-layer compressor has a much smaller dimension, our system has much lower computational complexity (about 45% lower in terms of encoding MACs) than B-CANF [10], a typical hybrid coding system with similar coding components. |
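To make the skip-mode coding described in the entry above more concrete, the following is a minimal sketch (not the authors' implementation): latent samples flagged as insignificant by the binary skip mask are withheld from entropy coding, and the decoder substitutes the hyperprior-predicted means. The thresholding rule used to build the mask and all array names here are assumptions.

```python
# Hedged sketch of skip-mode latent coding (not the authors' code).
# Samples where the skip mask is 0 are not entropy-coded; the decoder
# fills them in with the hyperprior-predicted means.
import numpy as np

def encode_with_skip(latent, mask):
    """Keep only the significant samples (mask == 1) for entropy coding."""
    return latent[mask.astype(bool)]            # samples actually transmitted

def decode_with_skip(coded_samples, mask, prior_mean):
    """Rebuild the latent: coded samples where mask == 1, prior means elsewhere."""
    rec = prior_mean.copy()
    rec[mask.astype(bool)] = coded_samples
    return rec

# Toy example with a hypothetical 8x8 latent map.
rng = np.random.default_rng(0)
latent = rng.normal(size=(8, 8)).astype(np.float32)
prior_mean = rng.normal(scale=0.1, size=(8, 8)).astype(np.float32)
mask = (np.abs(latent - prior_mean) > 0.5).astype(np.uint8)   # assumed skip rule

sent = encode_with_skip(latent, mask)
recon = decode_with_skip(sent, mask, prior_mean)
print(f"transmitted {sent.size}/{latent.size} latent samples")
```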
Dong_Benchmarking_Robustness_of_3D_Object_Detection_to_Common_Corruptions_CVPR_2023 | Abstract 3D object detection is an important task in autonomous driving to perceive the surroundings. Despite the excellent performance, the existing 3D detectors lack the robustness to real-world corruptions caused by adverse weathers, sen-sor noises, etc., provoking concerns about the safety and reliability of autonomous driving systems. To comprehen-sively and rigorously benchmark the corruption robustness of 3D detectors, in this paper we design 27 types of common corruptions for both LiDAR and camera inputs considering real-world driving scenarios. By synthesizing these corrup-tions on public datasets, we establish three corruption ro-bustness benchmarks—KITTI-C, nuScenes-C, and Waymo-C. Then, we conduct large-scale experiments on 24 diverse 3D object detection models to evaluate their corruption ro-bustness. Based on the evaluation results, we draw several important findings, including: 1) motion-level corruptions are the most threatening ones that lead to significant perfor-mance drop of all models; 2) LiDAR-camera fusion models demonstrate better robustness; 3) camera-only models are extremely vulnerable to image corruptions, showing the in-dispensability of LiDAR point clouds. We release the bench-marks and codes at https://github.com/thu-ml/ 3D_Corruptions_AD to be helpful for future studies. | 1. Introduction As a fundamental task in autonomous driving, 3D object detection aims to identify objects of interest ( e.g., vehicles, pedestrians, or cyclists) in the surrounding environment by predicting their categories and the corresponding 3D bound-ing boxes. LiDAR and camera are two important types of sensors for 3D object detection, where the former provides *Corresponding authors.the depth information of road objects as sparse point clouds, while the latter captures abundant semantic information of the scene as color images. Based on the complementary na-ture of the two modalities, 3D object detection models can be categorized into LiDAR-only [29,47,48,60,69], camera-only [39, 56–58], and LiDAR-camera fusion [11,28, 34,53] models. Since autonomous driving is safety-critical, it is of paramount importance to assess the robustness of 3D object detectors under diverse circumstances before deployed. Although the recent progress of 3D object detection has led to significant improvements in typical benchmarks ( e.g., KITTI [17], nuScenes [6], and Waymo [51]), the existing models based on data-driven deep learning approaches of-ten generalize poorly to the corrupted data caused by, e.g., adverse weathers [21, 22, 27], sensor noises [7, 25, 44], and uncommon objects [9, 31], posing a formidable obstacle to safe and reliable autonomous driving [1]. To perform ro-bustness evaluation, recent works construct new datasets of road anomalies [9,23,31,40] or under extreme weather con-ditions [4, 15, 41]. Nevertheless, they are usually of small sizes due to the high data collection costs and the rareness of corner cases or adverse weathers. Other works synthesize common corruptions on clean datasets to benchmark robust-ness on image classification [25] and point cloud recogni-tion [44, 50], but they only consider several simple corrup-tions, which could be insufficient and unrealistic for 3D ob-ject detection. 
Therefore, it remains challenging to com-prehensively characterize different corruptions considering diverse driving scenarios and fairly evaluate corruption ro-bustness of existing models within a unified framework. In this paper, we systematically design 27types of com-mon corruptions in 3D object detection for both LiDAR and camera sensors to comprehensively and rigorously evalu-ate the corruption robustness of current 3D object detectors. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 1022 LiDAR Camera Alignment AlignmentWeather Weather Sensor Sensor Motion Motion Object ObjectSnow Rain Fog Strong Sunlight Gaussian Noise Impulse Noise Uniform Noise Motion Compensation Motion Blur Moving Object Local Density Dec Local CutoutLocal Impluse Noise Local Uniform Noise Local Gaussian Noise Scale Shear Rotation Density Decrease Cutout FOV LostLiDAR CrosstalkGaussian Noise Impulse Noise Uniform Noise Spatial Misalignment Temporal Misalignment Figure 1. An overview of 27 corruptions for 3D object detection, which are categorized into weather, sensor, motion, object, and alignment levels. As shown, some corruptions are effective for one modality, while the others are applied to both ( e.g.,Snow ,Moving Object ,Shear ). The corruptions are grouped into weather ,sensor ,motion , object , and alignment levels, covering the majority of real-world corruption cases, as demonstrated in Fig. 1. Most of them are specifically designed for autonomous driving ( e.g., motion-level ones), which have not been explored before. Following [25], every corruption has five severities, leading to a total number of 135distinct corruptions. By applying them to typical autonomous driving datasets—KITTI [17], nuScenes [6], and Waymo [51], we establish three corrup-tion robustness benchmarks— KITTI-C ,nuScenes-C , and Waymo-C . We hope that they can serve as general datasets for comprehensively benchmarking corruption robustness of 3D object detectors and facilitating future research. We conduct large-scale experiments to compare the cor-ruption robustness of existing 3D object detection models. Specifically, we evaluate 11 models on KITTI-C, 10 models on nuScenes-C, and 3 models on Waymo-C. The models are of great variety with different input modalities, representa-tion methods, and detection heads. Based on the evaluation results, we find that: 1) the corruption robustness of 3D ob-ject detectors is highly correlated with their clean accuracy; 2) motion-level corruptions impair the model performance most, while being rarely explored before; 3) LiDAR-camera fusion models are more resistant to corruptions, but there is a trade-off between robustness under image corruptions and point cloud corruptions of fusion models. More discussions are provided in Sec. 6. Moreover, we study data augmenta-tion strategies [14, 64, 67] as potential solutions to improve corruption robustness, but find that they provide a little ro-bustness gain, leaving robustness enhancement of 3D object detection an open problem for future research. |
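As a hedged illustration of the severity-parameterized corruption design discussed above (27 corruption types, each with five severities), the sketch below applies a LiDAR Gaussian-noise corruption whose standard deviation grows with severity. The noise schedule and array layout are assumptions for illustration, not the released benchmark code.

```python
# Hedged sketch: a severity-indexed point-cloud corruption in the spirit of the
# five-severity benchmark design above (not the released 3D_Corruptions_AD code).
import numpy as np

def lidar_gaussian_noise(points, severity):
    """Jitter LiDAR xyz coordinates; std grows with severity in {1,...,5}."""
    assert 1 <= severity <= 5
    std = [0.02, 0.04, 0.06, 0.08, 0.10][severity - 1]   # assumed schedule (meters)
    corrupted = points.copy()
    corrupted[:, :3] += np.random.normal(scale=std, size=points[:, :3].shape)
    return corrupted

# Example: corrupt a random scan of 2048 points (x, y, z, intensity) at all severities.
scan = np.random.rand(2048, 4).astype(np.float32)
for s in range(1, 6):
    noisy = lidar_gaussian_noise(scan, s)
    print(s, float(np.abs(noisy[:, :3] - scan[:, :3]).mean()))
```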
Chen_Seeing_Beyond_the_Brain_Conditional_Diffusion_Model_With_Sparse_Masked_CVPR_2023 | Abstract Decoding visual stimuli from brain recordings aims to deepen our understanding of the human visual system and build a solid foundation for bridging human and computer vision through the Brain-Computer Interface. However , reconstructing high-quality images with correct semantics from brain recordings is a challenging problem due to the complex underlying representations of brain signals and the scarcity of data annotations. In this work, we present MinD-Vis : Sparse Masked Bra inModeling with Double-Conditioned Latent Diffusion Model for Human Vision Decoding. Firstly, we learn an effective self-supervised representation of fMRI data using mask modeling in a large latent space inspired by the sparse coding of information in the primary visual cortex. Then by augmenting a latent diffusion model with double-conditioning, we show that MinD-Vis can reconstruct highly plausible images with semantically matching details from brain recordings using very few paired annotations. We benchmarked our model qual-itatively and quantitatively; the experimental results indicate that our method outperformed state-of-the-art in both semantic mapping (100-way semantic classification) and generation quality (FID) by 66% and41% respectively. An exhaustive ablation study was also conducted to analyze our framework. *Equal contributions. †Corresponding author (helen.zhou@nus.edu.sg)1. Introduction “What you think is what you see” . Human perception and prior knowledge are deeply intertwined in one’s mind [51]. Our perception of the world is determined not only by objective stimuli properties but also by our experiences, forming complex brain activities underlying our perception. Understanding these brain activities and recovering the encoded information is a key goal in cognitive neuroscience. Within this broad objective, decoding visual information is one of the challenging problems that are the focus of a large body of literature [22,26,34,67]. As a non-invasive and effective method to measure brain activities indirectly, functional Magnetic Resonance Imaging (fMRI) is usually used to recover visual information, such as the image classes [21, 39]. With the help of recent deep learning models, it is intriguing if the original visual stimuli can be directly recovered from corresponding fMRI [2, 46], especially with the guidance of biological principles [43, 52]. However, due to the lack of fMRI-image pairs and useful biological guidance when decoding complex neural activity from fMRI directly, reconstructed images are usually blurry and semantically meaningless. Thus it is crucial to learn effective and biological-valid representations for fMRI so that a clear and generalizable connection between brain activities and visual stimuli can be established with a few paired annotations. Moreover, individual variability in brain representations further complicates this problem. Individuals have unique This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 22710 brain activation patterns responding to the same visual stimulus (See Fig. 2). From the perspective of fMRI representation learn-ing, a powerful brain decoding algorithm should robustly rec-ognize features shared across the population over a background of individual variation [5, 21]. 
On the other hand, we should also expect decoding variances due to the variation in individual perceptions. Therefore, we aim to learn representations from a large-scale dataset with rich demographic compositions and relax the direct generation from fMRI to conditional synthesis al-lowing for sampling variance under the same semantic category. Self-supervised learning with pretext tasks in large datasets is a powerful paradigm to distill the model with context knowledge. A domain-specific downstream task ( e.g. classi-fication) is usually adopted to finetune the pre-trained model further [36, 58], especially when the downstream dataset is small. V arious pretext tasks are designed to benefit downstream tasks [23,66]. Among these methods, Masked Signal Modeling (MSM) has achieved promising results in both vision [18,62] and language understanding [8, 37] recently. At the same time, the probabilistic diffusion denoising model has shown its superior performance in content generation and training stability [9]. A strong generation ability is also desired in our task to decode faithful visual stimuli from various categories. Driven by the above analysis, we propose MinD-Vis : Sparse Masked Brain Modeling with Double-Conditioned Latent Dif-fusion Model for Human Vision Decoding, a framework that ex-ploits the power of large-scale representation learning and mim-ics the sparse coding of information in the brain [14], including the visual cortex [56]. Different from [18], we use a much larger representation-to-data-space ratio to boost the information capac-ity of learned representations. Our contributions are as follows: •We propose Sparse-Coded Masked Brain Modeling (SC-MBM), designed under biological guidance as an effective brain feature learner for vision decoding. •Augmenting the latent diffusion model with double condi-tioning (DC-LDM), we enforce stronger decoding consis-tency while allowing variance under the same semantics. •Integrating the representation ability of SC-MBM with the generation ability of DC-LDM, MinD-Vis generates more plausible images with better preserved semantic information compared with previous methods. •Quantitative and qualitative tests are performed on multiple datasets, including a new dataset that has not previously been used to evaluate this task. 2. Related Work Conventional Decoding Methods Conventional methods rely on training with fMRI and corresponding hierarchical image features extracted by a pre-trained VGG [21,46]. During testing, the predicted image features will either be used for classification or fed into a generative model like GAN [45] to Figure 2. Individual Differences in Regions Responding to Visual Stimuli. Masks of the regions of interest activating during the same visual task differ in location and size across subjects. The primary visual cortex at the left (red) and the right (orange) hemisphere are shown. reconstruct the original stimulus. Instead of directly learning the limited training pairs, [2] enabled unsupervised learning on unpaired fMRI and images with a reconfigurable autoencoder design. [16] further extended this method to images from diverse semantic categories. However, just as with conventional approaches, fMRI is used directly for training and decoding. In [31,33], a regression model was used to extract latent fMRI representation, which was then used to finetune a pre-trained conditional bigGAN for image decoding. 
Mind Reader [27] encoded fMRI signals into a pre-aligned vision-language latent space and used StyleGAN2 for image generation. These methods generate more plausible and semantically meaningful images. We note that there is parallel work to ours by Takagi and Nishimoto [13], who proposed a method for image reconstruction from fMRI using Stable Diffusion. Their approach involves decoding brain activities to text descriptions and converting them to natural images using stable diffusion. Masked Signal Modeling The power of MSM in learning representations from a large-scale dataset was first exploited in [8], which was later adapted to computer vision [18,60,62]. Successful applications to downstream tasks show that useful context knowledge is learned with MSM as a pretext task. In essence, MSM is a generalized denoising autoencoder that aims to recover the original data from the remaining after masking [4]. The portion of data to mask is different across data modalities, with an extremely high mask ratio ( 75%) usually used for visual signals [18]. In contrast, due to the disparity in information density, a low mask ratio ( 25%) is used in natural languages [8]. Diffusion Probabilistic Models Diffusion models [49] are emerging generative models that generate high-quality content. In its basic form [20], the diffusion model is a probabilistic model defined by a bi-directional Markov Chain of states. Two processes are transiting through the chain: (i)The forward diffusion process gradually adds noise to the data until it is fully destroyed to an isotropic Gaussian noise; (ii)The reverse process recovers the corrupted data by modeling a posterior distribution p(x)at each state and eventually obtains a sample in the original data distribution [20,49,50]. Formally, assume a Markov Chain with a fixed length T, then the reverse conditional probability can be expressed as q(xt−1|xt), where t= 1,...,T andxtis obtained by corrupting the image xt−1 with Gaussian noise. After parameterization, this conditional probability can be learned by optimizing a variational lower 22711 Figure 3. MinD-Vis .Stage A (left): Pre-train on fMRI with SC-MBM. We patchify, randomly mask the fMRI, and then tokenize them to large embed-dings. We train an autoencoder (EMBM andDMBM )to recover the masked patches. Stage B (right): Integration with the LDM through double con-ditioning. We project the fMRI latent (LfMRI )through two paths to | the LDM conditioning space with a latent dimension projector (PfMRI→Cond ). One path connects directly to cross-attention heads in the LDM. Another path adds the fMRI latent to time embeddings. The LDM operates on a low-dimensional, compressed version of the original image ( i.e. image latent), however, the original image is used in this figure for illustrations. bound which can be simplified to the following objective [20]: Lsimple t =Ex,ϵ∼N(0,1),t ∥ϵ−ϵθ(xt,t)∥2 2 , (1) where ϵθ(xt,t)is a set of denoising functions that are usually implemented as UNets [9,41,42]. We refer readers to [20] for detailed descriptions of the diffusion models. Latent Diffusion Model (LDM) Apart from the conventional diffusion models that generate samples in the original data space, another category of diffusion models that generate samples in the latent feature space has been proposed [41,48]. Operating in the latent feature space reduces the computational cost and introduces less spatial downsampling, giving better image synthesis quality. 
The LDM proposed in [41] consists of two components: (i) V ector Quantization (VQ) regularized [12] autoencoder that compresses images into lower-dimensional latent features and then reconstructs the images from features in the same space; (ii) UNet-based denoising model with attention modules. Incorporating attention mechanisms into the UNet allows the flexibility to condition image generation through key/value/query vectors during the Markov Chain transitions. 3. Methodology 3.1. Motivation and Overview In this subsection, we provide a detailed analysis of the fMRI data and elaborate on the motivations of our designs. (i)fMRI measures the brain blood-oxygen-level-dependent (BOLD) changes as 3D voxels that serve as a proxy for the under-lying changes in brain activity. Neighboring voxels often have similar amplitudes, indicating spatial redundancy in fMRI [53]. (ii)fMRI data is averaged across the time during which the stimulus is presented. A region of interest (ROI) of the averageddata is usually extracted as a 1D vector of voxels (in the visual processing hierarchy). The ROI size (voxel number) is generally smaller than the image size (pixel number). For example, [21] has about 4500 voxels (visual cortex), which is much smaller than a 256×256RGB image. This creates a large difference in dimensionality when transforming fMRI into images. (iii)fMRI data from different datasets may have significant domain shifts due to experimental conditions and scanner setups. Even with the same scan conditions, ROI size and location mismatch persist due to individual differences (See Fig. 2). Driven by this analysis, we propose MinD-Vis , designed with two sequential stages as outlined in Fig. 3. Briefly, in Stage A , fMRI representations are learned by an autoencoder trained in a large fMRI dataset with masked signal modeling as a pretext task. The learned representations will be used as a con-dition to guide the image-generation process in the next stage. InStage B , the pre-trained fMRI encoder is integrated with the LDM through cross-attention and time-step conditioning for con-ditional synthesis. In this stage, the encoder is jointly finetuned with cross-attention heads in the LDM using paired annotations. 3.2. Stage A: Sparse-Coded MBM (SC-MBM) Activity in the human brain involves non-linear interactions among 86 billion neuronal cells in the brain and are thus highly complex [32,40]. The fMRI measuring the BOLD signals is an indirect and aggregate measure of neuronal activities, which can be analyzed hierarchically with functional networks [1,6,59]. These functional networks comprised of voxels of fMRI data have implicit correlations with each other in response to external stimuli [54,68]. Therefore, learning these implicit correlations by recovering masked voxels will equip the pre-trained model with a deep contextual understanding of the fMRI data. 22712 Figure 4. Masked Brain Modeling. Mask ratio 0.75; 4500 voxels Following [18], we divide the vectorized voxels into patches which will be subsequently transformed into embeddings using a 1D convolutional layer with a stride equal to the patch size. The hemodynamic response and spatial smoothing functions in fMRI BOLD signal jointly cause spatial blurring, which creates spatial redundancy in fMRI data, like in natural images [11,47]. Due to the spatial redundancy, fMRI data can still be recovered even if a large portion is masked (See Fig. 4). 
Thus, in the first stage of MinD-Vis, we can mask a large portion of the fMRI patches to save computations without losing the learning power of masked modeling. Masked Image Modeling (MIM) uses the embedding-to-patch-size ratio around one [18], leading to a representation size similar to the original data size. However, we use a large embedding-to-patch-size ratio, which significantly increases the information capacity with a large fMRI representation space. This design also relates to the sparse coding of information in the brain, which has been proposed as a general strategy for the representation of sensory information [25]. We also adopt an asymmetric architecture as in [18]: the encoder is optimized to learn effective fMRI representations, while the decoder tries to predict the masked patches. Therefore, we make the decoder small in size, and it is discarded in Stage B as long as the pre-training converges. Visual Encoding and Brain-Inspired Sparse Coding Here, we explain the biological basis of using SC-MBM to learn representations of visual stimuli in the brain from the perspective of visual encoding mechanisms. Theoretical and empirical studies suggest that visual stimuli are sparsely encoded in the primary visual cortex [32, 38, 56], with most natural images activating only a portion of the neurons in the visual cortex. This strategy increases information transmission efficiency and creates minimal redundancy in the brain [38]. As a result, visual information of natural scenes can be reconstructed from a small portion of data collected from the primary visual cortex via different imaging modalities, including fMRI [15,64]. This observation is interesting for the computer vision community because the sparse coding could be an efficient way for vision encoding in computer vision as well [25,63]. Sparse coding is an encoding strategy that in essence uses over-complete bases to represent data, where more locality isgenerally enforced to generate smoother representations [57,65]. In SC-MBM, fMRI data are divided into patches to introduce locality constraints. Then each patch is encoded into a high-dimensional vector space with a size much larger than the original data space, thus creating an over-complete space for fMRI representation (See Appendix). Emulating the brain vision encoding, SC-MBM can be a biologically-valid and effective brain feature learner for fMRI decoding. 3.3. Stage B: Double-Conditioned LDM (DC-LDM) After the large-scale context learning in Stage A, the fMRI encoder transforms fMRI data into sparsely coded representa-tions with locality constraints. To further decode visual contents from this abstract representation and allow for sampling variance, we formulate the decoding task as a conditional synthesis problem and approach it with a pre-trained LDM. The LDM operates on the image latent space denoted by E(x) where xis an image in pixel space and E(·)is a VQ encoder. In our setting, we omit E(x)and use xdirectly to represent the la-tent variable of LDM for simplicity. Specifically, given the fMRI dataz, we aim to learn the reverse diffusion process formulated byq(xt−1|xt,z). As proposed in [41], conditional information is applied through cross-attention heads in the attention-based UNet, where CrossAttention (Q,K,V )=softmax QKT √ d , with Q=W(i) Qφi(xt), K=W(i) Kτθ(z), V=W(i) Vτθ(z). 
Here, τθis the fMRI encoder with a suitable dimension projec-tor,φi(xt)denotes intermediate values of the UNet and W(i) Q, W(i) K,W(i) Vare projector matrices with learnable parameters. Diversity and consistency are two opposite objectives when sampling a conditional generative model. Sampling diversity across various modalities such as label-to-image and text-to-image is very important in many image-generation tasks. However, the fMRI-to-image transition relies more on genera-tion consistency —decoded images from similar brain activities are expected to be semantically similar. Thus, a stronger conditioning mechanism is desired to ensure such generation consistency, especially for probabilistic diffusion models. In this way, we integrate the cross-attention conditioning with another conditioning method called the time steps condi-tioning [9] to provide stronger guidance for our task. In time steps conditioning, we add σθ(τθ(z))to time step embeddings, where σθ(·)is another suitable dimension projector. Time step embeddings are used in intermediate layers of the UNet, thus we haveφi(xt)=φi(xt,σθ(τθ(z))). We further reformulate the op-timization objective Eq. (1) to a double conditioning alternation: Lcond t=Ex,ϵ∼N(0,1),t ∥ϵ−ϵθ(xt,t,τ(z),σ(τ(z)))∥2 2 .(2) We omit the parameterization symbol θinτ(·)andσ(·)for simplicity. Additionally, we have τ(z)∈RM×dτandσ(τ(z))∈ R1×dt, where dτanddtare the latent dimensions and time em-bedding dimension respectively, and M |
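To make the double conditioning of the preceding entry (MinD-Vis) concrete, the sketch below combines the cross-attention conditioning softmax(QK^T/√d)V, with K and V projected from the fMRI latent τ(z), and the time-step path that adds σ(τ(z)) to the time embedding. All layer sizes, the token pooling, and module names are illustrative assumptions, not the released MinD-Vis code.

```python
# Hedged sketch of double conditioning (cross-attention + time-embedding path),
# following the formulas above; single-head, with assumed dimensions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DoubleCondition(nn.Module):
    def __init__(self, feat_dim=320, cond_dim=768, time_dim=1280, head_dim=64):
        super().__init__()
        self.w_q = nn.Linear(feat_dim, head_dim, bias=False)
        self.w_k = nn.Linear(cond_dim, head_dim, bias=False)
        self.w_v = nn.Linear(cond_dim, head_dim, bias=False)
        self.out = nn.Linear(head_dim, feat_dim)
        self.sigma = nn.Linear(cond_dim, time_dim)    # sigma(tau(z)) projector

    def forward(self, phi_x, tau_z, t_emb):
        # Cross-attention conditioning: softmax(Q K^T / sqrt(d)) V
        q, k, v = self.w_q(phi_x), self.w_k(tau_z), self.w_v(tau_z)
        attn = F.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
        phi_x = phi_x + self.out(attn @ v)
        # Time-step conditioning: add projected fMRI latent (pooled over tokens;
        # the pooling choice is an assumption) to the time embedding.
        t_emb = t_emb + self.sigma(tau_z.mean(dim=1))
        return phi_x, t_emb

# Toy shapes: batch 2, 64 UNet tokens, M=16 fMRI latent tokens.
block = DoubleCondition()
phi_x, t_emb = block(torch.randn(2, 64, 320), torch.randn(2, 16, 768), torch.randn(2, 1280))
print(phi_x.shape, t_emb.shape)
```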
Huang_Neural_Voting_Field_for_Camera-Space_3D_Hand_Pose_Estimation_CVPR_2023 | Abstract We present a unified framework for camera-space 3D hand pose estimation from a single RGB image based on 3D implicit representation. As opposed to recent works, most of which first adopt holistic or pixel-level dense regression to obtain relative 3D hand pose and then follow with complex second-stage operations for 3D global root or scale recovery, we propose a novel unified 3D dense regression scheme to estimate camera-space 3D hand pose via dense 3D point-wise voting in the camera frustum. Through direct dense modeling in the 3D domain inspired by Pixel-aligned Implicit Functions for detailed 3D reconstruction, our proposed Neural Voting Field (NVF) fully models 3D dense local evidence and hand global geometry, helping to alleviate common 2D-to-3D ambiguities. Specifically, for a 3D query point in the camera frustum and its pixel-aligned image feature, NVF, represented by a Multi-Layer Perceptron, regresses: (i) its signed distance to the hand surface; (ii) a set of 4D offset vectors (1D voting weight and 3D directional vector to each hand joint). Following a vote-casting scheme, 4D offset vectors from near-surface points are selected to calculate the 3D hand joint coordinates by a weighted average. Experiments demonstrate that NVF outperforms existing state-of-the-art algorithms on the FreiHAND dataset for camera-space 3D hand pose estimation. We also adapt NVF to the classic task of root-relative 3D hand pose estimation, for which NVF also obtains state-of-the-art results on the HO3D dataset. ∗Work done during Lin Huang's internship with Microsoft. 1. Introduction Monocular 3D hand pose estimation, which aims to recover 3D locations of hand joints from an RGB image, has attracted enormous attention and made remarkable progress in recent years. As a long-standing task in computer vision, it remains challenging due to the hand's highly articulated structure, large variations in orientation, severe (self-)occlusion, and inherent 2D-to-3D scale and depth ambiguity. Owing to the aforementioned difficulties, most existing works [4–7, 15, 19, 26, 32, 35, 36, 45, 52, 53, 59, 60, 63] focused on one aspect of this general problem, which is to estimate root-relative 3D hand pose (i.e., 3D joint coordinates relative to a pre-defined root joint, such as the hand wrist). While accurate 2D-to-3D root-relative pose estimation is essential for numerous applications in Virtual/Augmented Reality, there are various interactive tasks in which having root-relative hand joint coordinates alone is insufficient. For instance, being able to recover camera-space 3D hand joint coordinates in an AR view enables the user to directly use hands to manipulate virtual objects moving in 3D space. To recover robust camera-space 3D hand pose, there are two key design elements: (1) the ability to exploit dense local evidence. Specifically, as demonstrated in previous works [14, 16, 25, 29, 34, 42, 43, 47, 49, 54–56], dense regression-based methods are more effective than holistic regression-based counterparts for handling highly articulated 3D pose structure, attributed to their ability to maintain the spatial structure of the input data and fully exploit local evidence; (2) the ability to reason about 3D hand global geometry.
Table 1. Comparison of representative absolute 3D hand pose estimation schemes. Please refer to Sec. 2.1 for more details.
Method              First Stage           Second Stage
Iqbal et al. [29]   2D-Dense              Scale Estimation
ObMan [22]          Holistic              Root Estimation
I2L-MeshNet [42]    1D-Dense              Root Depth Estimation
CMR [8]             2D-Dense+SpiralConv   Registration
Hasson et al. [21]  Holistic              Model Fitting
NVF (Ours)          Unified 3D-Dense      Weighted Average
As shown in previous literature [13, 29, 33], given 2D evidence and camera intrinsic parameters, a reasonable understanding of the target object's 3D structure/geometry is crucial to alleviate 2D-to-3D depth ambiguity, which is the key to accurately locating the 3D hand pose in camera space. To fully integrate both elements into our algorithm design in a unified manner, we connect with Pixel-aligned Implicit Function (PIFu) [24, 27, 49, 50, 62]. Through direct dense modeling in the 3D domain with pixel-aligned local features, PIFu-based methods reconstruct highly detailed 3D human geometry from an RGB image in a unified way, showing the ability to model high-frequency local details such as clothing wrinkles while generating complete global geometry, including largely occluded regions such as the back of a person. Inspired by these results, we propose a novel unified 3D dense regression scheme based on a 3D implicit function for robust camera-space 3D hand pose estimation. Specifically, for each of the 3D query points densely sampled in the camera frustum and its pixel-aligned image feature, unlike PIFu predicting an occupancy value for each point, our proposed Neural Voting Field (NVF) regresses: (i) the signed distance between the point and the hand surface; (ii) a set of 4D offset vectors (1D voting weight and 3D directional vector from the point to each joint). Following a vote-casting scheme, 4D offset vectors from near-surface points (i.e., points for which the predicted signed distance is below a threshold) are selected to calculate the 3D hand joint coordinates by a weighted average. Most existing works for camera-space 3D hand pose estimation, as shown in Tab. 1, follow a two-stage estimation scheme. They first adopt holistic or pixel-level dense regression to obtain 2D and relative 3D hand poses and then follow with complex second-stage processing such as fitting, registration, or using a separate network for 3D global root location or scale estimation. NVF instead provides a unified solution via direct dense modeling in 3D camera space followed by a simple weighted average operation, which enables reasoning about 3D dense local evidence and hand global geometry. As shown in Fig. 1, NVF makes solid 3D point-wise predictions with a sound overall distribution of signed distance and voting weight even in highly occluded regions, leading to accurate camera-space pose estimation. In Sec. 4, we show that NVF noticeably outperforms two baselines based on holistic regression and 2D dense regression. Besides, NVF exhibits state-of-the-art performance for the task of camera-space 3D hand pose estimation on the FreiHAND dataset. We also adapt NVF to the classic task of root-relative 3D hand pose estimation, for which NVF also achieves state-of-the-art results on the HO3D dataset. Since estimating absolute 3D pose from an RGB image is an ill-posed problem due to scale and depth ambiguity [13, 29], in Sec. 4.4, we also provide an ablation analysis of hand scale based on results from NVF and the baselines. This work makes the following contributions: | 1.
We propose Neural Voting Field (NVF), as the first 3D implicit representation-based unified solution to estimate camera-space 3D hand pose. |
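A minimal sketch of NVF's vote-casting step described above: near-surface query points are selected by their predicted signed distance, and each contributes a 4D offset vote (1D weight plus 3D direction) that is averaged per joint. The distance threshold, array shapes, and normalization are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch of the weighted-average vote casting described above.
# Shapes and the signed-distance threshold are illustrative assumptions.
import numpy as np

def cast_votes(query_pts, signed_dist, weights, directions, threshold=0.01):
    """
    query_pts:   (N, 3)     3D query points sampled in the camera frustum
    signed_dist: (N,)       predicted signed distance of each point to the hand surface
    weights:     (N, J)     1D voting weight per joint
    directions:  (N, J, 3)  3D offset vector from each point to each joint
    returns:     (J, 3)     camera-space joint coordinates
    """
    near = np.abs(signed_dist) < threshold                  # keep near-surface points only
    w = weights[near]                                       # (K, J)
    votes = query_pts[near, None, :] + directions[near]     # (K, J, 3) per-point joint votes
    w = w / (w.sum(axis=0, keepdims=True) + 1e-8)           # normalize weights per joint
    return (w[..., None] * votes).sum(axis=0)               # weighted average -> (J, 3)

# Toy example with N=500 query points and J=21 hand joints.
N, J = 500, 21
joints = cast_votes(np.random.randn(N, 3), np.random.randn(N) * 0.02,
                    np.random.rand(N, J), np.random.randn(N, J, 3) * 0.05)
print(joints.shape)  # (21, 3)
```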
Jiang_Masked_and_Adaptive_Transformer_for_Exemplar_Based_Image_Translation_CVPR_2023 | Abstract We present a novel framework for exemplar based im-age translation. Recent advanced methods for this task mainly focus on establishing cross-domain semantic corre-spondence, which sequentially dominates image generation in the manner of local style control. Unfortunately, cross-domain semantic matching is challenging; and matching errors ultimately degrade the quality of generated images. To overcome this challenge, we improve the accuracy of matching on the one hand, and diminish the role of match-ing in image generation on the other hand. To achieve the former, we propose a masked and adaptive trans-former (MAT) for learning accurate cross-domain corre-spondence, and executing context-aware feature augmen-tation. To achieve the latter, we use source features of the input and global style codes of the exemplar, as sup-plementary information, for decoding an image. Besides, we devise a novel contrastive style learning method, for acquire quality-discriminative style representations, which in turn benefit high-quality image generation. Experimen-tal results show that our method, dubbed MATEBIT, per-forms considerably better than state-of-the-art methods, in diverse image translation tasks. The codes are available at https://github.com/AiArt-HDU/MATEBIT . | 1. Introduction Image-to-image translation aims at transfer images in a source domain to a target domain [16, 50]. Early stud-ies learn mappings directly by Generating Adversarial Net-works (GANs), and have shown great success in various ap-plications [2, 42]. Recently, exemplar based image transla-tion [29,30,45], where an exemplar image is used to control the style of translated images, has attracted a lot of attention. Such methods allow high flexibility and controllability, and have a wide range of potential applications in social net-works and metaverse. For example, people can transfer a *Corresponding Author Masked Corr. Full Corr. Exampler Source layer 1 layer 2 layer 3 layer 1 layer 2 layer 3 w/ Masked Corr. w/ Full Corr. CAM CAM0.840.880.920.961 0 1 2 3 4 # MAT blocksTexture Color Semantic 1519232731 0 1 2 3 4 # MAT blocksFID SWDw/o MAT w/o Style LossExemplar Source w/o AdaConv w/ Full Corr. Full w/o global z w/ CAST loss w/o skip conct. w/o ������ w/ �CAST Figure 1. Visualization of correspondence maps. The red point is the query position. Full Corr. andMasked Corr. denote the full correspondence [45] and masked one in our method, respectively. CAM denotes visualization by Class Activation Mapping [48]. facial sketch to an artistic portrait, in the style of oil paint-ings or avatars. Despite the remarkable progress, yielding high-fidelity images with consistent semantic and faithful styles remains a grand challenge. Early pioneering works [15, 21, 35] attempt to globally control the style of generated images. However, such meth-ods ignore spatial correlations between an input image and an exemplar, and may fail to produce faithful details. Re-cently, some advanced methods [25, 44, 45, 49] first estab-lish the cross-domain semantic correspondence between an input image and an exemplar, and then use it to warp the exemplar for controlling local style patterns. In these meth-ods, the quality of generated images relies heavily on the learned correspondence [39]. Unfortunately, cross-domain semantic matching is challenging, since there is no reliable supervision on correspondence learning [45]. 
As a result, potential matching errors ultimately lead to degraded arti-facts in generated images. To combat this challenge, we propose to boost the match-ing accuracy on one hand, and to diminish the role of match-ing in image generation on the other hand. Inspired by the great success of Transformers [6, 10, 26, 41], we first devise aMasked and Adaptive Transformer (MAT) for learning ac-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 22418 curate cross-domain correspondence and executing context-aware feature augmentation. Previous works [44, 45, 49] have used the vanilla attention mechanism [41] for learning full correspondence. However, the initial attention typically involves ambiguous correspondences (2nd row in Fig. 1). To mitigate these limitations, in MAT, we use a masked attention to distinguish the correspondence as reliable or not, and then reliability-adaptively aggregate representa-tions. Besides, the Feed-Forward Network (FFN) [41] in vanilla transformers neglects contextual correlations inside an image. We thus replace FFN by an adaptive convolution block [28], where the coordinate attention [12] and depth-wise separable convolution [5] are used to capture contex-tual correlations and to improve efficiency. With a joint consideration of matching reliability and contextual correla-tions, MAT gradually focuses on accurate correspondences and emphasizes on features of interest (3rd row in Fig. 1). In addition, to boost both the semantic consistency and style faithfulness, we supplementally use semantic features of the input image and global style codes of the exemplar for decoding an image. To this end, we first design our whole network following the U-Net architecture [16]. Besides, we devise a novel contrastive style learning (CSL) framework for acquiring discriminative style representations. Recently, Zhang et al. [47] propose a similar CSL method, where the target exemplar is used as a positive sample, and the other exemplars as negative ones. Differently, we use low-quality images, generated during early training stages, as negative samples. In this way, our style codes are desired to dis-criminate not only subtle differences in style, but also those in perceptual quality. Ultimately, the learned global style codes, cooperating with the local style control induced by MAT, in turn benefit high-quality image generation. With the proposed techniques above, our full model, dubbed MATEBIT, diminishes the impact of position-wise matching on image quality, and integrates both local and global style control for image generation. Experimental results show that MATEBIT generates considerably more plausible images than previous state-of-the-art methods, in diverse image translation tasks. In addition, comprehen-sive ablation studies demonstrate the effectiveness of our proposed components. Finally, we perform interesting ap-plications of photo-to-painting translation and Chinese ink paintings generation. |
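As a rough, hedged sketch of the masked attention idea in MAT above, reliable correspondences can be kept and unreliable ones suppressed before aggregating exemplar features. The reliability rule used here (top-k attention scores per query) is an assumed stand-in; the paper's exact masking criterion is not reproduced.

```python
# Hedged sketch of masked cross-domain attention: unreliable correspondences are
# suppressed before aggregation. The top-k reliability rule is an assumption.
import torch
import torch.nn.functional as F

def masked_attention(q_feat, k_feat, v_feat, topk=8):
    # q_feat: (N, C) input-domain features; k_feat/v_feat: (M, C) exemplar features
    scores = q_feat @ k_feat.t() / q_feat.shape[-1] ** 0.5       # (N, M) correspondence scores
    keep = scores.topk(topk, dim=-1).indices                     # matches judged reliable per query
    mask = torch.full_like(scores, float("-inf")).scatter(1, keep, 0.0)
    attn = F.softmax(scores + mask, dim=-1)                      # masked attention weights
    return attn @ v_feat                                         # warped exemplar features

warped = masked_attention(torch.randn(1024, 256), torch.randn(1024, 256), torch.randn(1024, 256))
print(warped.shape)  # torch.Size([1024, 256])
```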
Chang_Pointersect_Neural_Rendering_With_Cloud-Ray_Intersection_CVPR_2023 | Abstract We propose a novel method that renders point clouds as if they are surfaces. The proposed method is differentiable and requires no scene-specific optimization. This unique capabil-ity enables, out-of-the-box, surface normal estimation, ren-dering room-scale point clouds, inverse rendering, and ray tracing with global illumination. Unlike existing work that focuses on converting point clouds to other representations— e.g., surfaces or implicit functions—our key idea is to directly infer the intersection of a light ray with the underlying sur-face represented by the given point cloud. Specifically, we train a set transformer that, given a small number of local neighbor points along a light ray, provides the intersection point, the surface normal, and the material blending weights, which are used to render the outcome of this light ray. Lo-calizing the problem into small neighborhoods enables us to train a model with only 48 meshes and apply it to un-seen point clouds. Our model achieves higher estimation accuracy than state-of-the-art surface reconstruction and point-cloud rendering methods on three test sets. When ap-plied to room-scale point clouds, without any scene-specific optimization, the model achieves competitive quality with the state-of-the-art novel-view rendering methods. Moreover, we demonstrate ability to render and manipulate Lidar-scanned point clouds such as lighting control and object insertion. | 1. Introduction Point clouds are abundant. They are samples of surfaces, and can be captured by sensors such as Lidar, continuous-wave time-of-flight, and stereo camera setups. Point-cloud representation provides a straightforward connection to the location of the surfaces in space, and thus is an intuitive primitive to represent geometry [15]. Despite being ubiquitous, a core limitation of point clouds is that they are non-trivial to render. Each point in the point cloud occupies no volume—one cannot render them into images as is. Therefore, existing methods either as-*Work done at Apple. Corresponding author: jenhao chang@apple.com (a) Illustration of the proposed method (b) Rendered results of a sphere represented by points Figure 1. We propose pointersect , a novel method to perform cloud-ray intersection. (a) Instead of projecting points onto the sensor, suffering from holes, we trace rays from the sensor and estimate the intersection point pbetween a ray and the underlying surface represented by the points. We additionally estimate the surface normal ⃗ n, and the convex combination weights of points near the ray to blend material or color, wj. (b) The capability to perform cloud-ray intersection enables us to render point clouds with the standard ray tracing method, i.e., path tracing. The result shows the effect of global illumination, e.g., the cast shadow and reflection. sign each point a volume-occupying shape, e.g., an ori-ented disk [ 36,53], a sphere [ 28], or turn it into other shape representations like meshes [ 27] or implicit func-tions [ 14,18,22,30,35]. However, it is difficult to de-termine the ideal shape, e.g., the radius of spheres or disks, for rendering. Small shapes would cause holes, while large shapes would cause blobby renderings, producing artifacts in the rasterized images. While the artifacts can be alleviated This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. 
Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 8359 by finetuning the rasterized images with an additional neural rendering step, the operation often requires per-scene train-ing [3,9,28]. On the other hand, transforming point clouds into other shape representations complicates the pipeline and prevents gradients passing back to the point cloud through the new shape representation (in the case of inverse ren-dering). For example, turning point clouds into a mesh or a Signed Distance Function (SDF) [ 18,30] would require any changes on the point cloud to trigger retraining of these representations, which would clearly be prohibitive. Recent works [ 34,51] raise new ideas to directly perform ray-casting on point clouds. For each scene, these methods first learn a feature embedding for each point; then they aggregate features near each camera ray to predict colors. However, these methods require per-scene training since the feature embedding is scene-dependent. In our case, we aim for a solution that does not require scene-specific optimiza-tion and can be applied to any scene. In this work, we propose pointersect , an alternative that can directly ray-trace point clouds by allowing one to use point clouds as surface primitives, as shown in Figure 1. That is, we propose to train a neural network that provides the surface intersection point, surface normal, and material blending weights—the necessary information to render (or ray trace) a surface—given a point cloud and a query ray. Implementing this idea requires paying attention to de-tails. A core observation is that the problem to find the intersecting surface point is SE(3) equivariant—any rigid transform on the input ( i.e., the point cloud and the query ray) should result in the rigid transformation of the output (i.e., the intersection point and the surface normal). Naively training a neural network would require the network to learn this equivariance, which is non-trivial [ 5,13,45]. Instead, we opt to remove the need for learning this equivariance by canonicalizing the input according to the queried light ray. In addition, pointersect should be invariant to the order in which the points are provided—we thus utilize a transformer to learn the set function. It is important to note that finding intersection points be-tween rays and surfaces is a highly atomic and localized problem which can be solved only with local information. Thus, we design our method to only consider nearby points, where the surface would have been, and how the surface texture and normal can be derived from these nearby points. By constraining the input to be a small number ( ∼100) of neighboring points associated to a query ray, our method can be trained on only a handful of meshes, then be applied to unseen point clouds. As our experiments show, while only trained on 48 meshes, pointersect significantly improves the Poisson surface reconstruction, a scene-specific optimiza-tion method, on three test datasets. We also demonstrate the generality and differentiability of pointersect on various ap-plications: novel-view synthesis on room-scale point clouds,inverse rendering, and ray tracing with global illumination. Finally, we render room-scale Lidar-scanned point clouds and showcase the capability to directly render edited scenes, without any scene-specific optimization. 
In short, our contributions are: •We propose pointersect , a neural network performing the cloud-ray intersection operation. Pointersect is easy to train, and once learned, can be applied to unseen point clouds—we evaluate the same model on three test datasets. •We demonstrate various applications with pointersect, in-cluding room-scale point cloud rendering, relighting, in-verse rendering, and ray tracing. •We apply pointersect on Lidar-scanned point clouds and demonstrate novel-view synthesis and scene editing. We encourage the readers to examine results and videos of novel-view rendering, relighting, and scene editing in the supplemental material and website (https://machinelearning. apple.com/research/pointersect). |
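The ray canonicalization that pointersect relies on (so the network does not have to learn SE(3) equivariance) can be sketched as below: the neighbor points of a query ray are re-expressed in a frame centered at the ray origin with its z-axis along the ray direction. The specific choice of the remaining two axes is an assumption; any consistent construction would serve the same purpose.

```python
# Hedged sketch: canonicalize neighbor points into a ray-aligned frame before the
# set transformer, making the learned cloud-ray intersection invariant to rigid
# transforms by construction. The frame construction below is an assumption.
import numpy as np

def canonicalize(points, ray_origin, ray_dir):
    """Express `points` (K, 3) in a frame centered at ray_origin with z-axis = ray_dir."""
    z = ray_dir / np.linalg.norm(ray_dir)
    helper = np.array([1.0, 0.0, 0.0]) if abs(z[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    x = np.cross(helper, z)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    R = np.stack([x, y, z], axis=0)           # world -> ray-frame rotation
    return (points - ray_origin) @ R.T

# Toy example: 100 neighbor points around a ray along +z.
pts = np.random.randn(100, 3)
local = canonicalize(pts, ray_origin=np.zeros(3), ray_dir=np.array([0.0, 0.0, 1.0]))
# In the ray frame, the query ray is simply the +z axis; the network then predicts
# the intersection depth along z plus the surface normal and blending weights.
print(local.shape)
```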
Chow_STDLens_Model_Hijacking-Resilient_Federated_Learning_for_Object_Detection_CVPR_2023 | Abstract Federated Learning (FL) has been gaining popularity as a collaborative learning framework to train deep learning-based object detection models over a distributed popula-tion of clients. Despite its advantages, FL is vulnerable to model hijacking. The attacker can control how the ob-ject detection system should misbehave by implanting Tro-janed gradients using only a small number of compromised clients in the collaborative learning process. This paper introduces STDLens, a principled approach to safeguard-ing FL against such attacks. We first investigate existing mitigation mechanisms and analyze their failures caused by the inherent errors in spatial clustering analysis on gradi-ents. Based on the insights, we introduce a three-tier foren-sic framework to identify and expel Trojaned gradients and reclaim the performance over the course of FL. We con-sider three types of adaptive attacks and demonstrate the ro-bustness of STDLens against advanced adversaries. Exten-sive experiments show that STDLens can protect FL against different model hijacking attacks and outperform existing methods in identifying and removing Trojaned gradients with significantly higher precision and much lower false-positive rates. The source code is available at https: //github.com/git-disl/STDLens . | 1. Introduction Federated Learning (FL) for object detection has at-tracted numerous applications [12], especially in healthcare, with strict privacy regulations protecting patient medical records [25, 27]. Instead of using a centralized data cura-tor, FL can train a global object detection model with a dis-tributed population of clients. Each client only shares its fixed-size gradient (updated model) parameters (e.g., 246.9 MB for YOLOv3 [15]) with the FL server for aggregation while keeping its private raw data local (e.g., terabytes of videos) [13]. Such a paradigm lowers the bar for knowl-edge sharing and eases the recruitment of contributors, but it becomes vulnerable to model hijacking [9, 21]. Model hijacking aims at interfering with the training pro-Scenario Object Detection Results AP person AP car Benign 52.43 74.12 Class-Poison Victim: Person 38.22 (↓27%)71.79 (↓3%) BBox-Poison Victim: Person 18.06 (↓66%)72.82 (↓2%) Objn-Poison Victim: Person 37.61 (↓28%)71.59 (↓3%) Table 1. FL-trained detectors can be hijacked by perception poi-soning to misdetect objects of designated classes (e.g., person) in three different ways ( 2nd to 4th rows) [3], while objects of non-victim classes (e.g., car) have negligible performance degradation. cess of a machine learning model and causes it to misbe-have at the deployment stage [17]. The FL-trained global model can be indirectly hijacked by a small number of com-promised clients who share Trojaned gradients to gradually confuse the learning trajectory of FL [10]. Such gradients can be generated in black-box through data poisoning. The attacker only has to poison the local training data owned by a compromised client, such as changing the ground-truth label of certain objects by dirty-label poisoning [21] or im-planting a backdoor to trigger the malfunctioning of a hi-jacked model [1, 2, 8, 20]. The FL software will take the poisoned local training data and unintentionally generate malicious gradients for sharing. With the growing num-ber of real-world applications, recent work has begun to understand the hijacking of FL-based object detection sys-tems [3]. 
As shown in Table 1, the adversary can designate how the hijacked model should misdetect objects of cer-tain classes (e.g., person) while ensuring other objects can still be precisely detected (e.g., car). This fine-grained con-trol opens the door for an adversary to design a stealthy at-tack configuration to hurt the FL system yet remains hardly noticeable by the owner [5], leading to catastrophic conse-quences such as car crashes. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 16343 This paper introduces STDLens, a three-tier defense methodology against model hijacking attacks, with the fol-lowing original contributions. First, we analyze existing de-fenses based on spatial signature analysis and show their ineffectiveness in protecting FL. Second, we introduce a pipeline with per-client spatio-temporal signature analysis to identify Trojaned gradients, track their contributors, re-voke their subscriptions, and reclaim the detection perfor-mance. Third, we present a density-based confidence in-spection mechanism to manage the spatio-temporal uncer-tainty. It avoids purging benign clients contributing useful learning signals for the FL system. Finally, we extend per-ception poisoning with three types of adaptiveness for an-alyzing STDLens in countering advanced adversaries. Ex-tensive experiments show that STDLens outperforms com-petitive defense methods by a large margin in defense pre-cision with a low false-positive rate, reducing the accuracy drop under attack from 34.47% to0.24%, maintaining the object detection accuracy on par with the performance un-der the benign scenario. |
Grainger_PaCa-ViT_Learning_Patch-to-Cluster_Attention_in_Vision_Transformers_CVPR_2023 | Abstract Vision Transformers (ViTs) are built on the assumption of treating image patches as “visual tokens” and learn patch-to-patch attention. The patch embedding based to-kenizer has a semantic gap with respect to its counterpart, the textual tokenizer. The patch-to-patch attention suffers from the quadratic complexity issue, and also makes it non-trivial to explain learned ViTs. To address these issues in ViT, this paper proposes to learn Patch-to-Cluster atten-tion (PaCa) in ViT. Queries in our PaCa-ViT starts with patches, while keys and values are directly based on clus-tering (with a predefined small number of clusters). The clusters are learned end-to-end, leading to better tokenizers and inducing joint clustering-for-attention and attention-for-clustering for better and interpretable models. The quadratic complexity is relaxed to linear complexity. The proposed PaCa module is used in designing efficient and in-terpretable ViT backbones and semantic segmentation head networks. In experiments, the proposed methods are tested on ImageNet-1k image classification, MS-COCO object de-tection and instance segmentation and MIT-ADE20k se-mantic segmentation. Compared with the prior art, it ob-tains better performance in all the three benchmarks than the SWin [ 32] and the PVTs [ 47,48] by significant mar-gins in ImageNet-1k and MIT-ADE20k. It is also signifi-cantly more efficient than PVT models in MS-COCO and MIT-ADE20k due to the linear complexity. The learned clusters are semantically meaningful. Code and model checkpoints are available at https://github.com/ iVMCL/PaCaViT . | 1. Introduction A picture is worth a thousand words. Seeking solu-tions that can bridge the semantic gap between those words and raw image data has long been, and remains, a grand challenge in computer vision, machine learning and AI. Deep learning has revolutionized the field of computer vi-sion in the past decade. More recently, Vision Transform-ers (ViTs) [ 13,45] have witnessed remarkable progress in *T. Wu is the corresponding author. !cluster heatmaps (e.g., !=100) iii) The Proposed PaCa: Patch-to-Cluster Attention i) Vanilla Patch-to-PatchAttentionPatch-basedQueriesPatch-basedKey & Value !patches"patches ii) Patch-to-Reduced-PatchAttention Reducedcomplexity:%×'×(ℎ×*) Reduced-Patch-based Key & Valueℎpatches$patches Cluster-basedKey & Value…… Linearcomplexity:%×'×! Simple forward interpretability by directly visualizing the cluster heatmapsQuadraticcomplexity: (%×')! Figure 1. i) The vanilla patch-to-patch self-attention [ 13,45] di-rectly leverages image patch embeddings as visual tokens and suf-fers from its quadratic complexity. Every Query (e.g., the patches in the blue grid) needs to interact with every Key. ii) To address the quadratic complexity, one popular method is to leverage spa-tial reduction (e.g., implemented via a convolution with a stride r>1) in computing the Key and the Value [ 47,48]. It still per-forms patch-to-patch attention, but enjoys a reduced complexity. iii) We propose Patch-to-Cluster attention (PaCa) in this paper. A predefined number of Mcluster assignments is first learned and then used in computing the Key and Value, resulting in not only linear complexity, but also more meaningful visual tokens. computer vision. ViTs are built on the basis of treating image patches as “visual tokens” using patch embedding and learning patch-to-patch attention throughout. 
Unlike the textual tokens that are provided as inputs in natural lan-guage processing, visual tokens need to be learned first and continuously refined for more effective learning of ViTs. The patch embedding based tokenizer is a workaround in practice and has a semantic gap with respect to its counter-part, the textual tokenizer. On one hand, the well-known issue of the quadratic complexity of vanilla Transformer models and the 2D spatial nature of images create a non-trivial task of developing ViTs that are applicable for many vision problems including image classification, object de-tection and semantic segmentation. On the other hand, explaining trained ViTs requires non-trivial and sophisti-cated methods [ 4] following the trend of eXplainable AI (XAI) [ 18] that has been extensively studied with convolu-tional neural networks. To address the quadratic complexity, there have been This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 18568 Attention (b) The Proposed Patch-to-Cluster based Self-AttentionAttention Spatial reduction,e.g., (a) Spatial-Reduction based Self-AttentionFigure 2. Illustration of (a) the spatial reduction based self-attention and (b) the proposed PaCa module in vision applications, where (HW)represents the number of patches in the input with HandWthe height and width respectively, and Ma predefined small number of clusters (e.g., M=1 0 0 ). See text for details. two main variants developed with great success: One is to exploit the vanilla Transformer model locally using a predefined window size (e.g., 7⇥7) such as the SWin-Transformer [ 32] and the nested variant of ViT [ 62]. The other is to exploit another patch embedding at a coarser level (i.e., nested patch embedding) to reduce the sequence length (i.e., spatial reduction) before computing the keys and values (while keeping the query length unchanged) [ 47, 48,52], as illustrated in Fig. 1(left-bottom) and Fig. 2(a). Most of these variants follow the patch-to-patch attention setup used in the vanilla isotropic ViT models [ 13]. Al-though existing ViT variants have shown great results, patch embedding based approaches may not be the best way of learning visual tokens due to the underlying predefined sub-sampling of the image lattice. Additionally, patch-to-patch attention does not account for the spatial redundancy found in images due to their compositional nature and reusable parts [ 15]. Thus, it is worth exploring alternative meth-ods towards learning more semantically meaningful visual tokens. A question arises naturally: Can we rethink the patch-to-patch attention mechanism in vision tasks to hit three “birds” (reducing complexity, facilitating better vi-sual tokenizer and enabling simple forward explainability) with one stone? As shown in Fig. 1(right) and Fig. 2(b), this pa-per proposes to learn Patch-to-Cluster attention (PaCa) , which provides a straightforward way to address the afore-mentioned question: Given an input sequence XN,C(e.g., N=H·W), a light-weight clustering module finds mean-ingful clusters by first computing the cluster assignment, CN,M(Eqn. 4and Eqn. 5) with a predefined small number of clusters M(e.g., M= 100 ). Then, Mlatent “visual tokens”, ZM,Care formed via simple matrix multiplication between CT N,M(transposed) and XN,C. 
In inference, we can directly visualize the clusters CN,Mas heatmaps to reveal what has been captured by the trained models (Fig. 1, right-bottom). The proposed PaCa module induces jointly learn-ing clustering-for-attention and attention-for-clustering inViT models. We study four aspects of the PaCa module: •Where to compute the cluster assignments? Consider the stage-wise pyramidical architecture (Fig. 3) of assem-bling ViT blocks [ 47,48], a stage consists of a number of blocks. We test two settings: block-wise by comput-ing the cluster assignment for each block, or stage-wise by computing it only in the first block in a stage and then sharing it with the remaining blocks. Both give compa-rable performance. The latter is more efficient when the model becomes deeper. •How to compute the cluster assignment? We also test two settings: using 2D convolution or Multi-Layer Perceptron (MLP) based implementation. Both have similar perfor-mance. The latter is more generic and sheds light on ex-ploiting PaCa for more general Token-to-Cluster attention (ToCa) in a domain agnostic way. •How to leverage an external clustering teacher? We in-vestigate a method of exploiting a lightweight convolu-tion neural network (Fig. 4) in learning the cluster assign-ments that are shared by all blocks in a stage. It gives some interesting observations, and potentially pave a way for distilling large foundation models [ 3]. •What if the number of clusters is known? We further ex-tend the PaCa module in designing an effective head sub-network for dense prediction tasks such as image seman-tic segmentation (Fig. 5) where the number of clusters M is available based on the ground-truth number of classes and the learned cluster assignment CN,Mhas direct su-pervision. The PaCa segmentation head significantly im-proves the performance with reduced model complexity. In experiments, the proposed PaCa-ViT model is tested on the ImageNet-1k [ 12] image classification, the MS-COCO object detection and instance segmentation [ 31] and the MIT-ADE20k semantic segmentation [ 64]. It ob-tains consistently better performance across the three tasks than some strong baseline models including the Swin-Transformers [ 32] and the PVTv2 models [ 47]. |
Bahl_Affordances_From_Human_Videos_as_a_Versatile_Representation_for_Robotics_CVPR_2023 | Abstract Building a robot that can understand and learn to inter-act by watching humans has inspired several vision prob-lems. However, despite some successful results on static datasets, it remains unclear how current models can be used on a robot directly. In this paper, we aim to bridge this gap by leveraging videos of human interactions in an environ-ment centric manner. Utilizing internet videos of human behavior, we train a visual affordance model that estimates where andhow in the scene a human is likely to interact. The structure of these behavioral affordances directly en-ables the robot to perform many complex tasks. We show how to seamlessly integrate our affordance model with four robot learning paradigms including offline imitation learn-ing, exploration, goal-conditioned learning, and action pa-rameterization for reinforcement learning. We show the effi-cacy of our approach, which we call Vision-Robotics Bridge (VRB) across 4 real world environments, over 10 different tasks, and 2 robotic platforms operating in the wild. The meaning or value of a thing consists of what it affords... what we perceive when we look at objects are their affordances, not their qualities. J.J. Gibson (1979)1. Introduction Imagine standing in a brand-new kitchen. Before taking even a single action, we already have a good understand-ing of how most objects should be manipulated. This un-derstanding goes beyond semantics as we have a belief of where to hold objects and which direction to move them in, allowing us to interact with it. For instance, the oven is opened by pulling the handle downwards, the tap should be turned sideways, drawers are to be pulled outwards, and light switches are turned on with a flick. While things don’t always work as imagined and some exploration might be needed, but humans heavily rely on such visual affordances of objects to efficiently perform day-to-day tasks across en-vironments [34, 35]. Extracting such actionable knowledge from videos has long inspired the vision community. More recently, with improving performance on static datasets, the field is increasingly adopting a broader ‘active’ definition of vision through research in egocentric visual understanding and visual affordances from videos of human interaction. With deep learning, methods can now predict heatmaps of where a human would interact [38, 75] or seg-⋆equal contribution This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 13778 Learning Visual AffordancesDeployment on RobotTrajectory Networkcontact point heatmaptrajectoryAffordance ModelAffordance ModelSceneencoder trajectorycontact points Figure 2. VRB Overview . First, we learn an actionable representation of visual affordances from human videos: the model predicts contact points and trajectory waypoints with supervision from future frames. For robot deployment, we query the affordance model and convert its outputs to 3D actions to execute. mentation of the object being interacted with [101]. Despite being motivated by the goal of enabling downstream robotic tasks, prior methods for affordance learning are tested pri-marily on human video datasets with no physical robot or in-the-wild experiments. 
Without integration with a robotic system, even the most basic question of how the affordance should be defined or represented remains unanswered, let alone evaluating its performance. On the contrary, most robot learning approaches, whether imitation or reinforcement learning, approach a new task or a new environment tabula rasa . At best, the vi-sual representation might be pretrained on some dataset [65, 79, 91, 100, 115, 117]. However, visual representations are only a small part of the larger problem. In robotics, es-pecially in continuous control, the state space complexity grows exponentially with actions. Thus, even with perfect perception, knowing what to do is difficult. Given an im-age, current computer vision approaches can label most of the objects, and even tell us approximately where they are but this is not sufficient for the robot to perform the task. It also needs to know where and how to manipulate the object, and figuring this out from scratch in every new environment is virtually impossible for all but the simplest of tasks. How do we alleviate this clear gap between visual learning and robotics? In this paper, we propose to rethink visual affordances as a means to bridge vision and robotics. We argue that rich video datasets of humans interacting can offer a lot more actionable information beyond just replacing ImageNet as a pretrained visual encoder for robot learning. Particularly, human interactions are a rich source of how a wide range of objects can be held and what are useful ways to manipulate their state. However, several challenges hinder the smooth integration of vision and robotics. We group them into three parts. First , what is an actionable way to represent affor-dances? Second , how to learn this representation in a data-driven and scalable manner? Third , how to adapt visual af-fordances for deployment across robot learning paradigms? To answer the first question, we find that contact points and post-contact trajectories are excellent robot-centric repre-sentations of visual affordances, as well as modeling the inherent multi-modality of possible interactions. We make effective use of egocentric datasets in order to tackle the sec-ond question. In particular, we reformulate the data to focus on frames without humans for predicting contact points and the post-contact trajectories. To extract free supervision for this prediction, we utilize off-the-shelf tools for estimating egomotion, human pose, and hand-object interaction. Fi-nally, we show how to seamlessly integrate these affordance priors with different kinds of robot learning paradigms. We call our approach Vision-Robotics Bridge (VRB) due to its core goal of bridging vision and robotics. We evaluate both the quality of our affordances and their usefulness for 4 different robotic paradigms – imitation and offline learning, exploration, visual goal-reaching, and us-ing the affordance model as a parameterization for action spaces. These are studied via extensive and rigorous real-world experiments on physical robots which span across 10 real-world tasks, 4 environments, and 2 robot hardware plat-forms. Many of these tasks are performed in-the-wild out-side of lab environments (see Figure 1). We find that VRB outperforms other state-of-the-art human hand-object affor-dance models, and enables high-performance robot learning in the wild without requiring any simulation. Finally, we also observe that our affordance model learns a good visual representation for robotics as a byproduct. 
We highlight that all the evaluations are performed in the real world span-ning several hundred hours of robot running time which is a very large-scale evaluation in robotics. 13779 2. Related Work Affordance and Interaction Learning from Videos. Given a scene, one can predict interactions using geometry-based rules for objects via 3D scene understanding [42, 74, 127], estimating 3D physical attributes [8, 25, 40, 130] or through segmentation models trained on semantic interac-tions [97, 98]. These approaches, however, require special-ized datasets. More general interaction information can be learned from large human datasets [18–20, 39, 59, 63], to predict object information [29, 129] (RGB & 3D) [10], graphs [23] or environment information [27, 76] such as heatmaps [38, 75]. Approaches also track human poses, es-pecially hands [14,18,61,62,96,101,120]. Similarly, in ac-tion anticipation and human motion forecasting, high-level semantic or low level actions are predicted using visual his-tory [1,11,19,21,30–32,36,39,44–46,52,55,68,71,95,112, 113]. Since our observations only have robot arms and no human hands, we adopt a robot-first formulation, only mod-eling the contact point and post-contact phase of interaction. Visual Robot Learning. Learning control from visual in-puts directly is an important challenge. Previous works have leveraged spatial structures of convolutional networks to di-rectly output locations for grasping and pushing from just an image of the scene [87, 123, 124], which can limit the type of tasks possible. It is also possible to directly learn control end-to-end [50,58] which while general, is quite sample in-efficient in the real world. It has been common to introduce some form of prior derived from human knowledge, which could take the form of corrective interactions [22, 41, 64], structured policy spaces [2, 7, 7, 17, 48, 80, 89, 94, 102, 118], offline robotics data [24, 53, 54, 67, 92], using pretrained vi-sual representations [79,84,100,116,117] or human demon-strations [6, 15, 99, 102, 103, 107]. Learning Manipulation from Humans. Extensive work has been done on Learning from Demonstrations (LfD) where human supervision is usually provided through tele-operation (of a joystick or VR interface) [73, 109, 126] or kinesthetic teaching, where a user physically moves the robot arm [13, 16, 26, 66, 89]. With both these ap-proaches, collecting demonstrations is tedious and slow. Recently, works have shown alternate ways to provide hu-man demonstrations, via hand pose estimation and retarget-ing [5, 90, 104, 106, 119] in robot hands, but are mostly restricted to tabletop setups. First and third person human demonstrations have been used to train policies directly, transferred either via a handheld gripper [82,108,121] or us-ing online adaptation [6]. In contrast to directly mimicking a demonstration, we learn robot-centric affordances from passive human videos that provide a great initialization for downstream robot tasks, unlike previous work which re-quire in-domain demonstrations.3. Vision-Robotics Bridge (VRB) Our goal is to learn affordance priors from large-scale egocentric videos of human interaction, and then use them to expedite robot learning in the wild. This requires ad-dressing the three questions discussed in Sec. 1 about how to best represent affordances, how to extract them and how to use them across robot learning paradigms. 3.1. Actionable Representation for Affordances Affordances are only meaningful if there is an actor to execute them. 
For example, a chair has a sitting affordance only if it is possible for some person to sit on it. This prop-erty makes it clear that the most natural way to extract hu-man affordances is by watching how people interact with the world. However, what is the right object-centric rep-resentation for affordances: is it a heatmap of where the human makes contact? Is it the pre and postcondition of the object? Is it a description of the human interaction? All of these are correct answers and have been studied in prior works [42, 62, 75]. However, the affordance parameteriza-tion should be amenable to deployment on robots. If we want the robot to a priori understand how to manip-ulate a pan (Fig. 1, 4) without any interaction, then a seem-ingly simple solution is to exactly model human movement from videos [62], but this leads to a human-centric model and will not generalize well because human morphology is starkly different from that of robots. Instead, we take a first-principles approach driven by the needs of robot learning. Knowledge of a robot body is often known, hence reach-ing a point in the 3D space is feasible using motion plan-ning [51, 56, 57]. The difficul | ty is in figuring out where to interact (e.g. the handle of the lid) and then how to move after the contact is made (e.g., move the lid upwards). Inspired by this, we adopt contact points and post-contact trajectories as a simple actionable repre-sentation of visual affordance that can be easily transferred to robots. We use the notation cfor a contact point and τ for post-contact trajectory, both in the pixel space. Specif-ically, τ=f(It, ht), where Itis the image at timestep t, htis the human hand location in pixel space, and fis a learned model. We find that our affordance representation outperforms prior formulations across robots. Notably, the candτabstraction makes the affordance prior agnostic to the morphological differences across robots. 3.2. Learning Affordances from Egocentric Videos The next question is how to extract candτfrom human videos in a scalable data-driven manner while dealing with the presence of human body or hand in the visual input. VRB tackles this through a robot-first approach. 13780 Offline Data Collection Action Spacesk-Nearest NeighborsBehavior Cloningrewardaction s, a, rs, a, rs, a, rstateExplorationrewardaction s, a, rs, a, rs, a, rstate Goal-Conditioned Learningrewardaction s, a, rs, a, rs, a, rstateGoal Images rewardaction s, a, rs, a, rs, a, rstate 4InitializationDeployment132422 3412a*as𝝅Figure 3. Robot Learning Paradigms : (a) Offline Data Collection – Used to investigate the quality of the collected data. (b) Exploration – The robot needs to use intrinsic rewards to improve (c) Goal-Conditioned Learning – A desired task is specified via a goal image, used to provide reward. (d) Action Spaces – Reduced action spaces are easier to search and allow for discrete control. 3.2.1 Extracting Affordances from Human Videos Consider a video V, say of a person opening a door, consist-ing of Tframes i.e.V={I1, ..., I T}. We have a twofold objective — find where andwhen the contact happened, and estimate how the hand moved after contact was made. This is used to supervise the predictive model fθ(It)that out-puts contact points and post-contact trajectories. To do so, we utilize a widely-adopted hand-object detection model trained on human video data [101]. For each image It, this produces 2D bounding boxes of the hand ht, and a discrete contact variable ot. 
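As a concrete, simplified illustration of how such detector outputs can be turned into labels, the snippet below finds the first contact frame and collects the hand-box centers from that point on as a stand-in for the post-contact trajectory tau. The detector interface and the use of raw box centers (rather than the full procedure with skin segmentation and motion compensation) are simplifying assumptions.

```python
import numpy as np

def extract_affordance_labels(hand_boxes, contact_flags):
    """Turn per-frame hand detections into (t_contact, trajectory) labels.

    hand_boxes   : (T, 4) array of hand boxes [x1, y1, x2, y2] from an
                   off-the-shelf hand-object detector (assumed given).
    contact_flags: (T,) boolean array, True when the detector reports the
                   hand is in contact with an object (the o_t variable).
    Returns the first contact frame index and the 2D hand-box centers from
    that frame onward; this mirrors the description, not the released code.
    """
    contact_idx = np.flatnonzero(contact_flags)
    if contact_idx.size == 0:
        return None, None                      # no interaction in this clip
    t_contact = int(contact_idx[0])            # first frame where contact occurs
    boxes = hand_boxes[t_contact:]
    centers = np.stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                        (boxes[:, 1] + boxes[:, 3]) / 2], axis=1)  # (T', 2)
    return t_contact, centers

# toy example: contact starts at frame 3
T = 6
boxes = np.tile(np.array([10., 10., 30., 40.]), (T, 1)) + np.arange(T)[:, None]
flags = np.array([False, False, False, True, True, True])
print(extract_affordance_labels(boxes, flags))
```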
Using this information, we filter for frames where otindicates a contact in each video, and find the first timestep where contact occurs, tcontact . The pixel-space positions of the hand {ht}t′ tcontactconsti-tute the post-contact trajectory ( τ). To extract contact points c, we use the corresponding hand bounding box, and apply skin color segmentation to find all points at the periphery of the hand segment that intersect with the bounding box of the object in contact. This gives us a set of Ncontact points {ci}N, where Ncan differ depending on the im-age, object, scene and type of interaction. How should the contact points be aggregated to train our affordance model (fθ)? Some options include predicting the mean of {ci}N, or randomly sampling ci. However, we seek to encourage multi-modality in the predictions, since a scene likely con-tains multiple possible interactions. To enable this, we fit a Gaussian mixture model (GMM) to the points. Let us de-fine a distribution over contact points to be p(c). We fit the GMM parameters ( µk,Σk) and weights αk. p(c) = argmax µ1,...,µK,Σ1,...,ΣKNX i=1KX k=1αkN(ci|µk,Σk)(1)We use these parameters of the above defined GMM with Kclusters as targets for fθ. To summarize, 1) we find the first timestep where contact occurs in the human video, tcontact 2) For c, we fit a GMM to the contact points around the hand at frame Itcontact, parameterized by µk,Σkand 3) we find the post-contact trajectory of the 2D hand bounding box{ht}t′ tcontactforτ. Accounting for Camera Motion over Time: Consider a per-son opening a door. Not only do the person’s hands move but their body and hence their head also move closer to the handle and then away from it. Therefore, we need to com-pensate for this egomotion of the human head/camera from timetcontact tot′. We address this by using the homogra-phy matrix at timestep t,Htto project the points back into the coordinates of the starting frame. We obtain the ho-mography matrix by matching features between consecu-tive frames. We then use this to produce the transformed trajectory τ=Ht◦ {ht}t′ tcontact. Addressing Human-Robot Visual Domain Shift: The train-ing videos contain human body or hand in the frame but the human will not be present in downstream robotics task, generating domain shift. We deal with this issue with a sim-ple yet elegant trick: we extract affordances in the frames with humans but then map those affordances back to the first frame when human was yet to enter the scene. For videos in which a human is always in frame, we either crop out the human in the initial frame if there is no interaction yet or discard the frame if the human is always in contact. We compute the contact points and post-contact trajectories with respect to this human-less frame via the same homog-raphy procedure described above. This human-less frame is then used to condition our affordance model. 13781 3.2.2 Training Affordance Model Conditioned on the input image, the affordance model is trained to predict the extracted labels for contact points and post-contact trajectories. However, naive joint prediction does not work well as the learning problem is inherently multi-modal. For instance, one would pick up a cup differ-ently from a table depending on whether the goal is to pour it into the sink or take a sip from it. We handle this by pre-dicting multiple heatmaps for interaction points using the same model, building a spatial probability distribution. 
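A minimal example of the GMM fitting step in Eqn. (1), using scikit-learn on the 2D contact points. The number of components K and the covariance type are illustrative choices rather than the paper's settings, and the homography-based motion compensation is assumed to have been applied to the points already.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_contact_gmm(contact_points, k=3, seed=0):
    """Fit a K-component GMM to 2D contact points (cf. Eqn. 1).

    contact_points: (N, 2) pixel locations where the hand meets the object
    at the first contact frame. The resulting means, covariances and
    weights serve as the multi-modal target for the affordance network.
    """
    gmm = GaussianMixture(n_components=k, covariance_type="full",
                          random_state=seed).fit(contact_points)
    return gmm.means_, gmm.covariances_, gmm.weights_

# toy example: two clusters of candidate contact points
pts = np.concatenate([np.random.randn(50, 2) + [40, 60],
                      np.random.randn(50, 2) + [120, 90]])
means, covs, weights = fit_contact_gmm(pts, k=2)
print(means.shape, covs.shape, weights)     # (2, 2) (2, 2, 2) [~0.5, ~0.5]
```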
For ease of notation, we use (·)θas a catch-all for all parameterized modules and use fθto denote our complete network. Fig. 2 shows an overview of our model. Input im-ageItis encoded using a ResNet [43] visual encoder gconv θ to give a spatial latent representation zt, i.e., gconv θ(It) =zt. We then project this latent ztintoKprobability distribu-tions or heatmaps using deconvolutional layers; concretely, Ht=gdeconv θ(zt). Using a spatial softmax, σ2D, we get the estimation of the labels for GMM means, i.e.,µk. We found that keeping the covariance matrices fixed gave better results. Formally, the loss for contact point estimation is: Lcontact = µi−σ2D gdeconv θ (gconv θ(It)) 2(2) To estimate post-contact trajectory, we train a trajec-tory prediction network, Tθ, based on the latent represen-tation zt. We find that it is easier to optimize for rela-tiveshifts, i.e., the direction of movement instead of ab-solute locations, assuming that the first point ˆw0is 0, since the contact points are already spatially grounded. Based on the success of Transformers for sequential prediction, we employ self-attention blocks [111] and train to optimize Ltraj=∥τ− Tθ(zt)∥2. In a given scene, there are many objects a human could interact with, which may or may not be present in the training data. We tackle this uncertainty and avoid spurious correlations by sampling local crops of Itaround the contact points. These serve as the effective input to our network fθand enables better generalization. 3.3. Robot Learning from Visual Affordances Instead of finding a particular way to use our affordance model for robotics, we show that it can bootstrap existing robot learning methods. In particular, we consider four dif-ferent robotics paradigms as shown in Fig. 3. A. Imitation Learning from Offline Data Collection Imitation learning is conventionally performed on data col-lected by human demonstrations, teleoperation, or scripted policies – all of which are expensive and only allow for small-scale data collection [4, 6, 12, 58, 103, 122]. On the other hand, using the affordance model, fθ(·)to guide the robot has a high probability of yielding ‘interesting’ inter-actions.Given an image input It, the affordance model produces (c, τ) =fθ(It), and we store {(It,(c, τ))}in a dataset D. After sufficient data has been collected, we can use imita-tion learning to learn control policies, often to complete a specific task. A common approach for task specification is to use goal images that show the desired configuration of objects. Given the goal image, the k-Nearest Neighbors (k-NN) approach involves filtering trajectories in Dbased on their distance to the goal image in feature space. Fur-ther, the top (filtered) trajectories can be used for behavior cloning (BC) by training a policy, π(c, τ|It), We run both k-NN and behavior cloning on datasets collected by differ-ent methods in Sec. 4.1. Using the same IL approach for different datasets is also useful for comparing the relative quality of the data. This is because higher relative success for a particular dataset implies that the data is qualitatively better, given that the same IL algorithm achieves worse per-formance on a different dataset. This indicates that the goal (or similar images) were likely seen during data collection. B. Reward-Free Exploration The goal of exploration is to discover as many diverse skills as possible which can aid the robot in solving downstream tasks. 
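Returning to the affordance-model training described just above (Sec. 3.2.2), here is a hedged sketch of the two heads: a deconvolutional head producing K heatmaps whose spatial softmax yields the contact means used in L_contact (Eqn. 2), and a small Transformer encoder regressing relative waypoints for L_traj. The encoder backbone is omitted and all layer sizes are guesses; this is an outline of the data flow, not the released architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def soft_argmax_2d(heatmaps):
    """Spatial softmax: (B, K, H, W) logits -> (B, K, 2) expected (x, y) in [0, 1]."""
    b, k, h, w = heatmaps.shape
    probs = heatmaps.flatten(2).softmax(dim=-1).view(b, k, h, w)
    ys = torch.linspace(0, 1, h, device=heatmaps.device)
    xs = torch.linspace(0, 1, w, device=heatmaps.device)
    ex = (probs.sum(dim=2) * xs).sum(dim=-1)   # expected x per heatmap
    ey = (probs.sum(dim=3) * ys).sum(dim=-1)   # expected y per heatmap
    return torch.stack([ex, ey], dim=-1)

class AffordanceHeadsSketch(nn.Module):
    """Contact-point and trajectory heads on top of an encoder feature map."""
    def __init__(self, feat_dim=512, k=3, horizon=5):
        super().__init__()
        self.deconv = nn.Sequential(               # K interaction heatmaps
            nn.ConvTranspose2d(feat_dim, 128, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(128, k, 1))
        enc_layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8,
                                               batch_first=True)
        self.traj_enc = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.traj_out = nn.Linear(feat_dim, 2 * horizon)   # relative 2D waypoints

    def forward(self, z):                          # z: (B, C, H, W) encoder feature
        heat = self.deconv(z)                      # (B, K, 2H, 2W)
        mu = soft_argmax_2d(heat)                  # contact means for L_contact
        tok = z.flatten(2).transpose(1, 2)         # (B, HW, C) tokens
        traj = self.traj_out(self.traj_enc(tok).mean(dim=1))
        return mu, traj.view(z.size(0), -1, 2)     # (B, K, 2), (B, horizon, 2)

feat = torch.randn(2, 512, 7, 7)                   # e.g. conv features of a crop
mu, traj = AffordanceHeadsSketch()(feat)
print(mu.shape, traj.shape)                        # (2, 3, 2) (2, 5, 2)
# training would then minimize something like:
#   L = F.mse_loss(mu, target_mu) + F.mse_loss(traj, target_tau)
```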
Exploration meth-ods are usually guided by intrinsic rewards that are self-generated by the robotic agent, and are not specific to any task [9, 47, 49, 60, 69, 81, 85, 88, 93, 110]. However, start-ing exploration from scratch is too inefficient in the real world, as the robot can spend an extremely large amount of time trying to explore and still not learn meaningful skills to solve tasks desired by humans. Here our affordance model can be greatly beneficial by bootstrapping the exploration from the predicted affordances allowing the agent to focus on parts of the scene likely to be of interest to humans. To operationalize this, we first use the affordance model fθ(.) for data-collection. We then rank all the trajectories col-lected using a task-agnostic exploration metric, and fit a distribution hto the (c, τ)values of the top trajectories. For subsequent data collection, we sample from hwith some probability, and otherwise use the affordance model f. This process can then be repeated, and the elite-fitting scheme will bootstrap from highly exploratory trajectories to im-prove exploration even further. For the exploration met-ric in our experiments, we maximize environment change EC(Ii, Ij) =||ϕ(Ii)−ϕ(Ij)||2, (similar to previous explo-ration approaches [6, 83]) between first and last images in the trajectory, where ϕmasks the robot and the loss is only taken on non-masked pixels. C. Goal-Conditioned Learning While exploring the environment can lead to interesting skills, consider a robot that already knows its goal. Using this knowledge ( e.g. an image of the opened door), it supervise its policy search. Goal images are frequently used to specify rewards in RL [3, 33, 37, 70, 77, 78, 86, 114, 131]. Using our affordance 13782 Figure 4. Qualitative affordance model outputs for VRB,HOI [62],Hotspots [38] and HAP [38], showing the predicted contact point region, and post-grasp trajectory (green arrow for VRB, red for HOI [62]). We can see that VRB produces the most meaningful affordances. model can expedite the process of solving goal-specified tasks. Similar to the exploration setting, we rank trajec-tories and fit a distribution hto the ( c, τ) values of the top trajectories, but here the metric is to minimize distance to the goal image Ig. The metric used in our experiments is to minimize EC (IT, Ig), where ITis the last image in th |
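The exploration and goal-conditioned variants described above share the same elite-fitting loop: rank executed rollouts by the chosen metric (environment change, or negative distance to the goal image), fit a simple distribution h over the (c, tau) parameters of the best ones, and sample from h with some probability, otherwise falling back to the affordance model. The sketch below uses an axis-wise Gaussian for h and flattened (c, tau) vectors; both are illustrative simplifications.

```python
import numpy as np

def elite_fit(trajectories, scores, top_frac=0.2):
    """Fit a diagonal Gaussian over the (c, tau) parameters of the best rollouts.

    trajectories: (N, D) array, each row a flattened (contact point, waypoints).
    scores      : (N,) metric being maximized (e.g. environment change, or the
                  negative feature distance to a goal image).
    """
    k = max(1, int(top_frac * len(scores)))
    elite = trajectories[np.argsort(scores)[-k:]]          # top-k rollouts
    return elite.mean(axis=0), elite.std(axis=0) + 1e-6    # distribution h

def propose_action(h, affordance_sample, p_reuse=0.5, rng=np.random):
    """Sample the next (c, tau) either from h or from the affordance model."""
    mu, sigma = h
    if rng.rand() < p_reuse:
        return rng.normal(mu, sigma)            # exploit past good behavior
    return affordance_sample                    # fall back to the f_theta output

rollouts = np.random.randn(100, 8)              # 100 rollouts, 8-D flattened (c, tau)
scores = np.random.rand(100)
h = elite_fit(rollouts, scores)
print(propose_action(h, affordance_sample=np.zeros(8)).shape)   # (8,)
```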
Chen_Unsupervised_Inference_of_Signed_Distance_Functions_From_Single_Sparse_Point_CVPR_2023 | Abstract It is vital to infer signed distance functions (SDFs) from 3D point clouds. The latest methods rely on generalizing the priors learned from large scale supervision. Howev-er, the learned priors do not generalize well to various ge-ometric variations that are unseen during training, espe-cially for extremely sparse point clouds. To resolve this issue, we present a neural network to directly infer SDFs from single sparse point clouds without using signed dis-tance supervision, learned priors or even normals. Our in-sight here is to learn surface parameterization and SDFs inference in an end-to-end manner. To make up the spar-sity, we leverage parameterized surfaces as a coarse sur-face sampler to provide many coarse surface estimations in training iterations, according to which we mine super-vision and our thin plate splines (TPS) based network in-fers SDFs as smooth functions in a statistical way. Our method significantly improves the generalization ability and accuracy in unseen point clouds. Our experimental result-s show our advantages over the state-of-the-art method-s in surface reconstruction for sparse point clouds under synthetic datasets and real scans.The code is available at https://github.com/chenchao15/NeuralTPS . | 1. Introduction Signed distance functions (SDFs) have been a popular 3D representation that shows impressive performance in various tasks [1–4, 7,12,15,17,21,22,30,31,34,35,41,42, 44,47,50,57,62,64,68,69,77,80,84]. An SDF describes a signed distance field as a mapping from a coordinate to a signed distance, and represents a surface as a level set of The corresponding author is Yu-Shen Liu. This work was supported by National Key R&D Program of China (2022YFC3800600), the Nation-al Natural Science Foundation of China (62272263, 62072268), and in part by Tsinghua-Kuaishou Institute of Future Media Data.the field. We can learn SDFs from signed distance super-vision using coordinate-based neural networks. However, obtaining the signed distance supervision requires continu-ous surfaces such as water-tight manifolds, hence it is still challenging to infer signed distance supervision from raw point clouds due to the discrete character. Current methods [17, 21,28,30,36,41,50,51,55,56,58, 62,64] mainly leverage priors to infer SDFs for point cloud-s. They learn priors from well established signed distance supervision around point clouds during training, and then generalize the learned priors to infer SDFs for unseen point clouds during testing. Although local priors learned at a part level [7, 9,31,45,65,73] improve the generalization of global priors learned at a shape level [17, 21,30,41,50,60,62,64], the geometric variations that local priors can cover are still limited. Hence, some methods [1–3, 14,22,44,80,84] try to directly infer SDFs from single point clouds using various strategies [1, 2,12,22,44,84]. However, they require dense point clouds to assure the inference performance, which drastically limits their performance with sparse point cloud-s in real scans. Therefore, how to infer SDFs from sparse point clouds to achieve better generalization is still a chal-lenge. To overcome this challenge, we introduce a neural net-work to infer SDFs from single sparse point clouds. 
Our novelty lies in the way of inferring SDFs without signed distance supervision, learned priors or even normals, which significantly improves the generalization ability and accuracy in unseen point clouds. We achieve this by learning surface parameterization and SDF inference in an end-to-end manner using a neural network that overfits a single sparse point cloud. To make up for the sparsity, the end-to-end learning turns parameterized surfaces into a coarse surface sampler which produces many coarse surface estimations on the fly to statistically infer the SDF. To target extremely sparse point clouds, we parameterize the surface of a point cloud as a single patch on a 2D plane, where 2D samples can be mapped to 3D points that lead to a coarse surface estimation. We further leverage the estimated coarse surface as a reference to infer the SDF based on thin plate splines (TPS) in the feature space, which produces smooth signed distance fields. Our method can statistically infer the SDFs from the permutation of coarse surfaces in different iterations, which reduces the effect of inaccuracy brought by each single coarse surface. Our method outperforms the latest methods under the widely used benchmarks. Our contributions are listed below. i) We introduce a neural network to infer SDFs from single sparse point clouds without using signed distance supervision, learned priors or even normals. ii) We justify the feasibility of learning surface parameterization and inferring SDFs from sparse point clouds in an end-to-end manner. We provide a novel perspective to use surface parameterization to mine supervision. iii) Our method outperforms the state-of-the-art methods in surface reconstruction for sparse point clouds under the widely used benchmarks. |
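As a very loose sketch of the two ingredients described above, a surface parameterization that maps 2D samples on a single planar patch to 3D points, and a coordinate network that predicts signed distances, the code below overfits both to one sparse cloud. The losses shown (a Chamfer term to the input points and a simple near-surface consistency term) are placeholders standing in for the paper's statistical supervision mined from permuted coarse surfaces, and the TPS-based feature interpolation is omitted entirely; treat this as an outline of the data flow, not the method.

```python
import torch
import torch.nn as nn

class SurfaceParam(nn.Module):
    """Maps 2D samples on a single planar patch to 3D surface points."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 3))
    def forward(self, uv):                 # uv: (S, 2) samples in [0, 1]^2
        return self.net(uv)                # (S, 3) coarse surface estimate

class SDFNet(nn.Module):
    """Coordinate MLP q -> signed distance (the TPS feature step is omitted)."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))
    def forward(self, q):
        return self.net(q).squeeze(-1)

def chamfer(a, b):
    """Symmetric Chamfer distance between two point sets (N, 3) and (M, 3)."""
    d = torch.cdist(a, b)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

# one illustrative overfitting step on a single sparse cloud `pts` (P, 3)
pts = torch.rand(500, 3)
param, sdf = SurfaceParam(), SDFNet()
opt = torch.optim.Adam(list(param.parameters()) + list(sdf.parameters()), 1e-3)
uv = torch.rand(2000, 2)
coarse = param(uv)                                    # coarse surface sample
queries = coarse + 0.02 * torch.randn_like(coarse)    # points near the coarse surface
loss = chamfer(coarse, pts) + sdf(queries).abs().mean()   # placeholder losses only
opt.zero_grad(); loss.backward(); opt.step()
```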
Chen_SparseViT_Revisiting_Activation_Sparsity_for_Efficient_High-Resolution_Vision_Transformer_CVPR_2023 | Abstract High-resolution images enable neural networks to learn richer visual representations. However, this improved per-formance comes at the cost of growing computational com-plexity, hindering their usage in latency-sensitive applica-tions. As not all pixels are equal, skipping computations for less-important regions offers a simple and effective mea-sure to reduce the computation. This, however, is hard to be translated into actual speedup for CNNs since it breaks the regularity of the dense convolution workload. In this paper, we introduce SparseViT that revisits activation spar-sity for recent window-based vision transformers (ViTs). As window attentions are naturally batched over blocks, actual speedup with window activation pruning becomes possible: i.e.,∼50% latency reduction with 60% sparsity. Different layers should be assigned with different pruning ratios due to their diverse sensitivities and computational costs. We intro-duce sparsity-aware adaptation and apply the evolutionary search to efficiently find the optimal layerwise sparsity con-figuration within the vast search space. SparseViT achieves speedups of 1.5×,1.4×, and 1.3×compared to its dense counterpart in monocular 3D object detection, 2D instance segmentation, and 2D semantic segmentation, respectively, with negligible to no loss of accuracy. | 1. Introduction With the advancement of image sensors, high-resolution images become more and more accessible: e.g., recent mo-bile phones are able to capture 100-megapixel photos. The increased image resolution offers great details and enables neural network models to learn richer visual representations and achieve better recognition quality. This, however, comes at the cost of linearly-growing computational complexity, making them less deployable for resource-constrained appli-cations ( e.g., mobile vision, autonomous driving). ∗indicates equal contributions (listed in alphabetical order). (a) Direct Downsample: Lower Resolution (0.5x), Dense (100%) (b) Window Activation Pruning: Higher Resolution (1.0x), Sparse (25%) Figure 1. Sparse, high-resolution features are far more informative than dense, low-resolution ones. Compared with direct downsam-pling, activation pruning can retain important details at a higher resolution, which is essential for most image recognition tasks. The simplest solution to address this challenge is to down-sample the image to a lower resolution. However, this will drop the fine details captured from the high-resolution sen-sor. What a waste! The missing information will bottleneck the model’s performance upper bound, especially for small object detection and dense prediction tasks. For instance, the detection accuracy of a monocular 3D object detector will degrade by more than 5% in mAP by reducing the height and width by 1.6 ×*. Such a large gap cannot be easily recovered by scaling the model capacity up. Dropping details uniformly at all positions is clearly sub-optimal as not all pixels are equally informative (Figure 1a). Within an image, the pixels that contain detailed object fea-tures are more important than the background pixels. Moti-*BEVDet [19] (with Swin Transformer [33] as backbone) achieves 31.2 mAP with 256 ×704 resolution and 25.90 mAP with 160 ×440 resolution. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. 
Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 2061 vated by this, a very natural idea is to skip computations for less-important regions ( i.e., activation pruning). However, activation sparsity cannot be easily translated into the actual speedup on general-purpose hardware ( e.g., GPU) for CNNs. This is because sparse activation will introduce randomly dis-tributed and unbalanced zeros during computing and cause computing unit under-utilization [54]. Even with dedicated system support [40], a high sparsity is typically required to realize speedup, which will hurt the model’s accuracy. Recently, 2D vision transformers (ViTs) have achieved tremendous progress. Among them, Swin Transformer [33] is a representative work that generalizes well across different visual perception tasks (such as image classification, object detection, and semantic segmentation). Our paper revisits the activation sparsity in the context of window-based ViTs. Different from convolutions, window attentions are naturally batched over windows, making real speedup possible with window-level activation pruning. We re-implement the other layers in the model ( i.e.,FFNs and LNs) to also execute at the window level. As a result, we are able to achieve around 50% latency reduction with 60% window activation sparsity. Within a neural network, different layers have different impacts on efficiency and accuracy, which advocates for a non-uniform layerwise sparsity configuration: e.g., we may prune layers with larger computation and lower sensitivity more, while pruning layers with smaller computation and higher sensitivity less. To this end, we make use of the evo-lutionary search to explore the best per-layer pruning ratios under a resource constraint. We also propose sparsity-aware adaptation by randomly pruning a different subset of the ac-tivations at each iteration. This effectively adapts the model to activation sparsity and avoids the expensive re-training of every candidate within the large search space. Our Sparse-ViT achieves speedups of 1.5×,1.4×, and 1.3×compared to its dense counterpart in monocular 3D object detection, 2D instance segmentation, and 2D semantic segmentation, respectively, with negligible to no loss of accuracy. |
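A hedged sketch of window-level activation pruning as described above: score each non-overlapping window, keep the most active (1 - sparsity) fraction, and run window attention only on the kept windows, which stay naturally batched. The mean-magnitude scoring rule used here is one plausible choice and may not match the paper's criterion, and the layerwise sparsity ratios would in practice come from the evolutionary search rather than a fixed value.

```python
import torch
import torch.nn as nn

def prune_windows(x, window_size=7, sparsity=0.6):
    """Keep only the most active windows of a feature map.

    x: (B, C, H, W) with H and W divisible by window_size.
    Returns kept window tokens (B, K, win*win, C) and their indices, with
    K = round((1 - sparsity) * num_windows).
    """
    b, c, h, w = x.shape
    ws = window_size
    wins = x.view(b, c, h // ws, ws, w // ws, ws)
    wins = wins.permute(0, 2, 4, 3, 5, 1).reshape(b, -1, ws * ws, c)  # (B, N, P, C)
    scores = wins.norm(dim=-1).mean(dim=-1)                           # (B, N) window importance
    k = max(1, round((1.0 - sparsity) * wins.size(1)))
    idx = scores.topk(k, dim=1).indices                               # (B, K)
    kept = torch.gather(wins, 1,
                        idx[..., None, None].expand(-1, -1, ws * ws, c))
    return kept, idx

# toy check: 60% sparsity on a 56x56 map with 7x7 windows keeps 26 of 64 windows
feat = torch.randn(2, 96, 56, 56)
kept, idx = prune_windows(feat)
attn = nn.MultiheadAttention(96, 4, batch_first=True)
flat = kept.flatten(0, 1)                     # batch the kept windows together
out, _ = attn(flat, flat, flat)               # window attention on kept windows only
print(kept.shape, out.shape)                  # (2, 26, 49, 96) (52, 49, 96)
```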
Chen_Cascade_Evidential_Learning_for_Open-World_Weakly-Supervised_Temporal_Action_Localization_CVPR_2023 | Abstract Targeting at recognizing and localizing action instances with only video-level labels during training, Weakly-supervised Temporal Action Localization (WTAL) hasachieved significant progress in recent years. However , liv-ing in the dynamically changing open world where unknownactions constantly spring up, the closed-set assumption ofexisting WTAL methods is invalid. Compared with tradi-tional open-set recognition tasks, Open-world WTAL (OW-TAL) is challenging since not only are the annotations ofunknown samples unavailable, but also the fine-grained an-notations of known action instances can only be inferred ambiguously from the video category labels. To address this problem, we propose a Cascade Evidential Learning frame-work at an evidence level, which targets at OWTAL for thefirst time. Our method jointly leverages multi-scale tem-poral contexts and knowledge-guided prototype informa-tion to progressively collect cascade and enhanced evidencefor known action, unknown action, and background sepa-ration. Extensive experiments conducted on THUMOS-14and ActivityNet-v1.3 verify the effectiveness of our method. Besides the classification metrics adopted by previous open-set recognition methods, we also evaluate our method on lo-calization metrics which are more reasonable for OWTAL. | 1. Introduction Targeting at recognizing and localizing action instances with only video-level labels during training, Weakly-supervised Temporal Action Localization (WTAL) has at-tracted increasing attention from both academia and indus-try [ 9,11,18,19,37,43]. Unlike fully-supervised TAL, WTAL only requires video-level action labels during train-ing. However, the closed-set assumption of existing WTALTraining Phase Testing Phase Basketball Dunk Unknown (Cricket Bowling) Cricket Shot Unknown (Hammer Throw) Long Jump Basketball Dunk Cricket Shot Figure 1. Illustration of the training and testing phases of OWTAL. With only video-level labels for training, OWTAL aims to localizeboth known and unknown action instances in testing videos. methods is invalid in the dynamically changing real world, since with the development of society never-before-seen hu-man action categories are constantly emerging. Therefore,to address this problem, we consider a different WTAL set-ting in this work, termed as Open-world WTAL (OWTAL). Different from the traditional WTAL task, as shown in Figure 1, OWTAL allows testing videos to contain ac-tion instances of unknown categories, which have never ap-peared during training. Therefore, temporal boundries ofboth known and unknown action instances are expected tobe predicted. Compared with its fully-supervised counter-part Open Set TAL [ 3], OWTAL is challenging in two as-pects: (1)Ambiguity of annotations of closed-set (known) action instances. Previous works indicate that the closed-setand open-set performance are highly correlated [ 42]. How-ever, under the OWTAL setting, not only are the annotationsof unknown action instances unavailable, but also the fine-grained annotations of known ones can only be inferred am-biguously from the video category labels. During training,the known action instances that the model needs to focus onare prone to be disturbed by the background snippets, whichhinders the learning of the closed-set actions, thus making itextremely difficult to differentiate the unknown actions, the known actions, and the background. (2)Lack of reasonable metrics. 
The traditional Open Set Recognition (OSR) aims This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 14741 for classification while the goal of OWTAL is to perform lo-calization instead, thus the classification metrics commonlyadopted by OSR are not sufficient for OWTAL. In order to alleviate the negative impact caused by the weak annotations of known action instances, we propose a Cascade Evidential Learning method for owtaL (CELL), which progressively collects cascaded evidence by con-sidering both temporal contexts in multi-scale ranges andinter-video correlations under the guidance of prior knowl-edge. Since the goal of OWTAL is to locate the consecu-tive known/unknown action segments of various temporallengths in open-world scenarios, perceiving temporal con-texts in diverse ranges is essential. We argue that it is mean-ingful to endow individual snippet features with the ability of sensing multi-scale neighborhood video segments, andthus a Multi-scale Extended-range Perception module (Sec-tion 3.2) is designed to obtain more discriminative video features for initial evidence collection, taking advantage of the temporal contexts. Due to the large intra-action vari-ation in visual patterns and the lack of prior knowledge guidance, the known action instances which visually de-viate from the common ones are likely to be misidentifiedwith the initial evidence collected from individual videos.Therefore, we design a Knowledge-guided Bipolar Proto-type Learning strategy (Section 3.3), where a semantic rela-tion graph is constructed to provide prior knowledge guid-ance for the bipolar prototype learning among videos, thusperceiving the open-world more comprehensively. We use this strategy to generate a series of evidence calibration fac-tors for further cascade evidence enhancement. Finally, aCascade Evidence Enhancement module (Section 3.4)i s designed for enhancing the initial evidence with the cali-bration factors, and the uncertainty estimated from the cas-caded evidence is used for the known/unknown judgment.Extensive experiments conducted on THUMOS-14 and Ac-tivityNet verify the effectiveness. Besides the various clas-sification metrics adopted by previous works, we also eval-uate our method on localization metrics which are more inline with the needs of real-world applications. To summarize, our contribution is threefold: •To tackle the unique challenges of OWTAL, we propose a cascade evidential learning framework, which progres-sively collects comprehensive evidence for known ac-tion, unknown action, and background separation. Lo-calization metrics which meet the needs of the real-world more closely are adopted for evaluation. •To achieve OWTAL without fine-grained annotations, the proposed CELL jointly leverages multi-scale tem-poral contexts and knowledge-guided prototype infor-mation during the evidence cascade learning process. •We conduct comprehensive experiments on two popu-lar WTAL benchmarks, THUMOS-14 and ActivityNet-v1.3, and achieve significant performance improvementover various baselines. Experiments show that CELL enables existing methods to well adapt to the more prac-tical open-world settings. |
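CELL's cascade modules build on evidential deep learning; while the multi-scale evidence collection and calibration factors are beyond a short snippet, the underlying evidence-to-uncertainty computation commonly used for known/unknown separation looks like the following. This is the standard Dirichlet-based recipe, not the paper's specific cascaded formulation.

```python
import torch
import torch.nn.functional as F

def evidential_outputs(logits):
    """Generic evidential head used for open-set judgement.

    logits: (B, K) class logits for K known action classes.
    Evidence e = softplus(logits) >= 0, Dirichlet parameters alpha = e + 1,
    expected class probabilities p = alpha / S, and uncertainty u = K / S
    with S = sum(alpha). A high u flags a snippet as likely unknown.
    """
    evidence = F.softplus(logits)
    alpha = evidence + 1.0
    s = alpha.sum(dim=-1, keepdim=True)
    prob = alpha / s
    uncertainty = logits.size(-1) / s.squeeze(-1)
    return prob, uncertainty

logits = torch.tensor([[8.0, 0.1, 0.2],        # confident known action
                       [-2.0, -2.5, -3.0]])    # little evidence for any class
p, u = evidential_outputs(logits)
print(p, u)    # the second snippet's uncertainty is close to 1 -> treat as unknown
```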
Islam_Efficient_Movie_Scene_Detection_Using_State-Space_Transformers_CVPR_2023 | Abstract The ability to distinguish between different movie scenes is critical for understanding the storyline of a movie. How-ever, accurately detecting movie scenes is often challeng-ing as it requires the ability to reason over very long movie segments. This contrasts with most existing video recogni-tion models, which are typically designed for short-range video analysis. This work proposes a State-Space Trans-former model that can efficiently capture dependencies in long movie videos for accurate movie scene detection. Our model, called TranS4mer, is built using a novel S4A building block, combining the strengths of structured state-space se-quence (S4) and self-attention (A) layers. Given a sequence of frames divided into movie shots (uninterrupted periods where the camera position does not change), the S4A block first applies self-attention to capture short-range intra-shot dependencies. Afterward, the state-space operation in the S4A block aggregates long-range inter-shot cues. The fi-nal TranS4mer model, which can be trained end-to-end, is obtained by stacking the S4A blocks one after the other mul-tiple times. Our proposed TranS4mer outperforms all prior methods in three movie scene detection datasets, including MovieNet, BBC, and OVSD, while being 2×faster and re-quiring 3×less GPU memory than standard Transformer models. We will release our code and models. | 1. Introduction Imagine watching The Godfather movie as illustrated in Figure 1. The first scene starts in the office room of mafia boss Don Vito Corleone, where he is discussing family busi-ness affairs with his eldest son Sonny Corleone and the fam-ily lawyer Tom Hagen. Then in the second scene, the family meets Virgil Sollozzo to hear about his business proposi-tion. Afterward, the movie transitions into a third scene, where Don Vito chides his son Sonny for interfering. All of these movie scenes are inherently connected and create a complex storyline. Thus, the ability to recognize such *Research done while MI was an intern at Comcast Labs.movie scenes is essential for understanding the plot of a movie. Moreover, identifying movie scenes enables broader applications, such as content-driven video search, preview generation, and minimally disruptive ad insertion. Unlike standard video recognition tasks (e.g., action recognition), which require a short-range analysis of video clips lasting only 5–10 seconds, movie scene detection re-quires short-and long-range understanding, where videos might last several minutes. For instance, to successfully recognize the scenes in Figure 1, we need to identify the vi-sual concepts in each frame, such as characters, objects, and backgrounds (short-range modeling), while also reasoning about complex temporal events and activities (long-range modeling). For example, if we only look at the bound-ary frames of the first and second scenes of Figure 1 and identify the local properties (e.g., characters, objects, and backgrounds), it is quite difficult to determine the transi-tion between the two scenes. However, considering longer temporal windows with frames from both scenes, we can recognize global events and identify their boundaries. We also note that models that only use short-range tem-poral context typically produce many false positives due to abrupt low-level changes in the scene dynamics (e.g., cam-era switching between characters). 
For example, if we look at the middle two frames of scene 2 in Figure 1, we might mistake them for a scene boundary since the frames contain two characters. However, looking at all the frames in scene 2 makes it clear that they belong to the same scene. Therefore, a successful movie scene detection model should identify the short-range visual elements and also reason about their long-range temporal dependencies in a movie segment. However, most existing movie scene detection methods [11, 35] are built using convolutional neural networks (CNNs), which are inherently designed using local operators (i.e., convolution kernels) for short-range video analysis. Thus, such models often fail to capture the long-range dependencies crucial for accurate movie scene detection. Recently, transformers have been shown to effectively model short-range and long-range dependencies in various Natural Language Processing (NLP) tasks [1, 7, 14, 15, 32, 37, 47, 49]. Figure 1. Three scenes from the movie The Godfather. In the first scene, The Godfather (Don Vito Corleone) discusses family business affairs with his eldest son Sonny and family lawyer Tom Hagen. Then the movie transitions into the second scene, where the mafia family meets Virgil Sollozzo to hear his business proposition. Afterward, in the third scene, Don Vito chides his son Sonny for interfering in the middle of his talk. Generally, movies are composed of well-defined scenes, and detecting those scenes is essential to high-level movie understanding. Therefore, this work aims to detect high-level scenes from a long-range movie video. Inspired by the success of transformers in NLP, subsequent works have shown the remarkable success of using transformers in several short-range vision tasks [8, 16]. However, applying Transformers to long video sequences remains challenging due to the quadratic cost of self-attention. To overcome these issues, recent work proposed a structured state-space sequence (S4) operator [18, 19, 33] for efficient long-range sequence analysis. Motivated by the developments in transformer and S4-based architectures, we present TranS4mer, an efficient end-to-end model for movie scene boundary detection, which combines the strengths of transformers for short-range modeling and S4 operators for long-range modeling. Specifically, we propose a novel state-space self-attention (S4A) block and build our TranS4mer model by stacking multiple S4A blocks on top of each other. Our model takes a sequence of frames divided into movie shots as input, where a shot is defined as a series of frames captured by the same camera over an uninterrupted period [46]. The S4A block first captures the short-range intra-shot dependencies by applying self-attention over the frames in each shot. Subsequently, it aggregates the inter-shot interactions over long movie segments by employing the state-space operator over all shots in a given movie video. One key property of our model is that it independently applies self-attention to each shot, significantly reducing the cost of standard self-attention.
For instance, if we have a video of 25 shots and each shot contains 3 frames, applying a standard vision transformer [16] with a patch size of 32×32 will result in 3,675 tokens, and a single self-attention operation will require ∼13.5M pairwise comparisons. In contrast, applying self-attention within each shot in parallel reduces the number of comparisons by 25×. Simultaneously, TranS4mer attains long-range modeling ability through the efficient state-space layers that do not require costly pairwise comparisons and operate over the entire input video. As a result, the proposed TranS4mer model can be efficiently applied to long movie videos. We experiment with three movie scene detection datasets: MovieNet [25], BBC [3], and OVSD [40], and report that our method outperforms prior approaches by large margins (+3.38% AP, +4.66%, and +7.36%, respectively). TranS4mer is also 2× faster and requires 3× less GPU memory than pure Transformer-based models. Moreover, to evaluate the generalizability of our model, we experiment with several long-range video understanding tasks, such as movie clip classification and procedural activity recognition. TranS4mer performs the best in 5 out of 7 movie clip classification tasks on LVU [48], the best in procedural activity classification on Breakfast [30], and the second-best in procedural activity classification on COIN [43] datasets. |
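To make the cost comparison above concrete, the sketch below batches self-attention over shots and adds a placeholder long-range mixing step. The real inter-shot operator is an S4 state-space layer, which is replaced here by a depthwise temporal convolution because a faithful S4 kernel does not fit in a few lines; shapes follow the 25-shot example (3 frames and 49 patches per shot), and all module sizes are illustrative.

```python
import torch
import torch.nn as nn

class S4ABlockSketch(nn.Module):
    """Intra-shot attention + (placeholder) inter-shot mixing.

    Input x: (B, S, F*P, C) -- S shots, F frames per shot, P patches per
    frame. Self-attention runs independently per shot, so its cost is
    S * (F*P)^2 instead of (S*F*P)^2 for full attention.
    """
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.inter_shot = nn.Conv1d(dim, dim, kernel_size=5, padding=2,
                                    groups=dim)     # stand-in for the S4 layer

    def forward(self, x):
        b, s, n, c = x.shape
        tok = x.reshape(b * s, n, c)                # batch shots together
        tok, _ = self.attn(tok, tok, tok)           # intra-shot attention
        x = tok.reshape(b, s, n, c)
        pooled = x.mean(dim=2)                      # (B, S, C) one token per shot
        mixed = self.inter_shot(pooled.transpose(1, 2)).transpose(1, 2)
        return x + mixed.unsqueeze(2)               # broadcast inter-shot context

# the 25-shot example from the text: 25 shots x 3 frames x 49 patches
x = torch.randn(1, 25, 3 * 49, 256)
print(S4ABlockSketch()(x).shape)                    # (1, 25, 147, 256)
# pairwise comparisons: 25 * 147^2 = 540,225 vs. 3,675^2 ~ 13.5M for full attention
```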
Ji_Multispectral_Video_Semantic_Segmentation_A_Benchmark_Dataset_and_Baseline_CVPR_2023 | Abstract Robust and reliable semantic segmentation in complex scenes is crucial for many real-life applications such as au-tonomous safe driving and nighttime rescue. In most ap-proaches, it is typical to make use of RGB images as in-put. They however work well only in preferred weather conditions; when facing adverse conditions such as rainy, overexposure, or low-light, they often fail to deliver sat-isfactory results. This has led to the recent investigation into multispectral semantic segmentation, where RGB and thermal infrared (RGBT) images are both utilized as input. This gives rise to significantly more robust segmentation of image objects in complex scenes and under adverse condi-tions. Nevertheless, the present focus in single RGBT im-age input restricts existing methods from well addressing dynamic real-world scenes. Motivated by the above observations, in this paper, we set out to address a relatively new task of semantic seg-mentation of multispectral video input, which we refer to as Multispectral Video Semantic Segmentation, or MVSS in short. An in-house MVSeg dataset is thus curated, con-sisting of 738 calibrated RGB and thermal videos, accom-panied by 3,545 fine-grained pixel-level semantic annota-∗Corresponding author.tions of 26 categories. Our dataset contains a wide range of challenging urban scenes in both daytime and nighttime. Moreover, we propose an effective MVSS baseline, dubbed MVNet, which is to our knowledge the first model to jointly learn semantic representations from multispectral and tem-poral contexts. Comprehensive experiments are conducted using various semantic segmentation models on the MVSeg dataset. Empirically, the engagement of multispectral video input is shown to lead to significant improvement in seman-tic segmentation; the effectiveness of our MVNet baseline has also been verified. | 1. Introduction As a fundamental computer vision problem, semantic segmentation concerns the assignment of category labels to each pixel in an image. It has received extensive research at-tention over the past decades [2,5,12,36,45,64,74,80]. Ex-isting semantic segmentation networks are predominantly designed to work with RGB images, which may fail in the presence of adverse conditions, such as rainy, low-light, or overexposure. On the other hand, we have evidenced a growing demand in using thermal images for semantic segmentation; a number of RGBT models have been subse-quently developed, to engage both RGB and thermal images This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 1094 as input for semantic segmentation especially with complex scenes [21,55,76,82,83]. This may be attributed to the fact that thermal infrared imaging is relatively insensitive to il-lumination conditions, as it works by recording infrared ra-diations of an object above absolute zero temperature [19]. It is worth noting that the existing RGBT segmentation methods are based on single images. However, the lack of mechanism to account for the temporal contexts may limit their performance when working with video inputs contain-ing dynamic scenes, which are omnipresent in our daily lives. 
This leads us to explore in this paper a relatively new task of Multispectral Video Semantic Segmentation, or in short MVSS, with a specific focus on RGBT video inputs. Fig. 1 illustrates several exemplar multispectral video se-quences and their ground-truth semantic annotations. As shown, the RGB frames and thermal frames provide rich and often complementary information for identifying mov-ing foreground objects and static background scenes in low-light night or facing strong headlights. The new task opens up possibilities for applications that require a holistic view of video segmentation under challenging conditions, e.g., autonomous safe driving, nighttime patrol, and fire rescue. To our knowledge, this is the first work to address such mul-tispectral video semantic segmentation problem. In the deep learning era, benchmark datasets have be-come the critical infrastructure upon which the computer vision research community relies to advance the state-of-the-arts. Thanks to the publicly available benchmarks, such as MFNet [21], PST900 [55], Cityscapes [12], and CamVid [4], the related tasks of multispectral semantic seg-mentation (MSS) and video semantic segmentation (VSS) have evidenced notable progresses. Meanwhile, these ex-isting datasets provide as input either single pairs of RGB and thermal images, or RGB only video sequences. There unfortunately lacks a suitable dataset to train and evaluate learning based models for the proposed MVSS task. This leads us to curate a high-quality and large-scale MVSS dataset, referred to as MVSeg, that contains diverse situa-tions. Specifically, our MVSeg dataset comprises 738 syn-chronized and calibrated RGB and thermal infrared video sequences, with a total of 52,735 RGB and thermal image pairs. Among them, 3,545 image pairs are densely anno-tated with fine-grained semantic segmentation labels, con-sisting of a rich set of 26 object categories in urban scenes. In particular, as showcased in Fig. 1, our MVSeg dataset in-volves many challenging scenes with adverse lighting con-ditions. It is expected to provide a sufficiently realistic benchmark in this field. Furthermore, a dedicated baseline model is developed for this new task, which is called Multispectral Video se-mantic segmentation NETwork or simply MVNet. Our MVNet possesses two key components in addressing the main challenges of MVSS task. Considering the high com-plexity of processing large-volume multispectral video data, a prototypical MVFuse module is devised to attend to rich contextual multispectral video features with a moderate memory footprint. A novel MVRegulator loss is further introduced, which regularizes the feature learning process to reduce cross-spectral modality difference and promote better exploitation of multispectral temporal data. Com-prehensive experiments on various state-of-the-art seman-tic segmentation models are also carried out at the MVSeg dataset. Experimental results demonstrate the significance of multispectral video data for semantic segmentation, and verify the effectiveness of our MVNet model. We expect the MVSeg dataset and the MVNet baseline will facilitate future research activities toward the MVSS task. |
Hu_Efficient_Semantic_Segmentation_by_Altering_Resolutions_for_Compressed_Videos_CVPR_2023 | Abstract Video semantic segmentation (VSS) is a computationally expensive task due to the per-frame prediction for videos of high frame rates. In recent work, compact models or adaptive network strategies have been proposed for efficient VSS. However, they did not consider a crucial factor that affects the computational cost from the input side: the input resolution. In this paper, we propose an altering resolution framework called AR-Seg for compressed videos to achieve efficient VSS. AR-Seg aims to reduce the computational cost by using low resolution for non-keyframes. To prevent the performance degradation caused by downsampling, we design a Cross Resolution Feature Fusion (CReFF) module, and supervise it with a novel Feature Similarity Training (FST) strategy. Specifically, CReFF first makes use of motion vectors stored in a compressed video to warp features from high-resolution keyframes to low-resolution non-keyframes for better spatial alignment, and then selectively aggregates the warped features with a local attention mechanism. Furthermore, the proposed FST supervises the aggregated features with high-resolution features through an explicit similarity loss and an implicit constraint from the shared decoding layer. Extensive experiments on CamVid and Cityscapes show that AR-Seg achieves state-of-the-art performance and is compatible with different segmentation backbones. On CamVid, AR-Seg saves 67% computational cost (measured in GFLOPs) with the PSPNet18 backbone while maintaining high segmentation accuracy. Code: https://github.com/THU-LYJ-Lab/AR-Seg . | 1. Introduction Video semantic segmentation (VSS) aims to predict pixel-wise semantic labels for each frame in a video sequence. In contrast to a single image, a video sequence is a series of consecutive image frames recorded at a certain frame rate (usually 25 fps or higher). *Corresponding author. Figure 1. Comparison of different VSS methods: (a) per-frame framework, (b) Accel [19] that alters the depth of models, and (c) our AR-Seg. AR-Seg reduces the computational cost for non-keyframes by lowering the input resolution (depicted by narrow blocks), which is a dimension orthogonal to the depth of networks. Applying image-based segmentation methods [6, 25, 48, 52, 55] to a video frame by frame consumes considerable computational resources. To improve the efficiency of VSS, existing methods mainly focus on the design of network architectures. A class of methods proposes compact and efficient image-based architectures to reduce the computational overhead per frame [22,23,28,49,51,52,54]. Another class of methods applies a deep model to keyframes and a shallow network for non-keyframes to avoid repetitive computation [19, 24, 27, 34] for videos. The work presented in this paper is based on an important observation: the above existing works ignored a crucial factor that affects the computational cost from the input side: the input resolution. For image-related tasks, the input resolution directly determines the amount of computation, e.g., the computational cost of 2D convolution is proportional to the product of image width and height. Once we
downsample the input frame by 0.5×0.5, the computational overhead can be reduced by 75%. On the other hand, decreasing resolution often leads to worse segmentation accuracy due to the loss of information [43,54]. In this paper, we propose to prevent the accuracy degradation by using temporal correlation in the video. Since the contents of video frames are temporally correlated, the local features lacking in low-resolution (LR) frames can be inferred and enriched by finding correspondences in sparse high-resolution (HR) reference frames based on motion cues. In a compressed video, the motion vectors contain such motion cues and can be obtained along with the video frames from video decoding at almost no additional cost. Motivated by the above observation, we propose an altering resolution framework for compressed videos, named AR-Seg, to achieve efficient VSS. As shown in Figure 1(c), AR-Seg uses an HR branch to process keyframes at high resolution and an LR branch to process non-keyframes at low resolution. In particular, to prevent the performance drop caused by downsampling, we insert a novel Cross Resolution Feature Fusion (CReFF) module into the LR branch and supervise the training with a Feature Similarity Training (FST) strategy to enrich local details in the LR features. CReFF fuses the HR keyframe features into LR non-keyframe features in two steps: 1) Align the spatial structures of features from different frames by feature warping with motion vectors, which can be readily obtained from compressed videos at almost no additional cost; 2) Selectively aggregate the warped features (which may be noisy after warping) into LR features with the aid of a local attention mechanism. Since local attention assigns different importance to each location in the neighborhood, it is an effective way to avoid being misled by noisy warped features. Furthermore, our proposed FST strategy guides the learning of the CReFF-aggregated features. FST consists of an explicit similarity loss (between the aggregated features and HR features inferred from non-keyframes) and an implicit constraint from the shared decoding layer across the HR and LR branches. Such a training strategy helps the LR branch to learn from features extracted from the HR branch, which is reliable and effective. Integrated with CReFF and FST, AR-Seg efficiently compensates for the accuracy loss of LR frames. Overall, AR-Seg significantly reduces the computational cost of VSS by altering input resolutions, while maintaining high segmentation accuracy. To sum up, we make three contributions in this paper: 1) We propose an efficient framework for compressed videos, named AR-Seg, that uses altering input resolution for VSS and significantly reduces the computational cost without losing segmentation accuracy. 2) We design an efficient CReFF module to prevent the accuracy loss by aggregating HR keyframe features into LR non-keyframe features. 3) We propose a novel FST strategy that supervises the LR branch to learn from the HR branch through both explicit and implicit constraints. Experiment results demonstrate the effectiveness of AR-Seg with different resolutions, backbones, and keyframe intervals. On both CamVid [3] and Cityscapes [9] datasets, compared to the constant-resolution baselines, AR-Seg reduces the computational cost by nearly 70% without compromising accuracy. |
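A minimal sketch of the motion-vector-guided feature warping that CReFF is described as using for its first (alignment) step, written with PyTorch's bilinear sampler. The function name and the sign convention of the motion vectors are assumptions, and the second step (selective aggregation with local attention) is deliberately omitted.

```python
import torch
import torch.nn.functional as F

def warp_by_motion_vectors(feat_hr, mv, align_corners=True):
    """Warp keyframe features toward a non-keyframe using per-pixel motion vectors.

    feat_hr: (N, C, H, W) features of the keyframe.
    mv:      (N, 2, H, W) motion vectors in pixels, assumed to point from the
             non-keyframe to the keyframe (codecs differ on this convention).
    Returns warped features spatially aligned with the non-keyframe.
    """
    n, _, h, w = feat_hr.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(feat_hr.device)   # (2, H, W)
    coords = base.unsqueeze(0) + mv                                   # (N, 2, H, W)
    # Normalize sampling positions to [-1, 1] as required by grid_sample.
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    grid = torch.stack((coords_x, coords_y), dim=-1)                  # (N, H, W, 2)
    return F.grid_sample(feat_hr, grid, mode="bilinear",
                         padding_mode="border", align_corners=align_corners)
```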
Jain_A_Meta-Learning_Approach_to_Predicting_Performance_and_Data_Requirements_CVPR_2023 | Abstract We propose an approach to estimate the number of samples required for a model to reach a target performance. We find that the power law, the de facto principle to estimate model performance, leads to a large error when using a small dataset (e.g., 5 samples per class) for extrapolation. This is because the log-performance error against the log-dataset size follows a nonlinear progression in the few-shot regime followed by a linear progression in the high-shot regime. We introduce a novel piecewise power law (PPL) that handles the two data regimes differently. To estimate the parameters of the PPL, we introduce a random forest regressor trained via meta learning that generalizes across classification/detection tasks, ResNet/ViT based architectures, and random/pre-trained initializations. The PPL improves the performance estimation on average by 37% across 16 classification and 33% across 10 detection datasets, compared to the power law. We further extend the PPL to provide a confidence bound and use it to limit the prediction horizon that reduces over-estimation of data by 76% on classification and 91% on detection datasets. | 1. Introduction More data translates to better performance, on average, but higher cost. As data requirements scale, there is a natural desire to predict the cost to train a model and what performance it may achieve, as a function of cost, without training. Towards this goal, neural scaling laws [3,4,9,19,20,40] have been proposed that suggest that the performance of a model trained on a given dataset size follows a linear fit when plotted on a logarithmic scale (power law in linear scale). In practice, however, the power law provides only a family of functions and its parameters must be fitted to each specific case for performance prediction. ∗Corresponding author. †Work done at AWS. Figure 1. Performance estimation curves using the power law and piecewise power law (PPL) with estimated confidence. The PPL reduces over-estimation of the power law from 12× to 1.9× in 1 step, and further to 1.2× in 2 steps using the estimated confidence bounds to limit the prediction horizon n(1) in the first step. A common use case is one where a small initial dataset is made available and can be used to obtain small-scale performance statistics that are relatively inexpensive to obtain and can be used to fit the power law parameters. Then, the fitted function is used to predict the performance for any dataset size training through extrapolation. This approach is found empirically to generalize across several datasets and deep learning models [40]. However, most applications of power law are done in the high-shot regime. The few-shot regime (e.g., 5 samples/class) is particularly useful given the prevalence of pre-trained initializations available for model training. In the few-shot regime, the performance curve exhibits a nonlinear progression followed by a linear progression, see Figure 1. Thus, data requirements based on the power law can lead to significant errors incurring additional cost for acquiring data. In this paper, we propose a piecewise power law (PPL) that models the performance as a quadratic curve in the few-shot regime and a linear curve in the high-shot regime in
the log-log domain, while ensuring continuity during the transition. To estimate the parameters of the PPL, we first identify the switching point between the quadratic and linear curves using PowerRF, a random forest regressor trained via meta-learning, and then use the performance statistics to fit the remaining parameters. We show that our approach provides a better estimation of performance than the power law across several datasets, architectures, and initialization settings. We extend the PPL to provide a confidence estimate that is used to further reduce the error in data estimation. In Figure 1, the confidence estimate controls the maximum number of samples in a step such that the PPL uses two steps to achieve the target performance with 1.2× over-estimation compared to 12× with the power law. Our contributions are as follows. We propose an improved policy for predicting the data size needed to achieve a target accuracy with three main innovations: (1) a piecewise power law model (PPL) that approximates the performance error curve from the low-to-high-shot regime, (2) a meta-learning approach to estimate the parameters of the PPL, and (3) incorporating the confidence interval of the estimates to limit the prediction horizon and reduce error in data estimation. We demonstrate the generalization of the PPL and the meta-model on 16 classification and 10 detection datasets, improving the (a) performance estimates by 37% on classification and 33% on detection datasets and (b) data estimates by 76% on classification and 91% on detection datasets, with respect to the power law. |
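A small numerical illustration of the piecewise idea described above: linear in log-log space in the high-shot regime, quadratic in the few-shot regime, and joined continuously at a switching point. The exact parameterization, parameter names, and the switching-point estimation (PowerRF) are not reproduced here; this is an assumed functional form for illustration.

```python
import numpy as np

def piecewise_power_law(n, a, b, c, n_switch):
    """Log-error as a function of dataset size under an assumed PPL parameterization.

    For n >= n_switch the curve is linear in log-log space (the classic power law):
        log(err) = a + b * log(n)
    For n < n_switch it is quadratic in log(n), with value and slope matched at
    n_switch so the two pieces join smoothly.
    """
    x = np.log(np.asarray(n, dtype=float))
    x0 = np.log(n_switch)
    lin = a + b * x                               # high-shot piece
    quad = a + b * x + c * (x - x0) ** 2          # few-shot piece, C1-continuous at x0
    return np.where(x >= x0, lin, quad)

# Fitting the high-shot piece alone reduces to ordinary least squares in log-log space:
# b, a = np.polyfit(np.log(n_obs), np.log(err_obs), deg=1)
```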
Aoshima_Deep_Curvilinear_Editing_Commutative_and_Nonlinear_Image_Manipulation_for_Pretrained_CVPR_2023 | Abstract Semantic editing of images is the fundamental goal of computer vision. Although deep learning methods, such as generative adversarial networks (GANs), are capable of producing high-quality images, they often do not have an inherent way of editing generated images semantically. Recent studies have investigated a way of manipulating the latent variable to determine the images to be generated. However, methods that assume linear semantic arithmetic have certain limitations in terms of the quality of image editing, whereas methods that discover nonlinear semantic pathways provide non-commutative editing, which is inconsistent when applied in different orders. This study proposes a novel method called deep curvilinear editing (DeCurvEd) to determine semantic commuting vector fields on the latent space. We theoretically demonstrate that owing to commutativity, the editing of multiple attributes depends only on the quantities and not on the order. Furthermore, we experimentally demonstrate that compared to previous methods, the nonlinear and commutative nature of DeCurvEd facilitates the disentanglement of image attributes and provides higher-quality editing. | 1. Introduction Generating and editing realistic images is one of the fundamental goals in computer vision. Generative adversarial networks (GANs) [20] have emerged as a major image generation approach owing to the quality of generated images [7, 33–36, 48] and provide various real-world applications [1, 13, 18, 53, 63, 68]. Other notable methods include variational autoencoders (VAEs) [24], conditional PixelCNN [61], and diffusion-based models [26,56], which are collectively called deep generative models. However, deep generative models cannot inherently edit images semantically. They can be viewed as mappings from latent space to image space, and the latent variables determine the generated images. Therefore, several methods have been developed to train deep generative models such that each semantic attribute the user wants to edit is assigned to each element of the latent variable [10, 45, 46] (see the first column of Table 1). However, this approach requires computationally expensive training and can conflict with the quality of image generation. Other studies have developed image-to-image translation that translates images from one domain to another [28,64,68]. However, this approach also requires training from scratch and limits image editing to be discontinuous unless combined with latent variable manipulation. Therefore, a general and promising approach is necessary to identify manipulations on latent variables of already trained models that edit images semantically. A study reported that adding certain vectors to latent variables can modify the corresponding attributes of the object in the generated images [51]. This indicates that the latent space can be regarded as a linear vector space. Some studies have aimed to identify attribute vectors in a supervised or an unsupervised manner [17, 19, 22, 27, 49, 50, 54, 55, 62, 69]. In any case, these studies introduce the strong assumption of linear semantic arithmetic on the latent space (see the second column of Table 1), which limits the quality of image editing. Other studies have proposed methods to determine attribute vectors depending on the position in the latent space, that is, the attribute vector fields (or local attribute coordinates)
[2, 11, 12, 17, 29, 37, 44, 52, 58, 60]; these methods edit an image attribute by moving the latent variable nonlinearly along the corresponding attribute vector field. Although this approach is elegant, the edits of different attributes are generally non-commutative (see the third column of Table 1). That is, what we get is different when we edit attributes (denoted by e1 and e2) one after another, or in the reverse order. This property can harm the disentanglement between attributes considering that the relationships among attributes change at different points. In contrast, linear arithmetic ensures that the edits of different attributes are commutative. To overcome this dilemma, this study proposes deep curvilinear editing (DeCurvEd), a method that determines a set of commuting attribute vector fields in the latent space of a pre-trained deep generative model.
Table 1. Comparison of Our Proposal against Related Methods (columns: Training under constraints [10, 45, 46] / Linear arithmetic [27, 55, 62] / Vector fields/Local basis [12, 52, 60] / DeCurvEd (proposed)).
Global coordinate: Cartesian / oblique / (only local) / curvilinear
No retraining: ✗ / ✓ / ✓ / ✓
Nonlinear edit: – / ✗ / ✓ / ✓
Commutative edit: ✓ / ✓ / ✗ / ✓
Conceptual diagram: (figure panels illustrating e1, e2 edits under each coordinate system)
The key idea is adopted from the theorem that a set of vector fields is locally expressed as partial derivatives of a coordinate chart if it is linearly independent and commuting [43]. Therefore, we define a curvilinear coordinate system globally [3] by employing a normalizing flow [9, 39], from which we derive commuting vector fields (see the rightmost panel of Table 1). The advantages of DeCurvEd are as follows (see also Table 1): 1. Edits of different attributes are always commutative, unlike previous methods that assume attribute vector fields (e.g., [60]). Therefore, an edited image does not depend on the order of editing, but on the amount of editing performed. |
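A conceptual sketch of why editing through a globally defined curvilinear coordinate system commutes: map the latent to the new coordinates, add per-attribute offsets along fixed axes, and map back. The invertible map below is a toy element-wise function standing in for the trained normalizing flow; it is an assumption for illustration, not DeCurvEd itself.

```python
import numpy as np

# Toy invertible map playing the role of the normalizing flow that defines
# the curvilinear coordinate system (an assumption for illustration).
def to_curvilinear(z):      # latent space -> curvilinear coordinates
    return np.sinh(z)

def to_latent(u):           # inverse map
    return np.arcsinh(u)

def edit(z, attr_index, amount):
    """Edit one attribute by moving along a Cartesian axis of the
    curvilinear coordinate system, then mapping back to latent space."""
    u = to_curvilinear(z)
    u[attr_index] += amount
    return to_latent(u)

z = np.array([0.3, -1.2, 0.7])
a = edit(edit(z, 0, 1.0), 1, -0.5)   # edit attribute 0, then attribute 1
b = edit(edit(z, 1, -0.5), 0, 1.0)   # reverse order
print(np.allclose(a, b))             # True: the two edit orders give the same latent
```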
Huynh_SimpSON_Simplifying_Photo_Cleanup_With_Single-Click_Distracting_Object_Segmentation_Network_CVPR_2023 | Abstract In photo editing, it is common practice to remove visual distractions to improve the overall image quality and highlight the primary subject. However, manually selecting and removing these small and dense distracting regions can be a laborious and time-consuming task. In this paper, we propose an interactive distractor selection method that is optimized to achieve the task with just a single click. Our method surpasses the precision and recall achieved by the traditional method of running panoptic segmentation and then selecting the segments containing the clicks. We also showcase how a transformer-based module can be used to identify more distracting regions similar to the user's click position. Our experiments demonstrate that the model can effectively and accurately segment unknown distracting objects interactively and in groups. By significantly simplifying the photo cleaning and retouching process, our proposed model provides inspiration for exploring rare object segmentation and group selection with a single click. More information can be found at https://github.com/hmchuong/SimpSON . | 1. Introduction Both professional photographers and casual users often require efficient photo retouching to enhance the quality of their images. One essential aspect of this task is the removal of visual distractions from photos [7]. These distractions can take various forms, such as unexpected pedestrians, objects that are cropped out of the photo's edge, dirty spots on the ground, repeated outlets on a wall, or even colorful and blurry lens flare. These distractions can be challenging to categorize due to their diverse appearance. As a result, users tend to select and mask them entirely and use photo editing software such as Photoshop to remove them. *This work was done during Chuong Huynh's internship at Adobe. Segmentation is necessary for photo cleaning tasks because rough masks may not be suitable for all scenarios. Accurate masks are required in situations where distractors are touching the main foreground subjects or where distractors are small but dense in the image. User-drawn rough masks can result in the deletion of too much background texture when connected. In other cases, users may have a mask that covers the entire object but does not change the background too much. In all scenarios, our findings suggest that for inpainting, a tiny dilation from a highly accurate mask produces better background preservation and fewer leftover pixels of distractors. This finding is consistent with most of the existing inpainting models. The process of manually masking distracting elements in a photo can be a tedious and time-consuming task. Users often seek an automated tool that can efficiently select and segment all distractors. One approach is to train an instance segmentation model like Mask-RCNN [11] to detect and segment distractors in a supervised manner. However, identifying distractors can be subjective, and collecting datasets requires scientific validation of the distractor annotations to ensure that most users agree. For instance, Fried et al. [7] invited 35 users to mark distractors on a single image and received varying feedback.
Even with a model that detects distractors, it may not always satisfy users' preferences. Therefore, tasks like these should rely heavily on user interaction, such as allowing users to click and decide where to retouch photos based on their own preferences. Our goal is to propose a single-click distractor segmentation model. With the rapid development of panoptic segmentation technologies like PanopticFCN [20] and Mask2Former [5], can we utilize state-of-the-art models to retrieve distractor masks by clicking on the panoptic segmentation results? Unfortunately, most distractors belong to unknown categories, and some are tiny, making them difficult to segment using models [2, 13] trained on datasets such as COCO [21], ADE20K [31], or Cityscapes [6] with a closed set of categories. Qi et al. proposed entity segmentation [24] to train panoptic segmentation in a class-agnostic manner to address the long-tail problem, but it still may not be guaranteed to separate all regions in the photos. What if we use clicks as the input guidance for segmentation? Interactive segmentation models are closely related to our task, and recent works like FocalClick [4] and RiTM [26] have achieved practical and high-precision segmentation performance. However, interactive segmentation aims to use multiple clicks, including positive and negative ones, to segment larger foreground objects accurately, especially the boundary regions. In our task, we focus more on medium to small distracting objects and only require a single positive click to select semi-precise masks for inpainting purposes. The difference in our goal makes it challenging to follow the problem definition of interactive segmentation. Additionally, previous interactive segmentation models cannot select objects in groups, whereas most of our distractors are repeated, dense, and evenly distributed across photos. This paper addresses the two challenges of accurate one-click universal class-agnostic segmentation and efficient similarity selection. Our proposed method can significantly reduce the photo retouching process from hours (e.g., 100+ clicks) to minutes (e.g., 1-2 clicks) when removing dense and tiny distractors. Firstly, we optimize the click-based segmentation model to accurately segment distractor-like objects with a single click. This is achieved by utilizing the entity segmentation [24] method to discard category labels and using a single-click embedding to guide the segmentation of a single object. Secondly, we design a transformer-based Click Proposal Network (CPN) that mines similar distractor-like objects within the same image and regresses click positions for them. Lastly, we rerun the single-click segmentation module using the proposed clicks to generate the masks and verify the similarity among the selected objects via the Proposal Verification Module (PVM). We also run the process iteratively to ensure that more similar objects are fully selected. In summary, our contributions consist of three main aspects: • We introduce a novel one-click Distractor Segmentation Network (1C-DSN) that utilizes a single-click-based approach to segment medium to small distracting objects with high accuracy. Unlike other interactive segmentation methods, our model targets the segmentation of distracting objects with just one positive click. Our model is capable of generalizing well to objects of any rare categories present in the photos.
• We propose a Click Proposal Network (CPN) that mines all objects similar to the user's single click. The proposed clicks are then reused in the segmentation model, and their similarity is verified using the Proposal Verification Module (PVM). This allows for the group selection of distracting objects with one click. • We further explore running the selection process iteratively to fully select similar distractors with slightly diverse appearances. Our proposed distractor selection pipeline, which we call 'SimpSON,' significantly simplifies the photo retouching process. By using SimpSON, users can remove distracting objects in their photos quickly and easily with just a few clicks. 2. Related works Visual Distraction in Photography Visual distracting elements in photos are elements that attract users' attention but are not the primary subject of the photo. However, according to [7], the saliency map [14, 16–19] may not be highly correlated with visual distractors because the main subject usually has the peak in the attention map. Although efforts have been made to detect and retouch scratches [28], noise, and dirty dots in photos, and automatic and interactive face retouching [29] has already been widely deployed in commercial products, only a few research works [1] have targeted automatic general distractor detection and editing
It consists of a feature extraction back-bone, a single-click Distractor Segmentation Network (1C-DSN), a similarity Click Proposal Network (CPN) designed for mining all the similar clicks, and a Proposal Verification Module (PVM) to check the similarity of the proposed click positions. The process can be run iteratively. 3.1. One-click Distractor Segmentation Network (1C-DSN) Motivation. When it comes to visual distractors in users’ photos, they often come in all shapes and sizes with dif-ferent appearances. We don’t always know what these ob-jects are, or how big or small they might be. To tackle this challenge, we need an interactive segmentation model that is highly adaptive, especially when dealing with unfamil-iar classes or small and medium-sized objects. It should beable to respond to clicks at any position, even if they fall on rare or unexpected objects, like cigarette butts, puddles, or bird droppings on the ground. To achieve this, we need to ensure that our model is optimized for high recall, so that users can remove unwanted objects with just one click. Difference with Previous Interactive Segmentation. When designing our pipeline, we imagined that users might wish to remove many distracting elements. For that sce-nario, we found it more intuitive and efficient to use only positive clicks in an iterative removal workflow, which could be particularly suited for mobile apps. As discussed in section 2, recent interactive segmentation works are de-signed for precise object segmentation with multiple pos-itive and negative clicks. We found state-of-the-art tools like [4,26] are not friendly to small and medium object seg-mentation with only a few positive clicks. However, for dis-tractor selection tasks, many obj | ects of small size should be easier to choose with one click. Larger and medium distrac-tors had better be quickly selected with few positive clicks. So the major difference between our segmentation model and previous works is we do not use negative clicks and fully optimize our models with fewer positive clicks. Network Structure. Figure 2 shows the network struc-ture of the single-click distractor segmentation network. Given an image I∈RH×W×3, the feature extractor net-work provides a pyramid feature map: F={X1, ..., X N} withXi∈Rhi×wi×dandH > h1> ... > hN, W > w1> ... > wN. For each feature level, we pair it with a binary click map Ic i∈ {0,1}hi×wiwhere Ic ix,y= 1 indi-cates the click at spatial location (x, y)inIc i. The click-embedded feature map X′ i∈Rhi×wi×(d+c)is then com-puted as X′ i=Xi⊕conv i(Ic i), where ⊕indicates the concatenation along the feature dimension and conv iis a mapping function which projects Ic itoRhi×wi×c. After obtaining the groups of click-embedded feature mapX′ i, we feed them to the detection head and segmen-tation head. We modify the bounding box filtering strategy by considering only keeping the boxes that overlap with the click positions. In this paper, we follow Entity Segmenta-tion [24] to design the detection and segmentation heads. The segmentation module finally outputs multiple binary segmentation masks Mj∈ {0,1}H×Wcorresponding to the user click positions. The 1C-DSN is trained with sim-ilar loss functions as in Entity Segmentation, which com-bines detection loss from FCOS [27] and the DICE loss from Entity Segmentation [24]. The design of the detection and segmentation parts can be replaced with any two-stage segmentation frameworks [11]. 
Figure 2. The overview of the SimpSON framework with 1C-DSN, CPN and PVM modules. It consists of a feature extraction backbone, a single-click Distractor Segmentation Network (1C-DSN), a similarity Click Proposal Network (CPN) designed for mining all the similar clicks, and a Proposal Verification Module (PVM) to check the similarity of the proposed click positions. The process of finding similar distractors can be run iteratively to fully generate the masks. 3.2. Click Proposal Network (CPN) In situations where there is only one instance of a distractor, the 1C-DSN model can be sufficient for accurately segmenting it out. However, in many cases, we may come across multiple instances of distractors that share similar categories and appearances. In such scenarios, users would prefer to be able to select all of these instances with just a single click. To address this, we have designed a self-similarity mining module that can effectively identify all the distractors that are similar to the user's click, thus enabling them to remove them all in one go. We propose this Click Proposal Network (CPN) to mine similar regions using cross-scale feature matching and regress the click positions from the high-confidence regions. Then we can feed those click coordinates back to our 1C-DSN for masking to obtain the masks of all the similar distractors. The design of the Click Proposal Network (CPN) is shown in Figure 2. The input to the CPN is a single query mask predicted by the previous 1C-DSN corresponding to the user's single click. We utilize three levels of feature maps with spatial resolutions of 1/4, 1/8, and 1/16 of the input image size. For the given query mask region, we apply ROI-Align [11] to extract features from the three levels of maps, resize them to k×k×d, where k = 3 is a hyper-parameter for the query size and d is the dimension of the features, and then apply the binary query mask to zero out non-masked feature regions. We then obtain 3×k^2 feature vectors for similarity comparison with the original feature maps. We feed the query vectors into a cascade of transformer decoder layers L1, L2, and L3, where each layer takes the keys and values from a different level of feature maps. We finally use the obtained aggregated feature vector to conduct spatial convolution with the largest feature map to obtain the predicted click-position heatmap. During training, we follow CenterNet [32] to generate the ground-truth heatmap using Gaussian filtering of the click map. The kernel size of the Gaussian filter is set to the minimum of the height and width of each mask. The module is then trained using a penalty-reduced pixel-wise logistic regression with focal loss as in CenterNet. During inference, we apply Non-Maximum Suppression (NMS) to the heatmap to keep only the maximum value within an s×s window and choose all the clicks having confidence larger than τ_c. Empirically, we set s = 32 and τ_c = 0.2.
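A minimal sketch of the inference-time step just described: window-based NMS on the click heatmap followed by thresholding at τ_c. The function name and tensor layout are assumptions; only the s = 32 window and the τ_c = 0.2 threshold come from the text above.

```python
import torch
import torch.nn.functional as F

def heatmap_to_clicks(heatmap, s=32, tau_c=0.2):
    """heatmap: (1, 1, H, W) click-confidence map in [0, 1].
    Returns an (M, 2) tensor of surviving (y, x) click positions."""
    # A position survives NMS if it equals the maximum of its s x s neighbourhood.
    pooled = F.max_pool2d(heatmap, kernel_size=s, stride=1, padding=s // 2)
    pooled = pooled[..., :heatmap.shape[-2], :heatmap.shape[-1]]  # crop padding overflow
    keep = (heatmap == pooled) & (heatmap > tau_c)
    ys, xs = torch.nonzero(keep[0, 0], as_tuple=True)
    return torch.stack((ys, xs), dim=1)
```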
3.3. Proposal Verification Module (PVM) To avoid false positive proposals in the heatmap and click map, we propose using a Proposal Verification Module (PVM) to ensure that the selected click positions are highly similar to the user's clicks. This module performs pairwise comparisons between the generated masks and the initial click, and removes any click proposal that generates a mask significantly different from the initial query mask, using a threshold. Specifically, we first feed all the click proposals into the 1C-DSN to generate separate instance masks for each click position. We refer to the mask of the initial user click as the target mask and all the other proposed masks as source masks. Figure 3 shows the module structure of the PVM and the process of comparing two distractors. Figure 3. Proposal Verification Module (PVM). Given the original image I, the features X_1, and the segmentation mask M, we extract the regions of interest from them. We then concatenate and feed the features from I, X_1 and M to obtain the 1D feature embedding, z_t for the target and z_s for the source. The Euclidean distance between them is fed to the fully-connected layer with a sigmoid activation to output the similarity score from 0 to 1. Given the original image I, the features X_1 (which are at 1/4 of the spatial image resolution, extracted from the pre-trained feature backbone in the 1C-DSN), and the segmentation mask M, we extract the regions of interest from them. To preserve the aspect ratio of the objects, we extend the bounding box to a square and use ROI-Align [11] to extract pixels or features. In this paper, we resize the cropped image patch to 224×224 and feed it into a lightweight feature extractor, ResNet18 [12]. We then concatenate the image features (from I), backbone features (from X_1), and resized masks (from M) together and feed them into neural layers to obtain the 1D feature embeddings, z_t for the target and z_s for the source. Notice that we also add the scaling factor w_b/224 to guide the embedding learning, where w_b is the bounding box size. The Euclidean distance between z_s and z_t is input to the next fully-connected layer with a sigmoid activation to output a similarity score from 0 to 1. In training, we randomly sample pairs from the same image. A pair is considered positive if it is drawn from the same copy; otherwise, it will be a negative pair. Besides the binary cross entropy L_BCE computed on the last output with the pair labels, a max-margin contrastive loss [10] L_con is applied to the feature embeddings z_t, z_s to make the model learn better features. The final training loss is the linear combination L = L_con + L_BCE. In testing, the PVM classifies each mask proposal with its exemplar by thresholding the similarity score. In our experiments, we choose 0.5 as the threshold. 3.4. Iterative Distractor Selection (IDS) We further run an iterative process to sample more similar distractors to ensure that we entirely select all the distractors similar to the initial click. The detailed pseudo-code is shown in Algorithm 1. We update M_e with the correct masks for each iteration and progressively add high-confidence clicks to the result. By updating M_e, we can avoid incorrect similarity findings caused by the incomplete initial exemplar mask.
Picking the top-k clicks and using the PVM module are essential for reducing the false positive rate of the CPN. In our experiments, we choose a kernel size of 5 for NMS, N = 5, k = 10, and m = 3.
Algorithm 1: IDS: Iterative Distractor Selection
Data: M_init (initial mask), M_e (exemplar set), M_acc (accepted masks), C_acc (accepted clicks), N (maximum iterations)
Result: M_acc, C_acc
itr ← 0; M_e = M_init; M_acc ← {M_init}; C_acc ← ∅;
while itr ≤ N do
    generate a heatmap using M_e in the CPN;
    apply NMS to obtain clicks C_new;
    remove clicks from C_new if they fall within M_acc;
    C'_new ← top-k clicks with confidence ≥ 0.2;
    C_acc ← C_acc + C'_new;
    pass C_acc to the 1C-DSN and run the PVM to obtain M_new;
    M_acc ← M_new;
    M_e ← top-m confident masks;
end
4. Dataset Preparation Public Datasets We conducted single-click segmentation experiments on the public COCO Panoptic and LVIS datasets. We pre-trained our model on the COCO Panoptic dataset, which contains 118,287 images, and fine-tuned it on the LVIS dataset, which contains 1,270,141 objects across 99,388 images. Since there is some overlap between the LVIS validation set and the COCO train set, we only used 4,809 images with 50,672 instances from the original LVIS validation set for our evaluation. Self-Collected Distractor Datasets To gain a better understanding of the distractors in users' photos and improve the quality of our masking, we curated and annotated a dataset of distractor images. We began by creating a list of common distractors found in photos, such as distracting people, shadows, lens flare, cigarette butts on the floor, construction cones, and so on. We then collected images from various public image websites, including but not limited to Flickr, Unsplash, and Pixabay, among others. To annotate our dataset of distractor images, we recruited three professional photographers to manually select and mask the distracting regions in each image that affect its overall aesthetic appeal. We found that having three annotators was sufficient to label all the distractors in a given photo. In total, our dataset contains 21,821 images, of which we used 20,790 images containing 178,815 distracting instances for training, and 1,031 images containing 8,956 instances for validation and evaluation. We have named our distractor dataset "Distractor20K" and the evaluation dataset "DistractorReal-Val" in this paper. Data Synthesis for Similar Distractors Mining During the process of collecting our dataset, we observed that it is quite common for small, similar distractors (like bird droppings on the ground) to coexist in a single photo. However, our annotators may not be able to completely mask them. To our knowledge, there is no public dataset that includes annotations for these repeated distractors that we could use to train and evaluate our CPN model. Therefore, we propose a procedure to synthesize and generate similar distractors. This approach is inspired by [8], which demonstrated that copy-pasting can be an effective data augmentation technique for instance segmentation tasks. To synthesize additional distractor data for our "Distractor20K" dataset, we utilized instances from the LVIS dataset and adopted the Mask2Former [5] approach to obtain semantic segmentation masks of the images. We only synthesized distractors within the same semantic regions, including ground, ceiling, wall, sky, sea, and river, as candidate regions. We first chose to copy objects that were either existing annotated distractors within those can
Chen_Towards_Modality-Agnostic_Person_Re-Identification_With_Descriptive_Query_CVPR_2023 | Abstract Person re-identification (ReID) with descriptive query (text or sketch) provides an important supplement for general image-image paradigms, which is usually studied in a single cross-modality matching manner, e.g., text-to-image or sketch-to-photo. However, without a camera-captured photo query, it is uncertain whether the text or sketch is available or not in practical scenarios. This motivates us to study a new and challenging modality-agnostic person re-identification problem. Towards this goal, we propose a unified person re-identification (UNIReID) architecture that can effectively adapt to cross-modality and multi-modality tasks. Specifically, UNIReID incorporates a simple dual-encoder with task-specific modality learning to mine and fuse visual and textual modality information. To deal with the imbalanced training problem of different tasks in UNIReID, we propose a task-aware dynamic training strategy in terms of task difficulty, adaptively adjusting the training focus. Besides, we construct three multi-modal ReID datasets by collecting the corresponding sketches from photos to support this challenging study. The experimental results on three multi-modal ReID datasets show that our UNIReID greatly improves the retrieval accuracy and generalization ability on different tasks and unseen scenarios. | 1. Introduction Person re-identification [42] involves using computer vision techniques to identify pedestrians in video and still images. Given a monitored pedestrian image/video or a text description, ReID aims to retrieve all images/videos of that pedestrian across devices. ReID is widely used in intelligent video surveillance, intelligent security, and other fields. *Corresponding Author: Mang Ye (yemang@whu.edu.cn). (Figure 1 panels: (a) existing methods with separate models for Query 1: Text, Query 2: Sketch, and Query 3: Text+Sketch under Paradigms 1-3 (Text-to-RGB, Sketch-to-RGB, Text+Sketch-to-RGB); (b) our method with a unified ReID model handling Paradigm 4: Uncertain Query-to-RGB.) Figure 1. Illustration of our idea. Existing ReID methods [17,24,43] yield different paradigm models for different descriptive queries. However, it is uncertain whether the text or sketch is available or not in practical scenarios. Our unified ReID model enables target pedestrian search under uncertain query inputs. The green boxes match the query. The existing ReID research includes single-modal ReID [4,13,21,47,48] and cross-modal ReID [2,17,41]. The former is restricted to retrieval between RGB images, whereas the latter allows retrieval of RGB images based on different query modalities (e.g., IR, text, or sketch). Particularly, the appearance of a suspect may only be described verbally in many criminal cases. Person re-identification with descriptive query (text or sketch) is well suited for these scenarios and has greater research value for grounded applications of ReID models. Most existing text-based or sketch-based person re-identification methods rely on only one of the modalities as a query set to achieve pedestrian retrieval. Although the text modality is relatively easy to access, it fails to accurately depict visual appearance details, i.e., coarse-grained representations [29].
As the saying goes, a picture is worth a thousand words, and the sketch image of a pedestrian is closer to the visual modality, showing specific information about the target pedestrian, such as structural information. Since each task is trained independently, as shown in Fig. 1(a), it is impossible to generalize to unseen modalities. For example, a ReID model trained on text-based datasets is basically invalid on sketch-based scenes, and vice versa. This greatly limits the applicability of existing methods for practical model deployment. Meanwhile, multi-modal fusion has been proven to be an important technique to improve model accuracy in computer vision, e.g., multi-modal face anti-spoofing [11], multi-modal generation and understanding [18]. Recently, Zhai et al. [43] first proposed to implement multi-modal ReID using both sketch and text modalities as the query. Their experimental results indicate that the combination of text and sketch modalities enhances the performance of the ReID model. However, this method adopts independent text and image pre-training parameters for multi-modal representation learning, which has poor generalizability and yields low accuracy. More importantly, it is often uncertain whether text or sketch is available in a real scenario, i.e., modality deficit problems often arise when a specific modality is not available. Due to the independent training of tasks, existing cross-modal or multi-modal ReID methods find it difficult to handle this problem. A smarter surveillance system should be capable of handling various modalities of information efficiently. Therefore, in this paper, we propose the concept of modality-agnostic person re-identification to handle the modality uncertainty issue. Specifically, we design a unified person re-identification architecture (UNIReID) in Fig. 1(b), which is effective in both cross-modal and multi-modal applications. The greatest challenge in unifying learning across modalities is to mine a shared semantic representation space. At first, we propose a task-specific modality learning scheme to support individual task learning. Essentially, this scheme considers unified person re-identification as a set of retrieval tasks involving Text-to-RGB, Sketch-to-RGB, and Text+Sketch-to-RGB. Inspired by CLIP [25], which is a pre-trained model for matching image and text modalities, UNIReID uses a simple dual-encoder for visual modality and textual modality feature extraction and fusion. All visual modalities share a single image encoder. For multi-modal ReID, we fuse the sketch and text modalities into a single query by a simple feature-level summation. Task-specific metric learning explicitly minimizes the feature distances between various types of query samples and gallery samples to learn modality-shared feature representations. In addition, considering that tasks have varying difficulties, unified ReID faces an additional challenge in balancing learning among tasks, which may result in the overfitting of individual tasks. To handle this problem, we design a task-aware dynamic training strategy that adaptively adjusts for training imbalances between tasks.
The rationale for dynamic training is to modulate the loss contribution of different retrieval tasks according to the training difficulty of the tasks (i.e., prediction confidence). Our dynamic training strategy improves generalization capability by focusing training on difficult tasks. Finally, a cross-modality interaction is designed to align sketch and text feature representations. In view of the differences between sketch and text modal features, we minimize the similarity distribution distances between sketch-RGB and text-RGB pairs to align modality information for modality fusion. With the help of rich multi-modal data, our model achieves mutual enhancement of tasks and improves the robustness against diverse query modality variations. To facilitate the modality-agnostic ReID study, we construct three multi-modal ReID datasets. Concretely, based on the text-based ReID datasets, namely CUHK-PEDES [17], ICFG-PEDES [7], and RSTPReid [50], we collect the sketch modality for each identity. Considering that sketching by hand is time-consuming and labor-intensive, we propose to generate the sketch modality for each identity from the photo modality. The detailed collection information for the datasets can be found in Section 3. The main contributions of this paper are listed below: • We start the first attempt to investigate modality-agnostic person re-identification with the descriptive query, which provides a flexible solution to deal with query uncertainty in real-world scenarios. • We introduce a novel unified person re-identification (UNIReID) architecture based on a dual-encoder to jointly integrate cross-modal and multi-modal task learning. With task-specific modality learning and task-aware dynamic training, UNIReID enhances generalization ability across tasks and domains. • We contribute three multi-modal ReID datasets to support unified ReID evaluation. Extensive experiments on both multi-modal matching and generalized cross-modality matching have verified the advantage of UNIReID, achieving much higher accuracy than existing counterparts. |
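A minimal sketch of the query construction described in the record above: a sketch feature and a text feature, fused by feature-level summation into a third query, each compared against RGB gallery features. The confidence-based task weighting at the end is only an illustrative stand-in, since the exact dynamic-training formula is not given here; all function names are assumptions.

```python
import torch
import torch.nn.functional as F

def task_queries(sketch_feat, text_feat):
    """Build the three query embeddings: text-only, sketch-only, and the
    fused sketch+text query (feature-level summation, as described above)."""
    return {"text": text_feat, "sketch": sketch_feat, "fused": sketch_feat + text_feat}

def retrieval_distances(queries, gallery_feat):
    # Cosine distance between each (normalized) query and the RGB gallery features.
    g = F.normalize(gallery_feat, dim=-1)
    return {k: 1.0 - F.normalize(q, dim=-1) @ g.t() for k, q in queries.items()}

# Hypothetical dynamic weighting: harder tasks (lower prediction confidence)
# receive a larger loss weight. This is an illustrative choice, not the paper's formula.
def task_weights(confidences):
    w = {k: 1.0 - c for k, c in confidences.items()}
    s = sum(w.values())
    return {k: v / s for k, v in w.items()}
```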
Han_Learning_a_3D_Morphable_Face_Reflectance_Model_From_Low-Cost_Data_CVPR_2023 | Abstract Modeling non-Lambertian effects such as facial specularity leads to a more realistic 3D Morphable Face Model. Existing works build parametric models for diffuse and specular albedo using Light Stage data. However, only diffuse and specular albedo cannot determine the full BRDF. In addition, the requirement of Light Stage data is hard to fulfill for the research communities. This paper proposes the first 3D morphable face reflectance model with spatially varying BRDF using only low-cost publicly-available data. We apply linear shininess weighting into parametric modeling to represent spatially varying specular intensity and shininess. Then an inverse rendering algorithm is developed to reconstruct the reflectance parameters from non-Light Stage data, which are used to train an initial morphable reflectance model. To enhance the model's generalization capability and expressive power, we further propose an update-by-reconstruction strategy to finetune it on an in-the-wild dataset. Experimental results show that our method obtains decent rendering results with plausible facial specularities. Our code is released here. | 1. Introduction 3D Morphable Face Models (3DMM) [4, 19] have attracted much attention in the past two decades, as they provide a powerful and compact statistical prior of 3D face geometry and appearance with dense point-to-point correspondence to various downstream applications like face reconstruction [14,22,52,55,56], rendering [13,53,57,58,70], and animation [3,7,9,20,21,68]. Existing works [52,54,55] have demonstrated promising results for improving the generalization capability and expressive power of 3DMM under the assumption that faces are Lambertian surfaces. However, it is still challenging to model non-Lambertian effects such as facial specularity in 3DMM, which can lead to a more realistic face model. A few recent works [37, 50] involve non-Lambertian facial reflectance in the morphable face model. Using a Light Stage [11, 24, 39], they capture diffuse and specular albedo maps of tens of participants. Then, they model the diffuse and specular albedo by training a PCA model [50] or a deep generative network [37] on the acquired data. However, only the diffuse and specular albedo cannot determine the complete Bidirectional Reflectance Distribution Function (BRDF). Thus, other works [15–17] set the remaining reflectance parameters (e.g., the specular exponent for the Blinn-Phong BRDF [5], roughness for the Torrance-Sparrow BRDF [59]) of all face vertices to a reasonable value to characterize specular shininess and obtain the complete BRDF. As shown in Figure 7, these spatially uniform parameters lead to unpleasing rendering results since face reflectance is inherently spatially varying [65]. Besides, the requirement of Light Stage data is hard to fulfill since building a Light Stage is quite difficult, and no publicly available Light Stage dataset is sufficient to construct a 3DMM. To overcome these limitations, we propose and train the first morphable face reflectance model with spatially varying BRDF from low-cost publicly-available data.
Inspired by previous works [41,42], we represent face reflectance as a Lambertian BRDF combined with the linear combination of several Blinn-Phong BRDFs corresponding to different predefined specular exponents. Thus, the reflectance parameters of each face vertex include an RGB color for the Lambertian BRDF and a set of weights for the Blinn-Phong BRDFs. As illustrated in Figure 2, our representation can naturally modulate specular intensity and shininess by adjusting the absolute and relative scales of the linear combination weights, respectively. Compared to previous works [37,50] not modeling specular shininess, we define a complete BRDF by this representation in 3DMM. Compared to the traditional Blinn-Phong BRDF that models specular intensity and shininess in a nonlinear formulation [5], our linear representation (Equation (2)) makes it much easier to reconstruct the reflectance parameters from recorded images. With this linear reflectance representation, we develop an inverse rendering approach to estimate the spatially varying reflectance parameters for the 128 selected identities in Multi-PIE [25], a public dataset with face images captured under controlled camera views and light directions. Then, we learn a PCA model for the estimated reflectance parameters as our initial morphable face reflectance model. Considering that the Multi-PIE dataset only contains 128 identities, which is far from sufficient to capture the variability of human faces, we propose to finetune the initial model on a large-scale in-the-wild dataset, FFHQ [29], to improve its generalization capability and expressive power. As the inputs are in-the-wild images with unknown lighting information, it is not easy to reconstruct accurate reflectance from them. Our key observation is that, on the one hand, we already have an initial parametric reflectance model that can better formulate the reflectance reconstruction from in-the-wild images. On the other hand, the reconstructed reflectance from in-the-wild data could provide feedback to enhance the face prior knowledge in our morphable reflectance model. Based on this observation, we jointly reconstruct the face reflectance coefficients and update the parameters of our morphable face reflectance model (the mean and bases). Another challenge here is to predict high-order spherical harmonics (SH) lighting [44] for in-the-wild images, which is crucial for updating the high-frequency information of our non-Lambertian reflectance model [45]. To solve this problem, we build another PCA model for real-world environment lighting in SH coefficients space, which largely reduces the search space of the high-order SH coefficients. During face reconstruction, we first predict the parameters of the PCA lighting model and then retrieve the high-order SH coefficients from it. Finally, the in-the-wild images are well reconstructed with our parametric reflectance model, and the model itself is also updated gradually in this process to achieve high generalization capability and expressive power. In summary, our contributions include: • We propose the first 3D morphable face reflectance model with spatially varying BRDF and a technique to train the model with low-cost publicly-available data. • We apply linear shininess weighting into parametric face modeling to represent spatially varying specular shininess and intensity and ease the process of reconstructing reflectance from images.
• We propose an update-by-reconstruction strategy to finetune our face reflectance model on an in-the-wild dataset, improving its generalization capability and expressive power. |
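As a concrete illustration of the linear shininess ("shiness") weighting described above, the per-vertex reflectance can be written as a Lambertian term plus a fixed bank of Blinn-Phong lobes with predefined exponents. The following is a hedged sketch in our own notation; the normalization constant and the exponent set n_1 < ... < n_K are illustrative assumptions and are not copied from the paper's Equation (2):

f_r(\omega_i,\omega_o) \;=\; \frac{\mathbf{a}}{\pi} \;+\; \sum_{k=1}^{K} w_k\,\frac{n_k+2}{2\pi}\,\max(\langle \mathbf{n},\mathbf{h}\rangle, 0)^{\,n_k}, \qquad \mathbf{h} \;=\; \frac{\omega_i+\omega_o}{\lVert \omega_i+\omega_o\rVert},

where the per-vertex parameters are the RGB albedo a and the nonnegative weights w_1, ..., w_K. Scaling all w_k jointly changes the specular intensity, while shifting weight toward lobes with larger n_k sharpens the highlight (higher shininess); both effects are linear in the parameters, which is what makes reconstructing them by inverse rendering comparatively easy.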
Cao_Recurrent_Homography_Estimation_Using_Homography-Guided_Image_Warping_and_Focus_Transformer_CVPR_2023 | Abstract We propose the Recurrent homography estimation framework using Homography-guided image Warping and Focus transformer (FocusFormer), named RHWF . Both be-ing appropriately absorbed into the recurrent framework, the homography-guided image warping progressively en-hances the feature consistency and the attention-focusing mechanism in FocusFormer aggregates the intra-inter cor-respondence in a global →nonlocal →local manner. Thanks to the above strategies, RHWF ranks top in accuracy on a variety of datasets, including the challenging cross-resolution and cross-modal ones. Meanwhile, benefiting from the recurrent framework, RHWF achieves parame-ter efficiency despite the transformer architecture. Com-pared to previous state-of-the-art approaches LocalTrans and IHN, RHWF reduces the mean average corner error (MACE) by about 70% and 38.1% on the MSCOCO dataset, while saving the parameter costs by 86.5% and 24.6%. Sim-ilar to the previous works, RHWF can also be arranged in 1-scale for efficiency and 2-scale for accuracy, with the 1-scale RHWF already outperforming most of the previous methods. Source code is available at https://github. com/imdumpl78/RHWF . | 1. Introduction Homography is defined as a global projective mapping between two images captured from different perspectives. It has been widely applied in computer vision tasks rang-ing from the monocular camera system to the multi-camera system, such as image/video stitching [4, 17, 19], multi-scale gigapixel photography [3,34], multispectral image fu-sion [41,49], planar object tracking [44,45], SLAM [14,31], and GPS-denied UA V localization [18, 48]. Deep homography estimation was introduced in the pi-oneer [12] that uses a VGG-style network to predict the homography. Many following works have been presented to further improve the estimation accuracy, including cas-*Corresponding author. A A (a) Pre-warpingA (b) No warping A (c) Homography guided warping ^Hn^Hn A A A AA (d) Pure global attention (e) Pure local attention (f) Focus attentionFigure 1. Illustration of the difference of warping and attention strategies in RHWF and previous approaches. Our RHWF deploys (c) and (f). Please see text for details. cading multiple similar networks [15, 21, 22, 34] or design-ing iterable architectures such as the IC-LK iterator [7, 48] and the trainable CNN iterator [6]. The cascading strategy has improved the accuracy to some extent but is limited by the fixed number of networks. Worse still, stacking more networks cannot guarantee better accuracy [22]. The IC-LK (inverse compositional Lucas-Kanade [1]) based deep methods use deep feature extractor combined with the un-trainable iterator to improve the estimation performance, but is limited by the theoretical drawback of the untrainable iterator [6, 32]. IHN [6] avoids this limitation by designing an iterable and trainable network architecture, which fur-ther improves the estimation accuracy. However, the fea-ture inconsistency caused by the homography deformation has long been neglected in most current works. It has been well investigated in [9] that standard convo-lution is unable to keep the equivariance under the spatial transformation except translation. However, besides trans-lation, homography is composed of rotation, scaling, shear-ing, aspect ratio, and perspective transformations [37, 43], which leads to the inconsistency of the features from cor-responding points [25]. 
The inconsistency will hinder the homography estimation performance. Many efforts have been made to acquire the transformation-equivariance by either applying group convolutions in the network [9] or pre-warping [16, 20, 25, 43] the input image. But the above strategies need to exhaustively explore the possible trans-formation dimensions and degrees, as is illustrated in Fig. 1a, which is redundant in computation when coping with This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 9833 the homography transformation with a DOF of 6. To cope with the above problem, homography-guided image warping, as shown in Fig. 1c, is adopted in our pro-posed recurrent homography estimation framework, dubbed RHWF. We note that homography-guided image warping has already been unconsciously employed in some of the previous cascading-based works [15, 22, 34]. However, the reason, effect, and technique of using homography-guided image warping, especially absorbing it properly in the recurrent framework, hasn’t ever been investigated. Different from the previous works, our RHWF combines the homography-guided image warping with the recurrent trainable network, which significantly improves the ac-curacy without the cost of network parameters. Com-pared to the previous cascading-based SOTA method Lo-calTrans [34], RHWF reduces the mean average corner er-ror (MACE) by about 70% on the MSCOCO dataset, while reducing the parameter cost of 86.5%. On the other side, transformer architecture [8, 13] has demonstrated its superior ability in computer vision and image processing tasks. The transformer architecture has also been introduced in the homography estimation task as in [21,34]. Following their pioneer exploration, we propose a transformer structure, named FocusFormer, that is pretty compatible with the homography-guided image warping and the recurrent framework. As illustrated in Fig. 1d, Fig. 1e, and Fig. 1f, unlike the attention mechanism in previous works that is pure global or local, FocusFormer employs the attention focusing mechanism. The scope of the attention mechanism shrinks along with the recurrence procedure, which captures the intra/inter correspondence information in a global →nonlocal →local1manner. We note that com-pared to the most widely adopted global attention mecha-nism, the attention-focusing mechanism can save computa-tion costs while improving the homography estimation per-formance simultaneously. We introduce the homography-guided image warping and FocusFormer into the recurrent homography estima-tion framework, named RHWF. The three parts, i.e.,recur-rent estimation, homography-guided image warping, and the FocusFormer cooperate well, with each part facilitat-ing the others. We evaluate RHWF on a variety of datasets including common RGB image data [24], cross-resolution data [34] and cross-modal data [6, 48], on which it outper-forms all other competitors by a large gap. We show that though adopting the transformer, our RHWF reduces the pa-rameter cost of 24.6% while achieving the accuracy gain of 38.1% (MSCOCO) and 34.1% (GoogleMap), compared to the previous SOTA method IHN [6]. 
In summary, our contributions are as follows: (1) We propose a novel Recurrent homography estimation framework using Homography-guided image Warping and FocusFormer, dubbed RHWF. RHWF ranks top on a variety of datasets, including challenging scenes such as the cross-resolution and cross-modal ones. The recurrent estimation, homography-guided image warping, and FocusFormer facilitate the functionality of each other. (2) The reason, effect, and technique of using homography-guided image warping properly in the recurrent framework are fully investigated for the first time. With the assistance of homography-guided image warping, the extracted features gradually converge into consistency, hence boosting the homography estimation accuracy. (3) The FocusFormer is proposed to be the fundamental block of the recurrent homography estimation. The attention mechanism in FocusFormer works in a global → nonlocal → local manner, which significantly saves computational cost while achieving better performance. 1 As in most of the works that refer to "nonlocal" [5], it denotes a relatively large neighborhood around a pixel. |
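To make the interplay of recurrent estimation and homography-guided warping concrete, the Python sketch below shows the generic warp-then-refine loop that the description above implies. Here estimate_residual is a hypothetical stand-in for the FocusFormer-based update module (the text does not specify it at this level), and the composition and normalization details are our assumptions.

import cv2
import numpy as np

def recurrent_homography(img_src, img_tgt, estimate_residual, n_iters=6):
    # Accumulated homography mapping img_src onto img_tgt.
    H = np.eye(3, dtype=np.float64)
    h, w = img_tgt.shape[:2]
    for _ in range(n_iters):
        # Homography-guided warping: bring the source progressively closer
        # to the target so that extracted features become more consistent.
        warped = cv2.warpPerspective(img_src, H, (w, h))
        # Residual (incremental) homography predicted from the warped pair.
        dH = estimate_residual(warped, img_tgt)
        H = dH @ H
        H /= H[2, 2]  # keep the usual H[2, 2] = 1 normalization
    return H

Because the source image is re-warped at every step, the residual estimator only ever sees nearly aligned inputs, which is the feature-consistency argument made above.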
Agaram_Canonical_Fields_Self-Supervised_Learning_of_Pose-Canonicalized_Neural_Fields_CVPR_2023 | Abstract Coordinate-based implicit neural networks, or neural fields, have emerged as useful representations of shape and appearance in 3D computer vision. Despite advances, however, it remains challenging to build neural fields for categories of objects without datasets like ShapeNet that provide “canonicalized” object instances that are consis-tently aligned for their 3D position and orientation (pose). We present Ca nonical Fi eld Network ( CaFi-Net ), a self-supervised method to canonicalize the 3D pose of instances from an object category represented as neural fields, specif-ically neural radiance fields (NeRFs). CaFi-Net directly learns from continuous and noisy radiance fields using a Siamese network architecture that is designed to extract equivariant field features for category-level canonicaliza-tion. During inference, our method takes pre-trained neu-ral radiance fields of novel object instances at arbitrary 3D pose and estimates a canonical field with consistent 3D pose across the entire category. Extensive experiments on a new dataset of 1300 NeRF models across 13 object categories show that our method matches or exceeds the performance of 3D point cloud-based methods. | 1. Introduction Neural fields [59]—coordinate-based neural networks that implicitly parameterize signals—have recently gainedsignificant attention as representations of 3D shape [6, 22, 28], view-dependent appearance [24, 42], and motion [25]. In particular, neural radiance fields (NeRFs) [24], have been successfully used in problems such as novel view synthe-sis [3, 4, 66], scene geometry extraction [55, 61], capturing dynamic scenes [19, 29, 30, 33, 52], 3D semantic segmenta-tion [53, 68], and robotics [1, 13, 21]. Despite the progress, it remains challenging to build neu-ral fields that represent an entire category of objects. Pre-vious methods sidestep the problem by overfitting on a sin-gle instance [24], or learning [22, 28, 63] on datasets like ShapeNet [5] that contain objects that are manually canon-icalized – oriented consistently for 3D position and orien-tation (3D pose) across a category. This strong supervision makes it easier to learn over categories, but limits their ap-plication to data that contain these labels. Recent work has proposed methods for self-supervised learning of 3D pose canonicalization [40, 43, 47], however, these operate on 3D point clouds, meshes, or voxels – but not neural fields. In this paper, we present Ca nonical Fi eld Network (CaFi-Net ), a self-supervised method for category-level canonicalization of the 3D position and orientation of ob-jects represented as neural fields , specifically neural radi-ance fields. Canonicalizing neural fields is challenging be-cause, unlike 3D point clouds or meshes, neural fields are continuous , noisy, and hard to manipulate since they are pa-rameterized as the weights of a neural network [60]. To ad-dress these challenges, we first extend the notion of equiv-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 4500 ariance to continuous vector fields and show how networks for processing 3D point clouds [50] can be extended to op-erate directly on neural radiance fields. 
We design CaFi-Net as a Siamese network that contains layers to extract equivariant features directly on vector fields. These field features are used to learn a canonical frame that is consis-tent across instances in the category. During inference, our method takes as input neural radiance fields of object in-stances from a category at arbitrary pose and estimates a canonical field that is consistent across the category. To handle noise in radiance fields from NeRF, our method in-corporates density-based feature weighting and foreground-background clustering. Our approach learns canonicalization without any super-vision labels on a new dataset of 1300 pre-trained NeRF models of 13 common ShapeNet categories in arbitrary 3D pose (see Figure 1). We introduce several self-supervision loss functions that encourage the estimation of a consistent canonical pose. In addition, we present extensive quantita-tive comparisons with baselines and other methods on stan-dardized canonicalization metrics [37] over 13 object cat-egories. In particular, we show that our approach matches or exceeds the performance of 3D point cloud-based meth-ods. This enables the new capability of directly operating on neural fields rather than converting them to point clouds for canonicalization. To sum up, we contribute: • Ca nonical Fi eld Network ( CaFi-Net ), the first method for self-supervised canonicalization of the 3D position and orientation (pose) of objects represented as neural radiance fields. • A Siamese neural network architecture with equivari-ant feature extraction layers that are designed to di-rectly operate on continuous and noisy radiance fields from NeRF. • A public dataset of 1300 NeRF models from 13 ShapeNet categories including posed images, and weights for evaluating canonicalization performance. |
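As a rough illustration of the density-based feature weighting mentioned above (and only that part; the equivariant Siamese layers and foreground-background clustering are not reproduced), one can convert NeRF densities sampled at query points into occupancy-like weights and use them to pool per-point field features, so that empty or noisy space contributes little. The function below is a hedged PyTorch sketch under that assumption.

import torch

def density_weighted_pool(features, sigmas, deltas):
    # features: (N, D) field features at N query points
    # sigmas:   (N,) volume densities returned by the radiance field
    # deltas:   (N,) sample spacings used to turn density into opacity
    alpha = 1.0 - torch.exp(-sigmas * deltas)        # occupancy-like weight per point
    w = alpha / alpha.sum().clamp_min(1e-8)          # normalize to a distribution
    return (w.unsqueeze(-1) * features).sum(dim=0)   # (D,) density-weighted descriptor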
Bannur_Learning_To_Exploit_Temporal_Structure_for_Biomedical_Vision-Language_Processing_CVPR_2023 | Abstract Self-supervised learning in vision–language processing (VLP) exploits semantic alignment between imaging and text modalities. Prior work in biomedical VLP has mostly relied on the alignment of single image and report pairs even though clinical notes commonly refer to prior im-ages. This does not only introduce poor alignment be-tween the modalities but also a missed opportunity to ex-ploit rich self-supervision through existing temporal con-tent in the data. In this work, we explicitly account for prior images and reports when available during both train-ing and fine-tuning. Our approach, named BioViL-T, uses a CNN–Transformer hybrid multi-image encoder trained jointly with a text model. It is designed to be versatile to arising challenges such as pose variations and miss-ing input images across time. The resulting model excels on downstream tasks both in single-and multi-image se-tups, achieving state-of-the-art (SOTA) performance on (I) progression classification, (II) phrase grounding, and (III) report generation, whilst offering consistent improvements on disease classification and sentence-similarity tasks. We release a novel multi-modal temporal benchmark dataset, MS-CXR-T, to quantify the quality of vision–language rep-resentations in terms of temporal semantics. Our experi-mental results show the advantages of incorporating prior images and reports to make most use of the data. | 1. Introduction Self-supervision from image–text pairs has enabled the development of flexible general-purpose vision–language models both in the general domain [40, 53, 77] and for specialised domains such as biomedicine and radiology ∗These authors contributed equally. †Corresponding author: ozan.oktay@microsoft.com "pleural fluid in the right base" "lung nodule remains unchanged " "pleural ef fusion is worsening "Image encoder Text encoderImage encoder Text encoder ✓× ✓× ? ✓×× ✓× ? ×× ×× ? ?? ?× Prior image Current imageSpatiotemporal modelling Spatial modellingCurrent image InfoNCE affinity matrix Current image Prior image (if available) Clinical report Existing methods Proposed method InfoNCE affinity matrix Clinical report (b)(a) (c) (d)Figure 1. (a) Existing visual–language pre-training approaches [9, 32, 81] often use only a single image for contrastive learning (e.g., InfoNCE [49]). (b) In such settings, discarding the temporal connectivity of images limits the alignment of image–text pairs as shown with the affinity matrix, leading to suboptimal pre-training and missed opportunity to create additional model supervision for free. (c, d) Our approach exploits this domain knowledge by learn-ing to incorporate a series of images and correlate them to reports, leading to pre-trained models that can generalise to a wider range of downstream tasks whilst achieving SOTA performance. [9, 32, 81]. Vision–language processing (VLP) has shown that cross-modal supervision can provide a richer signal for training both image [19] and text [9] models. However, the success of VLP relies on paired samples sharing semantics, i.e., given an image and text pair, the text should describe the image with minimal extraneous detail [15, 16, 35]. In this regard, VLP in biomedicine and radiology poses a distinctive challenge, as reports routinely include compar-isons to prior imaging studies [3, 47, 57]. Without knowl-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. 
Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 15016 edge of this prior image1, temporal information in the text modality, e.g. “Pneumonia is improving”, could pertain to any image containing “Pneumonia”, producing ambiguity during contrastive training (Figure 1). Despite this, the ex-isting VLP work to date considers alignment between only single images and reports [9,32,46,81], going so far as to re-move temporal content from reports in training data to pre-vent ‘hallucinations’ in downstream report generation [54]. However, temporal information can provide complementary self-supervision, solely by exploiting existing structure, and without requiring any additional data. In this work, we neither ignore nor remove temporal in-formation in the text modality, but explicitly account for it during pre-training. Rather than treating all image–report pairs in the dataset as independent, we exploit temporal cor-relations by making prior images available for comparison to a given report. To learn from this structure, we develop a temporal VLP pre-training framework named BioViL-T . A core component is its new multi-image encoder that can handle the absence of prior images and potential spatial misalignment between images across time. BioViL-T takes into account prior images where available, removing cross-modal ambiguity as illustrated in Fig. 1. Linking multi-ple images during pre-training proves beneficial to both im-age and text models: we report state-of-the-art (SOTA) per-formance on both temporal image classification and report generation. In the latter case, we show that prefixing the prior report substantially increases performance, again re-flecting the value of prior information. We emphasise that the benefit is not restricted to temporal downstream tasks: our approach also achieves SOTA on non-temporal tasks of pneumonia detection [60] and phrase grounding [10], un-derscoring the value of a cleaner learning signal during VLP without needing to modify or add to the training dataset. Our contributions can be summarised as follows: • We introduce a novel pre-training framework called BioViL-T . It leverages the temporal relationship of sam-ples to self-supervise VLP models, making commonly used biomedical VLP models (e.g., [9,32,81]) more ap-plicable to a wider range of downstream tasks without compromising performance on existing benchmarks. • We develop a generic multi-image encoder that handles missing image inputs and incorporates longitudinal in-formation without requiring explicit image registration. • We achieve SOTA results in chest X-ray (CXR) report generation, temporal image classification, and phrase grounding downstream benchmarks by accounting for prior context in self-supervised training and fine-tuning. • We release a new multimodal benchmark dataset, MS-CXR-T , curated by an expert radiologist. It enables 1In the MIMIC-CXR v2 dataset [36], around 40% of reports explicitly reference a previous image. See Appendix B for details.benchmarking of CXR VLP models in terms of tempo-ral semantics extracted from image and text data. |
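The temporal pre-training idea above can be illustrated with a standard symmetric InfoNCE objective in which the image side is a fusion of the current and (optional) prior study. The PyTorch sketch below is our own simplification: fuse_images stands in for BioViL-T's CNN–Transformer multi-image encoder head, and the handling of missing priors, local alignment losses, and report prefixing are not reproduced here.

import torch
import torch.nn.functional as F

def temporal_infonce(cur_emb, prior_emb, report_emb, fuse_images, tau=0.07):
    # cur_emb, prior_emb: per-study image embeddings; report_emb: text embeddings (B, D)
    v = F.normalize(fuse_images(cur_emb, prior_emb), dim=-1)  # fused image representation
    t = F.normalize(report_emb, dim=-1)
    logits = v @ t.t() / tau                                  # (B, B) similarity matrix
    labels = torch.arange(v.size(0), device=v.device)         # diagonal = matched pairs
    # Symmetric contrastive loss over image-to-text and text-to-image directions.
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

The point of the fusion is that a report sentence such as "Pneumonia is improving" is only unambiguous relative to the fused (current, prior) representation, which removes the cross-modal ambiguity discussed above.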
Jiang_Cross-Modal_Implicit_Relation_Reasoning_and_Aligning_for_Text-to-Image_Person_Retrieval_CVPR_2023 | Abstract Text-to-image person retrieval aims to identify the tar-get person based on a given textual description query. The primary challenge is to learn the mapping of visual and tex-tual modalities into a common latent space. Prior works have attempted to address this challenge by leveraging sep-arately pre-trained unimodal models to extract visual and textual features. However, these approaches lack the nec-essary underlying alignment capabilities required to match multimodal data effectively. Besides, these works use prior information to explore explicit part alignments, which may lead to the distortion of intra-modality information. To alle-viate these issues, we present IRRA: a cross-modal Implicit Relation Reasoning and Aligning framework that learns re-lations between local visual-textual tokens and enhances global image-text matching without requiring additional prior supervision. Specifically, we first design an Implicit Relation Reasoning module in a masked language model-ing paradigm. This achieves cross-modal interaction by integrating the visual cues into the textual tokens with a cross-modal multimodal interaction encoder. Secondly, to globally align the visual and textual embeddings, Similar-ity Distribution Matching is proposed to minimize the KL divergence between image-text similarity distributions and the normalized label matching distributions. The proposed method achieves new state-of-the-art results on all three public datasets, with a notable margin of about 3%-9% for Rank-1 accuracy compared to prior methods. | 1. Introduction Text-to-image person retrieval aims to retrieve a person-of-interest from a large image gallery that best matches the *Corresponding Author: Mang Ye (yemang@whu.edu.cn) Image Encoder Text Encoder A woman in a Gray pair of shorts, a pair of Gray shoes and a white purse around her waist. (a) Early global matching paradigm Image Encoder Text Encoder A woman in a Gray pair of shorts , a pair of Gray shoes and a white purse around her waist. (b) Existing explicit local matching paradigm Image Encoder Text Encoder A woman in a Gray pair of shorts, a pair of Gray shoes and a [MASK] purse around her waist. white (c) Our implicit relation reasoning aided matching paradigm Alignment Attention Global Image Feature Global Text Feature Local Image Feature Local Text Feature Masked Token Figure 1. Evolution of text-to-image person retrieval paradigms. (a) Early global-matching method directly align global image and text embeddings. (b) Recent local-matching method, explicitly ex-tract and align local image and text embeddings. (c) Our implicit relation reasoning method, implicitly reasoning the relation among all local tokens to better align global image and text embeddings. text description query [30], which is a sub-task of both image-text retrieval [26, 33, 42] and image-based person re-identification (Re-ID) [15,32,45]. Textual descriptions pro-vide a natural and relatively comprehensive way to describe a person’s attributes, and are more easily accessible than im-ages. Text-to-image person retrieval thus received increas-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 
2787 ing attention in recent years, benefiting a variety of applica-tions from personal photo album search to public security. However, text-to-image person retrieval remains a chal-lenging task due to significant intra-identity variations and modality heterogeneity between vision and language. The former challenge stems from the fact that visual appear-ances of an identity differ based on pose, viewpoint, illu-mination, and other factors, while textual description varies by arbitrary descriptive order and textual ambiguity. The latter challenge is the primary issue in cross-modal tasks and is caused by inherent representation discrepancies be-tween vision and language. To tackle above two challenges, the core research problem in text-to-image person retrieval is to explore better ways to extract discriminative feature representations and to design better cross-modal matching methods to align images and texts into a joint embedding space. Early global-matching methods [53, 54] aligned im-ages and texts into a joint embedding space by designing cross-modal matching loss functions (Fig. 1 (a)). Typically, these approaches learned cross-modal alignments by using matching losses only at the end of the network, failing to achieve sufficient modality interaction in middle-level lay-ers, which are crucial to bridge the feature-level modality gap. Therefore, some later methods [5, 7, 21, 46] intro-duced the practice of local-matching by building the cor-respondence between the body parts and the textual entities (Fig. 1 (b)). Although this local matching strategy benefits retrieval performance, it introduces unavoidable noise and uncertainty in the retrieval process. Besides, the strategy requires extracting and storing multiple local part represen-tations of images and texts, computing pairwise similarity between all those representations during inference. These resource-demanding properties limit their applicability for practical large-scale scenarios. In this paper, we present IRRA: a cross-modal Implicit Relation Reasoning and Aligning framework, which per-forms global alignment with the aid of cross-modal im-plicit local relation learning. Unlike previous methods that heavily rely on explicit fine-grained local alignment, our approach implicitly utilizes fine-grained information to en-hance global alignment without requiring any additional su-pervision and inference costs (Fig. 1 (c)). Specifically, we design an Implicit Relation Reasoning module that effec-tively builds relations between visual and textual represen-tations through self-and cross-attention mechanisms. This fused representation is then utilized to perform masked lan-guage modeling (MLM) task to achieve effective implicit inter-modal and intra-modal fine-grained relation learning. MLM is generally utilized during the pre-training stage of vision-language pre-training (VLP) [6, 9, 27, 31, 41]. In this work, we make the first attempt to demonstrate the effec-tiveness of MLM in downstream fine-tuning tasks. Our main innovation is the design of a multimodal interactionencoder that can efficiently fuse visual and textual represen-tations, align cross-modal fine-grained features through the MLM task. This design helps the backbone network to ex-tract more discriminative global image-text representations without requiring additional supervision. To guide the image-text matching, commonly used loss functions include ranking loss and cross-modal projection matching (CMPM) [53] loss. 
Compared to ranking loss, the CMPM loss does not require the selection of specific triplets or margin parameter tuning. It exhibits great stabil-ity with varying batch sizes, making it widely used in text-to-image person retrieval [5, 39, 50]. However, we found that the projection in CMPM can be regarded as a variable weight that adjusts the distribution of softmax output log-its, similar to the temperature parameter [17] for knowledge distillation. Nevertheless, limited by the varying projection length, CMPM therefore cannot precisely control the pro-jection probability distribution, making it difficult to focus on hard-negative samples during model updates. To ex-plore more effective cross-modal matching objective, we further propose an image-text similarity distribution match-ing (SDM) loss. The SDM loss minimizes the KL diver-gence between the normalized image-text similarity score distributions and the normalized ground truth label match-ing distributions. Additionally, we introduce a temperature hyperparameter to precisely control the similarity distribu-tion compactness, which enables the model updates focus on hard-negative samples and effectively enlarges the vari-ance between non-matching pairs and the correlation be-tween matching pairs. To address the limitations of separate pre-trained mod-els on unimodal datasets, we leverage the Contrastive Language-Image Pre-training (CLIP) [35] as the initial-ization of our model. CLIP is pre-trained with abundant image-text pairs and has powerful underlying cross-modal alignment capabilities. Some previous approaches [13, 50] have either frozen some part of parameters or introduced only CLIP’s image encoder, which resulted in their inability to fully exploit CLIP’s powerful capabilities in image-text matching. With the proposed IRRA, we successfully trans-fer the powerful knowledge directly from the pre-trained full CLIP model and continue to learn fine-grained cross-modal implicit local relations on text-to-image person re-trieval datasets. In addition, compared to many recent meth-ods [5, 38, 50], IRRA is more efficient as it computes only one global image-text pair similarity score in the inference stage. The main contributions can be summarized as fol-lows: • We propose IRRA to implicitly utilize fine-grained in-teraction to enhance the global alignment without re-quiring any additional supervision and inference cost. • We introduce a new cross-modal matching loss named 2788 image-text similarity distribution matching (SDM) loss. It directly minimizes the KL divergence between image-text similarity distributions and the normalized label matching distributions. • We demonstrate that the full CLIP model can be ap-plied to text-to-image person retrieval and can outper-form existing state-of-the-art methods with straightfor-ward fine-tuning. Moreover, our proposed IRR module enables fine-grained image-text relation learning, al-lowing IRRA to learn more discriminative image-text representations. • Extensive experiments on three public benchmark datasets, i.e., CUHK-PEDES [30], ICFG-PEDES [7] and RSTPReid [55] show that IRRA consistently out-performs the state-of-the-arts by a large margin. |
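A hedged sketch of the SDM loss as it is described above — a KL divergence between the temperature-scaled image-text similarity distribution and the normalized label-matching distribution — is given below in PyTorch. The direction of the KL, the symmetric two-way form, and the default temperature are our assumptions rather than details quoted from the paper.

import torch
import torch.nn.functional as F

def sdm_loss(img_emb, txt_emb, person_ids, tau=0.02, eps=1e-8):
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    sim = img @ txt.t() / tau                                    # (B, B) scaled similarities
    match = (person_ids.view(-1, 1) == person_ids.view(1, -1)).float()

    def one_direction(logits, labels):
        p = F.softmax(logits, dim=1)                             # similarity distribution
        q = labels / labels.sum(dim=1, keepdim=True)             # normalized label matching
        return (p * torch.log((p + eps) / (q + eps))).sum(dim=1).mean()

    # Image-to-text and text-to-image directions.
    return one_direction(sim, match) + one_direction(sim.t(), match.t())

A smaller temperature tau sharpens the softmax, which is what lets the loss concentrate gradient on hard negatives, as argued above.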
Hu_REVEAL_Retrieval-Augmented_Visual-Language_Pre-Training_With_Multi-Source_Multimodal_Knowledge_Memory_CVPR_2023 | Abstract In this paper, we propose an end-to-end Retrieval-Augmented Visual Language Model (R EVEAL) that learns to encode world knowledge into a large-scale memory, and to retrieve from it to answer knowledge-intensive queries. REVEAL consists of four key components: the memory, the encoder, the retriever and the generator. The large-scale memory encodes various sources of multimodal world knowl-edge ( e.g. image-text pairs, question answering pairs, knowl-edge graph triplets, etc.) via a unified encoder. The retriever finds the most relevant knowledge entries in the memory, and the generator fuses the retrieved knowledge with the input query to produce the output. A key novelty in our approach is that the memory, encoder, retriever and generator are all pre-trained end-to-end on a massive amount of data. Fur-thermore, our approach can use a diverse set of multimodal knowledge sources, which is shown to result in significant gains. We show that R EVEAL achieves state-of-the-art re-sults on visual question answering and image captioning. The project page of this work is reveal.github.io . | 1. Introduction Recent large-scale models such as T5 [33], GPT-3 [4], PaLM [9], CoCa [49], Flamingo [2], BEIT-3 [43] and *This work was done when Ziniu was an intern at Google.PaLI [7] have demonstrated the ability to store substantial amounts of world knowledge, when scaled to tens of billions of parameters and trained on vast text and image corpora. These models achieve state-of-the-art results in downstream tasks such as image captioning, visual question answering and open vocabulary recognition. Yet, these models have a number of drawbacks: (i) they require massive scale, of parameters, data and computation, and (ii) they need to be re-trained every time the world knowledge is updated. To address these issues, we adopt a different approach. Instead of statically compiling world knowledge into model weights, we transform the knowledge into a key-value mem-ory through neural representation learning. Our model learns to utilize the memory for answering knowledge-intensive queries. By decoupling the knowledge memorization from reasoning, we enable our model to leverage various external sources of knowledge ( e.g., Wikipedia passages and im-ages [37], the WikiData knowledge graph [40], Web image-text pairs [5] and visual question answering data [12]). This enables the model parameters to focus on understanding the query and conducting reasoning, rather than being dedicated to memorization. Retrieval-augmented models have attracted a fair amount of attention in the fields of NLP [14, 18] and computer vi-sion [13,25]. Typically, these models often use a pre-existing single-modality backbone to encode and retrieve informa-tion from the knowledge corpus. Such approaches do not leverage all available modalities in the query and knowl-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 23369 edge corpora, and hence they might not find the information that is most helpful for generating the model output. 
A key novelty in our approach is that we encode and store various sources of multimodal world knowledge into a unified mem-ory, which the retriever can access via multimodal query encodings, to find the most relevant information from across complementary sources. Our multimodal memory and re-triever are pre-trained end-to-end together together with the rest of the model, on a massive amount of data and using diverse knowledge sources. A key challenge of pre-training the multimodal retriever end-to-end is the lack of direct supervision. There is no ground-truth indicating which knowledge entries are most helpful for answering knowledge-intensive queries. Some of the existing works in NLP [14, 23, 34] propose to acquire training signal by assessing the usefulness of each retrieved knowledge entry independently for helping language mod-elling. This approach is inefficient, as it involves estimating hundreds of retrieved knowledge entries independently, and also inaccurate as it discards the dependency between dif-ferent knowledge entries in the retrieval set. In contrast, we propose to get this training signal while simultaneously considering multiple retrieved knowledge entries, by intro-ducing an attentive fusion layer that injects retrieval score into the attention calculation procedure. This enables the retrieval module to be differentiable and jointly pre-trained with the rest of the model. In summary, our key contributions are as follows: •We are the first to propose an end-to-end pre-training paradigm that learns to index into a large-scale memory to solve knowledge-intensive visual-language tasks. •Our method can construct a large-scale memory by en-coding various sources of multimodal world knowledge, including Wikipedia passage, web images with alt-text captions, and knowledge graph triplets. •REVEAL achieves state-of-the-art performance on sev-eral knowledge-intensive visual question answering and image captioning datasets. Notably on the OKVQA benchmark, R EVEAL achieves a new state-of-the-art, 59.1%accuracy, while using order of magnitude fewer parameters than previous works. |
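One common way to obtain the differentiable retrieval signal described above is to bias the attention logits over each retrieved entry's tokens by the (log of the) retrieval score, so that the generator's loss back-propagates into the retriever. The PyTorch sketch below illustrates that general idea only; it is not the exact attentive fusion layer used in REVEAL.

import torch

def retrieval_aware_attention(q, k, v, retrieval_scores):
    # q: (B, Tq, D) query tokens; k, v: (B, Tk, D) tokens of retrieved knowledge entries
    # retrieval_scores: (B, Tk) relevance score broadcast to each knowledge token
    d = q.size(-1)
    logits = q @ k.transpose(-2, -1) / d ** 0.5                  # (B, Tq, Tk)
    bias = torch.log(retrieval_scores.clamp_min(1e-6))           # inject retrieval score
    attn = torch.softmax(logits + bias.unsqueeze(1), dim=-1)
    return attn @ v                                              # fused output (B, Tq, D)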
Eisenmann_Why_Is_the_Winner_the_Best_CVPR_2023 | Abstract International benchmarking competitions have become fundamental for the comparative performance assessment of image analysis methods. However, little attention has been given to investigating what can be learnt from these competitions. Do they really generate scientific progress? What are common and successful participation strategies? What makes a solution superior to a competing method? To address this gap in the literature, we performed a multi-center study with all 80 competitions that were conducted in the scope of IEEE ISBI 2021 and MICCAI 2021. Statistical analyses performed based on comprehensive descriptions of the submitted algorithms linked to their rank as well as the underlying participation strategies revealed common char-acteristics of winning solutions. These typically include the use of multi-task learning (63%) and/or multi-stagepipelines (61%), and a focus on augmentation (100%), im-age preprocessing (97%), data curation (79%), and post-processing (66%). The “typical” lead of a winning team is a computer scientist with a doctoral degree, five years of experience in biomedical image analysis, and four years of experience in deep learning. Two core general development strategies stood out for highly-ranked teams: the reflection of the metrics in the method design and the focus on analyz-ing and handling failure cases. According to the organizers, 43% of the winning algorithms exceeded the state of the art but only 11% completely solved the respective domain prob-lem. The insights of our study could help researchers (1) improve algorithm development strategies when approach-ing new problems, and (2) focus on open research questions revealed by this work. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 19955 Figure 1. Overview of the IEEE ISBI 2021 and MICCAI 2021 challenges. Under the umbrella of 35 challenges (each represented by a teaser image and acronym), a total of 80 competitions with dedicated leaderboards were organized, as detailed in Suppl. A-C. We used data from participants, organizers, and winners to ad-dress the key research questions of this contribution: (RQ1) What is common practice in challenge participation? ,(RQ2) Do cur-rent competitions generate scientific progress? , and (RQ3) Which strategies characterize challenge winners? | 1. Introduction Validation of biomedical image analysis algorithms is typically conducted through so-called challenges – large international benchmarking competitions that compare al-gorithm performance on datasets addressing specific prob-lems. Recent years have not only seen an increase in the complexity of the machine learning (ML) models used to solve the tasks, but also a substantial increase in the sci-entific impact of challenges, with results often being pub-lished in prestigious journals (e.g., [9, 28, 34, 41, 46]), and winners receiving tremendous attention in terms of cita-tions and (sometimes) high monetary compensation [23]. However, despite this impact, little effort has so far been in-vested in investigating what can be learnt from a challenge. Firstly, we identified a notable gap in literature regarding in-sights into current common practices in challenges as well as studies that critically analyze whether challenges actually generate scientific progress. 
Secondly, while recent work has addressed the problem of deriving meaningful conclu-sions from challenges [29, 49], it still remains largely un-clear what makes winners the best and hence what consti-tutes a good strategy for approaching a new challenge or problem. The specific questions are manifold, e.g., Which specific training paradigms are used in current winning so-lutions? ,What are the most successful strategies for achiev-ing generalization? ,Is it beneficial to involve domain ex-perts or to work in a large team? . While ablation studies on the effects of ML model component removal could be used to address some questions, they suffer from the major draw-back of only providing insights into submitted solutions, but not into underlying strategies. Furthermore, they typically only allow for investigating few aspects of a solution, and come at the cost of a substantial carbon footprint. To overcome these issues, we chose an approach that al-lowed us to systematically assess all of the aforementioned questions related to biomedical image analysis competitions within one cohesive study. To this end, members of the Helmholtz Imaging Incubator (HI) and of the Medical Im-age Computing and Computer Assisted Intervention (MIC-CAI) Special Interest Group on biomedical image analy-sis challenges designed a series of comprehensive interna-tional surveys that were issued to participants, organizers, and winners of competitions conducted within the IEEE In-ternational Symposium on Biomedical Imaging (ISBI) 2021 and the International Conference on MICCAI 2021. By col-laborating with the organizers of all 80 competitions (100%, see overview in Suppl. A-C), we were able to link algo-rithmic design decisions and challenge participation strate-gies to the outcome captured in rankings. Based on the study data, we explicitly addressed three research ques-tions: (RQ1) What is common practice in challenge partici-pation? ,(RQ2) Do current competitions generate scientific progress? , and (RQ3) Which strategies characterize chal-lenge winners? |
Deng_PointVector_A_Vector_Representation_in_Point_Cloud_Analysis_CVPR_2023 | Abstract In point cloud analysis, point-based methods have rapidly developed in recent years. These methods have re-cently focused on concise MLP structures, such as Point-NeXt, which have demonstrated competitiveness with Con-volutional and Transformer structures. However, standard MLPs are limited in their ability to extract local features effectively. To address this limitation, we propose a Vector-oriented Point Set Abstraction that can aggregate neighbor-ing features through higher-dimensional vectors. To facil-itate network optimization, we construct a transformation from scalar to vector using independent angles based on 3D vector rotations. Finally, we develop a PointVector model that follows the structure of PointNeXt. Our experimental results demonstrate that PointVector achieves state-of-the-art performance 72.3% mIOU on the S3DIS Area 5 and 78.4% mIOU on the S3DIS (6-fold cross-validation) with only 58% model parameters of PointNeXt. We hope our work will help the exploration of concise and effective fea-ture representations. The code will be released soon. | 1. Introduction Point cloud analysis is a cornerstone of various down-stream tasks. With the introduction of PointNet [25] and PointNet++ [26], the direct processing of unstructured point clouds has become a hot topic. Many point-based net-works introduced novel and sophisticated modules to ex-tract local features, e.g., attention-based methods [52] ex-plore attention mechanisms as Fig.1a with lower consump-tion, convolution-based methods [36] explore the dynamic convolution kernel as Fig.1c, and graph-based methods [39] [53] use graph to model relationships of points. The appli-cation of these methods to the feature extraction module of PointNet++ brings an improvement in feature quality. How-ever, they are somewhat complicated to design in terms of network structure. PointNeXt [28] adapts the SetAbstrac-*Co-first authors with equal contribution to refining the theory and ex-perimental design †Corresponding authorstion (SA) module of PointNet++ [26] and proposes the In-verted Residual MLP (InvResMLP) module. The simple design of MLP network achieves good results. Motivated by this work, we try to further explore the potential of the MLP structure. 濄 濇ݓଵ ݓଷݓଶ (a) Attention 濄 濇濇 濾濸瀅瀁濸濿 濜瀁瀃瀈瀇澳瀃瀂濼瀁瀇瀆 (b) Templated-based method 濄 濇 (c) Dynamic Conv 濇 ݖݔ ݕ ݖ ݔݕݖݕݔ 濄 (d) Vector Figure 1. Illustrations of the core operations of the different meth-ods. (a) The features of each point are calculated separately by applying a fixed/isotropic kernel (black arrow) like Linear layer. Then, it imparts anisotropy by weights generated from inputs. (b) The displacement vector is used to filter points that approximate the kernel pattern for features aggregation. (c) It applies unique dynamic kernels with anisotropy for each point feature. (d) Dif-ferently, we generate vector representations based on features, and the aggregation methods for vectors are anisotropic due to the di-rection of the vectors. PointNeXt uses all standard MLPs, which has insuffi-cient feature extraction capability. In addition to atten-tion and dynamic convolution mechanisms, template-based methods as Fig.1b such as 3D-GCN [19] employ relative displacement vectors to modulate the association between input points and the convolutional kernel. 
We introduce a vector representation of features to extend the range of feature variation, with the intention of more effectively regulating the connections between local features. Our approach, as in Fig. 1d, differs from template-based methods. Instead of using displacement vectors as a property of the kernel, we generate a vector representation for each neighboring point and aggregate them. Our method introduces less inductive bias, resulting in improved generalization capabilities. Furthermore, we enhance the generation of 3D vector representations by utilizing a vector rotation matrix with two independent angles in 3D space. This method helps the network find a better solution. Influenced by PointNeXt [28] and PointNet++ [26], we present the VPSA module. This module adheres to the structure of the Point Set Abstraction (SA) module of the PointNet series. Vector representations are obtained from input features and aggregated using a reduction function. The vector of each channel is then projected into a scalar to derive local features. By combining VPSA and SA modules, we construct a PointVector model with an architecture akin to that of PointNeXt. Our model undergoes comprehensive validation on public benchmark datasets. It achieves state-of-the-art performance on the S3DIS [1] semantic segmentation benchmark and competitive results on the ScanObjectNN [47] and ShapeNetPart [48] datasets. By incorporating a priori knowledge of vectors, our model attains superior results with fewer parameters on S3DIS. Detailed ablation experiments further demonstrate the efficacy of our methodology. The contributions are summarized below: - We propose a novel intermediate vector representation with relative features and positions to better guide local feature aggregation. - We explore methods of obtaining the vector representation and propose a generation method for 3D vectors that utilizes the vector rotation matrix in 3D space. - Our proposed PointVector model achieves 72.3% mean Intersection over Union (mIOU) on S3DIS Area 5 and 78.4% mIOU on S3DIS (6-fold cross-validation) with only 58% of the model parameters of PointNeXt. |
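To illustrate the scalar-to-vector lifting sketched in the contributions above, one option is to rotate a canonical axis by two independent angles per channel (a spherical parameterization of the rotated direction) and scale it by the scalar feature, then aggregate the resulting vectors over the neighborhood. The code below is a hedged PyTorch sketch; the angle prediction, the exact rotation matrix, and the reduction used in VPSA are not reproduced from the paper.

import torch

def scalar_to_vector(feat, theta, phi):
    # feat: (B, N, K, C) scalar features of K neighbors; theta, phi: same shape, two angles
    direction = torch.stack(
        (torch.sin(theta) * torch.cos(phi),
         torch.sin(theta) * torch.sin(phi),
         torch.cos(theta)), dim=-1)                  # (B, N, K, C, 3) rotated unit directions
    return feat.unsqueeze(-1) * direction            # per-channel 3D vector representation

def aggregate_vectors(vectors):
    # Sum neighbor vectors and project each channel back to a scalar via its norm, so the
    # aggregation is sensitive to direction (anisotropic), not only to magnitude.
    summed = vectors.sum(dim=2)                      # (B, N, C, 3)
    return summed.norm(dim=-1)                       # (B, N, C) local features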
Ding_Revisiting_the_P3P_Problem_CVPR_2023 | Abstract One of the classical multi-view geometry problems is the so called P3P problem, where the absolute pose of a cal-ibrated camera is determined from three 2D-to-3D corre-spondences. Since these solvers form a critical component of many vision systems (e.g. in localization and Structure-from-Motion), there have been significant effort in develop-ing faster and more stable algorithms. While the current state-of-the-art solvers are both extremely fast and stable, there still exist configurations where they break down. In this paper we algebraically formulate the problem as finding the intersection of two conics. With this formulation we are able to analytically characterize the real roots of the polynomial system and employ a tailored solution strategy for each problem instance. The result is a fast and stable solver, that is able to correctly solve cases where competing methods might fail. Our experimental evaluation shows that we outperform the current state-of-the-art methods both in terms of speed and success rate. | 1. Introduction and Related Work Registering a new image to a given 3D model is a criti-cal step in many computer vision pipelines, e.g. visual po-sitioning and localization [22], augmented reality [23] or autonomous mapping and navigation [18]. In addition, it has been combined with deep learning to perform learning and geometric optimization end-to-end [4]. The problem is generally solved by establishing a sparse set of 2D-3D point correspondences between the image and the model using feature-based matching [17]. To deal with the po-tential mismatches, robust estimators based on hypothesis-and-test frameworks such as RANSAC [7] are employed. These methods work by generating multiple candidate mod-els from randomly selected minimal subsets of the data (to reduce the risk of outlier contamination). In the context of the absolute pose problem, i.e. estimating the position and orientation of a camera given a set of 2D-3D point corre-spondences, the minimal is called Perspective-Three-Point (P3P). As the name suggests, the problem is minimal with three point correspondences in the calibrated setting, and Figure 1. The perspective-three-point problem has up to four real solutions. The P3P problem has a long history, predating the field of computer vision by a large margin. The geometric prob-lem itself (though not in the context of cameras) was con-sidered as early as 1773 by Lagrange [16] (see [24] for de-tails). In his work Lagrange showed that it had at most four solutions and could be reduced to a quartic polyno-mial. Almost a century later, in 1841, Grunert [10] revis-ited the problem and provided a direct solution method. In the early 20th century the problem was also studied in the photogrammetry community, though the main focus was on refinement-based methods instead of solving the problem from scratch (see Haralick [12] for details). Finstenvalder and Scheufele [6] first show that the P3P problem only re-quired to find a root of a cubic polynomial and the roots of two quadratic polynomials. The problem later resurfaced in the computer vision community in the seminal RANSAC paper from Fischler and Bolles [7]. Due to the success of RANSAC-based estimators the problem has since received significant attention. Based on the degree of the final univariate polynomial, the P3P solutions can be mainly divided into two categories: solving a quartic equation and solving a cubic equation. 
Most of the modern papers focus on converting the P3P problem into solving a quartic equation. Gao et al. [8] used Wu-Ritt's zero decomposition algorithm [27] to give a first complete analytical solution to the P3P problem. Kneip [15] proposed a direct method for solving the P3P problem as a computation of the absolute camera position and orientation, which avoids eigenvalue decomposition or singular value decomposition. Ke et al. [14] proposed an approach which directly determines the camera's attitude by employing the corresponding geometric constraints. Banno [1] and [19] proposed direct P3P methods that estimate the distances in an intermediate coordinate system so that the rotation matrix can be formulated as a linear representation of the distances. Unlike the quartic-equation-based methods, the cubic-equation-based formulation has not been given much attention in the P3P problem literature. Since the work of [6], the cubic formulation has also been used in the work by Grafarend et al. [9]. They sought to reduce (3) to homogeneous form and then used the same technique as [6]. Haralick et al. [12] reviewed the major cubic-based solutions to the P3P problem and discussed the numerical accuracy. Recently, Persson and Nordberg [21] showed more details on finding the rotation and translation and proposed an efficient algorithm using a single root of a cubic. To the best of our knowledge, the solver by Persson and Nordberg [21] has better numerical accuracy and is faster than previous work. In this paper we again revisit the P3P problem. We focus on the solution strategy that is based on intersecting two conics, which was also used in recent work [21]. The relative position of two ellipses has been studied in several papers [5, 26]. However, none of them considered the computation of the intersection points. By contrast, we provide a fast and stable solver based on the characterization of the possible solution configurations. Experimentally we show that these extreme cases are the reason for failures and instabilities in previous methods. Finally, leveraging our new understanding, we design a novel P3P algorithm that explicitly handles the dangerous cases. The result is a stable P3P solver that, as an added benefit, is faster than previous approaches.
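For reference, the classical constraint system underlying all of these solvers (written here in our own notation, not the paper's) follows from the law of cosines applied to the three unknown depths d_1, d_2, d_3 along the calibrated bearing vectors:

d_i^2 + d_j^2 - 2\, d_i d_j \cos\theta_{ij} \;=\; \ell_{ij}^2, \qquad (i,j) \in \{(1,2),\,(1,3),\,(2,3)\},

where \theta_{ij} is the known angle between bearings i and j, and \ell_{ij} is the known distance between the corresponding 3D points. Substituting the depth ratios u = d_2/d_1 and v = d_3/d_1 and eliminating d_1 by taking ratios of the equations yields two quadratic curves, i.e. two conics, in (u, v); their intersection contains at most four real points, which recovers the classical bound of four P3P solutions and is exactly the system whose real-root structure a conic-intersection solver has to characterize.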
Chen_PAniC-3D_Stylized_Single-View_3D_Reconstruction_From_Portraits_of_Anime_Characters_CVPR_2023 | Abstract We propose PAniC-3D, a system to reconstruct stylized 3D character heads directly from illustrated (p)ortraits of (ani)me (c)haracters. Our anime-style domain poses unique challenges to single-view reconstruction; compared to nat-ural images of human heads, character portrait illustra-tions have hair and accessories with more complex and diverse geometry, and are shaded with non-photorealistic contour lines. In addition, there is a lack of both 3D model and portrait illustration data suitable to train and evaluate this ambiguous stylized reconstruction task. Facing these challenges, our proposed PAniC-3D architecture crosses the illustration-to-3D domain gap with a line-filling model, and represents sophisticated geometries with a volumetric radiance field. We train our system with two large new datasets (11.2k Vroid 3D models, 1k Vtuber portrait illus-trations), and evaluate on a novel AnimeRecon benchmark of illustration-to-3D pairs. PAniC-3D significantly outper-forms baseline methods, and provides data to establish the task of stylized reconstruction from portrait illustrations. | 1. Introduction & Related Work With the rise of AR/VR applications, there is increased demand for not only high-fidelity human avatars, but also non-photorealistic 3D characters, especially in the “anime” style. Most character designers typically create concept illustrations first, allowing them to express complex and highly diverse characteristics like hair, accessories, eyes, skins, headshapes, etc. Unfortunately, the process of de-veloping illustrated concept art into an AR/VR-ready 3D asset is expensive, requiring professional 3D artists trained to use expert modeling software. While template-based cre-ators democratize 3D avatars to an extent, they are often re-stricted to 3D assets compatible with a specific body model. We propose PAniC-3D, a system to automatically recon-struct a stylized 3D character head directly from illustrated (p)ortraits of (ani)me (c)haracters. We formulate our prob-lem in two parts: 1) implicit single-view head reconstruc-tion, 2) from across an illustration-3D domain gap. To sum-marize our contributions: •PAniC-3D : a system to reconstruct the 3D radiance field of a stylized character head from a single line-based portrait illustration. • The Vroid 3D dataset of 11.2k character models and renders, the first such dataset in the anime-style do-main to provide 3D assets with multiview renders. • The Vtuber dataset of 1.0k reconstruction-friendly portraits (aligned, front-facing, neutral-expression) that bridges the illustration-render domain gap through the novel task of line removal from drawings. • The AnimeRecon benchmark with 68 pairs of aligned 3D models and corresponding illustrations, en-abling quantitative evaluation of both image and geom-etry metrics for stylized reconstruction. 1.1. Implicit 3D Reconstruction While there has been much work on mesh-based recon-struction from images [23], these systems are not expres-sive enough to capture the extreme complexity and diver-sity of topology of our 3D characters. Inspired by the re-cent successes in generating high-quality 3D radiance fields [4, 5, 25, 39], we instead turn to implicit representations. However, to achieve high-quality results, recent implicit re-construction work such as PixelNerf [40] tend to operate solely from 2D images, due to the lack of publicly-available high-quality 3D data. 
Some implicit reconstruction systems that employ complex 3D assets, like Pifu [31], have shown reasonable success using point-based supervision, but require careful point sampling techniques and loss balancing. Figure 1. Overview of contributions. Our (A) PAniC-3D system is able to reconstruct a 3D radiance field directly from a line-based portrait illustration. We gather a new (B) Vtuber illustration dataset and (C) Vroid 3D models dataset in order to cross the illustration-render domain gap and supervise reconstruction. To evaluate, we provide a new (D) AnimeRecon benchmark of paired illustrations and 3D models, establishing the novel task of stylized single-view reconstruction of anime characters. (Art attributions in suppl.) There is also a body of work on sketch-based modeling, where 3D representations are recovered from contour images. For example, Rui et al. [24] use a multi-view decoder to predict sketch-to-depth and normals, which are then used for surface reconstruction. Song et al. [44] additionally try to compensate for multi-view drawing discrepancies by learning to realign the inputs. While related to our single-view portrait reconstruction problem, these methods require multi-view sketches that are difficult for character artists to draw consistently, and cannot handle color input. For our case with complex high-quality 3D assets, we demonstrate the superiority of differentiable volumetric rendering for reconstruction. We build off of recent unconditional generative work (EG3D [4]), formulating the problem of reconstruction as conditional generation, proposing several architecture improvements, and applying direct |
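For context, the differentiable volumetric rendering referred to above is typically the standard radiance-field quadrature (generic background in our notation, not a PAniC-3D-specific formula): a ray r with samples i = 1, ..., N is rendered as

\hat{C}(\mathbf{r}) \;=\; \sum_{i=1}^{N} T_i \left(1 - e^{-\sigma_i \delta_i}\right) \mathbf{c}_i, \qquad T_i \;=\; \exp\!\Big(-\sum_{j<i} \sigma_j \delta_j\Big),

where \sigma_i and c_i are the density and color predicted at the i-th sample, \delta_i is the spacing between samples, and T_i is the accumulated transmittance. Because every term is differentiable, reconstruction losses on rendered pixels can supervise the underlying radiance field directly, which is what makes this representation attractive for the complex geometry of hair and accessories.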
Cong_Combining_Implicit-Explicit_View_Correlation_for_Light_Field_Semantic_Segmentation_CVPR_2023 | Abstract Since light field simultaneously records spatial informa-tion and angular information of light rays, it is considered to be beneficial for many potential applications, and seman-tic segmentation is one of them. The regular variation of image information across views facilitates a comprehensive scene understanding. However, in the case of limited mem-ory, the high-dimensional property of light field makes the problem more intractable than generic semantic segmenta-tion, manifested in the difficulty of fully exploiting the re-lationships among views while maintaining contextual in-formation in single view. In this paper, we propose a novel network called LF-IENet for light field semantic segmenta-tion. It contains two different manners to mine complemen-tary information from surrounding views to segment cen-tral view. One is implicit feature integration that leverages attention mechanism to compute inter-view and intra-view similarity to modulate features of central view. The other is explicit feature propagation that directly warps features of other views to central view under the guidance of disparity. They complement each other and jointly realize complemen-tary information fusion across views in light field. The pro-posed method achieves outperforming performance on both real-world and synthetic light field datasets, demonstrating the effectiveness of this new architecture. | 1. Introduction Semantic segmentation is a pixel-level task that assigns a class label to each pixel of the given image, serving as a key fundamental of visual understanding. Due to the partial visibility incurred by occlusion as well as high intra-class variation with diverse appearances, viewpoints and scales, *Corresponding author ··· ······ ··· Macro-pixel image Sub-aperture imageFigure 1. Illustration of light field imaging. Red rectangle shows an occlusion scene, in which the front wheel of bicycle is di-vided into two areas (enclosed in blue and yellow boxes) and the back wheel is complete (enclosed in green box). Since viewpoints are arranged on a regular grid in angular plane, the location and scale of these areas are regularly changed across views, which is a unique advantage of light field. Influenced by the pedestrian with big disparity, the changes near the front wheel are significant. accurate segmentation is a fairly challenging problem. A series of image segmentation methods [4, 11, 41, 43] have been proposed to address these challenges. Furthermore, [1, 7, 23, 37] take depth information into consideration to overcome the deficiency of single image. Recently, [16,24] employ light field to achieve impressive performance, pro-viding a new perspective for semantic segmentation. Compared to traditional imaging system, 4D light field records intensity for rays in terms of position and direction, yielding a regularly distributed multi-view image array. The information embedded in additional angular dimensions is beneficial for detail analysis to thoroughly parse scenes. As shown in Fig. 1, the front wheel of bicycle is occluded, forming two small areas that are hard to assign labels. With the transformation of viewpoint, the scale of areas changes accordingly. Capturing such regular change with the help This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. 
of disparity facilitates eliminating ambiguity around object boundaries and occlusion. To this end, it is meaningful to introduce light field into semantic segmentation. Figure 2. Illustration of input form for light field with an angular resolution of 5 × 5 ((a) central view image, (b) image-based input, (c) patch-based input). Red rectangles in (a) show three patches with similar color, different disparity and semantic labels. (c) shows the corresponding patch arrays composed of all SAIs. Super-resolution and disparity estimation can use patch-based input because the former only needs to consider surrounding pixels, and the latter emphasizes pixel matching across views. On the contrary, semantic segmentation cannot assign correct labels based on the local information in (c); it requires context information from (b). As an emerging research field, light field semantic segmentation can absorb foundations from the great progress of generic semantic segmentation. For instance, a straightforward solution is to organize the light field into a 2D macro-pixel image (MacPI) and then apply image semantic segmentation. Video semantic segmentation is also applicable because a light field can be converted into a sub-aperture image (SAI) array, which is similar in form to a video sequence. Since depth and disparity are used interchangeably, the disparity contained in the light field is workable for RGB-D semantic segmentation. Nevertheless, directly applying these three kinds of methods cannot make full use of the advantages of 4D light field. First, treating the light field as a 2D image inevitably ignores angular information. Second, the regular 2D angular information between SAIs is more compact and intact than the 1D temporal information in video. Third, RGB-D-based methods merely extract depth as input, lacking further processing of the light field. Consequently, it is necessary to design a framework tailored for light field. In order to realize an overall extraction of the 4D information in light field, an effective way is to model spatial relationships in each SAI separately, and then perform interactions across all views along the angular dimension. This modeling mechanism is commonly used in light field research such as super-resolution [33, 38] and disparity estimation [27, 32]. Considering the prohibitive inference cost and limited memory usage, each SAI is cropped into multiple small patches for calculation. However, as illustrated in Fig. 2, this is catastrophic and unsuitable for semantic segmentation because independent small patches discard valuable contextual information [35]. Resizing all SAIs to an extremely low scale along the spatial dimension is an alternative, but it gives up the resolution and granularity that are critical for dense prediction tasks. In light of the above issues, we present a well-engineered framework, which includes an implicit branch and an explicit branch to fully explore the structural information in light field for robust semantic segmentation of the central view. The implicit branch only processes a few SAIs and utilizes self-attention and cross-attention mechanisms to calculate similarity for feature integration. The explicit branch processes all SAIs to estimate disparity for subsequent feature propagation. In brief, our framework realizes feature enhancement for the central view through implicit feature integration and explicit feature propagation.
It is worth noting that the two branches transmit supplemental information to each other. Specifically, the implicit branch leverages the disparity estimated by the explicit branch to adjust the weights of cross-attention, enhancing the perception of variation among views. On the other hand, the features to be warped in the explicit branch derive from the implicit branch. The output features of the two branches are fused for the final prediction. Our contributions can be summarized as follows. (1) We present a network called LF-IENet which incorporates implicit and explicit view correlation to exploit the light field. The former learns a unified representation within the target view and across views. The latter uses disparity to propagate features to the target view. (2) In the proposed network, the two manners exchange information, acting as supplements to one another rather than standalone modules, and jointly improve the utilization efficiency of the light field. (3) Extensive experiments on the light field semantic segmentation dataset confirm the effectiveness of our method. |
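The explicit feature propagation described above hinges on disparity-guided warping of surrounding sub-aperture views onto the central view. Below is a minimal PyTorch-style sketch of that warping step, assuming the standard light-field geometry in which a central-view pixel (x, y) with disparity d appears at (x + d·Δu, y + d·Δv) in a view offset by (Δu, Δv) on the angular grid; the function name and the (du, dv) convention are illustrative and not taken from the LF-IENet code.

```python
import torch
import torch.nn.functional as F

def warp_to_central_view(src_feat, disparity, du, dv):
    """Warp features of a surrounding sub-aperture view to the central view.

    src_feat: (B, C, H, W) features of the source view; disparity: (B, 1, H, W)
    central-view disparity; du, dv: scalar angular offsets of the source view.
    """
    _, _, h, w = src_feat.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=src_feat.dtype, device=src_feat.device),
        torch.arange(w, dtype=src_feat.dtype, device=src_feat.device),
        indexing="ij",
    )
    xs = xs.unsqueeze(0) + disparity[:, 0] * du   # (B, H, W)
    ys = ys.unsqueeze(0) + disparity[:, 0] * dv
    # Normalise to [-1, 1] as expected by grid_sample.
    grid = torch.stack(
        (2.0 * xs / (w - 1) - 1.0, 2.0 * ys / (h - 1) - 1.0), dim=-1
    )
    return F.grid_sample(src_feat, grid, mode="bilinear",
                         padding_mode="zeros", align_corners=True)

# Toy usage: warp a corner view of a 5x5 grid onto the central (2, 2) view.
feat = torch.randn(1, 16, 32, 32)
disp = torch.zeros(1, 1, 32, 32)          # zero disparity -> identity warp
warped = warp_to_central_view(feat, disp, du=2 - 0, dv=2 - 0)
assert torch.allclose(warped, feat, atol=1e-5)
```

In the full model, such warped features would then be fused with the attention-based implicit branch before the segmentation head.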
Dave_TimeBalance_Temporally-Invariant_and_Temporally-Distinctive_Video_Representations_for_Semi-Supervised_Action_Recognition_CVPR_2023 | Abstract Semi-Supervised Learning can be more beneficial for the video domain compared to images because of its higher an-notation cost and dimensionality. Besides, any video un-derstanding task requires reasoning over both spatial and temporal dimensions. In order to learn both the static and motion related features for the semi-supervised ac-tion recognition task, existing methods rely on hard in-put inductive biases like using two-modalities (RGB and Optical-flow) or two-stream of different playback rates. Instead of utilizing unlabeled videos through diverse in-put streams, we rely on self-supervised video represen-tations, particularly, we utilize temporally-invariant and temporally-distinctive representations. We observe that these representations complement each other depending on the nature of the action. Based on this observation, we propose a student-teacher semi-supervised learning frame-work, TimeBalance, where we distill the knowledge from a temporally-invariant and a temporally-distinctive teacher. Depending on the nature of the unlabeled video, we dy-namically combine the knowledge of these two teach-ers based on a novel temporal similarity-based reweight-ing scheme. Our method achieves state-of-the-art perfor-mance on three action recognition benchmarks: UCF101, HMDB51, and Kinetics400. Code: https://github. com/DAVEISHAN/TimeBalance . | 1. Introduction Recent development in action recognition have opened up a wide range of real-world applications: visual security systems [19, 50, 56], behavioral studies [35], sports analyt-ics [46], elderly person fall detection systems [10, 49, 85], etc. Most of these developments are mainly courtesy of large-scale curated datasets like Kinetics [12], HVU [21], and HACS [87]. However, labeling such a massive video dataset requires an enormous amount of annotation time and human effort. At the same time, there is a vastamount of unlabeled videos available on the internet. The goal of semi-supervised action recognition is to use such large-scale unlabeled dataset to provide additional supervi-sion along with the labeled supervision of the small-scale dataset. Semi-supervised learning for image classification has seen tremendous progress in recent years [1, 57, 63, 68]. In semi-supervised action recognition, recent approaches have adapted these image-based methods by incorporat-ing motion-related inductive biases into the setup. For instance, some methods [77, 79] use two different input modalities where the original RGB video promotes learn-ing appearance-based features while optical flow/temporal gradients promotes learning of motion-centric features. An-other set of methods uses input streams of different sam-pling rates to achieve this [64, 70]. Although these input-level inductive biases are simple-yet-very-effective to pro-vide unlabeled supervision for action recognition, they are not suitable for large-scale datasets due to their multiplica-tive storage requirement and high preprocessing overhead. Contrastive Self-supervised Learning (CSL) has emerged as a powerful technique to learn meaningful representations from unlabeled videos. Existing video CSL methods deal with mainly two different kinds of objec-tives: (1) Learning similarities across clips of the same video i.e temporally-invariant representations [29, 53, 55] (2) Learning differences across clips of the video i.e. 
temporally-distinctive representations [18, 36, 75]. Each objective has its own advantages, depending on the nature of the unlabeled videos being used. Our experiments reveal a clear difference in the class-wise performance of both methods, as illustrated in Fig. 1. The right half of the figure shows the action classes where the temporally-invariant model is dominant. We can observe that all such action classes are atomic actions with high repetitions, e.g., Fencing, Knitting. Any two clips from such videos are highly similar; hence, increasing agreement between them, i.e., learning a temporally-invariant representation, is more meaningful. The left half of the figure shows action classes where temporally-distinctive representations perform better. We can observe that such action classes are slightly more complex, i.e., they contain sub-actions, e.g., JavelinThrow first involves running and then throwing. Any two clips from such videos are visually very different; hence, if we maximize agreement between them, it results in a loss of the temporal dynamics. Therefore, a temporally-distinctive representation is more suitable for such videos. Figure 1. Motivation for Temporally-Distinctive and Temporally-Invariant Representations. The plot shows the class-wise performance difference, (% accuracy of temporally-distinctive model) − (% accuracy of temporally-invariant model), on UCF101 for finetuned models that were self-supervised pretrained with the two objectives; the 25 most extreme classes on each side are shown after sorting all class-wise differences. To leverage unlabeled videos effectively, we consider two kinds of self-supervised video representation learning techniques with complementary goals: (1) temporally-invariant representations encourage learning the commonalities of the clips, mainly focusing on features related to highly frequent repetitions and appearance; (2) temporally-distinctive representations encourage learning the dissimilarities between clips of the same video, i.e., features for the sub-actions within the video. Based on our observation, we aim to leverage the strengths of both temporally-invariant and temporally-distinctive representations for semi-supervised action recognition. To achieve this, we propose a semi-supervised framework based on a student-teacher setup. The teacher supervision includes two models pre-trained using CSL with temporally-invariant and temporally-distinctive objectives. After pre-training, the teachers are fine-tuned with the labeled set to adapt to the semi-supervised training of the student. During semi-supervised training, we weigh each teacher model based on the nature of the unlabeled video instance.
We determine the nature of each instance by computing its similarity score using the temporal self-similarity matrices of both teachers. This way, the student is trained using the labeled supervision from the labeled set and the unlabeled supervision from the weighted average of the teachers. It is worth noting that our framework does not depend on complicated data-augmentation schemes like FixMatch [65]. The contributions of this work are summarized as follows: • We propose a student-teacher-based semi-supervised learning framework that consists of two teachers with complementary self-supervised video representations: temporally-invariant and temporally-distinctive. • In order to leverage the strength of each representation (invariant or distinctive), we weigh the suitable teacher according to the unlabeled video instance. We achieve this with the proposed temporal-similarity-based reweighting scheme. • Our method outperforms existing approaches and achieves state-of-the-art results on popular action recognition benchmarks, including UCF101, HMDB51, and Kinetics400. |
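As a rough illustration of how the two teachers could be blended per unlabeled video, the sketch below scores a video by the average off-diagonal value of a temporal self-similarity matrix over its clip embeddings and uses that score to mix the two teachers' class distributions. The exact reweighting rule, the use of both teachers' similarity matrices, and all function names here are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def temporal_self_similarity(clip_feats):
    """clip_feats: (T, D) per-clip embeddings of one video -> (T, T) cosine matrix."""
    f = F.normalize(clip_feats, dim=-1)
    return f @ f.t()

def blend_teacher_logits(logits_inv, logits_dist, clip_feats):
    """Hypothetical reweighting: videos whose clips look alike lean on the
    temporally-invariant teacher; videos with dissimilar clips lean on the
    temporally-distinctive teacher."""
    sim = temporal_self_similarity(clip_feats)
    t = sim.shape[0]
    off_diag = (sim.sum() - sim.diagonal().sum()) / (t * (t - 1))
    w_inv = off_diag.clamp(0.0, 1.0)   # high clip similarity -> invariant teacher
    probs = w_inv * logits_inv.softmax(-1) + (1 - w_inv) * logits_dist.softmax(-1)
    return probs                        # soft pseudo-label for the student

# Toy usage with 4 clips, 128-d features, 101 action classes.
feats = torch.randn(4, 128)
pseudo = blend_teacher_logits(torch.randn(101), torch.randn(101), feats)
print(pseudo.sum())   # ~1.0
```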
Chen_Private_Image_Generation_With_Dual-Purpose_Auxiliary_Classifier_CVPR_2023 | Abstract Privacy-preserving image generation has been important for segments such as medical domains that have sensitive and limited data. The benefits of guaranteed privacy come at the cost of the generated images' quality and utility due to the privacy budget constraints. Utility is currently measured by the gen2real accuracy (g2r%), i.e., the accuracy on real data of a downstream classifier trained using generated data. However, apart from this standard utility, we identify the "reversed utility" as another crucial aspect, which computes the accuracy on generated data of a classifier trained using real data, dubbed the real2gen accuracy (r2g%). Jointly considering these two views of utility, the standard and the reversed, could help the generation model better improve transferability between fake and real data. Therefore, we propose a novel private image generation method that incorporates a dual-purpose auxiliary classifier, which alternates between learning from real data and fake data, into the training of differentially private GANs. Additionally, our deliberate training strategies, such as sequential training, contribute to accelerating the generator's convergence and further boosting the performance upon exhausting the privacy budget. Our results achieve a new state of the art over all metrics on three benchmarks: MNIST, Fashion-MNIST, and CelebA. | 1. Introduction By combining game theory with powerful deep neural networks, the Generative Adversarial Network (GAN) [19] and its variants [2, 21, 24, 27] have shown an impressive capability to learn the data distribution and synthesise data of high fidelity and diversity that are challenging to differentiate from the real ones. Therefore, they are appealing data augmentation methods in domains where real data is too rare or contains sensitive information, such as the medical domain. For example, GANs can be used to generate synthetic liver lesions [16], MRIs [5], and CT scans [34] that could then be fed into machine learning models to unleash their power for building high-quality medical analytics systems. Figure 1. In each training loop, the proposed dual-purpose auxiliary classifier is trained sequentially to improve on both aspects of transferability and provide feedback to the generator. Ideally, this could also protect the privacy of real patient data and encourage data sharing between institutions by only releasing the synthetic images generated by GANs. This seems to solve the two problems mentioned, the scarcity and sensitivity of data. Unfortunately, recent works have shown that GANs are not safe from leaking sensitive information about training samples [3, 29, 40], as GANs are subject to model inversion attacks and membership inference attacks in both white-box and black-box settings [15, 23, 35, 42]. To preserve privacy, recent works have made progress by adopting Differential Privacy (DP) [12], a rigorously privacy-guaranteed mechanism, in GAN training [8, 14, 25, 30, 37]. Along this line, GS-WGAN [8] is a current state-of-the-art method, which demonstrated that DP can be achieved by only selectively sanitising the generator, while leaving the discriminator non-private.
Despite the success of recent works, there are still two main gaps to be filled for this task. Firstly , the current util-ity in the literature only focuses on the transferability from fake data to real data. It computes the gen2real accuracy (g2r%), i.e. the classification accuracy on real data of a clas-sifier trained using fake data. Such utility is surely impor-tant by definition since it reflects how useful the generated data will be in downstream applications. Nonetheless, the gen2real accuracy only covers one direction of data trans-ferability, while neglecting the other way around, namely from real to fake data. It was previously less investigated that whether blending both these two aspects of transfer-ability in model design could lead to better private GANs. Secondly , the gained privacy largely sacrifices the gener-ated outputs’ quality and utility. This is because the pri-vacy budget constraints the maximum number of generator updates, which makes the generator difficult to converge. Prior works [6, 8, 30, 37] have hardly synthesised images of both high quality and utility within standard privacy bud-get under DP framework, especially for RGB image gener-ation such as on CelebA dataset. Private GANs still need to accelerate the generator convergence within the budget to achieve a better privacy-quality/utility trade-off. In this paper, the following attempts are made to close the two aforementioned gaps. Firstly , we recognize the “re-versed utility” as another critical aspect for transferability, which is defined as the real2gen accuracy (r2g%) computed as the classification accuracy of the classifier trained with real data and tested on the generated data. The intuition is that for an output to generalise well, it should be difficult to tell from the real ones in its corresponding class. There-upon, a novel method for private image generation with the standard and reversed utility unified in the training pro-cess is proposed. This is based on a dual-purpose auxiliary classifier as illustrated in Fig. 1, which switches between training on real data and fake data, and then provide feed-back for the generator to enhance the transferability in both two direction. Concretely, we build the proposed method on GS-WGAN [8], since its sanitisation mechanism could keep the generator differentially private when integrating an auxiliary classifier that is exposed to real data. Secondly , different from the conventional training scheme of GANs where the discriminator learns from real and fake data si-multaneously, we devise our training procedure of the clas-sifier in a sequential manner. This could assist the classifier in learning from different domains separately and reducing noisy gradients during updates, which enables the classifier to provide more valuable feedback to the generator and ac-celerate its convergence within a given privacy budget. Experiments on standard datasets for private image gen-eration: MNIST, FashionMNIST and CelebA, demonstrate that the proposed method could achieve outstanding per-formance over state-of-the-art approaches on all evaluation metrics including quality and utility. In summary, our con-tributions are three-fold: 1) The “reversed utility” is identi-fied as an beneficial part of an improved design of private GANs. 2) A dual-purpose auxiliary classifier is developed in alignment with both the standard and reversed utility. 3) The classifier is trained with strategies like sequentialisation to accelerate the convergence of generator. |
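The sequential, dual-purpose training described above can be pictured with the following sketch of one training step, assuming generator(z, label) and classifier(x) interfaces; the differentially private gradient sanitisation of the generator (as in GS-WGAN) and the adversarial discriminator updates are deliberately omitted, so this is an outline of the alternation rather than the authors' training code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def train_step(generator, classifier, opt_c, opt_g, real_x, real_y, z, fake_y):
    """One hypothetical step of the dual-purpose auxiliary classifier scheme."""
    # 1) Classifier learns from real data (supports the "reversed" r2g view).
    opt_c.zero_grad()
    F.cross_entropy(classifier(real_x), real_y).backward()
    opt_c.step()

    # 2) Classifier learns from fake data (supports the standard g2r view).
    fake_x = generator(z, fake_y).detach()
    opt_c.zero_grad()
    F.cross_entropy(classifier(fake_x), fake_y).backward()
    opt_c.step()

    # 3) Generator is pushed to produce samples the classifier assigns to the
    #    intended class, i.e. samples that transfer well in both directions.
    opt_g.zero_grad()
    F.cross_entropy(classifier(generator(z, fake_y)), fake_y).backward()
    opt_g.step()

# Toy usage with throw-away modules (28x28 grayscale images, 10 classes).
class ToyGen(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(10, 16)
        self.net = nn.Linear(64 + 16, 28 * 28)
    def forward(self, z, y):
        return self.net(torch.cat([z, self.emb(y)], dim=1)).view(-1, 1, 28, 28)

gen = ToyGen()
clf = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
train_step(gen, clf,
           torch.optim.SGD(clf.parameters(), lr=0.1),
           torch.optim.SGD(gen.parameters(), lr=0.1),
           torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,)),
           torch.randn(8, 64), torch.randint(0, 10, (8,)))
```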
Hua_SOOD_Towards_Semi-Supervised_Oriented_Object_Detection_CVPR_2023 | Abstract Semi-Supervised Object Detection (SSOD), aiming to explore unlabeled data for boosting object detectors, has become an active task in recent years. However, existing SSOD approaches mainly focus on horizontal objects, leaving multi-oriented objects that are common in aerial images unexplored. This paper proposes a novel Semi-supervised Oriented Object Detection model, termed SOOD, built upon the mainstream pseudo-labeling framework. Towards oriented objects in aerial scenes, we design two loss functions to provide better supervision. Focusing on the orientations of objects, the first loss regularizes the consistency between each pseudo-label-prediction pair (a prediction and its corresponding pseudo-label) with adaptive weights based on their orientation gap. Focusing on the layout of an image, the second loss regularizes the similarity and explicitly builds the many-to-many relation between the sets of pseudo-labels and predictions. Such a global consistency constraint can further boost semi-supervised learning. Our experiments show that when trained with the two proposed losses, SOOD surpasses the state-of-the-art SSOD methods under various settings on the DOTA-v1.5 benchmark. The code will be available at https://github.com/HamPerdredes/SOOD. | 1. Introduction Sufficient labeled data is essential for fully-supervised object detection. However, the data labeling process is time-consuming and expensive. Recently, Semi-Supervised Object Detection (SSOD), where object detectors are learned from labeled data as well as easy-to-obtain unlabeled data, has attracted increasing attention. Existing SSOD methods [16, 24, 44, 50] mainly focus on detecting objects with horizontal bounding boxes in general scenes. Nevertheless, in more complex scenes, such as aerial scenes, objects usually need to be annotated with oriented bounding boxes. (*Equal contribution. †Corresponding author. Work done when Dingkang Liang was an intern at Baidu.) Figure 1. Arbitrary rotating (a) and small and dense (b) objects are common in aerial scenes and are often regularly arranged in the image. From a global perspective, this pattern indicates that an aerial image can be regarded as a layout. Considering the higher annotation cost of oriented boxes*, semi-supervised oriented object detection is worth studying. Compared with general scenes, the main characteristics of objects in aerial scenes (or aerial objects for short) are three-fold: arbitrary orientations, small scales, and agglomeration, as shown in Fig. 1. The mainstream SSOD methods are based on the pseudo-labeling framework [3, 35, 36] consisting of a teacher model and a student model. The teacher model, an Exponential Moving Average (EMA) of the student model at historical training iterations, generates pseudo-labels for unlabeled images. Thus, the student model can learn from both labeled and unlabeled data. To extend the framework to oriented object detection, we think the following two aspects need to be addressed: 1) As orientation is an essential property of multi-oriented objects, how to use the orientation information when guiding the student with pseudo-labels is critical. 2) As aerial objects are often dense and regularly distributed in an image, we can utilize the layout to facilitate the learning of each pair instead of treating them individually.
This paper proposes the first Semi-supervised Oriented Object Detection method, termed SOOD. (*The annotation cost of an oriented box is about 36.5% higher than that of a horizontal box (86$ vs. 63$ per 1k as of 2022.11) according to https://cloud.google.com/ai-platform/data-labeling/pricing.) Following [50], SOOD is built upon the dense pseudo-labeling framework, where the pseudo-labels are filtered from the raw pixel-wise predictions (including box coordinates and confidence scores). The key design is two simple yet effective losses that enforce instance-level and set-level consistency between the student's and the teacher's predictions. To be specific, considering that the pseudo-label-prediction pairs are not equally informative, we propose the Rotation-aware Adaptive Weighting (RAW) loss. It utilizes the orientation gap of each pair, which in a way reflects the difficulty of this sample, to weight the corresponding loss dynamically. In this manner, we can softly pick the more useful supervision signals to guide the learning of the student. In addition, considering that the layout of an aerial image can potentially reflect the components' overall status (e.g., objects' density and location distribution) and help the detection process, we propose the Global Consistency (GC) loss. It measures the similarity of the pseudo-labels and the predictions from a global perspective, which can alleviate the disturbance of noise in pseudo-labels and implicitly regularizes the mutual relations between different objects. We extensively evaluate SOOD under various settings on DOTA-v1.5, a popular aerial object detection benchmark. SOOD achieves consistent performance improvements when using 10%, 20%, 30%, and all of the labeled data, compared with the state-of-the-art SSOD methods (using the same oriented object detector). The ablation study also verifies the effectiveness of the two losses. In summary, this paper makes an early exploration of semi-supervised learning for oriented object detection. By analyzing the distinct characteristics of oriented objects compared with general objects, we propose two novel loss functions to adapt the pseudo-label framework to this task. We hope that this work can provide a good starting point for semi-supervised oriented object detection and serve as a simple yet strong baseline for future research.
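To make the rotation-aware weighting concrete, the sketch below scales each matched pair's unsupervised loss by a monotone function of the orientation gap between a prediction and its pseudo-label. The angle-wrapping convention and the specific weighting function are placeholders; the actual RAW loss in SOOD may use a different form.

```python
import torch

def rotation_aware_weighted_loss(pred_angles, pseudo_angles, pairwise_loss):
    """Hypothetical sketch of rotation-aware adaptive weighting.

    pred_angles / pseudo_angles: (N,) box orientations in radians for matched
    prediction / pseudo-label pairs; pairwise_loss: (N,) unweighted losses.
    Pairs with a larger orientation gap are treated as harder and weighted more.
    """
    # Smallest absolute angular difference, wrapped to [0, pi/2] for boxes.
    gap = torch.remainder(pred_angles - pseudo_angles, torch.pi)
    gap = torch.minimum(gap, torch.pi - gap)
    weights = 1.0 + gap / (torch.pi / 2)      # weights in [1, 2]
    return (weights * pairwise_loss).mean()

# Toy usage: three matched pairs.
loss = rotation_aware_weighted_loss(
    torch.tensor([0.1, 1.2, -0.4]),
    torch.tensor([0.1, 0.2, -0.4]),
    torch.tensor([0.5, 0.8, 0.3]),
)
print(loss)
```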
Hachiuma_Unified_Keypoint-Based_Action_Recognition_Framework_via_Structured_Keypoint_Pooling_CVPR_2023 | Abstract This paper simultaneously addresses three limitations associated with conventional skeleton-based action recognition: skeleton detection and tracking errors, poor variety of the targeted actions, as well as person-wise and frame-wise action recognition. A point cloud deep-learning paradigm is introduced to action recognition, and a unified framework along with a novel deep neural network architecture called Structured Keypoint Pooling is proposed. The proposed method sparsely aggregates keypoint features in a cascaded manner based on prior knowledge of the data structure (which is inherent in skeletons), such as the instances and frames to which each keypoint belongs, and achieves robustness against input errors. Its less constrained and tracking-free architecture enables time-series keypoints consisting of human skeletons and nonhuman object contours to be efficiently treated as an input 3D point cloud and extends the variety of the targeted actions. Furthermore, we propose a Pooling-Switching Trick inspired by Structured Keypoint Pooling. This trick switches the pooling kernels between the training and inference phases to detect person-wise and frame-wise actions in a weakly supervised manner using only video-level action labels. This trick also enables our training scheme to naturally introduce a novel data augmentation, which mixes multiple point clouds extracted from different videos. In the experiments, we comprehensively verify the effectiveness of the proposed method against the limitations, and the method outperforms state-of-the-art skeleton-based action recognition and spatio-temporal action localization methods. (*Equal contribution.) | 1. Introduction Recognizing the actions of a person in a video plays an essential role in various applications such as robotics [28, 41] and surveillance cameras [11, 25, 49]. The approach to the action recognition task differs depending on whether it leverages appearance information in a video or human skeletons1 detected in the video. (1Joints or keypoints specific to a person are referred to as skeletons for clarity, although some are not actual human joints.) The former appearance-based approaches [2,7,11,18,20–23,25,32,45,51,52,56,58] directly use video as an input to deep neural networks (DNNs) and thus can even recognize actions with relatively small movements. However, they are less robust to appearances of the people or scenes that differ from the training data [34, 55]. On the other hand, the latter skeleton-based approaches [5,9,10,13,17,29,33,34,49,57,60] are relatively robust to such appearance changes of a scene or a person because they only input low-information keypoints detected using multi-person pose estimation methods [6, 42, 50]. Starting from ST-GCN [57], various skeleton-based approaches employing graph convolutional networks (GCNs) have emerged [5,9,10,13,33,44]. These approaches model the relationship among keypoints by densely connecting them in a spatio-temporal space using GCNs, which treat every keypoint as a node at each time step.
However, most approaches exhibit low scalability in practical scenarios, and further performance improvement is required since they exhibit three limitations regarding network architectures or their problem settings, as described below. Skeleton Detection and Tracking Errors. Conventional GCN-based methods heavily rely on dense graphs, whose node keypoints are accurately detected and grouped by the same instance. These methods assume that the DNN fea-tures are correctly propagated. Therefore, if false positives (FPs) or false negatives (FNs) occur during keypoint detec-tion, or if the multi-person pose tracking [39,47] fails, such assumptions no longer hold, and the action recognition ac-curacy is degraded [17, 62]. Poor Variety of the Targeted Actions. Conventional ap-proaches limit the number of input skeletons to at most one or two. Therefore, the recognition of actions performed by many people or those interacting with nonhuman objects is an ill-posed problem. On the other hand, for a wide range of applications, it is desirable to eliminate such restrictions and target a variety of action categories. Person-wise and Frame-wise Action Recognition. Con-ventional approaches classify an entire video into actions, while practical scenes are complex and include multiple persons performing different actions in different time win-dows. Hence, recognizing each person’s action for each frame ( spatio-temporal action localization ) is necessary. In this paper, a unified action recognition framework and a novel DNN architecture called Structured Keypoint Pool-ing, which enhances the applicability and scalability of the skeleton-based action recognition (see Fig. 1), is proposed to simultaneously address the above three limitations. Un-like previous methods, which concatenate the keypoint co-ordinates and input them into a DNN designed on a pre-defined graph structure of a skeleton, the proposed method introduces a point cloud deep-learning paradigm [37,38,61] to the action recognition and treats a set of keypoints as an input 3D point cloud. PointNet [37], which was proposed in such a paradigm, is an innovative research, whose output is permutation-invariant to the order of the input points. Itextracts the features for each input point and sparsely aggre-gates them to the output feature vector using Max-Pooling . Unlike PointNet, the proposed network architecture aggre-gates the features extracted from the point cloud in a cas-caded manner based on prior knowledge of the data struc-ture, which is inherent in the point cloud, such as the frames or the detection results of the persons (instances) to which each keypoint belongs. As a result, it is less constrained than conventional approaches and tracking-free. Also, its feature propagation among keypoints is relatively sparse. Therefore, the range of the DNNs affected by the keypoint errors ( e.g., FPs, FNs, and tracking errors) associated with the first robustness limitation can also be limited. In addition, the permutation-invariant property of the in-put in the proposed network architecture eliminates the con-straints of the data structure and size ( e.g., number of in-stances and pose tracking) found in the GCN-based meth-ods. This property is exploited, and the nonhuman object keypoints2defined on the contour of the objects are used as an input in addition to human skeletons. 
Thus, the sec-ond target-action limitation mentioned above is addressed by increasing the input information without relying on the appearances while avoiding overfitting on them [14,34,55]. Finally, the third multi-action limitation is addressed by extending the proposed network architecture concept to a weakly supervised spatio-temporal action localiza-tion, which only requires a video-level action label dur-ing training. This is achieved using the proposed Pooling-Switching Trick inspired by Structured Keypoint Pooling, which switches the pooling structures according to the training and inference phases. Furthermore, this pooling-switching technique naturally enables the proposed training scheme to introduce novel data augmentation, which mixes multiple point clouds extracted from different videos. In summary, our main contributions are three-fold: (1) We propose Structured Keypoint Pooling based on point cloud deep-learning in the context of action recognition. This method incorporates prior knowledge of the data struc-ture to which each keypoint belongs into a DNN architec-ture as an inductive bias using a simple Max-Pooling oper-ation. (2) In addition to the human skeletons, object key-points are introduced as an additional input for skeleton-based action recognition. (3) A skeleton-based, weakly su-pervised spatio-temporal action localization is achieved by introducing a Pooling-Switching Trick, which exploits the feature aggregation scheme of Structured Keypoint Pooling. |
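The cascaded, permutation-invariant aggregation at the heart of Structured Keypoint Pooling can be sketched with grouped max-pooling over per-point features, as below. The grouping order (keypoints → instances → clip) and the helper names are illustrative assumptions; the paper's architecture also interleaves learned feature transforms between pooling stages, which are omitted here.

```python
import torch

def grouped_max_pool(feats, group_ids):
    """Max-pool per-point features (N, C) into one vector per group.

    group_ids: (N,) integer ids (e.g. which person instance, or which frame,
    each keypoint belongs to).  Returns (G, C) pooled features, one row per
    unique group, in sorted id order.
    """
    pooled = []
    for g in torch.unique(group_ids):
        pooled.append(feats[group_ids == g].amax(dim=0))
    return torch.stack(pooled)

# A minimal sketch of the cascaded, PointNet-style aggregation:
# keypoints -> instances -> clip.  point_feats would come from a shared
# per-point MLP, which is omitted here.
point_feats = torch.randn(60, 32)                  # 60 keypoints, 32-d features
instance_ids = torch.randint(0, 5, (60,))          # which detected person
inst_feats = grouped_max_pool(point_feats, instance_ids)   # (<=5, 32)
clip_feat = inst_feats.amax(dim=0)                 # (32,) video-level feature
print(clip_feat.shape)
```

Switching which group ids the final pooling uses (e.g. per-frame or per-instance instead of whole-clip) is the intuition behind the Pooling-Switching Trick for weakly supervised localization.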
Jena_Beyond_mAP_Towards_Better_Evaluation_of_Instance_Segmentation_CVPR_2023 | Abstract Correctness of instance segmentation constitutes counting the number of objects, correctly localizing all predictions, and classifying each localized prediction. Average Precision is the de-facto metric used to measure all these constituents of segmentation. However, this metric does not penalize duplicate predictions in the high-recall range, and it cannot distinguish instances that are localized correctly but categorized incorrectly. This weakness has inadvertently led to network designs that achieve significant gains in AP but also introduce a large number of false positives. We therefore cannot rely on AP to choose a model that provides an optimal tradeoff between false positives and high recall. To resolve this dilemma, we review alternative metrics in the literature and propose two new measures that explicitly quantify the amount of both spatial and categorical duplicate predictions. We also propose a Semantic Sorting and NMS module to remove these duplicates based on a pixel occupancy matching scheme. Experiments show that modern segmentation networks have significant gains in AP, but also contain a considerable amount of duplicates. Our Semantic Sorting and NMS can be added as a plug-and-play module to mitigate hedged predictions and preserve AP. (†Correspondence to: rjena@seas.upenn.edu. *Equal contribution.) | 1. Introduction Tasks like classification and semantic segmentation have a fixed output space, i.e., the K-dimensional probability distribution over the classes and the per-pixel semantic class, respectively. For classification, we can use the zero-one loss, and for semantic segmentation we can use a per-pixel cross-entropy loss. On the other hand, instance segmentation is a challenging problem because the output is a set containing an arbitrary number of objects, and the network does not have knowledge of the number of objects in the scene a priori. Therefore, the model has to count the correct number of objects in the scene, localize them all, and classify them correctly. Deep learning for instance segmentation has two broad paradigms: top-down and bottom-up instance segmentation. In bottom-up instance segmentation, the image is converted into per-pixel features, and pixel features are aggregated to predict objects. This is typically done by grouping or clustering the pixels based on some similarity in the feature space [2, 7, 13, 30, 36, 41]. In top-down instance segmentation, a model proposes a set of candidate proposals, out of which proposals not containing an object are removed. This leaves us with a smaller set of proposals, which are further passed into a localization and classification branch. This is typically followed by an NMS step, since an object may have multiple candidate proposals, so duplicates must be removed. Popular approaches are dominated by top-down methods where the network regresses a bounding box, mask, and category. Mask R-CNN [14, 16, 24] approaches it as a two-stage problem: localize the object, then predict the associated instance segmentation mask. SOLO [37, 38] builds on an anchor-free framework and directly regresses an object segmentation using a spatial grid feature as a probe. More recent work based on Transformers [6, 12] explicitly learns a query in the network memory, then refines this prediction. We can interpret all these top-down methods as implementing the query-key paradigm.
Each uses a different query design: anchor-box-based object proposals for Mask R-CNN, grid cells for SOLO, or learnable latent features for DETR/QueryInst. The query-key interaction aims to extract different representations of the object: ROI-pooled features for Mask R-CNN, center-based convolution filters for SOLO, and cross-attention features in DETR. In analyzing why top-down methods consistently perform better than bottom-up methods, we make an unusual observation. The qualitative performance of bottom-up methods is on par with that of top-down methods, but there is a significant gap in mAP. Upon further analysis of the precision-recall curves of top-down methods, we find that mAP can be increased by increasing the number of low-confidence predictions. We observe that recent design choices in the literature have exacerbated this problem. Figure 1. Top: Toy example demonstrating how AP changes with a reordering of the same set of detections (9 TPs, 1 FP). Note that in (b) the last FP does not contribute to AP; a detector that does not make this prediction at all would have the same AP. The last prediction in (b) is therefore a hedged prediction. Middle, Bottom: SOLOv2 with Matrix and Mask NMS, respectively, for the same network parameters. (c) shows the qualitative result and (d) is the corresponding P/R curve for the image. Note that hedged predictions do not penalize AP. (e) shows the P/R curve for the airplane category over the entire COCO val dataset. Note that AP increases by 1 point, but the number of false positives increases 3-fold. In this work, we take a step back and analyze how mAP can be 'gamed' by increasing false positives, explore other metrics in the literature, and propose metrics that explicitly quantify this amount of false positives, both spatially and categorically. Furthermore, we propose a Semantic Sorting and NMS module to improve all metrics related to this excessive amount of prediction, with only a minimal dip in mAP. |
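The 'gaming' effect is easy to reproduce numerically. The snippet below computes plain (uninterpolated) average precision for a confidence-ranked list of detections and shows that a false positive appended after all true positives leaves AP unchanged, which is exactly the hedged-prediction behaviour discussed above; COCO's 101-point interpolation changes the numbers slightly but not the conclusion.

```python
def average_precision(is_tp, num_gt):
    """AP of a confidence-ranked detection list.

    is_tp: list of booleans, detections sorted by descending confidence.
    Uses the plain 'precision at each true positive, averaged over all
    ground-truth objects' form of AP (no interpolation).
    """
    tp = 0
    precisions = []
    for rank, hit in enumerate(is_tp, start=1):
        if hit:
            tp += 1
            precisions.append(tp / rank)
    return sum(precisions) / num_gt

# 9 TPs and 1 FP on a scene with 10 objects (cf. the toy example in Fig. 1).
early_fp = [True] * 4 + [False] + [True] * 5        # FP ranked in the middle
late_fp = [True] * 9 + [False]                      # FP ranked last
print(round(average_precision(early_fp, 10), 3))    # ~0.835
print(round(average_precision(late_fp, 10), 3))     # 0.9
print(round(average_precision([True] * 9, 10), 3))  # 0.9 -- same as late_fp
```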
Feng_Generating_Aligned_Pseudo-Supervision_From_Non-Aligned_Data_for_Image_Restoration_in_CVPR_2023 | Abstract Due to the difficulty in collecting large-scale and per-fectly aligned paired training data for Under-Display Cam-era (UDC) image restoration, previous methods resort to monitor-based image systems or simulation-based methods, sacrificing the realness of the data and introducing domain gaps. In this work, we revisit the classic stereo setup for training data collection – capturing two images of the same scene with one UDC and one standard camera. The key idea is to “copy” details from a high-quality reference im-age and “paste” them on the UDC image. While being able to generate real training pairs, this setting is suscep-tible to spatial misalignment due to perspective and depth of field changes. The problem is further compounded by the large domain discrepancy between the UDC and normal images, which is unique to UDC restoration. In this paper, we mitigate the non-trivial domain discrepancy and spatial misalignment through a novel Transformer-based frame-work that generates well-aligned yet high-quality target data for the corresponding UDC input. This is made possi-ble through two carefully designed components, namely, the Domain Alignment Module (DAM) and Geometric Align-ment Module (GAM), which encourage robust and accurate discovery of correspondence between the UDC and nor-mal views. Extensive experiments show that high-quality and well-aligned pseudo UDC training pairs are benefi-cial for training a robust restoration network. Code and the dataset are available at https://github.com/ jnjaby/AlignFormer . | 1. Introduction Under-Display Camera (UDC) is an imaging system with cameras placed underneath a display. It emerges as a promising solution for smartphone manufacturers to com-pletely hide the selfie camera, providing a notch-free view-ing experience on smartphones. However, the widespread (a) UDC(b) Ref(d) AlignFormer(Ours) (c) Anaglyph (UDC-Ref)(e) Anaglyph (UDC-AlignFormer)Figure 1. Domain and geometric misalignment in UDC. Stereo pairs (a) and (b) are captured by Under-Display Camera and high-end camera, respectively. The two images deviate significantly due to the color shift and severe degradation the UDC image. Anaglyph (c) illustrates the large spatial displacement between UDC and reference images despite a careful hardware setup and rough alignment. Our AlignFormer aligns the image pair and min-imizes the parallax. commercial production of UDC is prevented by poor imag-ing quality caused by diffraction artifacts. Such artifacts are unique to UDC, caused by the gaps between display pixels that act as an aperture. As shown in Figure 1(a), typical diffraction artifacts entail flare, saturated blobs, blur, haze, and noise. The complex and diverse distortions make the reconstruction problem extremely challenging. Training a deep network end-to-end for UDC restora-tion has been found challenging due to the need for a large-scale dataset of real-world degraded images and their high-quality counterparts. Existing methods [29, 52] build datasets with a monitor-based imaging system. As dis-cussed in Feng et al. [3], such a paradigm is inherently lim-ited by the dynamic range and spatial resolution of the mon-itor. To address the problem, Feng et al. [3] present a syn-thetic dataset grounded on the imaging formation model [3]. 
Both datasets exhibit degradation that deviates from the ac-tual physical imaging process, leading to poor generaliz-ability to diverse real-world test cases. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 5013 To circumvent the hurdle in collecting real paired data, we opt for an alternative setup, i.e., to construct paired dataset with a stereo setting. Specifically, we capture two images of the same scene with one Under-Display Cam-era and one normal camera, denoted as UDC and Reference image, respectively. An example is shown in Figure 1(a-b) The key challenge lies in two aspects. i) Domain discrep-ancy. The different camera configurations inevitably give rise to variations in illuminance and severe color inconsis-tency, especially under the presence of color shift and severe diffraction artifacts in the UDC image. ii) Geometric mis-alignment. The contents in the UDC image and reference image are misaligned due to different focal lengths and field of views (FOV). Due to the unique nature of UDC restoration, existing solutions are not effective in addressing the two aforemen-tioned challenges. In particular, the low-level vision com-munity has made attempts on this stereo setup for super-resolution [1], deblurring [32], and learnable ISP [9]. In addition, Contextual loss [25] and CoBi loss [48] are de-vised to alleviate mild spatial misalignment. As shown in our experiments, those methods are less stable and robust due to the difficulty of reliable matching when one image is severely distorted. In particular, the over-exposed regions caused by diffraction require strong pixel-wise supervision to enforce constraints during the training. The key idea of our solution is to generate high-quality and well-aligned pseudo pairs from the non-aligned stereo data (UDC and reference) to enable end-to-end training of a deep network. The challenge lies in solving the domain and spatial misalignment so that the process resembles ‘copy-ing’ details from the reference image selectively and then ‘pasting’ on the degraded image. To this end, we devise a simple yet effective Transformer-based framework, namely AlignFormer , with a Domain Alignment Module (DAM) and a Geometric Alignment Module (GAM). The DAM is inspired by AdaIN [7], aiming to mitigate the domain dis-crepancy between the UDC and reference images, allowing more robust and accurate correspondence matching in the subsequent stage. The GAM establishes accurate dense cor-respondences through incorporating geometric cues in at-tention. Specifically, GAM can flexibly work with any off-the-shelf pre-trained optical flow estimators to build pixel-wise correspondence between the UDC and reference im-ages. The discovered correspondence then guides the sparse attention in our Transformer to search for the matching pix-els accurately and effectively within local regions. Figure 1(d-e) show that AlignFormer produces well-aligned image pairs. The results of AlignFormer can serve as pseudo ground-truth data and one can easily train an im-age restoration network end-to-end with common training settings, i.e., using pixel losses such as L1that assume exact spatial alignment, the perceptual loss [14], and theadversarial loss. 
Moreover, the constructed pseudo-paired dataset allows us to enjoy the merits of any advanced ar-chitectures of neural networks designed for image restora-tion problems. The generated data do not suffer from the limited dynamic range of spatial resolution as in previous monitor-based imaging systems. The data also experience a far lower domain gap than simulation-based approaches. The main contributions are three-fold: • We propose a data generation framework that is specif-ically designed for UDC. It presents a promising direc-tion beyond previous monitor-based and simulation-based data collection approaches, leading to improved generalizability of UDC image restoration. • Our AlignFormer properly integrates optical flow guidance into up-to-date Transformer architectures. • Experimental results demonstrate significant progress in practical UDC image restoration in real-world sce-narios. |
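As a small illustration of the domain-alignment idea behind DAM, the snippet below applies the underlying AdaIN operation: the reference features are re-normalised to carry the UDC features' channel statistics before correspondence matching. The real DAM is a learned module and GAM additionally uses optical-flow-guided attention; the function here is only the AdaIN building block, with names chosen for illustration.

```python
import torch

def adain_align(ref_feat, udc_feat, eps=1e-5):
    """AdaIN-style statistics transfer (a sketch of the domain-alignment idea).

    ref_feat, udc_feat: (B, C, H, W) feature maps of the reference and UDC
    views.  The reference features are re-normalised to carry the channel-wise
    mean/std of the UDC features, so the subsequent correspondence search
    compares features from a more similar distribution.
    """
    ref_mu = ref_feat.mean(dim=(2, 3), keepdim=True)
    ref_std = ref_feat.std(dim=(2, 3), keepdim=True) + eps
    udc_mu = udc_feat.mean(dim=(2, 3), keepdim=True)
    udc_std = udc_feat.std(dim=(2, 3), keepdim=True) + eps
    return (ref_feat - ref_mu) / ref_std * udc_std + udc_mu

# Toy usage: the aligned reference now roughly matches the UDC statistics.
ref = torch.randn(1, 8, 16, 16) * 3.0 + 1.0
udc = torch.randn(1, 8, 16, 16)
aligned = adain_align(ref, udc)
print(aligned.mean().item(), udc.mean().item())
```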
Bansal_Neural_Pixel_Composition_for_3D-4D_View_Synthesis_From_Multi-Views_CVPR_2023 | Abstract We present Neural Pixel Composition (NPC), a novel ap-proach for continuous 3D-4D view synthesis given a dis-crete set of multi-view observations as input. Existing state-of-the-art approaches require dense multi-view super-vision and an extensive computational budget. The pro-posed formulation reliably operates on sparse and wide-baseline multi-view images/videos and can be trained effi-ciently within a few seconds to 10 minutes for hi-res (12MP) content. Crucial to our approach are two core novelties: 1) a representation of a pixel that contains color and depth information accumulated from multi-views for a particular location and time along a line of sight, and 2) a multi-layer perceptron (MLP) that enables the composition of this rich information provided for a pixel location to obtain the fi-nal color output. We experiment with a large variety of multi-view sequences, compare to existing approaches, and achieve better results in diverse and challenging settings.1. Introduction Novel views can be readily generated if we have access to the underlying 6D plenoptic function R(✓,d,⌧)[1,23] of the scene that models the radiance incident from direc-tion✓2R2to a camera placed at position d2R3at time ⌧. Currently, no approach exists that can automatically re-construct an efficient space-and-time representation of the plenoptic function given only a (potentially sparse) set of multi-view measurements of the scene as input. The core idea of image-based rendering [22, 38] is to generate novel views based on re-projected information from a set of cal-ibrated source views. This re-projection requires a high-quality estimate of the scene’s geometry and is only correct for Lambertian materials, since the appearance of specu-lar surfaces is highly view-dependent. Building a dense 3D volume from multi-view inputs that provides correct 3D in-formation for a pixel is a non-trivial task. Recent approaches such as Neural Radiance Fields (NeRF) [27] and Neural V olumes (NV) [20] attempt to cre-ate rich 3D information along a ray of light by sampling 3D This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 
points at regular intervals given a min-max bound. Radiance fields are highly flexible 3D scene representations, which enables them to represent a large variety of scenes including semi-transparent objects. The price to be paid for this flexibility is that current approaches are restricted to datasets that provide dense 3D observations [20, 27, 32–34, 49], can only model bounded scenes [5, 20, 25, 27, 44, 48], and require intensive computational resources [20, 27, 49]. In contrast, we introduce a multi-view composition approach that combines the insights from image-based rendering [39] with the power of neural rendering [42] by learning how to best aggregate information from different views given only imperfect depth estimates as input. Figure 2. Color for a pixel location: Our goal is to estimate the color for every pixel location (x, y) for a time τ given camera extrinsic parameters (r_x, r_y, r_z, t_x, t_y, t_z). We collect a rich 3D descriptor consisting of color (c_i) and depth (d_i) information from multiple stereo pairs using an off-the-shelf disparity estimation module [47]. We learn a multi-layer perceptron (MLP) to compose color and depth. The final output color c̄ is obtained by a simple dot product of the blending weights α (the output of the MLP) and the corresponding color samples, c̄ = β + Σ_{i=1}^{N} α_i c_i, where β is a regressed color correction term per pixel.
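The per-pixel composition c̄ = β + Σᵢ αᵢ cᵢ described in the caption above can be sketched as a small MLP that maps the stacked (color, depth) samples of one pixel to N blending weights and an RGB correction, as below. The layer widths, the softmax normalisation of α, and the conditioning inputs are assumptions for illustration rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class PixelComposer(nn.Module):
    """A minimal sketch of the per-pixel composition in NPC.

    For one pixel we are given N (color, depth) samples gathered from
    multi-view stereo, flattened into a single vector together with optional
    pose/time conditioning.  A small MLP predicts N blending weights alpha and
    a per-pixel correction beta; the output is beta + sum_i alpha_i * c_i.
    """
    def __init__(self, num_samples, cond_dim=0, hidden=128):
        super().__init__()
        in_dim = num_samples * 4 + cond_dim        # N x (r, g, b, depth) + cond
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_samples + 3),    # N alphas + RGB beta
        )
        self.num_samples = num_samples

    def forward(self, colors, depths, cond=None):
        # colors: (B, N, 3), depths: (B, N, 1), cond: (B, cond_dim) or None
        x = torch.cat([colors, depths], dim=-1).flatten(1)
        if cond is not None:
            x = torch.cat([x, cond], dim=-1)
        out = self.mlp(x)
        alpha = out[:, : self.num_samples].softmax(dim=-1).unsqueeze(-1)  # (B, N, 1)
        beta = out[:, self.num_samples:]                                  # (B, 3)
        return beta + (alpha * colors).sum(dim=1)                         # (B, 3)

# Toy usage: 8 multi-view samples per pixel, batch of 1024 pixels.
composer = PixelComposer(num_samples=8)
rgb = composer(torch.rand(1024, 8, 3), torch.rand(1024, 8, 1))
print(rgb.shape)   # torch.Size([1024, 3])
```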
Figure 1 shows novel views synthesized using our approach for different multi-view sequences and the reconstructed details for each example. We accumulate rich 3D information (color and depth) for a pixel location using an off-the-shelf disparity estimation approach [47] given multiple stereo pairs as input. We then learn a small multi-layer perceptron (MLP) for a given multi-view sequence that inputs the per-pixel information at a given camera position and outputs color at the location. Figure 2 illustrates the components of our approach. We train an MLP for a sequence by sampling random pixels given multi-views. In our experiments, we observe that a simple 5-layer perceptron is sufficient to generate high-quality results. Our model roughly requires 1GB | of GPU memory and can be trained within a few seconds to 10 minutes from scratch. The trained model allows us to perform a single forward pass at test time for each pixel location in a target camera view. A single forward pass per pixel is more efficient than radiance-field-based approaches that require hundreds of samples along each ray. Finally, the alpha values (α_i) allow us to perform dense 3D reconstruction of the scene by selecting appropriate depth values at a given location. In summary, our contributions are: • A surprisingly simple, yet effective approach that requires limited computational resources for novel view synthesis from calibrated multi-view images or videos. The proposed method allows us to synthesize novel views given sparse unconstrained multi-views, where existing state-of-the-art approaches struggle. • Our approach offers a natural extension to the 4D view synthesis problem. Our approach is also able to obtain dense depth maps and 3D reconstruction on challenging in-the-wild scenes. • Our approach enables us to reconstruct small details in the scene better than existing methods. Finally, we study the generalizability of the learned model using hi-res studio captures. We observe that the model learned on a single time instant for one subject generalizes to unseen time instances and unseen subjects without any fine-tuning. 2. Related Work Our novel view synthesis work is closely related to several research domains, such as classical 3D reconstruction and plenoptic modeling, as well as neural rendering for static and dynamic scenes. In the following, we cover the most related approaches. For a detailed discussion of neural rendering approaches, we refer to the surveys [39, 42, 43]. Plenoptic Modeling and NeRF: The plenoptic function [1, 23] does not require geometric modeling. A plenoptic or a light-field camera [10, 18, 28] captures all possible rays of light (in a bounded scene), which in turn enables the synthesis of a new view via a per-ray look-up. Recent approaches such as NeRF [27] and follow-up work [4, 46, 49] employ a multi-layer perceptron (MLP) that infers color and opacity values at 3D locations along each camera ray. These color and opacity values along the ray are then integrated to obtain the final pixel color. This requires: 1) dense multi-view inputs [5, 48]; 2) perfect camera parameters [14, 19]; and 3) a min-max bound to sample 3D points along a ray of light [33, 49]. We observe degenerate outputs if all three conditions are not met (as shown in Figure 3). Different approaches either use prior knowledge or a large number of multi-view sequences [5, 35, 44, 48], additional geometric optimization [14, 19, 29], or large-capacity models to separately capture foreground and background [49].
We use an off-the-shelf disparity estimation module [47] that allows us to accumulate 3D information for a given pixel. A simple MLP provides us with blending parameters that enable the composition of color information. This allows us to overcome the above-mentioned three challenges, albeit using limited computational resources to train/test the model. Figure 3 (panels: Ground Truth, Ours, DS-NeRF, NeRF; plus COLMAP dense 3D reconstruction). View synthesis given sparse and spread-out multi-views: Our approach allows us to operate on sparse multi-views of unbounded scenes [3]. We show novel view points for a fixed time instant for three unbounded scenes. Prior approaches such as NeRF [27] and DS-NeRF [6] lead to degenerate outputs on these sequences. We also show the 3D reconstruction using COLMAP [36, 37] for the sequence in the top row. We observe that dense 3D reconstruction from sparse views is non-trivial for COLMAP. 3D Reconstruction and View Synthesis: Another approach to solve the problem is to obtain a dense 3D reconstruction from the input images [11] and project 3D points to the target view. There has been immense progress in densely reconstructing the static world from multi-view imagery [8, 15], internet-scale photos [2, 12, 36, 41], and videos [37]. Synthesizing a novel view from accumulated 3D point clouds may not be consistent due to varying illumination, specular material, and different cameras used for the capture of the various viewpoints. Riegler et al. [33, 34] use a neural network to obtain consistent visuals given a dense 3D reconstruction. This works well for dense multi-view observations [17]. However, 3D reconstruction is sparse given wide-baseline views or scenes with specular surfaces. This is highlighted in Figure 3, which shows 3D reconstruction results of COLMAP [36, 37] using one of the sequences. Recently, DS-NeRF [6] uses sparse 3D points from COLMAP along with NeRF to learn better and faster view synthesis. As shown in Figure 3, adding explicit depth information enables DS-NeRF to capture scene structure, but it still struggles with details. Layered Depth and Multi-Plane Images: Closely related to our work are layered depth images [24, 26, 30, 31, 38, 51] that learn an alpha composition for multi-plane images at discrete depth positions. In this work, we did not restrict our approach to 2D planes or specific depth locations. Instead, we learn a representation for a pixel at arbitrary depth locations. A pixel-wise representation not only allows us to interpolate, but also to extrapolate, and obtain dense 3D reconstruction results. Since we have a pixel-wise representation, we are able to generate 12MP-resolution images without any modifications of our approach. Prior work has demonstrated results on at most 2MP-resolution content. 4D View Synthesis: Most approaches are restricted to 3D view synthesis [26, 27] and would require drastic modifications [7, 32] to be applied to the 4D view-synthesis problem. Lombardi et al. [21] employ a mixture of animated volumetric primitives to model the dynamic appearance of human heads from dense multi-view observations. Open4D [3] requires foreground and background modeling for 4D visualization. Our work does not require major modifications to extend to 4D view synthesis. In addition, we do not require explicit foreground-background modeling for 4D view synthesis. We demonstrate our approach on the challenging Open4D dataset [3] where the minimum distance between two cameras is 50cm.
Our composition model trained on a single time instant also enables us to do 4D visualization for unseen time instances. Finally, the model learned for view synthesis enables dense depth maps and 3D reconstruction from multi-views. 3. Method We are given M multi-view images with camera parameters (intrinsics and extrinsics) as input. Our goal is to learn a function f that inputs pixel information p ∈ R^{N_p} and outputs color c̄ ∈ R^3 at that location, i.e., f: p → c̄. Figure 4 (panels: (a) Naive Composition, (b) Naive Composition++, (c) Neural Composition, (d) Ground Truth). Naive Composition vs. Neural Composition: The baseline naively uses multiple stereo-pairs to generate the final output (Naive Composition). (a) For each pixel location, we select the color value for the closest depth location. (b) We also take the average of color values for the three closest depth locations (Naive Composition++). (c) We contrast these results with Neural Composition, which uses an MLP to compose the color values. We observe that the MLP nicely composes the color values despite noisy depth estimates and fills the missing regions. (d) We also show ground truth for reference. Best viewed in electronic format. Learning such a function is challenging since we live in a 3D-4D world and images provide only 2D measurements. We present two crucial components: 1) a representation of a pixel that contains relevant multi-view information for high-fidelity view synthesis; and 2) a multi-layer perceptron (MLP) that inputs the pixel information and outputs the color. Overview: We input a pixel location (x, y) given corresponding camera parameters (r_x, r_y, r_z, t_x, t_y, t_z) at time τ, along with an array of possible 3D points along the line of sight. The i-th location of this array contains depth (d_i) and color (c_i). The MLP outputs alpha (α_i) values for the i-th location that allow us to obtain the final color at (x, y). The MLP also outputs gamma, γ ∈ R^3, which is a correction term learned by the model. We get the final color at pixel location (x, y) as: c̄ = γ + Σ_{i=1}^{N} α_i c_i, where N is the number of points in the array. We describe our representation of a pixel in Sec. 3.1 and the MLP in Sec. 3.2. 3.1. Representation of a Pixel Given a pixel location (x, y) for a camera position (r_x, r_y, r_z, t_x, t_y, t_z), our goal is to collect dense 3D information that contains depth and color information at all possible 3D points along a line of sight. We obtain 3D points via two-view geometry [11] by forming M ≥ 2 stereo-pairs. The estimated disparity between a stereo pair provides the depth for the 3D point locations. Multiple stereo pairs allow us to densely populate 3D points along the rays. Color and Depth Array: We use multiple stereo pairs to build an array of depth (d) and color (c) for a pixel. We store the values in order of increasing depth, i.e., d_{i+1} ≥ d_i. The array is similar to a ray of light that travels in a particular direction connecting the 3D points. We limit the number of 3D points to be N. If there are fewer than N depth observations, we set d_i = 0 and c_i = (0, 0, 0). If there are more than N observations, we take the closest N 3D points. Uncertainty Array: In this work, we use an off-the-shelf disparity estimation module from Yang et al. [47]. This approach provides an estimate of uncertainty (entropy) for each prediction. We also keep an array of uncertainty values (H) of equal size as the depth array (obtained from disparity and camera p |
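To make the per-pixel representation of Sec. 3.1 above concrete, the snippet below sketches how the fixed-length color/depth array for one pixel could be assembled; the variable names and the value of N are assumptions for illustration.

```python
# Sketch of the per-pixel color/depth array from Sec. 3.1: sort candidate 3D points by
# depth, keep the closest N, and pad with d_i = 0, c_i = (0, 0, 0) when fewer exist.
import numpy as np

def build_pixel_array(depths, colors, n_max=32):
    """depths: (K,) candidate depths for one pixel; colors: (K, 3) matching RGB samples."""
    depths, colors = np.asarray(depths, dtype=float), np.asarray(colors, dtype=float)
    order = np.argsort(depths)                   # enforce d_{i+1} >= d_i
    depths, colors = depths[order], colors[order]
    if len(depths) >= n_max:                     # more than N observations: keep the closest N
        return depths[:n_max], colors[:n_max]
    pad = n_max - len(depths)                    # fewer than N: pad with zeros
    return (np.concatenate([depths, np.zeros(pad)]),
            np.concatenate([colors, np.zeros((pad, 3))]))
```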
Fel_CRAFT_Concept_Recursive_Activation_FacTorization_for_Explainability_CVPR_2023 | Abstract Attribution methods, which employ heatmaps to identify the most influential regions of an image that impact model decisions, have gained widespread popularity as a type of explainability method. However, recent research has exposed the limited practical value of these methods, attributed in part to their narrow focus on the most prominent regions of an image – revealing "where" the model looks, but failing to elucidate "what" the model sees in those areas. In this work, we try to fill in this gap with CRAFT – a novel approach to identify both "what" and "where" by generating concept-based explanations. We introduce 3 new ingredients to the automatic concept extraction literature: (i) a recursive strategy to detect and decompose concepts across layers, (ii) a novel method for a more faithful estimation of concept importance using Sobol indices, and (iii) the use of implicit differentiation to unlock Concept Attribution Maps. We conduct both human and computer vision experiments to demonstrate the benefits of the proposed approach. We show that the proposed concept importance estimation technique is more faithful to the model than previous methods. When evaluating the usefulness of the method for human experimenters on a human-centered utility benchmark, we find that our approach significantly improves on two of the three test scenarios. | 1. Introduction Interpreting the decisions of modern machine learning models such as neural networks remains a major challenge. Given the ever-increasing range of machine learning applications, the need for robust and reliable explainability methods continues to grow [11, 35]. Recently enacted European laws (including the General Data Protection Regulation (GDPR) [37] and the European AI Act [43]) require the assessment of explainable decisions, especially those made by algorithms. In order to try to meet this growing need, an array of explainability methods have already been proposed [14, 50, 59, 61, 62, 66, 69, 71, 76]. Figure 2. CRAFT results for the prediction "chain saw". First, our method uses Non-Negative Matrix Factorization (NMF) to extract the most relevant concepts used by the network (ResNet50V2) from the train set (ILSVRC2012 [10]). The global influence of these concepts on the predictions is then measured using Sobol indices (right panel). Finally, the method provides local explanations through concept attribution maps (heatmaps associated with a concept, and computed using grad-CAM by backpropagating through the NMF concept values with implicit differentiation). Besides, concepts can be interpreted by looking at crops that maximize the NMF coefficients. For the class "chain saw", the detected concepts seem to be: • the chainsaw engine, • the saw blade, • the human head, • the vegetation, • the jeans and • the tree trunk. One of the main classes of methods, called attribution methods, yields heatmaps that indicate the importance of individual pixels for driving a model's decision. However, these methods exhibit critical limitations [1, 27, 63, 65], as they have been shown to fail – or only marginally help – in recent human-centered benchmarks [8, 26, 41, 49, 60, 64].
It has been suggested that their limitations stem from the fact that they are only capable of explaining where in an image the pixels that are critical to the decision are located, but they cannot tell what visual features are actually driving decisions at these locations. In other words, they show where the model looks but not what it sees. For example, in the scenario depicted in Fig. 1, where an ImageNet-trained ResNet mistakenly identifies an image as containing a shovel, the attribution map displayed on the left fails to explain the reasoning behind this misclassification. A recent approach has sought to move past attribution methods [40] by using so-called "concepts" to communicate information to users on how a model works. The goal is to find human-interpretable concepts in the activation space of a neural network. Although the approach exhibited potential, its practicality is significantly restricted due to the need for prior knowledge of pertinent concepts in its original formulation and, more critically, the requirement for a labeled dataset of such concepts. Several lines of work have focused on trying to automate the concept discovery process based only on the training dataset and without explicit human supervision. The most prominent of these techniques, ACE [24], uses a combination of segmentation and clustering techniques but requires heuristics to remove outliers. However, ACE provides a proof of concept that it might be possible to discover concepts automatically and at scale – without additional labeling or human supervision. Nevertheless, the approach suffers several limitations: by construction, each image segment can only belong to a single cluster, a layer has to be selected by the user to be used to retrieve the relevant concepts, and the amount of information lost during the outlier rejection phase can be a cause of concern. More recently, Zhang et al. [77] propose to leverage matrix decompositions on internal feature maps to discover concepts. Here, we try to fill these gaps with a novel method called CRAFT, which uses Non-Negative Matrix Factorization (NMF) [46] for concept discovery. In contrast to other concept-based explanation methods, our approach provides an explicit link between their global and local explanations (Fig. 2) and identifies the relevant layer(s) to use to represent individual concepts (Fig. 3). Our main contributions can be described as follows: (i) A novel approach for the automated extraction of high-level concepts learned by deep neural networks. We validate its practical utility to users with human psychophysics experiments. (ii) A recursive procedure to automatically identify concepts and sub-concepts at the right level of granularity – starting with our decomposition at the top of the model and working our way upstream. We validate the benefit of this approach with human psychophysics experiments showing that (i) the decomposition of a concept yields more coherent sub-concepts and (ii) that the groups of points formed by these sub-concepts are more refined and appear meaningful to humans. (iii) A novel technique to quantify the importance of individual concepts for a model's prediction using Sobol indices [34, 67, 68] – a technique borrowed from Sensitivity Analysis.
(iv) The first concept-based explainability method which produces concept attribution maps by backpropagating concept scores into the pixel space by leveraging the implicit function theorem in order to localize the pixels associated with the concept of a given input image. This effectively opens up the toolbox of both white-box [15, 59, 62, 66, 69, 71, 78] and black-box [14, 47, 55, 57] explainability methods to derive concept-wise attribution maps. |
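The factorization at the heart of the CRAFT description above can be sketched with an off-the-shelf NMF; the layer choice, the crop extraction, and the number of concepts below are placeholders rather than the authors' settings.

```python
# Rough sketch of concept extraction via Non-Negative Matrix Factorization:
# activations A (non-negative, e.g. post-ReLU) are factorized as A ~ U @ W, where the
# rows of W act as concept directions and U holds per-crop concept coefficients.
import numpy as np
from sklearn.decomposition import NMF

def extract_concepts(activations, n_concepts=10, seed=0):
    """activations: (n_crops, n_channels) non-negative array from some chosen layer."""
    nmf = NMF(n_components=n_concepts, init="nndsvda", max_iter=500, random_state=seed)
    U = nmf.fit_transform(activations)    # (n_crops, n_concepts) concept coefficients
    W = nmf.components_                    # (n_concepts, n_channels) concept bank
    return U, W

# Crops that maximize U[:, k] can then be shown to users to interpret concept k,
# e.g. top = np.argsort(U[:, k])[::-1][:10].
```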
Ge_Policy_Adaptation_From_Foundation_Model_Feedback_CVPR_2023 | Abstract Recent progress on vision-language foundation models has brought significant advancement to building general-purpose robots. By using the pre-trained models to encode the scene and instructions as inputs for decision making, the instruction-conditioned policy can generalize across different objects and tasks. While this is encouraging, the policy still fails in most cases given an unseen task or environment. In this work, we propose Policy Adaptation from Foundation model Feedback (PAFF). When deploying the trained policy to a new task or a new environment, we first let the policy play with randomly generated instructions to record the demonstrations. While the execution could be wrong, we can use the pre-trained foundation models to provide feedback to relabel the demonstrations. This automatically provides new pairs of demonstration-instruction data for policy fine-tuning. We evaluate our method on a broad range of experiments with the focus on generalization on unseen objects, unseen tasks, unseen environments, and sim-to-real transfer. We show PAFF improves baselines by a large margin in all cases. (*Work done during internship at UCSD. †Work done outside of Amazon.) | 1. Introduction Learning generalizable manipulation policies has been a long-standing problem in robotics. The goal is to train a general-purpose robot which can tackle multiple tasks with different object compositions in diverse environments. However, most current policy learning approaches with imitation learning or reinforcement learning can only learn to solve one task at a time, and usually operate on a fixed set of objects. To achieve human-level generalization, many efforts have been made on performing robotic manipulation tasks specified by natural language [34, 35, 56]. Language can not only bring its compositionality to low-level robot skills, but also operate as a high-level planner for long-horizon tasks. Recently, there has been a trend of leveraging pre-trained vision-language foundation models [38, 47] as the backbone encoders for generalizing robot skills. For example, CLIPORT [56] uses the pre-trained CLIP model [47] as the image observation and language instruction encoder, and learns a manipulation policy on top for tasks across different object arrangements. While encouraging results have been shown in this line of research on generalization across training tasks, it is still very challenging for the learned policy to generalize to unseen tasks and environments. For example, as shown in Figure 1 (a), our experiments show that if we train such a policy on two tasks including (i) pack objects of different shapes in the brown box and (ii) put blocks of different colors in the bowls of different colors, it is very challenging for the policy to generalize to a task that puts objects of different shapes in different colored bowls. Furthermore, the difficulty drastically increases when we need to perform this task in the real world with a different robot. Our key insight is that, while the action generator (i.e., the policy) cannot generalize well, the action classifier with foundation models can still achieve a high accuracy even when "zero-shot" transferred to unseen environments [35, 56].
In this paper, we leverage vision-language foundation models to provide feedback when deploying the policy in unseen tasks and environments. We utilize the feedback from foundation models to fine-tune the policy following test-time training [28, 57, 61], which updates model parameters during test time. Specifically, we propose Policy Adaptation from Foundation model Feedback (PAFF) with a play and relabel pipeline. When adapting a trained policy to a new task or new environment, we first let the policy play, that is, the model continuously generates and performs actions given a series of language instructions in the new task, and we record the demonstrations including the visual observations and the model's actions. Of course, the instructions and the outcome demonstrations will often not match under the out-of-distribution environment. We then let the model relabel to make the correction, that is, the recorded demonstrations can be automatically relabeled by the vision-language pre-trained model. By taking the visual observations of recorded demonstrations as inputs, the pre-trained model can retrieve accurate language instructions correspondingly. Given the accurate paired demonstrations and instructions in the new environment, we can fine-tune and adapt the policy with them. We emphasize that the whole process of PAFF is performed in an automatic way using trained models without human intervention. We carefully design a broad range of language-conditioned robotic adaptation experiments to evaluate the policy adaptation across object composition, tasks and environments, including from simulation to the real world. Our evaluations consist of (i) Compositional Generalization in Fig. 1 (a), where we train a policy to pack objects of different shapes in the brown box, and put blocks of different colors in the bowls of different colors, and adapt it to put objects of different shapes in the bowls of different colors. (ii) Out-of-distribution Generalization in Fig. 1 (c), where we train a policy to pack certain objects in the brown box, and adapt it to unseen objects; and in Fig. 1 (d), where we adapt a policy trained on seen environments to an unseen environment with different textures and placements of static elements such as the sliding door, the drawer and the switch. (iii) Sim-to-real Transfer in Fig. 1 (b), where we adapt a policy trained on simulation data to the real world. We show PAFF improves baselines by a large margin in all evaluations. In sim-to-real transfer, our method significantly improves the success rate over the baseline by an average of 49.6% on four tasks. Our pipeline fills the domain gap between simulation and the real world through utilizing the generalization capability of the foundation model. Our method also increases the success rate from 17.8% to 35.0% in the compositional generalization evaluation, and from 48.4% to 63.8% for packing unseen objects. When adapting the policy to an unseen environment, our method increases the success rate of completing 5 chains of language instructions from 5% to 11% over the baseline method. The extensive evaluation results show that PAFF can effectively adapt a language-conditioned policy to unseen objects, tasks, environments, and realize sim-to-real transfer. |
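A schematic of the play-and-relabel loop described above is sketched below; every callable passed in (rollout, relabel, finetune) stands for the user's own simulator, foundation-model retrieval, and training code, not PAFF's actual API.

```python
# Schematic play-and-relabel loop: the deployed policy "plays" on random instructions,
# the recorded demonstrations are relabeled by a vision-language model, and the policy
# is fine-tuned on the corrected (demonstration, instruction) pairs. All callables are placeholders.
import random

def play_and_relabel(policy, instructions, rollout, relabel, finetune, n_rollouts=100):
    """rollout(policy, instr) -> (observations, actions); relabel(observations) -> corrected instruction."""
    demos = []
    for _ in range(n_rollouts):
        instr = random.choice(instructions)        # play: execute a randomly chosen instruction
        obs, actions = rollout(policy, instr)       # record the demonstration, even if it fails the instruction
        demos.append((obs, actions, relabel(obs)))  # relabel: retrieve the instruction matching what actually happened
    return finetune(policy, demos)                  # adapt the policy on the automatically corrected pairs
```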
Choi_N-Gram_in_Swin_Transformers_for_Efficient_Lightweight_Image_Super-Resolution_CVPR_2023 | Abstract While some studies have proven that Swin Transformer (Swin) with window self-attention (WSA) is suitable for single image super-resolution (SR), the plain WSA ignores the broad regions when reconstructing high-resolution images due to a limited receptive field. In addition, many deep learning SR methods suffer from intensive computations. To address these problems, we introduce the N-Gram context to low-level vision with Transformers for the first time. We define N-Gram as neighboring local windows in Swin, which differs from text analysis that views N-Gram as consecutive characters or words. N-Grams interact with each other by sliding-WSA, expanding the regions seen to restore degraded pixels. Using the N-Gram context, we propose NGswin, an efficient SR network with an SCDP bottleneck taking multi-scale outputs of the hierarchical encoder. Experimental results show that NGswin achieves competitive performance while maintaining an efficient structure when compared with previous leading methods. Moreover, we also improve other Swin-based SR methods with the N-Gram context, thereby building an enhanced model: SwinIR-NG. Our improved SwinIR-NG outperforms the current best lightweight SR approaches and establishes state-of-the-art results. Codes are available at https://github.com/rami0205/NGramSwin. | 1. Introduction The goal of single image super-resolution (SR) is to reconstruct high-resolution (HR) images from low-resolution (LR) images. Many deep learning-based methods have worked in this field. In particular, several image restoration studies [16, 27, 55, 61, 63, 66] have adapted the window self-attention (WSA) proposed by Swin Transformer (Swin) [32], as it integrates the long-range dependency of Vision Transformer [14] and the locality of conventional convolution. However, two critical problems remain in these works. First, the receptive field of the plain WSA is limited within a small local window [52, 56, 58]. It prevents the models from utilizing the texture or pattern of neighbor windows to recover degraded pixels, producing distorted images. (*Corresponding author.) Figure 1. Two tracks of this paper using the N-Gram context. (Left) NGswin outperforms previous leading SR methods (FMEN, HNCT, CARN, RFDN-L, SRPN-Lite, LatticeNet, IMDN; PSNR on Urban100 x3 vs. Mult-Adds (G)) with an efficient structure. (Right) Our proposed N-Gram context improves different Swin Transformer-based SR models (NGswin, SwinIR-light, HNCT; PSNR on Manga109 x2, with vs. without N-Gram). Second, recent state-of-the-art SR [9, 27, 61, 66] and lightweight SR [6, 15, 35, 63] networks require intensive computations. Reducing operations is essential for real-world applications if the parameters are kept around a certain level (e.g., 1M, 4MB sizes), because the primary consumption of semiconductor energy (concerning time) for neural networks is related to Mult-Adds operations [17, 47]. To overcome these problems, we define the N-Gram context as the interaction of neighbor local windows. Neighbor uni-Gram embeddings interact with each other by sliding-WSA to produce the N-Gram context features before window partitioning. The uni-Gram embeddings result from a channel-reducing group convolution [10] to decrease the complexity of the N-Gram interaction (see Fig. 3c).
Our N-Gram context efficiently expands the receptive field of WSA for recovery tasks. This work introduces N-Gram to low-level vision with Transformers for the first time, inspired by the following facts: N-Gram language models treat the extended context beyond each separate word to understand text statistically [8]. Since images have heavy spatial redundancy, some degraded pixels can be recovered from contextual information of neighbor pixels [18]. As shown in Fig. 1, our work progresses in two tracks. Mainly, to solve the problem of the intensive operations in SR, we propose an efficient N-Gram Swin Transformer (NGswin). As illustrated in Fig. 3a, NGswin consists of five components: a shallow module, three hierarchical encoder stages (with patch-merging) that contain NSTBs (N-Gram Swin Transformer Blocks), an SCDP Bottleneck (pixel-Shuffle, Concatenation, Depth-wise convolution, Point-wise projection), a small decoder stage with NSTBs, and a reconstruction module. NSTBs employ our N-Gram context and the scaled-cosine attention proposed by Swin V2 [31]. The SCDP bottleneck, which takes multi-scale outputs of the encoder, is a variant of the bottleneck from U-Net [46]. Experimental results demonstrate that the components above contribute to the efficient and competitive performance of NGswin. Secondly, focusing on improved performance, we apply the N-Gram context to other Swin-based SR models, such as SwinIR-light [27] and HNCT [16]. Notably, SwinIR-NG (improved SwinIR-light with N-Gram) establishes state-of-the-art lightweight SR. The main contributions of this paper are summarized as: (1) We introduce the N-Gram context to low-level vision with Transformers for the first time. It enables the SR networks to expand the receptive field to recover each degraded pixel by sliding-WSA. For efficient calculation of N-Gram WSA, we produce uni-Gram embeddings by a channel-reducing group convolution. (2) We propose an efficient SR network, NGswin. It exploits the hierarchical encoder (with patch-merging), an asymmetrically small decoder, and the SCDP bottleneck. These elements are critical for competitive performance in efficient SR on ×2, ×3, and ×4 tasks. (3) The N-Gram context improves other Swin Transformer methods. The improved SwinIR-NG achieves state-of-the-art results on lightweight SR. |
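The snippet below gives a deliberately simplified picture of contribution (1): uni-Gram embeddings from a channel-reducing group convolution, with average pooling standing in for the sliding-WSA interaction between neighboring windows; all sizes and the residual fusion are assumptions, not the NSTB design.

```python
# Simplified sketch of an N-Gram-style context: summarize each local window with a
# channel-reduced group convolution (uni-Gram embedding), let neighboring windows
# interact (avg-pooling stands in for sliding-WSA), and broadcast the context back.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NGramContext(nn.Module):
    def __init__(self, dim=64, window=8, reduction=4, groups=4):
        super().__init__()
        self.window = window
        self.uni_gram = nn.Conv2d(dim, dim // reduction, kernel_size=window,
                                  stride=window, groups=groups)   # one reduced embedding per window
        self.expand = nn.Conv2d(dim // reduction, dim, kernel_size=1)

    def forward(self, x):                       # x: (B, C, H, W), H and W divisible by window
        uni = self.uni_gram(x)                   # (B, C/r, H/w, W/w)
        ngram = F.avg_pool2d(uni, kernel_size=2, stride=1)         # neighboring windows interact
        ngram = F.pad(ngram, (0, 1, 0, 1), mode="replicate")        # keep the window-grid size
        ctx = self.expand(ngram)                                     # back to C channels
        ctx = ctx.repeat_interleave(self.window, dim=2).repeat_interleave(self.window, dim=3)
        return x + ctx                                               # inject context into the features
```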
Dai_Hybrid_Neural_Rendering_for_Large-Scale_Scenes_With_Motion_Blur_CVPR_2023 | Abstract Rendering novel view images is highly desirable for many applications. Despite recent progress, it remains challenging to render high-fidelity and view-consistent novel views of large-scale scenes from in-the-wild images with inevitable artifacts (e.g., motion blur). To this end, we develop a hybrid neural rendering model that makes image-based representation and neural 3D representation join forces to render high-quality, view-consistent images. Besides, images captured in the wild inevitably contain artifacts, such as motion blur, which deteriorate the quality of rendered images. Accordingly, we propose strategies to simulate blur effects on the rendered images to mitigate the negative influence of blurry images and reduce their importance during training based on precomputed quality-aware weights. Extensive experiments on real and synthetic data demonstrate our model surpasses state-of-the-art point-based methods for novel view synthesis. The code is available at https://daipengwa.github.io/Hybrid-Rendering-ProjectPage/ . | 1. Introduction Novel-view synthesis of a scene is one critical feature required by various applications, e.g., AR/VR, robotics, and video games, to name a few. Neural radiance field (NeRF) [23] and its follow-up works [3, 19, 24, 39, 43, 47] enable high-quality view synthesis on objects or synthetic data. However, synthesizing high-fidelity and view-consistent novel view images of real-world large-scale scenes remains challenging, especially in the presence of inevitable artifacts from the data-capturing process, such as motion blur (see Figure 1 & supplementary material). (*Equal contribution; †Corresponding author.) Figure 1 (panels: Point-NeRF, Ours, GT). Our hybrid neural rendering model generates high-fidelity novel view images. Please note the characters in the book, where the result of Point-NeRF is blurry and the GT is contaminated by blur artifacts. To improve novel view synthesis, mainstream research can be mainly categorized into two lines. One line of methods directly resorts to features from training data to synthesize novel view images [4, 11, 29, 40], namely image-based rendering. By directly leveraging rich high-quality features from neighboring high-resolution images, these methods have a better chance of generating high-fidelity images with distinctive details. Nevertheless, the generated images often lack consistency due to the absence of global structural regularization, and boundary image pixels often contain serious artifacts. Another line of work attempts to equip NeRF with explicit 3D representations in the form of point cloud [28, 43], surface mesh [30, 44] or voxel grid features [9, 19, 46], namely neural 3D representation. Thanks to the global geometric regularization from explicit 3D representations, they can efficiently synthesize consistent novel view images but still struggle with producing high-fidelity images in large-scale scenes (see the blurry images from Point-NeRF [43] in Fig. 1). This may be caused by low-resolution 3D representations [19], noisy geometries [1, 7], imperfect camera calibrations [2], or inaccurate rendering formulas [3], which make encoding a large-scale scene into a global neural 3D representation non-trivial and inevitably lose high-frequency information. Albeit advancing the field, the above works all suffer immediately from low-quality training data, e.g., blurry images.
Recently, Deblur-NeRF [21] aims to address the problem of blurry training data and proposed a pipeline to simulate blurs by querying multiple auxiliary rays, which, however, is computation-and memory-inefficient, hindering its applicability in large-scale scenes. In this paper, we aim at synthesizing high-fidelity and view-consistent novel view images in large-scale scenes using in-the-wild unsatisfactory data, e.g., blurry data. First, to simultaneously address high fidelity and view consistency, we put forward a hybrid neural rendering approach that enjoys the merits of both image-based representation and neural 3D representation. Our fundamental design centers around a 3D-guided neural feature fusion module, which employs view-consistent neural 3D features to integrate high-fidelity 2D image features, resulting in a hybrid feature representation that preserves view consistency whilst simultaneously upholding quality. Besides, to avoid the optimization of the hybrid representation being biased toward one modality, we develop a random feature drop strategy to ensure that features from different modalities can all be well optimized. Second, to effectively train the hybrid model with unsatisfactory in-the-wild data, we design a blur simulation and detection approach to alleviate the negative impact of low-quality data on model training. Specifically, the blur simulation module injects blur into the rendered image to mimic the real-world blurry effects. In this way, the blurred image can be directly compared with the blurry reference image while providing blur-free supervisory signals to train the hybrid model. Besides, to further alleviate the influence of blurry images, we design a content-aware blur detection approach to robustly assess the blurriness scores of images. The calculated scores are further used to adjust the importance of samples during training. In our study, we primarily focus on the blur artifact due to its prevalence in real-world data (e.g., ScanNet); however, our "simulate-and-detect" approach can also be applied to address other artifacts. While our model is built upon state-of-the-art 3D- and image-based neural rendering models, our contribution falls mainly on studying their combinatorial benefits and bridging the gap between NeRF and unsatisfactory data captured in the wild. Our major contributions can be summarized as follows. • We study a hybrid neural rendering model for synthesizing high-fidelity and consistent novel-view images. • We design blur simulation and detection strategies that facilitate offering blur-free training signals for optimizing the hybrid rendering model. • Extensive experiments on real (i.e., ScanNet [5]) and synthetic data (i.e., Habitat-sim [22, 35]) showcase that our method outperforms state-of-the-art point-based methods designed for novel view synthesis. |
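The blur simulation and quality-aware weighting just described can be sketched as below; the Gaussian kernel and the precomputed per-image weights are stand-ins for the paper's blur simulation and content-aware blur detection modules, not their actual implementation.

```python
# Minimal sketch of "simulate-and-detect" training: blur the *rendered* frame before
# comparing it with a blurry reference, and down-weight frames with low quality scores.
import torch
import torch.nn.functional as F

def gaussian_kernel(ksize=9, sigma=2.0):
    ax = torch.arange(ksize, dtype=torch.float32) - (ksize - 1) / 2
    k = torch.exp(-(ax[None, :] ** 2 + ax[:, None] ** 2) / (2 * sigma ** 2))
    k = (k / k.sum()).unsqueeze(0).unsqueeze(0)        # (1, 1, ksize, ksize)
    return k.repeat(3, 1, 1, 1)                         # one depthwise kernel per RGB channel

def blur_aware_loss(rendered, reference, quality_weight, sigma=2.0):
    """rendered/reference: (B, 3, H, W); quality_weight: (B,) precomputed sharpness score in [0, 1]."""
    k = gaussian_kernel(sigma=sigma).to(rendered.device)
    blurred = F.conv2d(rendered, k, padding=k.shape[-1] // 2, groups=3)  # simulate blur on the rendering
    per_image = ((blurred - reference) ** 2).mean(dim=(1, 2, 3))
    return (quality_weight * per_image).mean()          # blurrier references contribute less to training
```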
Hu_Label-Free_Liver_Tumor_Segmentation_CVPR_2023 | Abstract We demonstrate that AI models can accurately segment liver tumors without the need for manual annotation by using synthetic tumors in CT scans. Our synthetic tumors have two intriguing advantages: (I) realistic in shape and texture, which even medical professionals can confuse with real tumors; (II) effective for training AI models, which can perform liver tumor segmentation similarly to the model trained on real tumors—this result is exciting because no existing work, using synthetic tumors only, has thus far reached a similar or even close performance to real tumors. This result also implies that manual efforts for annotating tumors voxel by voxel (which took years to create) can be significantly reduced in the future. Moreover, our synthetic tumors can automatically generate many examples of small (or even tiny) synthetic tumors and have the potential to improve the success rate of detecting small liver tumors, which is critical for detecting the early stages of cancer. In addition to enriching the training data, our synthesizing strategy also enables us to rigorously assess the AI robustness. | 1. Introduction Artificial intelligence (AI) has dominated medical image segmentation [21, 26, 73–75], but training an AI model (e.g., U-Net [48]) often requires a large number of annotations. Annotating medical images is not only expensive and time-consuming, but also requires extensive medical expertise, and sometimes needs the assistance of radiology reports and biopsy results to achieve annotation accuracy [12, 52, 59, 69–71]. Due to its high annotation cost, only roughly 200 CT scans with annotated liver tumors are publicly available (provided by LiTS [5]) for training and testing models. (*Corresponding author: Zongwei Zhou, zzhou82@jh.edu.) To minimize annotation expenses, generating synthetic tumors is an emerging research topic. Early attempts include, but are so far limited to, synthesizing COVID-19 infections [41, 63], lung nodules [19], abdominal tumors [27], diabetic lesions [57], and brain tumors [60]. However, the synthetic tumors in those studies appear very different from the real tumors; due to this, AI models trained using synthetic tumors perform significantly worse than those trained using real tumors. What makes synthesizing tumors so hard? There are several important factors: shape, intensity, size, location, and texture. In this paper, we handcraft a strategy to synthesize liver tumors in abdominal CT scans. Our key novelties include (i) location without collision with vessels, (ii) texture with scaled-up Gaussian noise, and (iii) shape generated from distorted ellipsoids. These three aspects are proposed according to the clinical knowledge of liver tumors (detailed in §3.2). The resulting synthetic tumors are realistic—even medical professionals usually confuse them with real tumors in the visual examination (Figure 1; Table 2). In addition, the model trained on our synthetic tumors achieves a Dice Similarity Coefficient (DSC) of 59.81% for segmenting real liver tumors, whereas AI trained on real tumors obtains a DSC of 57.63% (Figure 2), showing that synthetic tumors have the potential to be used as an alternative to real tumors in training AI models. These results are exciting because using synthetic tumors only, no previous work has thus far reached a similar (or even close) performance to the model trained on real tumors [24].
Moreover, our synthesizing strategy can exhaustively generate tumors with desired locations, sizes, shapes, textures, and intensities, which are not limited to a fixed finite-size training set (the well-known limitation of the conventional training paradigm [65]). For example, it is hard to collect sufficient training examples with small tumors. This is because early-stage tumors may not cause symptoms, which can delay detection, and these tumors are relatively small and exhibit subtle abnormal textures that make it difficult for radiologists to manually delineate the tumor boundaries. In contrast, our synthesis strategy can generate a large number of examples featuring small tumors. Figure 1. [Better viewed in color and zoomed in for details] Can you tell which liver tumors are real and which are fake? The answers are provided in the Appendix. We have recruited two medical professionals with at least six years of experience to distinguish fake tumors, generated by our method, from the real ones (namely, a Visual Turing Test). Our synthetic tumors have passed the Visual Turing Test for both medical professionals (<50% of fake tumors were picked out). More importantly, using our label-free synthetic tumors, AI models can segment real tumors with performance similar to the AI models trained on real tumors with expensive, detailed, per-voxel annotation. The key contribution of ours is a synthetic tumor generator, which offers five advantages as summarized below. 1. The synthesis strategy embeds medical knowledge into an executable program, enabling the generation of realistic tumors through the collaboration of radiologists and computer scientists (§5.1; Table 2; Figure 3). |
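A toy version of the synthesis recipe outlined above (ellipsoid shape, Gaussian-noise texture, blending into the CT) might look like the following; the intensities, radii, and blending rule are illustrative assumptions, not the calibrated values or the full pipeline used in the paper.

```python
# Toy sketch of tumor synthesis: carve an ellipsoid, soften/distort its boundary,
# fill it with scaled-up Gaussian-noise texture, and blend it into the CT volume.
# The synthetic mask comes "for free" as a training label.
import numpy as np
from scipy.ndimage import gaussian_filter

def synthesize_tumor(ct, center, radii, intensity=-30.0, noise_sigma=2.0, rng=None):
    """ct: (D, H, W) float CT volume (HU); center/radii: ellipsoid parameters in voxels."""
    if rng is None:
        rng = np.random.default_rng(0)
    zz, yy, xx = np.ogrid[:ct.shape[0], :ct.shape[1], :ct.shape[2]]
    ellipsoid = (((zz - center[0]) / radii[0]) ** 2 +
                 ((yy - center[1]) / radii[1]) ** 2 +
                 ((xx - center[2]) / radii[2]) ** 2) <= 1.0
    mask = gaussian_filter(ellipsoid.astype(np.float32), sigma=1.0)          # soften the boundary
    texture = gaussian_filter(rng.standard_normal(ct.shape), sigma=noise_sigma) * 10.0
    out = ct + mask * (intensity + texture)                                   # blend a darker, textured blob
    return out, mask > 0.5                                                    # synthetic scan and its label
```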
Garcia_Uncurated_Image-Text_Datasets_Shedding_Light_on_Demographic_Bias_CVPR_2023 | Abstract The increasing tendency to collect large and uncurated datasets to train vision-and-language models has raised concerns about fair representations. It is known that even small but manually annotated datasets, such as MSCOCO, are affected by societal bias. This problem, far from being solved, may be getting worse with data crawled from the Internet without much control. In addition, the lack of tools to analyze societal bias in big collections of images makes addressing the problem extremely challenging. Our first contribution is to annotate part of the Google Conceptual Captions dataset, widely used for training vision-and-language models, with four demographic and two contextual attributes. Our second contribution is to conduct a comprehensive analysis of the annotations, focusing on how different demographic groups are represented. Our last contribution lies in evaluating three prevailing vision-and-language tasks: image captioning, text-image CLIP embeddings, and text-to-image generation, showing that societal bias is a persistent problem in all of them. https://github.com/noagarcia/phase | 1. Introduction The training paradigm in vision-and-language models has shifted from manually annotated collections, such as MSCOCO [30] and Visual Genome [27], to massive datasets with little-to-no curation automatically crawled from the Internet [17, 42, 43]. Figure 1 illustrates this tendency by comparing the size of paired image-text datasets over time. Whereas manually annotated datasets, widely used in the last decade, contained a few hundred thousand images each, the latest automatically crawled collections are composed of several million samples. This large amount of data has led to training some disruptive models in the field, such as CLIP [37] trained on 400 million image-text pairs; Imagen [41] trained on 860 million image-text pairs; Flamingo [1] trained on 2.3 billion images and short videos paired with text; DALL-E 2 [38] trained on 650 million images; or Stable Diffusion [39], trained on 600 million captioned images. Figure 1 (datasets shown: MSCOCO 2014, Visual Genome 2016, GCC 2018, RedCaps 2021, LAION-400M 2021; manually annotated vs. automatically crawled). Evolution of paired image-text datasets in terms of number of samples (in million). Datasets scaled up with data automatically crawled from the Internet, reaching the current status in which models are trained with hundreds of millions of samples. Those models have been shown to learn visual and language representations that outperform the previous state-of-the-art on tasks such as zero-shot classification [37] or text-to-image generation [38, 39]. Despite the impressive results on controlled benchmarks, a critical drawback arises: the larger the training set, the less control over the data. With toxic content easily accessible on the Internet, models trained on uncurated collections are more prone to learn harmful representations of the world, including societal bias, which results in models performing differently for different sociodemographic groups [57]. The risk of obtaining unfair representations is high, as not only do models trained on biased datasets learn to reproduce bias but also amplify it by making predictions more biased than the original data [22, 53, 56]. This turns out to be harmful when, far from controlled research environments, models are used in the real world [10].
Manually annotated datasets [16, 30] have been shown to be affected by societal bias [21, 32, 60, 61], but the problem gets worse in automatically crawled datasets [8, 9]. To overcome societal bias, fairness protocols must be included both in the dataset and in the model development phase. Data analysis [8, 9, 21, 32, 52, 58], evaluation metrics [22, 40, 53], and mitigation techniques [5, 11, 23, 54] are essential tools for developing fairer models; however, they require demographic attributes, such as gender or skin-tone, to be available. Table 1. Image-text datasets with annotations for bias detection (Dataset / Image source / Annotation process / Labels / Images / Regions / Attributes): Zhao et al. [61] / MSCOCO [30] / automatic (captions) / image / 33,889 / - / gender; Zhao et al. [60] / MSCOCO [30] (val) / crowd-sourcing / region / 15,762 / 28,315 / gender, skin-tone; PHASE / GCC [43] / crowd-sourcing / region / 18,889 / 35,347 / age, gender, skin-tone, ethnicity, emotion, activity. These annotations are currently scarce, and only exist for a few datasets and attributes [60, 61]. In this paper, we contribute to the analysis, evaluation, and mitigation of bias in vision-and-language tasks by annotating six types of demographic and contextual attributes in a large dataset: the Google Conceptual Captions (GCC) dataset [43], which was one of the first automatically crawled datasets, with 3.3 million image-caption pairs. We name our annotations PHASE (Perceived Human Annotations for Social Evaluation), and we use them to conduct a comprehensive analysis of the distribution of demographic attributes on the GCC dataset. We complement our findings with experiments on three main vision-and-language tasks: image captioning, text-image embeddings, and text-to-image generation. Overall, we found that a dataset crawled from the Internet like GCC presents big imbalances on all the demographic attributes under analysis. Moreover, when compared against the demographic annotations in MSCOCO by Zhao et al. [60], GCC has bigger representation gaps in gender and skin-tone. As for the downstream tasks, all three of them show evidence of different performance for different demographic groups. |
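The kind of representation analysis described above largely reduces to counting how often each perceived group appears among the annotated regions; the field names below are assumptions about the annotation format, not the released PHASE schema.

```python
# Small sketch of a representation-gap analysis: compute each demographic group's
# share of the annotated person regions for a chosen attribute.
from collections import Counter

def group_shares(annotations, attribute="perceived_gender"):
    """annotations: iterable of dicts, one per annotated person region (field names assumed)."""
    counts = Counter(a[attribute] for a in annotations
                     if a.get(attribute) not in (None, "unsure"))
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Comparing the returned shares across datasets (e.g. GCC vs. MSCOCO annotations)
# gives the representation gaps discussed in the text.
```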
Dong_Adversarial_Robustness_via_Random_Projection_Filters_CVPR_2023 | Abstract Deep Neural Networks show superior performance in various tasks but are vulnerable to adversarial attacks. Most defense techniques are devoted to adversarial training strategies; however, it is difficult to achieve satisfactory robust performance only with traditional adversarial training. We mainly attribute it to the fact that aggressive perturbations which lead to the loss increment can always be found via gradient ascent in the white-box setting. Although some noises can be involved to prevent attacks from deriving precise gradients on inputs, there exist trade-offs between the defense capability and natural generalization. Taking advantage of the properties of random projection, we propose to replace part of the convolutional filters with random projection filters, and theoretically explore the geometric representation preservation of the proposed synthesized filters via the Johnson-Lindenstrauss lemma. We conduct sufficient evaluation on multiple networks and datasets. The experimental results showcase the superiority of the proposed random projection filters to state-of-the-art baselines. The code is available on GitHub. | 1. Introduction Although Deep Neural Networks (DNNs) have become a popular technique in various scientific fields [16, 46, 47, 50], the vulnerability of DNNs reveals the high risk of deployment in real scenarios, especially under the attack of adversarial examples [11, 28]. Some tiny and imperceptible perturbations to network inputs could result in major changes of the outputs, which can be easily crafted through various adversarial attack strategies [1, 6, 11]. Adversarial attacks can be generally categorized into two streams, white-box and black-box attacks. In the black-box setting, the attackers have no knowledge of the victim models but can estimate a strong perturbation via surrogate models or a huge number of queries [15, 19]. In the white-box setting, the attackers have full knowledge of the victim model, including the model parameters, network architecture, and inference strategy [28, 39]. (*Corresponding author.) Since the gradients of victim models can be directly fetched, the crafted adversarial examples are more aggressive, and the performance under white-box attacks is one of the key criteria of robustness evaluation. Seeking adversarially robust networks becomes a key challenge when it comes to the deployment of DNNs. One of the most popular and effective techniques is adversarial training [28], which augments the training data with adversarial examples within a fixed perturbation size. With the involvement of adversarial examples, DNNs are optimized to preserve their outputs for perturbed samples within the ℓ_p ball of all training input data. However, due to the increasingly advanced attack techniques, it is difficult for existing adversarially trained networks to achieve satisfactory robustness against all potential attacks. Furthermore, the training on stronger adversarial examples could hurt the natural generalization of models [52], and there exists a trade-off between robustness and accuracy [51]. Besides traditional adversarial training, the utilization of randomization in adversarial robustness has been proven effective. For example, Liu et al. [25] propose to inject noise sampled from a Gaussian distribution into the inputs of convolution layers.
Some theoretical analyses have shown that randomized classifiers can easily outperform deterministic ones in defending against adversarial attacks [32, 33]. We mainly attribute the improvement of randomization in adversarial robustness evaluation to the fusion of features with noises, which prevents white-box attackers from obtaining the precise gradients of the loss with respect to the inputs. Although the involvement of noise in the networks can be an effective defense mechanism, the design of noises, such as the way of injection, the magnitude of noise, etc., can also significantly influence the natural generalization of networks in practice. The trade-offs between adversarial robustness and optimization difficulty are always ignored in the randomized techniques, which limits their superiority to deterministic models. In this paper, we introduce randomness into deep neural networks with the help of random projection filters. Random projection is a simple yet effective technique for dimension reduction, which can approximately preserve the pairwise distance between any two data points from a higher-dimensional space in the projected lower-dimensional space under certain conditions. The theoretical and empirical advantages offered by random projection thus inspire a new way to explore the potential of noise injection with better trade-offs in Convolutional Neural Networks (CNNs). We propose to partially replace the convolutional filters with the random projection filters. Theoretically, we extend the scope of the Johnson-Lindenstrauss Lemma [41] to cover the convolutions, where partial convolutional filters are randomly sampled from a zero-mean Gaussian distribution. Pairwise example distance can also be approximately preserved under the new convolutions defined by random projection filters, if the number of random projection filters is lower bounded in terms of the weight norm of the remaining convolutional filters. Motivated by these observations, we introduce a simple and efficient defense scheme via the proposed Random Projection Filters (RPF). As the parameters of random projection filters are randomly sampled during forwarding, the attackers have no knowledge of the upcoming sampled parameters even in white-box attack settings. The effectiveness of the proposed RPF is verified via extensive empirical evaluations in our experiments. |
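A minimal sketch of a convolution whose filters are partly random projections, resampled at every forward pass as described above, is given below; the split ratio and the Gaussian scale are assumptions rather than the paper's configuration.

```python
# Minimal sketch of Random-Projection-Filter-style convolution: a fraction of the
# output filters are freshly sampled from a zero-mean Gaussian at every forward pass,
# so a white-box attacker never knows the upcoming filter values.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RPFConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, rp_ratio=0.5, padding=1):
        super().__init__()
        self.n_rp = int(out_ch * rp_ratio)                 # filters replaced by random projections
        self.weight = nn.Parameter(torch.empty(out_ch - self.n_rp, in_ch, kernel_size, kernel_size))
        nn.init.kaiming_normal_(self.weight)
        self.bias = nn.Parameter(torch.zeros(out_ch))
        self.padding = padding
        self.sigma = (in_ch * kernel_size * kernel_size) ** -0.5   # assumed Gaussian scale

    def forward(self, x):
        rp = torch.randn(self.n_rp, *self.weight.shape[1:], device=x.device) * self.sigma
        w = torch.cat([self.weight, rp], dim=0)            # learned filters + freshly sampled ones
        return F.conv2d(x, w, self.bias, padding=self.padding)
```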
Hu_Self-Guided_Diffusion_Models_CVPR_2023 | Abstract Diffusion models have demonstrated remarkable progress in image generation quality, especially when guidance is used to control the generative process. However, guidance requires a large amount of image-annotation pairs for training and is thus dependent on their availability and correctness. In this paper, we eliminate the need for such annotation by instead exploiting the flexibility of self-supervision signals to design a framework for self-guided diffusion models. By leveraging a feature extraction function and a self-annotation function, our method provides guidance signals at various image granularities: from the level of holistic images to object boxes and even segmentation masks. Our experiments on single-label and multi-label image datasets demonstrate that self-labeled guidance always outperforms diffusion models without guidance and may even surpass guidance based on ground-truth labels. When equipped with self-supervised box or mask proposals, our method further generates visually diverse yet semantically consistent images, without the need for any class, box, or segment label annotation. Self-guided diffusion is simple, flexible and expected to profit from deployment at scale. | 1. Introduction Diffusion models have recently enabled tremendous advancements in many computer vision fields related to image synthesis, but counterintuitively this often comes with the cost of requiring large annotated datasets [49, 55]. For example, the image fidelity of samples from diffusion models can be spectacularly enhanced by conditioning on class labels [17]. Classifier guidance goes a step further and offers control over the alignment with the class label, by using the classifier gradient to guide the image generation [17]. Classifier-free guidance [28] replaces the dedicated classifier with a diffusion model trained by randomly dropping the condition during training. This has proven a fruitful line of research for several other condition modalities, such as text [50, 55], image layout [53], visual neighbors [3], and image features [20]. (†Equal contribution. Source code at: https://taohu.me/sgdm/ .) Figure 1 (pipeline elements: images without annotations, a self-supervised feature extractor producing self-annotations such as k-means cluster-IDs, self-labeled / self-boxed / self-segmented example conditioning, and generated samples). Self-guided diffusion framework. Our method can leverage large and diverse image datasets without any annotations for training guided diffusion models. Starting from a dataset without ground-truth annotations, we apply a self-supervised feature extractor to create self-annotations. Using these, we train diffusion models with either self-labeled, self-boxed, or self-segmented guidance that enable controlled generation and improved image fidelity. However, all these conditioning and guidance methods require ground-truth annotations. In many domains, this is an unrealistic and too costly assumption. For example, medical images require domain experts to annotate very high-resolution data, which is infeasible to do exhaustively [45]. In this paper, we propose to remove the necessity of ground-truth annotation for guided diffusion models. We are inspired by progress in self-supervised learning [11, 13], which encodes images into semantically meaningful latent vectors without using any label information. It usually does so by solving a pretext task [2, 21, 24, 69] on the image level to remove the necessity of labels.
This annotation-free paradigm enables the representation learning to upscale to This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 18413 larger and more diverse image datasets [19]. The holistic image-level self-supervision has recently been extended to more expressive dense representations, including bounding boxes (e.g., [41, 57]) and pixel-precise segmentation masks (e.g., [22, 72]). Some self-supervised learning methods even outperform supervised alternatives [11, 24]. We hypothesize that for diffusion models, self-supervision may also provide a flexible and competitive, possibly even stronger guidance signal than ground-truth labeled guidance. In this paper, we propose self-guided diffusion models , a framework for image generation using guided diffusion without the need for any annotated image-label pairs, the detailed structure is shown in Figure 1. The framework en-compasses a feature extraction function and a self-annotation function, that are compatible with recent self-supervised learning advances. Furthermore, we leverage the flexibil-ity of self-supervised learning to generalize the guidance signal from the holistic image level to (unsupervised) lo-cal bounding boxes and segmentation masks for more fine-grained guidance. We demonstrate the potential of our pro-posal on single-label and multi-label image datasets, where self-labeled guidance always outperforms diffusion models without guidance and may even surpass guidance based on ground-truth labels. When equipped with self-supervised box or mask proposals, our method further generates visually diverse yet semantically consistent images, without the need for any class, box, or segment label annotation. |
Chen_HNeRV_A_Hybrid_Neural_Representation_for_Videos_CVPR_2023 | Abstract Implicit neural representations store videos as neural networks and have performed well for various vision tasks such as video compression and denoising. With frame in-dex or positional index as input, implicit representations (NeRV , E-NeRV , etc.) reconstruct video frames from fixed and content-agnostic embeddings. Such embedding largely limits the regression capacity and internal generalization for video interpolation. In this paper, we propose a Hy-brid Neural Representation for Videos ( HNeRV ), where a learnable encoder generates content-adaptive embeddings, which act as the decoder input. Besides the input em-bedding, we introduce HNeRV blocks, which ensure model parameters are evenly distributed across the entire net-work, such that higher layers (layers near the output) can have more capacity to store high-resolution content and video details. With content-adaptive embeddings and re-designed architecture, HNeRV outperforms implicit meth-ods in video regression tasks for both reconstruction qual-ity (+4.7PSNR) and convergence speed ( 16×faster), and shows better internal generalization. As a simple and effi-cient video representation, HNeRV also shows decoding ad-vantages for speed, flexibility, and deployment, compared to traditional codecs (H.264, H.265) and learning-based compression methods. Finally, we explore the effectiveness of HNeRV on downstream tasks such as video compression and video inpainting. | 1. Introduction Given the massive amount of videos generated every day, storing and transferring them efficiently is a key task in computer vision and video processing. Even for modern storage systems, the space requirements of raw video data can be overwhelming. Despite storage becoming cheaper, network speeds and I/O processing remain a bottleneck and make transferring and processing videos expensive. Traditional video codecs, such as H.264 [47] and Input frame VideoEmbedding Encoder Output frame Hybrid neural representation (HNeR V) Decoder c 300 600 1200 2400 4800 Epoch3032343638PSNR 16x faster 30.8731.6832.1332.4132.63 32.0933.234.1534.7634.9735.5736.1936.9337.2737.56 NeRV E-NeRV HNeRVFigure 1. Top: hybrid neural representation with learnable and content-adaptive embedding (ours). Bottom: video regression for hybrid and implicit neural representations. HEVC [41], rely on a manually-designed encoder and de-coder based on discrete cosine transform [3]. With the suc-cess of deep learning, many attempts [2, 7, 11, 23, 24, 27, 36, 37, 49] have been made to replace certain components of existing compression pipelines with neural networks. Although these learning-based compression methods show high potential in terms of rate-distortion performance, they suffer from complex pipelines and expensive computation, not just to train, but also to encode and decode. To address the complex pipelines and heavy computa-tion, implicit neural representations [6, 33, 35, 38, 40] have become popular due to their simplicity, compactness, and efficiency. These methods show great potential for visual data compression, such as COIN [8] for image compres-sion, and NeRV [4] for video compression. By representing videos as neural networks, video compression problems can This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 
10270 be converted to model compression problems, which greatly simplifies the encoding and decoding pipeline. Implicit representation methods for video compression present a major trade-off: they embrace simplicity at the expense of generalizability. Given a frame index tas in-put, NeRV [4] uses a fixed position encoding function and a learnable decoder to reconstruct video frames from temporal embeddings. Another implicit representation, E-NeRV [22], takes a temporal embedding and spatial embed-ding to reconstruct video frames. Since the embeddings of NeRV and E-NeRV are based on spatial and/or temporal in-formation only; without connection to the actual content of frames, they are content-agnostic. For decoding, NeRV-like models compute these embeddings using frame index alone, without access to the original frame. This is quite elegant for video compression, since instead of storing many frame embeddings, one would only need to store model weights and basic metadata ( e.g., number of frames). However, this comes with some major disadvantages. Firstly, since embeddings are content-agnostic, and due to how the temporal embeddings are computed, there is no way to meaningfully interpolate between frames. Secondly, and more importantly, the positional embedding used by the fully-implicit models provides no visual prior and limits the regression capacity, since all the information needs to be learned by and stored in the video decoder. In this paper, we propose a learnable encoder as a key component of hybrid neural representation for videos (HN-eRV , Figure 1 (top). Our proposed neural representation is a hybrid between implicit (network-centric) and explicit (embedding-centric) approaches since it stores videos in two parts: the tiny content-adaptive frame embeddings and a learned neural decoder. Besides the issue of content-agnostic embedding, prior work such as NeRV also suf-fers from an imbalance in the distribution of model parame-ters. In these decoders, later layers (closer to the output im-age) have much fewer parameters than earlier layers (closer to the embedding). This hinders NeRV’s ability to effec-tively reconstruct massive video content while preserving frame details. To rectify this, we introduce the HNeRV block, which increases kernel sizes and channel widths at later stages. With HNeRV blocks, we can build video de-coders with parameters that are more evenly distributed over the entire network. As a hybrid method, HNeRV improves reconstruction quality for video regression and boosts the convergence speed by up to 16×compared to implicit meth-ods, shown in Figure 1 (bottom). With content-adaptive em-beddings, HNeRV also shows much better internal general-ization (ability to encode and decode frames from the video that were not seen during training), and we verify this by frame interpolation results in Section 4.2. HNeRV only requires a network forward operation for video decoding, which offers great advantages over tradi-tional codecs and prior deep learning approaches in terms of speed, flexibility, and ease of deployment. Additionally, most other video compression methods are auto-regressive and there is a high dependency on the sequential order of video frames. In contrast, there is no dependency on the sequential order of frames for HNeRV , which means it can randomly access frames efficiently to decode frames in par-allel. 
Such simplicity and parallelism make HNeRV a good codec for further speedups, like a special neural processing unit (NPU) chip, or parallel decoding with huge batches. HNeRV is still viable for video compression, while also showing promising performance for video restoration tasks. We design our encoder such that it can also be compressed; additionally, our HNeRV decoder blocks perform well in the model compression regime, such that HNeRV is com-petitive with state-of-the-art methods. We posit that neural representation can be robust to distortion in pixel space and therefore restore videos which have undergone distortions. We verify this observation on the video inpainting task. In summary, we propose a hybrid neural representa-tion for videos. With content-adaptive embedding and re-designed architecture, HNeRV shows much better video re-gression performance over implicit methods, in reconstruc-tion quality ( +4.7PSNR), convergence speed ( 16×faster), and internal generalization. As an efficient video codec, HNeRV is easy to deploy, and is simple, fast, and flexible during video decoding. Finally, HNeRV shows good perfor-mance over downstream tasks like video compression and video inpainting. |
Hirota_Model-Agnostic_Gender_Debiased_Image_Captioning_CVPR_2023 | Abstract Image captioning models are known to perpetuate and amplify harmful societal bias in the training set. In this work, we aim to mitigate such gender bias in image caption-ing models. While prior work has addressed this problem by forcing models to focus on people to reduce gender mis-classification, it conversely generates gender-stereotypical words at the expense of predicting the correct gender. From this observation, we hypothesize that there are two types of gender bias affecting image captioning models: 1) bias that exploits context to predict gender, and 2) bias in the prob-ability of generating certain (often stereotypical) words be-cause of gender. To mitigate both types of gender biases, we propose a framework, called LIBRA, that learns from syn-thetically biased samples to decrease both types of biases, correcting gender misclassification and changing gender-stereotypical words to more neutral ones. | 1. Introduction In computer vision, societal bias, for which a model makes adverse judgments about specific population sub-groups usually underrepresented in datasets, is increasinglyconcerning [4, 6, 7, 11, 17, 22, 41, 43, 52, 57]. A renowned example is the work by Buolamwini and Gebru [7], which demonstrated that commercial facial recognition models predict Black women with higher error rates than White men. The existence of societal bias in datasets and models is extremely problematic as it inevitably leads to discrimina-tion with potentially harmful consequences against people in already historically discriminated groups. One of the computer vision tasks in which societal bias is prominent is image captioning [49,58], which is the task of generating a sentence describing an image. Notably, image captioning models not only reproduce the societal bias in the training datasets, but also amplify it. This phenomenon is known as bias amplification [10,15,24,42,64] and makes models produce sentences more biased than the ones in the original training dataset. As a result, the generated sen-tences can contain stereotypical words about attributes such as gender that are sometimes irrelevant to the images. Our study focuses on gender bias in image captioning models. First, based on the observations in previous work [5,8,18,44,51], we hypothesize that there exist two different types of biases affecting captioning models: Type 1 .context →gender bias, which makes captioning models exploit the context of an image and precedently This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 15191 A woman in a suit Classifier A man in a suit Swap man woman T5 (finetuned) A woman in a suit Mask A woman in a <mask> A woman in a skirt Biased Captions A woman in a suit Original Captions A man in a suit Mask A <mask> in a suit Filtering A woman in a skirt Transformer Decoder Biased Caption Synthesis context → gender gender → context A woman in a suit Debiasing Caption Generator Transformer Encoder Cross Attention Original Caption Mask Textual / Visual Embeddings Figure 2. Overview of LIBRA. For the original captions ( i.e., ground-truth captions written by annotators), we synthesize biased captions with context →gender or/and gender →context bias (Biased Caption Synthesis). 
Then, given the biased captions and the original images, we train an encoder-decoder captioner, Debiasing Caption Generator, to debias the input biased captions ( i.e., predict original captions). generated words, increasing the probability of predict-ing certain gender, as shown in Figure 1 (a). Type 2 .gender →context bias, which increases the prob-ability of generating certain words given the gender of people in an image, as shown in Figure 1 (b). Both types of biases can result in captioning models gener-ating harmful gender-stereotypical sentences. A seminal method to mitigate gender bias in image cap-tioning is Gender equalizer [8], which forces the model to focus on image regions with a person to predict their gen-der correctly. Training a captioning model using Gender equalizer successfully reduces gender misclassification (re-ducing context →gender bias). However, focusing only on decreasing such bias can conversely amplify the other type of bias [18,51]. For example, as shown in Figure 6, a model trained to correctly predict the gender of a person can pro-duce other words that are biased toward that gender (ampli-fying gender →context bias). This suggests that methods for mitigating bias in captioning models must consider both types of biases. We propose a method called LIBRA (model -agnosti c debiasing fra mework) to mitigate bias amplification in im-age captioning by considering both types of biases. Specif-ically, LIBRA consists of two main modules: 1) Bi-ased Caption Synthesis (BCS), which synthesizes gender-biased captions (Section 3), and 2) Debiasing Caption Generator (DCG), which mitigates bias from synthesized captions (Section 4). Given captions written by anno-tators, BCS synthesizes biased captions with gender → context or/and context → gender biases. DCG is then trained to recover the original caption given a⟨synthetic biased caption, image ⟩pair. Once trained, DCG can be used on top of any image captioning models to miti-gate gender bias amplification by taking the image and gen-erated caption as input. Our framework is model-agnostic and does not require retraining image captioning models. Extensive experiments and analysis, including quantita-tive and qualitative results, show that LIBRA reduces both types of gender biases in most image captioning models on various metrics [8, 18, 44, 66]. This means that DCG can correct gender misclassification caused by the context of the image/words that is biased toward a certain gender, mitigating context →gender bias (Figure 1 (a)). Also, it tends to change words skewed toward each gender to less biased ones, mitigating gender →context bias (Figure 1 (b)). Furthermore, we show that evaluation of the generated captions’ quality by a metric that requires human-written captions as ground-truth ( e.g., BLEU [30] and SPICE [1]) likely values captions that imitate how annotators tend to describe the gender ( e.g.,women posing vs.men standing ). |
Huang_Local_Implicit_Ray_Function_for_Generalizable_Radiance_Field_Representation_CVPR_2023 | Abstract We propose LIRF (LocalImplicit RayFunction), a gen-eralizable neural rendering approach for novel view render-ing. Current generalizable neural radiance fields (NeRF) methods sample a scene with a single ray per pixel and may therefore render blurred or aliased views when the input views and rendered views capture scene content with dif-ferent resolutions. To solve this problem, we propose LIRF to aggregate the information from conical frustums to con-struct a ray. Given 3D positions within conical frustums, LIRF takes 3D coordinates and the features of conical frus-tums as inputs and predicts a local volumetric radiance field. Since the coordinates are continuous, LIRF renders high-quality novel views at a continuously-valued scale via volume rendering. Besides, we predict the visible weights for each input view via transformer-based feature matching *Work was done during an internship at Tencent AI Lab. †Corresponding authors.to improve the performance in occluded areas. Experimen-tal results on real-world scenes validate that our method outperforms state-of-the-art methods on novel view render-ing of unseen scenes at arbitrary scales. | 1. Introduction Novel view synthesis has garnered recent attention with compelling applications of neural rendering in virtual and augmented reality. Different from image-based rendering [6, 19, 32, 42, 74], Neural Radiance Fields (NeRF) [43] im-plicitly represents the 3D scenes within multilayer percep-trons (MLPs) by mapping coordinates to color and geom-etry of scenes. To render a pixel, the ray projected to that pixel is traced and the color of each sampled point along the ray is accumulated based on volume rendering. Despite NeRF and its variants having demonstrated re-markable performance in providing immersive experiences in various view synthesis tasks, their practical applications This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 97 are constrained by the requirement of training from scratch on each new scene, which is time-consuming. To overcome this problem, many researches [10, 12, 29, 33, 37, 58, 64, 70] introduce image-based rendering techniques to NeRF, which achieves generalization on unseen scenes. They take into consideration the image features (from nearby views) of a 3D point. The common motivation is to predict the density and color of this point by matching the multi-view features, which is similar to the stereo matching methods [20,54,68] that find a surface point by checking the consis-tency of multi-view features. While these methods generalize well on new scenes when the distance of input and testing views are roughly constant from the scene (as in NeRF), they cannot prop-erly deal with the less constrained settings such as differ-ent resolutions or varying focal length and produce results with blurring or aliasing artifacts. Since a single ray is cast through each pixel whose size and shape are ignored, it’s challenging to query the accurate feature of the target ray from input images as shown in Fig. 2(a), and the model learns an ambiguous result as shown in Fig. 2(b). 
Mip-NeRF [3], a NeRF variant of per-scene optimization, pro-poses an anti-aliasing design that models the ray through a pixel as a cone and uses a 3D Gaussian to approximate the sampled conical frustum (a cone cut perpendicular to its axis) for volumetric representation. However, directly ex-tending Mip-NeRF to a generalizable method is also chal-lenging to extract the accurate features of the ray from input images due to the subpixel precision. Consequently, an ef-ficient solution is to supersample each pixel by marching multiple rays according to its footprint, similar to the strat-egy used in offline raytracing. Our key insight is the local implicit ray function (LIRF) that represents the feature aggregation of samples within ray conical frustum in a continuous manner, as shown in Fig. 1. Specifically, given any 3D sampled position within a con-ical frustum, our LIRF outputs the aggregated feature by taking the 3D coordinate of this position and the features of vertices within the conical frustum as inputs (the vertices of a conical frustum are defined with eight points (red points) as shown in Fig. 1). The continuous sampled position al-lows our method to arbitrarily upsample the rendered rays and thus synthesize novel views of the same unseen scene at multiple levels of detail (anti-blurring and anti-aliasing). Furthermore, recent generalizable NeRF methods [29, 37] introduce multi-view depth estimation to reduce the arti-facts caused by occlusions, but it is computationally expen-sive to construct the cost volume for each view. We instead match local multi-view feature patches to estimate the vis-ibility weights of each sample for anti-occlusion. Overall, our main contributions are: 1. A new generalizable approach that renders pixels by casting cones and outperforms existing methods on Figure 2. Most generalizable variants of NeRF represent a ray as a set of infinitesimal samples (shown here as dots) along that ray and map these samples into input views to query image features for volumetric representation prediction. However, this results in two drawbacks when training on multi-scale images with less con-strained settings: (a) Inaccurate features. The sampling strategy which ignores the shape and size of each ray is difficult to query accurate image features. (b) Ambiguous supervisions. The same 3D position captured by cameras under different scales results in different colors because these pixels are the integral of regions with different shapes and sizes (shown here as trapezoids). During the training, the network learns to map the same image features (from the source view) to these different colors, which causes am-biguous results. novel view synthesis at multiple scales. |
Huang_Divide_and_Adapt_Active_Domain_Adaptation_via_Customized_Learning_CVPR_2023 | Abstract Active domain adaptation (ADA) aims to improve the model adaptation performance by incorporating activelearning (AL) techniques to label a maximally-informativesubset of target samples. Conventional AL methods do notconsider the existence of domain shift, and hence, fail to identify the truly valuable samples in the context of domainadaptation. To accommodate active learning and domainadaption, the two naturally different tasks, in a collabo-rative framework, we advocate that a customized learn-ing strategy for the target data is the key to the successof ADA solutions. We present Divide-and-Adapt (DiaNA), a new ADA framework that partitions the target instancesinto four categories with stratified transferable properties. With a novel data subdivision protocol based on uncertaintyand domainness, DiaNA can accurately recognize the mostgainful samples. While sending the informative instances for annotation, DiaNA employs tailored learning strate-gies for the remaining categories. Furthermore, we pro-pose an informativeness score that unifies the data parti-tioning criteria. This enables the use of a Gaussian mix-ture model (GMM) to automatically sample unlabeled datainto the proposed four categories. Thanks to the “divide-and-adapt” spirit, DiaNA can handle data with large vari-ations of domain gap. In addition, we show that DiaNA cangeneralize to different domain adaptation settings, such asunsupervised domain adaptation (UDA), semi-superviseddomain adaptation (SSDA), source-free domain adaptation(SFDA), etc. | 1. Introduction Domain adaptation (DA) approaches strive to general-ize model trained on a labeled source domain to a targetdomain with rare annotation [ 5,13,16,24] by coping with †Corresponding author is Guanbin Li. TV Source Target Backpack Fan BikeCentroids (1) Confident Consistent Transferable centroids(2) Uncertain Consistent Transferable margins(3) Uncertain Inconsistent Need Annotation(4) Confident Inconsistent Pan Challenging SamplesFor source model: Figure 1. The illustration of our proposed Divide-and-Adapt mechanism to divide target samples into different data subsets forcustomized learning. the domain disparity. Nevertheless, DA methods are signifi-cantly outperformed by their supervised counterparts due tothe scarceness of annotation as demonstrated in [ 4,14,27]. In practice, it is cost-effective to get a moderate amountof target samples labeled to boost the performance of do-main adaptation. Active learning (AL) approaches seekto select samples with uncertainty [ 9,12,29,30] and di-versity [ 19,25] to best benefit the model, which properly matches the demand. However, previous AL methods as-sume that both the labeled and unlabeled data follow thesame distribution, such a strategy may become ineffectiveto the DA scenarios where the target data suffer from do-main shift. The recently proposed active domain adaptation(ADA) [ 4,22,31] aims to resolve this issue by actively se-lecting the maximally-informative instances such that theperformance of the transferred model can be best boostedwith a limited annotation budget. The key to the success of ADA is to strike a good balance between the highly coupled yet inherently different tasks:active learning and domain adaptation. The real-world tar-get data typically exhibit either of the two characteristics: This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. 
Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 7651 source-like ortarget-specific . While the former has simi-lar feature distribution with the source data, the latter tendsto be the unique part of target domain and deviates greatlyfrom the source distribution [ 4,26,38]. On one hand, to achieve domain adaptation, applying the same adaptationstrategy to all target data equally cannot generalize wellto scenarios with varying degrees of domain shift. This isparticularly true when the gap between the source-like andtarget-specific data is unknown. On the other hand, in activelearning tasks, the samples with a learning difficulty will bemore likely to be selected for labeling. Nonetheless, withlarge domain gap, incorporating such difficult samples inthe adaptation task would hamper the learning of the adapta-tion model, making the training highly unstable. However,despite the impressive progress that has been made, none ofthe existing works has fully addressed the above issues. In this work, we propose Divide-And-Adapt (DiaNA), a novel ADA framework that can scale to large variationsof domain gaps while achieving cost-effective data label-ing with a significant performance boost for domain adapta-tion. Our key observation is that customized learning strate-gies are vital for target data with different characteristics. In particular, DiaNA divides the target data into four sub-sets with different levels of transferable properties (see Fig-ure 1), each of which is handled with a customized learning strategy. Unlike traditional AL methods that would sim-ply label the most uncertain data [ 29,30,37], we propose to withhold the most challenging samples (Figure 1cate-gory (4)) for training the domain adaption models. Instead,the selected samples for active annotation would maintain aproper stimulus for the source model, providing informativedomain knowledge without jeopardizing the training stabil-ity. The subdivision of target data is dynamically updated as the domain disparity is gradually mitigated with more la-beled data. Hence, the previous challenging samples couldbe classified as transferable in the later stage and exploitedin the network training. We introduce a novel protocol for subdividing the target samples for customized learning. In addition to the uncer-tainty of model prediction, we advocate that the consistencywith the learned prior distribution, i.e. the domainness, is another key criterion for active domain adaptation [ 4,26]. To this end, we divide the target data into four categories asshown in Figure 1according to the domainness and uncer-tainty of the instances. We further propose that the sampleswith 1) being uncertain to the model and 2) having incon-sistent prediction with the label of its closest category pro-totype in the learned feature space ( i.e. high domainness) are the most “profitable” instances for bringing informativeknowledge of target domain if annotated. Thereby, we iden-tify the uncertain inconsistent samples for labeling while applying tailored learning strategies for the remaining cate-gories to boost the selectivity of the sampling.To avoid heuristic thresholding for data subdivision, we propose an automatic data sampling mechanism based onGaussian mixture model (GMM). In particular, we proposeaninformativeness function that incorporates the domain-ness and uncertainty in a unified scoring system. 
The com-puted informativeness score of the labeled data is used totrain a four-component GMM model, which is then appliedto sample the unlabeled target data into four categories. We evaluate DiaNA over a large variety of domain shift scenarios on DomainNet [ 21], Office-Home [ 28] and CIFAR-10 [ 11]. Furthermore, the proposed sampling strat-egy of DiaNA can be generalized to various domain adap-tion problems with different supervision settings, includingunsupervised domain adaptation (UDA), semi-superviseddomain adaptation (SSDA), and source-free domain adap-tation (SFDA). We summarize our contributions as follows: •A general “divide-and-adapt” framework, coded Di-aNA, for active domain adaptation that can handle di-versified degrees of domain gaps while being able togeneralize to different domain adaptation problems, in-cluding UDA, SSDA, and SFDA. •A new target data partition strategy based on domain-ness and uncertainty that enables stratified learning toachieve more stable training, superior adaptation per-formance, and better generality. •A novel informativeness scoring system and the cor-responding sampling paradigm based on GMM modelfor automatic data partitioning. •New state-of-the-art performance over the mainstream public datasets in the task of active domain adaptation. |
Iglesias_expOSE_Accurate_Initialization-Free_Projective_Factorization_Using_Exponential_Regularization_CVPR_2023 | Abstract Bundle adjustment is a key component in practically all available Structure from Motion systems. While it is crucial for achieving accurate reconstruction, convergence to the right solution hinges on good initialization. The recently introduced factorization-based pOSE methods formulate a surrogate for the bundle adjustment error without reliance on good initialization. In this paper, we show that pOSE has an undesirable penalization of large depths. To address this we propose expOSE which has an exponential regular-ization that is negligible for positive depths. To achieve ef-ficient inference we use a quadratic approximation that al-lows an iterative solution with VarPro. Furthermore, we extend the method with radial distortion robustness by de-composing the Object Space Error into radial and tangen-tial components. Experimental results confirm that the pro-posed method is robust to initialization and improves recon-struction quality compared to state-of-the-art methods even without bundle adjustment refinement.1 | 1. Introduction Factorization is a long-established method in Structure from Motion (SfM). It originates from [38] by Tomasi and Kanade showing how, under the orthographic camera model, structure and motion can be computed simultane-ously from an image sequence using singular value de-composition (SVD). The method was later reformulated for affine cameras, including weak perspective projection [32]. Strum and Triggs [36] further extended factorization to pro-jective cameras by accounting for projective depths. One appeal of these factorization algorithms is they can yield a closed-form solution by using the SVD. It is how-ever only possible to use the SVD if every considered scene 1This work has been funded by the Swedish Research Council (grant no. 2018-05375), the Swedish Foundation for Strategic Research project, Semantic Mapping and Visual Navigation for Smart Robots (grant no. RIT15-0038), and the Wallenberg AI, Autonomous Systems and Software Program (WASP). Figure 1. (Left) Examples of two of the images in the Fountain se-quence. (Right) Reconstruction obtained with expOSE (top) and pOSE (bottom) for 3 different values of η. Our method achieves the same convergence rate as pOSE while having a higher recon-struction quality and being less dependent on the choice of η. point is visible throughout the whole image sequence. In cases of missing data, the SVD can be replaced with itera-tive methods. Simple splitting methods [4,8,22] are able to regularize singular values when computing a proximal op-erator, but can give rather erroneous solutions because of a low convergence rate close to the optimum. [5, 8] give an idea of convex formulation using the nuclear norm, but are usually too weak for SfM in the presence of noise [19, 30]. The papers [1, 9, 10, 31] suggest different ways to assure that direct bilinear optimization only has a global minimum. However, SfM problems with local minima do not fulfill their required conditions [3]. It was recently shown by Hong et al. [14–17] that direct bilinear estimation of structure and motion can be made ro-bust to local minima in combination with the Variable Pro-jection (VarPro) method. In [15] the objective is exchanged for the Pseudo Object Space Error (pOSE) which is a trade-off between the object space error and a quadratic regular-ization term. 
This was later extended to a radial distortion invariant version RpOSE, presented in [18]. With their bi-linear factorization structure and a large basin of conver-gence when using VarPro, these pOSE models tend to find This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 8959 a global minimum independently of the initialization. Addi-tionally, both pOSE and RpOSE have in [18] been shown to be local approximations of the reprojection error, enabling iterative refinement to the maximum likelihood solution. In this paper, we show that the regularization term in the pOSE formulation overly penalizes large positive depths and can thereby limit the range of feasible depths too much to achieve satisfactory solutions. We instead propose reg-ularization with an exponential penalty that is negligible for positive depths. To achieve efficient inference we use a quadratic approximation of the exponential term suitable for optimization with VarPro. Moreover, we extend the method with radial distortion robustness by decomposing the OSE into radial and tangent components. In short, the main contributions of this paper are: • We investigate the pOSE models’ undesirable penal-ization of large depths and propose expOSE which has negligible regularization of positive depths; • We formulate a quadratic approximation of the expo-nential regularization term in expOSE to make it suit-able for optimization with VarPro and show that, with random initialization, the model achieves convergence rates similar to pOSE with significantly higher recon-struction quality; • We extend expOSE with radial distortion robustness by decomposing the Object Space Error (OSE) into radial and tangent components and propose an SfM pipeline that is able to obtain a complete and accurate Euclidean reconstruction from uncalibrated cameras. |
Fan_OpenGait_Revisiting_Gait_Recognition_Towards_Better_Practicality_CVPR_2023 | Abstract Gait recognition is one of the most critical long-distance identification technologies and increasingly gains popular-ity in both research and industry communities. Despite the significant progress made in indoor datasets, much evidence shows that gait recognition techniques perform poorly in the wild. More importantly, we also find that some con-clusions drawn from indoor datasets cannot be generalized to real applications. Therefore, the primary goal of this paper is to present a comprehensive benchmark study for better practicality rather than only a particular model for better performance. To this end, we first develop a flexible and efficient gait recognition codebase named OpenGait. Based on OpenGait, we deeply revisit the recent develop-ment of gait recognition by re-conducting the ablative ex-periments. Encouragingly,we detect some unperfect parts of certain prior woks, as well as new insights. Inspired by these discoveries, we develop a structurally simple, empiri-cally powerful, and practically robust baseline model, Gait-Base. Experimentally, we comprehensively compare Gait-Base with many current gait recognition methods on mul-tiple public datasets, and the results reflect that GaitBase achieves significantly strong performance in most cases re-gardless of indoor or outdoor situations. Code is available athttps://github.com/ShiqiYu/OpenGait . | 1. Introduction Gait recognition has recently gained growing interest from the vision research community. It utilizes the physi-ological and behavioral characteristics from walking videos to authenticate individuals’s identities [34]. Compared with other biometrics, e.g., face, fingerprint, and iris, gait pat-terns can be captured from a distance in uncontrolled set-tings, without requiring any physical contact. As a walking *Corresponding Author CASIA-BOUMVLPGREWGait3DGaitSet (AAAI 2019) GaitPart(CVPR2020) GaitGL (ICCV 2021) CSTL(ICCV2021)3DLocal (ICCV 2021) SMPLGait(CVPR2022)OurBaseline: GaitBase8688909294 CASIA-BOUMVLPGREWGait3DDataset604020Rank-1accuracy(%) Indoor datasetOutdoor datasetFigure 1. Performance of popular models and ours baseline on 4 major gait datasets [2, 5, 11, 12, 18, 41]. The left two are indoor datasets [32, 38], the right two are outdoor datasets [41, 43]. behavior, gait is also hard to disguise and thus promisingly robust for usual subject-related covariates, such as dress-ing, carrying, and standing conditions. These advantages make gait recognition suitable for public security applica-tions, e.g., criminal investigation, and suspect tracking [35]. With the boom of deep learning, gait recognition in the laboratory [32, 38] has achieved significant progress [2, 5, 18] over the last decade. However, much evidence [41, 43] reveal that gait recognition techniques may not perform op-timally in the wild. As shown in Figure 1, most existing gait recognition methods suffer an over 40 %accuracy degra-dation when transitioning from indoor to outdoor datasets. Typically, this performance gap should be mainly caused by real-world noisy factors, such as complex occlusion, back-ground variation, and illumination changes. Nevertheless, our further ablative study shows that this situation is not unique, as many of the conclusions drawn This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. 
Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 9707 in prior works vary with different datasets. Therefore, be-sides proposing an improved model for better performance, the primary objective of this paper is to present a compre-hensive benchmark study to revisit gait recognition for en-hanced practicality. To this end, we make efforts in the fol-lowing three aspects. Firstly, to the best of our knowledge, previous works mainly develop the models on their code repository and rely heavily on the indoor gait datasets, particularly CASIA-B [38] and OU-MVLP [32]. To accelerate the real-world applications, we appeal to pay more attention to outdoor gait datasets, such as GREW [43] and Gait3D [41]. Addi-tionally, this paper also considers building a unified eval-uation platform, which covers the various state-of-the-art methods and testing datasets, is highly desired. Accord-ingly, we propose a flexible and efficient gait recognition codebase with PyTorch [24] and name it OpenGait . To ensure extensibility and reusability, OpenGait sup-ports the following features: (1) Various datasets ,e.g., the indoor CASIA-B [38] and OU-MVLP [32], the outdoor GREW [43] and Gait3D [41]. (2) State-of-the-art meth-ods,e.g., GaitSet [2], GaitPart [5], GLN [10], GaitGL [18], SMPLGait [41], GaitEdge [16], and so on. (3) Multiple popular frameworks ,e.g., the end-to-end, multi-modality, and contrastive learning paradigms. Recently, OpenGait has been widely employed in two of the major interna-tional gait recognition competitions, i.e.. HID [37]1, and GREW [43]. Encouragingly, all of the top-10 winning teams at HID2022 [37] have utilized OpenGait as their codebase and extended OpenGait to develop new solutions. Based on OpenGait, we reproduce many progressive methods [2, 5, 18], and the results have been shown in Fig-ure 1. More importantly, we conduct a comprehensive re-evaluation of various commonly accepted conclusions by re-implementing the ablation study on recently-built out-door gait datasets. To our surprise, we find that the MGP branch proposed by GaitSet [2], the FConv proposed by GaitPart [5], the local feature extraction branch proposed by GaitGL [18], and the SMPL branch proposed by SMPL-Gait [41], do not exhibit superiority on the outdoor datasets. Moreover, our thorough exploration of potential causes re-veals several hidden limitations of prior gait research, such as the lack of comprehensive ablation study, outdoor dataset evaluation, and a strong backbone network. Inspired by the above discoveries, we develop a simple yet strong baseline model, named GaitBase . Specifically, GaitBase is composed of several essential parts, some of which are simple and commonly utilized rather than in-tricately constructed modules. Even no bells or whistles, GaitBase can achieve comparable or even superior perfor-mance on indoor gait datasets. As to the datasets in the wild, GaitBase outperforms recently proposed methods and 1HID 2023: https://hid2023.iapr-tc4.orgreaches a new state-of-the-art. Furthermore, we also con-duct a comprehensive study to verify that GaitBase is a structurally simple, experimentally powerful, and empiri-cally robust baseline model for gait recognition. In summary, this paper contributes future works from three aspects: (1) OpenGait, a unified and extensible plat-form, is built to facilitate the comprehensive study of gait recognition. 
(2) We deeply revisit the recent gait recog-nition developments and consequently bring many new in-sights for further gait recognition research. (3) We provide a structurally simple and experimentally powerful baseline model, GaitBase, which can inspire the future design of gait recognition algorithms. We hope the works in the paper can inspire researchers to develop more advanced gait recogni-tion methods for real-world applications. |
Barath_A_Large-Scale_Homography_Benchmark_CVPR_2023 | Abstract We present a large-scale dataset of Planes in 3D, Pi3D, of roughly 1000 planes observed in 10 000 images from the 1DSfM dataset, and HEB, a large-scale homography estimation benchmark leveraging Pi3D. The applications of the Pi3D dataset are diverse, e.g. training or evaluat-ing monocular depth, surface normal estimation and image matching algorithms. The HEB dataset consists of 226 260 homographies and includes roughly 4M correspondences. The homographies link images that often undergo signifi-cant viewpoint and illumination changes. As applications of HEB, we perform a rigorous evaluation of a wide range of robust estimators and deep learning-based correspon-dence filtering methods, establishing the current state-of-the-art in robust homography estimation. We also evalu-ate the uncertainty of the SIFT orientations and scales w.r.t. the ground truth coming from the underlying homographies and provide codes for comparing uncertainty of custom de-tectors. The dataset is available at https://github. com/danini/homography-benchmark . | 1. Introduction The planar homography is a projective mapping between images of co-planar 3D points. The homography induced by a plane is unique up to a scale and has eight degrees-of-freedom (DoF). It encodes the intrinsic and extrinsic camera parameters and the parameters of the underlying 3D plane. The homography plays an important role in the geom-etry of multiple views [ 30] with hundreds of papers pub-lished in the last few decades about its theory and ap-plications. Estimating planar homographies from image pairs is an important task in computer vision with a num-ber of applications. For instance, monocular SLAM sys-tems [ 55,63,70] rely on homographies when detecting pure rotational camera movements, planar scenes, and scenes with far objects. As a homography induced by a plane at infinity represents rotation-only camera motion, it is one of Figure 1. Example image pairs and homographies with their inlier correspondences shown, from the proposed Homography Estima-tion Benchmark (HEB) dataset. Outliers are not drawn. the most important tools for stitching images [ 1,15]. The generated images cover a larger field-of-view and are useful in various applications, e.g. image-based localization [ 3], SLAM [ 34,39], autonomous driving [ 65], sport broadcast-ing [17], surveillance [ 68], and augmented and virtual real-ity [33,44]. Homographies play an important role in cali-bration [ 18,73], metric rectification [ 21,41], augmented re-ality [ 58,76], optical flow based on piece-wise planar scene modeling [ 67], video stabilization [ 28,77], and incremen-tal [56] and global [ 48,62] Structure-from-Motion. The traditional approach of finding homographies in im-age pairs consists of two main stages. First, similarly as in most algorithms working with pairs, feature points are detected and matched [ 15,35,43,54,57]. They are then often filtered by the widely-used second nearest neighbors (SNN) ratio [ 42,43] or by deep learned filtering meth-ods [ 51,61,69,75], to remove gross outliers and, therefore, improve the robust estimation procedure that follows. The found tentative point correspondences are contaminated by various sources of noise due to, e.g., measurement and quantization, and a large proportion of them are still out-liers – correspondences inconsistent with the sought model manifold. 
Consequently, some form of robust estimation This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 21360 (a) Homogr Dataset (b) HPatches Dataset (c) ExtremeView Dataset (d) Proposed HEB Dataset Figure 2. Typical image pairs (a-c) from widely used datasets for homography estimator benchmarking and (d) from HEB. has to be applied to find a set of inliers and to estimate the parameters of the sought homography. In practice, either a randomized RANSAC-like [ 24] robust estimator or an iter-atively re-weighted least squares fitting [ 31] is applied. The number of datasets on which recent homography and, in general, robust estimation papers evaluate their al-gorithms is severely limited. The Homogr dataset [ 38] con-sists only of a few image pairs with relatively small base-lines and, thus, high inlier ratios. Given that recent robust estimators, e.g. [6], report lower than 0.5pixel average re-projection errors on the provided manually labeled corre-spondences, it is safe to say that this dataset is solved. The HPatches dataset [ 4] consists of a few hundreds of image pairs, all looking at an almost completely planar scene, with either significant illumination or viewpoint (mostly in tilt angle) changes. While [ 4] is a useful tool for evaluating lo-cal feature detector and image matching methods, it is very easy for robust estimators [ 5]. The ExtremeView (EVD) dataset [ 46] poses a significantly more challenging problem for homography estimation than the previous two. The im-ages undergo extreme view-point changes, therefore mak-ing both the feature matching and robust estimation tasks especially challenging. However, EVD consists only of 15 image pairs, severely limiting its benchmarking power. Besides the data part, a good benchmark has well-defined parameter tuning (training) and evaluation protocols and training-test set split. Otherwise, as it happens in other fields, the seemingly rapid progress might be an artifact of tuning the algorithms on the test data, or an artifact of the flawed evaluation procedure [ 12,27,49]. In short, there are no available large-scale benchmarks with ground truth (GT) homographies that allow evaluating new algorithms on standard internet photos, i.e., ones not necessarily looking at completely planar scenes. As the first contribution , we create a large-scale datasetof1046 large Planes in 3D (Pi3D) from a standard landmark dataset [ 66]. We use the scenes from the 1DSfM dataset as input and find 3D planes in the reconstructions. Second , we use the Pi3D dataset to find image pairs with estimatable homographies and create a large-scale homography bench-mark (HEB) containing a total of 226260 homographies that can be considered GT when testing new algorithms (see Fig.1for examples). A large proportion of the image pairs capture significant viewpoint and illumination changes. The homographies typically have low inlier ratio, thus making the robust estimation task challenging. Third , we compare a wide range of robust estimators, including recent ones based on neural networks, establishing the current state-of-the-art in robust homography estimation. As the forth contribution, we demonstrate that the dataset can be used to evaluate the uncertainty of partially or fully affine covariant features de-tectors [ 43,47]. 
While we show it on DoG features [ 42], the homographies can be leveraged similarly for the compari-son with other detectors. Existing Datasets. The datasets traditionally used for eval-uating homography estimators are the following. The Ho-mogr dataset [ 38] consists of 16 image pairs with GT ho-mographies. The GT comes from (also provided) hand-labeled correspondences, which later were optimized to im-prove the localization accuracy. There is no train-test split, nor a benchmark protocol. The ExtremeView dataset [ 46] consists of 15 image pairs, taken under extreme view-point change, together with GT homographies and corre-spondences. The homographies are derived from hand-labeled correspondences that stem from multiple local fea-ture detectors paired with an affine view synthesis proce-dure [ 46] and RootSIFT descriptor [ 2] matching. There is no train-test split, nor a benchmark protocol. The HPatches dataset [ 4] was introduced in form of local patches for benchmarking descriptors and metric learning methods, 21361 21362 21363 21364 ters, the camera position noise largely affects the translation angle. Thus, Eq. ( 2) distorts the evaluation by returning large errors even when the camera barely moves in the real world. We select the averages of the rotation and translation mAA scores to be our main metric. Metrics comparison. We plot the angular pose accuracy vs. metric pose accuracy in Fig. 5(right). They are mostly in agreement, except for a few methods, e.g., EAS [ 23] and Affine GC-RANSAC [ 9]. The mAA of the re-proj. error is also in agreement with the mAA of the pose error (Fig. 5; 3rd) with some exceptions, e.g., LO+-RANSAC. The number of inliers (Fig. 5, two left graphs) greatly depends not only on image resolution, but also on the in-lier threshold and particulars of each algorithm – MAGSAC outputs many more inliers, while having similar pose ac-curacy to other methods, while the LMEDS pose is much worse with the same number of inliers as the rest. Training and Test Protocols. One of the drawbacks of the existing homography estimation datasets is the lack of tun-ing and test protocols. We propose the following proce-dure for fair evaluation. The main principle is as follows: one should not not make more than one or two evaluation runs on the test set. That it why all the hyper-parameters of the algorithms are fixed when running on the test set. The tuning and learning are done on the training set, which has similar, but not equal properties and no overlap in terms of content with the test set. We tune all the hyper-parameters with grid search for simplicity. Training protocol. We fix number of iterations to 1000 for all methods. With each method, grid search is performed on the training set to determine the optimal combination of the hyper-parameters, such as inlier-outlier threshold θ, the SNN ratio threshold and other algorithm-specific pa-rameters, such as the spatial weight of GC-RANSAC. Note that, unlike IMC [ 35], inlier-outlier and SNN thresholds are tuned jointly and not consequently – we found that it leads to slightly better hyper-parameters. We tested the robust estimators on correspondences fil-tered by the predicted score of recent deep learning models. After obtaining the scores, we post-processed them in one of the two ways: (a) thresholding the scores at θand re-moving tentative correspondences below it; and (b) sorting the correspondences by their score and keeping the top K best. Both θandKwere found by running grid search on the trai |
Jiao_MSMDFusion_Fusing_LiDAR_and_Camera_at_Multiple_Scales_With_Multi-Depth_CVPR_2023 | Abstract Fusing LiDAR and camera information is essential for accurate and reliable 3D object detection in autonomous driving systems. This is challenging due to the difficulty of combining multi-granularity geometric and semantic fea-tures from two drastically different modalities. Recent ap-proaches aim at exploring the semantic densities of cam-era features through lifting points in 2D camera images (re-ferred to as “ seeds ”) into 3D space, and then incorporate 2D semantics via cross-modal interaction or fusion tech-niques. However, depth information is under-investigated in these approaches when lifting points into 3D space, thus 2D semantics can not be reliably fused with 3D points. Moreover, their multi-modal fusion strategy, which is im-plemented as concatenation or attention, either can not effectively fuse 2D and 3D information or is unable to perform fine-grained interactions in the voxel space. To this end, we propose a novel framework with better uti-lization of the depth information and fine-grained cross-modal interaction between LiDAR and camera, which con-sists of two important components. First, a Multi-Depth Unprojection (MDU) method is used to enhance the depth quality of the lifted points at each interaction level. Sec-ond, a Gated Modality-Aware Convolution (GMA-Conv) block is applied to modulate voxels involved with the cam-era modality in a fine-grained manner and then aggre-gate multi-modal features into a unified space. Together they provide the detection head with more comprehensive features from LiDAR and camera. On the nuScenes test benchmark, our proposed method, abbreviated as MSMD-Fusion, achieves state-of-the-art results on both 3D ob-ject detection and tracking tasks without using test-time-augmentation and ensemble techniques. The code is avail-able at https://github.com/SxJyJay/MSMDFusion. *Equal contribution. †Corresponding authors. CameraLiDAR+Camera*LiDAR**Fusion*Modality-specific convolution kernelsconvolution kernelsretrievedecompose…retrieveGateGatespconvspconvspconvspconvSelect with gateAggregate 12 11112222 FPS & RetrievalIndices Distribution xyz projectto 2Ddepth-awarefeaturesDepthmapImage featuresLiDAR pointsMDUGMA-Conv Detection resultVirtual pointswith semanticsDetection Headmultiple depthunprojectto 3DFigure 1. Illustration of our MSMDFusion pipeline. The yellow arrows indicate information passing or interaction between the Li-DAR and camera modalities. | 1. Introduction Detecting 3D objects [1, 22, 29] is regarded as a fun-damental task for autonomous driving. Aiming at the ro-bust environmental perception, LiDARs and cameras are widely equipped on autonomous driving vehicles since they can provide complementary information. Characterized by point clouds, LiDARs can capture accurate spatial infor-mation, while cameras contain rich semantics and context with images. Therefore, developing multi-modal detec-tors that enjoy the benefits of the two worlds is promising. Such an idea has catalyzed the emergence of a set of re-cent researches [1, 3, 12, 13, 15, 17, 19, 20, 25, 26]. Early works [1, 3, 11, 12, 18–20, 26] perform LiDAR-camera fu-sion by projecting 3D LiDAR points (or region proposals generated from them) onto 2D image planes to collect use-ful 2D semantics. However, such a paradigm suffers from the signal density mismatching of multi-modality sensors. 
Since LiDAR points are much sparser than camera pixels, this projection manner will inevitably waste semantically rich 2D features [1, 13]. Recently, another paradigm [9,10,13,15,25] for LiDAR-camera fusion has emerged. Instead of collecting 2D se-mantic features with 3D queries, these methods first esti-mate depths of pixels, and then directly lift 2D pixels with This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 21643 their semantics to the 3D world (we refer to these pixels and corresponding lifted points as “seeds” and “virtual points” in this paper) to fuse with the real 3D point cloud. Two methods with the same name of BEVFusion [13, 15] treat every image feature pixel as a seed and generate vir-tual points in the BEV space. MVP [25] and VFF [10] sample pixels from foreground regions and lift them to the voxel space. Benefited from the dense virtual points, such a paradigm not only maintains the semantic consistency in the image [15], but also complements the geometric cues for the sparse LiDAR point cloud [25]. Despite significant improvements have been made, ex-isting methods in this line suffer from two major prob-lems, which hampers benefiting from virtual points. First, depth, as the key to the quality of virtual points, is under-investigated in generating virtual points. On the one hand, depth directly determines the spatial location in 3D space of a seed via perspective projection which can thereby sig-nificantly affect 3D object detection results. On the other hand, depth can also enhance the semantics carried by vir-tual points by providing color-insensitive cues in describ-ing objects [27], since combining RGB information with depth guidance correlates camera pixels of similar depths and enables them to jointly contribute to capturing instance-related semantics when lifted as virtual points. Existing multi-modal detectors [9, 13, 15] mainly pay attention on interacting LiDAR points with camera virtual points, while ignoring the importance of seed depths in generating the virtual points. Second, the fine-grained cross-modal interaction be-tween virtual points and 3D points in the uncompressed space (e.g., the voxel space) is crucial but non-trivial. Gen-erated virtual points are geometrically and semantically in-consistent with real LiDAR points due to imperfect depths and inherent modality gap. Hence, in order to benefit from the semantically rich virtual points, it is necessary to adap-tively select helpful information from virtual points under the guidance of real LiDAR points in a fine-grained and controllable manner. However, such a cross-modal inter-action is constrained by the intensive memory and compu-tation cost brought by the massive amounts and unstruc-tured nature of point cloud data. Alternatively, existing ap-proaches combine the multi-modal information with simple concatenate [25] or add operations [9] in the voxel space, or perform cross-attention in a compressed BEV space [23]. Aiming at unlocking the potential of virtual points and addressing the drawbacks of existing methods, we pro-pose a multi-scale fusion framework, called MSMDFusion, and within each scale, there are two key novel designs, namely the Multi-Depth Unprojection (MDU) and Gated Modality-Aware Convolution (GMA-Conv). 
As shown in Fig.1, MDU is mainly for enhancing the quality of gen-erated virtual points in terms of geometric accuracy andsemantic richness. To be specific, when lifting 2D seeds from image into the 3D space, multiple depths are ex-plored within a reference neighborhood to generate virtual points with more reliable depth. Next, the camera feature and depth are combined to produce depth-aware features as stronger 2D semantics to decorate these virtual points. GMA-Conv takes real LiDAR points and generated virtual points as inputs, and performs fine-grained interaction in aselect-then-aggregate manner. Concretely, we first adap-tively select useful information from camera voxel features under the guidance of reference LiDAR voxels, then aggre-gate them grouped sparse convolutions for sufficient multi-modal interaction. We also specifically adopt a voxel sub-sampling strategy to efficiently obtain reliable LiDAR ref-erences when implementing our GMA-Conv. Finally, with the resulting multi-modal voxel features from multiple scales, we further associate them with cascade connections across scales to aggregate multi-granularity information. With the above designs, the cam-era semantics encapsulated in the virtual points are consis-tently combined with LiDAR points, and thereby providing a stronger multi-modal feature representation for boosting the 3D object detection. As shown in the Table 3, with 100 times fewer generated virtual points than the two BEVFu-sion methods [13, 15], our MSMDFusion can still achieve state-of-the-art performances. In summary, our contributions lie in threefold: (1) We propose a novel MSMDFusion approach, which encour-ages sufficient LiDAR-Camera feature fusion in the multi-scale voxel space. (2) Within each scale, we propose a Multi-Depth Unprojection strategy (MDU) to promote vir-tual points generation with better locations and semantics by fully leveraging depth of pixels, as well as a Gated Modality-Aware Convolution (GMA-Conv) to achieve fine-grained controllable multi-modal interaction. (3) Extensive experimental results on the large-scale nuScenes dataset prove the effectiveness of our MSMDFusion and its compo-nents. We achieve state-of-the-art performances with 71.5% mAP and 74.0% NDS on the challenging nuScenes detec-tion track using single model1. When combining the sim-ple greedy tracking strategy [24], our method also achieves strong tracking results with 74.0% AMOTA. |
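To make the multi-depth unprojection idea concrete, the sketch below lifts one 2D seed to several 3D virtual points, one per candidate depth borrowed from nearby projected LiDAR points, and decorates each with a depth-aware feature. The intrinsics/extrinsics handling is standard pinhole unprojection; `depth_emb_mlp` and the exact feature decoration are illustrative assumptions, not the paper's released code.

```python
import torch

def unproject_multi_depth(uv, cand_depths, K, T_cam2lidar, feat, depth_emb_mlp):
    """Lift a 2D seed pixel to several 3D virtual points, one per candidate
    depth taken from nearby projected LiDAR points (a sketch of the
    multi-depth unprojection idea; interfaces are illustrative).
    uv: (2,) pixel, cand_depths: (D,), K: (3,3) intrinsics,
    T_cam2lidar: (4,4) extrinsics, feat: (C,) image feature at the seed."""
    ones = torch.ones_like(cand_depths)
    pix_h = torch.stack([uv[0] * ones, uv[1] * ones, ones], dim=-1)   # (D, 3)
    rays = pix_h @ torch.inverse(K).T                                 # camera-frame rays
    pts_cam = rays * cand_depths[:, None]                             # scale by each depth
    pts_h = torch.cat([pts_cam, ones[:, None]], dim=-1)               # (D, 4) homogeneous
    pts_lidar = (pts_h @ T_cam2lidar.T)[:, :3]                        # (D, 3) virtual points
    # decorate every virtual point with a depth-aware feature
    depth_feat = depth_emb_mlp(cand_depths[:, None])                  # (D, C')
    virt_feat = torch.cat(
        [feat.expand(len(cand_depths), -1), depth_feat], dim=-1)      # (D, C + C')
    return pts_lidar, virt_feat
```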
Geng_Dense-Localizing_Audio-Visual_Events_in_Untrimmed_Videos_A_Large-Scale_Benchmark_and_CVPR_2023 | Abstract Existing audio-visual event localization (AVE) handles manually trimmed videos with only a single instance in each of them. However, this setting is unrealistic as nat-ural videos often contain numerous audio-visual events with different categories. To better adapt to real-life ap-plications, in this paper we focus on the task of dense-localizing audio-visual events, which aims to jointly lo-calize and recognize all audio-visual events occurring in an untrimmed video. The problem is challenging as it re-quires fine-grained audio-visual scene and context under-standing. To tackle this problem, we introduce the first Untrimmed Audio-Visual (UnAV-100) dataset, which con-tains 10K untrimmed videos with over 30K audio-visual events. Each video has 2.8 audio-visual events on aver-age, and the events are usually related to each other and might co-occur as in real-life scenes. Next, we formulate the task using a new learning-based framework, which is capable of fully integrating audio and visual modalities to localize audio-visual events with various lengths and cap-ture dependencies between them in a single pass. Extensive experiments demonstrate the effectiveness of our method as well as the significance of multi-scale cross-modal percep-tion and dependency modeling for this task. The dataset and code are available at https://unav100.github.io . | 1. Introduction Understanding real-world scenes and events is inher-ently a multisensory perception process for humans [15,31]. However, for machines, how to integrate multi-modal in-formation, especially audio and visual ones, to facilitate comprehensive video understanding is still a challenging problem. In recent years, with the introduction of many audio-visual datasets [7,8,11,36], we have seen progress in ∗Corresponding author 10s0s2s4s6s8s ............ playing violinplaying trumpetmale singing ............ 0s10s20s30s40s Figure 1. Different from the previous A VE task, dense-localizing audio-visual events involves localizing and recognizing all audio-visual events occurring in an untrimmed video. In real-life audio-visual scenes, there are often multiple audio-visual events that might be very short or long, and occur concurrently. The top and bottom examples are from the current A VE dataset [36] and our UnA V-100 dataset, respectively. learning joint audio-visual representations [1, 27, 28], spa-tially localizing visible sound sources [7, 22] and tempo-rally localizing audio-visual events [39, 40, 46], etc. While the success of these algorithms is encouraging, they all fo-cus on manually trimmed videos that often just contain a single audio-visual instance/object in each of them. In par-ticular, audio-visual event localization (A VE) [36] aims to localize a single event that is both audible and visible at the same time in a short, trimmed video, as shown in the upper part of Fig. 1. The task setting is impractical as a real-life video is usually long, untrimmed and associated to multiple audio-visual events from different categories, and these events might have various duration and occur simulta-neously. For example, as illustrated at the bottom of Fig. 1, a man starts singing and other people accompany him on This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. 
Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 22942 trumpet and violin, and they pause several times along with the music. Therefore, we argue that it is necessary to re-examine and re-define the A VE task to better adapt to real-life audio-visual scenarios. In this work, we conduct in-depth research starting from dataset construction to technical solutions. On the one hand, different from the existing A VE dataset [36] that only con-tains a single audio-visual event in each 10s trimmed video, we introduce a large-scale Untrimmed Audio-Visual (UnA V-100) dataset. It consists of more than 10K untrimmed videos with over 30K audio-visual events covering 100 dif-ferent event categories. Our dataset spans a wide range of domains, including human activities, music performances, and sounds from animals, vehicles, tools, nature, etc. As the first audio-visual dataset built on untrimmed videos, UnA V-100 is quite challenging for many reasons. For instance, each video contains 2.8 audio-visual events on average (23 events maximum), and around 25% of videos have concur-rent events. Besides, the length of audio-visual events varies greatly from 0.2s to 60s. There are also rich temporal de-pendencies among events occurring in a video, e.g., people often clap when cheering, and rain is usually with thunder, etc. We believe that the UnA V-100 dataset, with its realistic complexity, can promote the exploration on comprehensive audio-visual video understanding. On the other hand, facing such a complex real-life scene, current methods [36, 39–41, 46] formulate the A VE task as a single-label segment-level classification problem and can only identify one audio-visual event for each segment in a trimmed video. They fail to locate concurrent events and provide an exact temporal extent for each event in untrimmed videos. To address the above issues, we re-define A VE as an instance-level localization problem, called dense-localizing audio-visual events . We also present a new framework to flexibly recognize all audio-visual events in an untrimmed video and meanwhile regress their tempo-ral boundaries in a single pass. Firstly, the sound and its visual information are both critical to identify an audio-visual event, and the events can range across multiple time scales. Hence, we propose a cross-modal pyramid trans-former encoder that enables the model to fully integrate in-formative audio and visual signals and capture both very short as well as long audio-visual events. Secondly, with the observation that the events in a video are usually related to one another, we conduct temporal dependency modeling to learn such correlations, allowing the model to use con-text to localize events more correctly. Finally, we design a class-aware regression head for decoding temporal bound-aries of overlapping events, together with a classification head to obtain the final localization results. Extensive ex-periments demonstrate the effectiveness of our method, and show that it outperforms related state-of-the-art methods for untrimmed videos by a large margin.Our contributions can be summarized as follows: • We introduce a large-scale UnA V-100 dataset, as the first audio-visual benchmark based on untrimmed videos. There exist multiple audio-visual events in each video, and these events are usually related to one another and co-occur as in real-life scenes. 
• We shift the AVE task to a more realistic setup of dense-localizing audio-visual events, and propose a new framework that flexibly recognizes all audio-visual events in an untrimmed video and regresses their temporal boundaries in a single pass. • Extensive experiments demonstrate the significance of multi-scale cross-modal perception and dependency modeling for the task. Our method achieves superior performance over related state-of-the-art methods for untrimmed videos by a large margin. |
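As a rough illustration of how the class-aware regression head described above can recover overlapping events, the sketch below decodes dense per-timestep class probabilities and per-class onset/offset distances into event segments. Tensor shapes, thresholds, and the omitted per-class NMS are assumptions for illustration only.

```python
import torch

def decode_events(cls_prob, reg_dist, stride, score_thr=0.5):
    """Decode dense per-timestep predictions into event segments.
    cls_prob: (T, C) class probabilities; reg_dist: (T, C, 2) class-aware
    distances (in timesteps) to the event onset/offset; stride: seconds per
    timestep.  Overlapping events of different classes can be recovered
    because boundaries are regressed per class (illustrative sketch)."""
    T, C = cls_prob.shape
    t_idx = torch.arange(T, dtype=torch.float32)[:, None]        # (T, 1)
    onset = (t_idx - reg_dist[..., 0]) * stride                   # (T, C) start times
    offset = (t_idx + reg_dist[..., 1]) * stride                  # (T, C) end times
    events = []
    for c in range(C):
        keep = cls_prob[:, c] > score_thr
        for t in torch.nonzero(keep).flatten().tolist():
            events.append((c, onset[t, c].item(), offset[t, c].item(),
                           cls_prob[t, c].item()))
    # per-class 1-D NMS over overlapping segments would normally follow
    return events
```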
Du_Weak-Shot_Object_Detection_Through_Mutual_Knowledge_Transfer_CVPR_2023 | Abstract Weak-shot Object Detection methods exploit a fully-annotated source dataset to facilitate the detection perfor-mance on the target dataset which only contains image-level labels for novel categories. To bridge the gap be-tween these two datasets, we aim to transfer the object knowledge between the source (S) and target (T) datasets in a bi-directional manner. We propose a novel Knowledge Transfer (KT) loss which simultaneously distills the knowl-edge of objectness and class entropy from a proposal gen-erator trained on the S dataset to optimize a multiple in-stance learning module on the T dataset. By jointly opti-mizing the classification loss and the proposed KT loss, the multiple instance learning module effectively learns to clas-sify object proposals into novel categories in the T dataset with the transferred knowledge from base categories in the S dataset. Noticing the predicted boxes on the T dataset can be regarded as an extension for the original annotations on the S dataset to refine the proposal generator in return, we further propose a novel Consistency Filtering (CF) method to reliably remove inaccurate pseudo labels by evaluating the stability of the multiple instance learning module upon noise injections. Via mutually transferring knowledge be-tween the S and T datasets in an iterative manner, the de-tection performance on the target dataset is significantly im-proved. Extensive experiments on public benchmarks vali-date that the proposed method performs favourably against the state-of-the-art methods without increasing the model parameters or inference computational complexity. | 1. Introduction Recent rapid development of supervised object detection models [17, 20, 22, 23] largely relies on massive human-annotated bounding boxes and category labels. Since ob-taining these annotations, especially the bounding boxes, are expensive and time-consuming on large-scale datasets, it motivates the researches of alternative algorithms with *Equal contribution. †Corresponding author. Figure 1. Overview of the proposed Mutual Knowledge Transfer scheme for the weak-shot object detection task. less annotation cost. Weakly Supervised Object Detection (WSOD) methods [1, 8,12,13,24,26,34] only require image-level object category labels to train an object detector on a target dataset. Though the annotation cost is consider-ably reduced, a prominent performance gap exists between WSOD and full-supervised models. While noticing class-invariant visual evidence can be transferred from base categories to unseen ones [14, 30], researches [3, 15,18,32,36] show that the WSOD perfor-mance can be further improved by utilizing an additional source dataset with fully annotated data. This learning paradigm is referred to as the Weak-shot Object Detection (WSHOD) [21], for which a widely adopted model archi-tecture is the combination of a proposal generator (PG) trained on the source (S) dataset and a multiple instance learning (MIL) module trained on the target (T) dataset. The S dataset contains both object category and bounding box annotations, while the T dataset has only image-level category labels and the object categories are not overlapped with those in the S dataset. Although a well-trained PG on a full-annotated S dataset can assist the training of the MIL module on the T dataset, it is still essential to bridge the gap between these two datasets for handling non-overlapping categories. 
Previous efforts to address this issue mainly focus on transferring the knowledge about base categories from the S dataset to the T dataset by post-processing the predicted boxes [15] or designing various transferring scores [18, 32]. Zhong et al. [36] constrain the training of the MIL module by an ob-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 19671 jectness regularization loss. Unfortunately, this loss tends to exacerbate the classification ambiguity of novel cate-gories since it enlarges multiple class probabilities for the same proposal (see details in Sec. 3.2). Some researches also adopt the predicted boxes on the T dataset as pseudo labels to refine the training of the PG module. The pre-dicted boxes can be directly used as pseudo labels with con-fidence thresholding [36], adjusted by confidence maps [3], or softly weighted [21]. However, these practices are lim-ited in discriminating inaccurate pseudo labels in the T dataset. For example, the intra-class feature variance can be significant, especially for the novel categories, which makes weighting these pseudo labels upon feature similar-ity [21] fail in discriminating inliers from outliers. More-over, it is not exploited in previous works to incorporate the MIL module into discriminating inaccurate pseudo labels, which enables the knowledge transfer from the T dataset to S dataset. To address the aforementioned issues in narrowing the gap between the S and T datasets, we design the Mutual Knowledge Transfer scheme for the WSHOD task, as illus-trated in Fig. 1. Within this scheme, a novel Knowledge Transfer (KT) loss performs knowledge transfer from the S dataset to T dataset by constraining the training of the MIL module. In contrast to the regularization loss in [36], our KT loss enforces the predicted objectness score and class entropy of the MIL module to be consistent with the pre-dictions of the PG, which helps to transfer the knowledge from S dataset to facilitate the training of the MIL module. Through mathematical analysis, we reveal that the formu-lation of KT loss intrinsically alleviates the class ambiguity issue of the regularization loss in [36]. Furthermore, we propose a novel and statistically robust Consistency Filtering (CF) method to improve the quality of the pseudo labels and boost knowledge transfer from the T dataset to S dataset. The intuition is that, by inject-ing noises into random regions in the feature maps of the predicted boxes1, inaccurate boxes tend to be less stable in maintaining the original predictions than accurate ones. In-accurate boxes usually only cover the most discriminative object fragment, which is a commonly addressed challenge in previous works [18, 21], so the corresponding probability distribution of novel categories probably becomes uncertain when the designed noises are injected into the features of the MIL module. In contrast, accurate boxes usually con-tain the entire object and tend to be more stable against the injected feature noises. A detailed statistical verification for this intuition can be found in Tab. 7. We thus discover the inaccurate pseudo labels by evaluating the stability of the MIL outputs when varying noises are injected. 
The proposed CF method essentially takes advantage of the object knowledge of the MIL module regarding the discrimination of novel categories and transfers it to the PG module through refinement training with filtered pseudo labels. (1 "Feature maps of the predicted boxes" refers to the features produced by the RoIAlign layer given the predicted boxes.) By using the mutual knowledge transfer scheme iteratively, the detection performance on the T dataset with novel categories can be greatly improved. Through theoretical analysis and extensive experiments, we demonstrate that the proposed method significantly outperforms previous state-of-the-art WSHOD methods without increasing the model parameters or inference computational complexity. |
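A minimal sketch of the consistency-filtering idea is given below: RoI features of candidate pseudo boxes are perturbed several times, and boxes whose MIL class posterior drifts too much are discarded. The noise model (Gaussian noise on random spatial regions), the drift measure, and the threshold are illustrative choices, not the paper's exact formulation.

```python
import torch

def consistency_filter(roi_feats, mil_head, n_trials=5, sigma=0.1, tau=0.2):
    """Keep pseudo boxes whose MIL class prediction stays stable when random
    noise is injected into their RoI features.
    roi_feats: (N, C, H, W) RoIAlign features of the predicted boxes;
    mil_head: assumed module mapping (N, C, H, W) -> (N, K) class logits."""
    with torch.no_grad():
        clean = mil_head(roi_feats).softmax(dim=-1)                # (N, K)
        drifts = []
        for _ in range(n_trials):
            # perturb random spatial regions of each feature map
            region = (torch.rand_like(roi_feats[:, :1]) > 0.5).float()
            noisy = roi_feats + sigma * torch.randn_like(roi_feats) * region
            pred = mil_head(noisy).softmax(dim=-1)
            drifts.append((pred - clean).abs().sum(dim=-1))        # L1 drift per box
        drift = torch.stack(drifts).mean(dim=0)                    # (N,)
    return drift < tau                                             # boolean keep-mask
```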
Fei_Masked_Auto-Encoders_Meet_Generative_Adversarial_Networks_and_Beyond_CVPR_2023 | Abstract Masked Auto-Encoder (MAE) pretraining methods ran-domly mask image patches and then train a vision Trans-former to reconstruct the original pixels based on the un-masked patches. While they demonstrates impressive per-formance for downstream vision tasks, it generally requires a large amount of training resource. In this paper, we intro-duce a novel Generative Adversarial Networks alike frame-work, referred to as GAN-MAE, where a generator is used to generate the masked patches according to the remain-ing visible patches, and a discriminator is employed to pre-dict whether the patch is synthesized by the generator. We believe this capacity of distinguishing whether the image patch is predicted or original is benefit to representation learning. Another key point lies in that the parameters of the vision Transformer backbone in the generator and discriminator are shared. Extensive experiments demon-strate that adversarial training of GAN-MAE framework is more efficient and accordingly outperforms the standard MAE given the same model size, training data, and com-putation resource. The gains are substantially robust for different model sizes and datasets, in particular, a ViT-B model trained with GAN-MAE for 200 epochs outperforms the MAE with 1600 epochs on fine-tuning top-1 accuracy of ImageNet-1k with much less FLOPs. Besides, our approach also works well at transferring downstream tasks. | 1. Introduction In recent years, Transformer [62] has become the de facto standard architecture in computer vision, and has surpassed state-of-the-art Convolutional Neural Network (CNN) [31, 58] feature extractors in vision tasks through models such as the Vision Transformer [21]. Meanwhile, self-supervised learning (SSL) algorithms [12, 14, 27, 29] aims to learn transferable representation from unlabeled *The corresponding author. 200 400 600 800 1000 1200 1400 1600 Epoch82.583.083.584.084.585.0T op-1 Acc. 79.4h256.4h MAE GAN-MAE Pre-training timeFigure 1. Performance comparison in different pre-training epochs for ImageNet-1K Fine-tuning top-1 accuracy. Com-pared to MAE trained for 1600 epochs, GAN-MAE achieves com-parable accuracy with much less training time at 200 epochs. data by performing instance-level pretext tasks, and has been a long-standing target in the vision community. Par-ticularly, masked image modeling (MIM) in SSL for vi-sion transformers has shown remarkably impressive down-stream performance in a wide variety of computer vision tasks [3, 28], attracting increasing attention. MIM is a simple pretext task that first randomly masks some patches of an image, and then predicts the contents of the masked patches according to the remaining, using various reconstruction targets, e.g., visual tokens [3,19], se-mantic features [1, 77] and raw pixels [28, 70]. Essentially, it learns the transferable representation by modeling the im-age structure itself as content prediction. While more ef-fective than conventional pre-training, masked autoencoder modeling approaches still exist some issues: ( i) reconstruc-tion optimization with MSE loss leads to blurrier output im-ages than the raw input, it would be better to use a more perceptual loss over pixels to guide the fine-grained seman-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. 
Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 24449 tic understanding and representation learning, leading to more plausible synthesized patches; ( ii) inner dependency between masked patches is lacked [71], i.e., generation of masked image patches may lack the surrounding informa-tion. This situation becomes more serious when the image patch masking ratio is large. We alleviate this problem by introducing confident synthesized patches as complemen-tary information during training; ( iii) mask-reconstruction methods incur a substantial computation cost because the network only learns from part of the visible patches and misses the information of masked patches. In this paper, we propose a Generative Adversarial Networks-based pre-training framework, referred to as GAN-MAE, which contains two components: a generator model learns to reconstruct the masked patches according to visible patches in the encoder-decoder architecture and a discriminator model learns to distinguish real image patches from plausible but synthesized remains. Generally, given an image from training dataset, our method first randomly masks parts of patches and reconstructs them using the rest visible patches with a generator, which serves as a standard MAE model. Then we build the corrupt image as the com-bination of visible and synthesis patches, which is then fed into the discriminator to predict whether each patch is from raw image or synthesized results. In this manner, the dis-criminator provides a valid guiding for more delicate image patch modeling. Then, with the development of generator capacity, a key advantage of discriminative task is that it in-tegrates the synthesized patches into corrupt images as com-plementary information, which fills the missing inner rela-tionship between patches during pre-training. Moreover, we shared the parameters of vision transformer backbone in the generator and discriminator to promote memory reduction, training efficiency, as well as performance enhancement. Our experiments follow the same architecture, settings, and pre-training recipe as MAE [28], and we find that the simple incorporation of a discriminator consistently outper-forms MAE in variant models, e.g., ViT-S, ViT-B, and ViT-L, when fine-tuning for top-1 accuracy of ImageNet clas-sification. We also conduct extensive ablation studies to validate the effectiveness of our core designs in backbone parameter sharing and advarisal training. As pre-training with more epochs usually results in a better downstream performance, we argue that an important consideration for pre-training methods should be computation efficiency as well as absolute downstream performance. From this view-point, we also demonstrate that discrimination of pseudo-image patches forces GAN-MAE to train more efficiently than standard MAE. We further provide a comprehensive comparison with MAE in various epochs and various mod-els and show our framework achieves consistently better performance. In particular, as presented in Figure 1, for the ViT-B model structure, our GAN-MAE achieves compara-ble classification performance with only 200 pre-training epochs vs. standard MAE 1600 pre-training epochs. Fur-thermore, the GAN-MAE achieves 0.7 points improvement when pre-training 1600 epochs. 
Finally, we summarize our contributions as follows: • We propose a new and effective GAN-like framework for self-supervised visual representation learning, which to the best of our knowledge is the first attempt to integrate the GAN idea into the MAE framework. As a generic approach, we suggest that this framework can be easily applied to many other MIM-based tasks. • We introduce two core designs: shared weights for the main backbones of the generator and discriminator, and an adversarial training process, both of which cost less computing resources while obtaining appreciable performance improvements. • Extensive experiments demonstrate that, compared with the original MAE, our method is more compute-efficient and results in better transfer representation learning on downstream tasks. |
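The overall training signal described above can be sketched as follows: an MAE-style reconstruction loss on masked patches plus a per-patch real/fake discrimination loss computed on the corrupt image through a shared backbone. The `patchify`, `generator`, and `disc_head` components are assumed interfaces, and the alternating adversarial updates are omitted for brevity.

```python
import torch
import torch.nn.functional as F

def gan_mae_losses(imgs, patchify, generator, disc_head, mask_ratio=0.75):
    """Illustrative loss computation: the generator reconstructs masked
    patches (MAE-style); the corrupt image mixes visible and synthesized
    patches; a per-patch discriminator head, run on top of the *shared*
    ViT backbone, predicts whether each patch was synthesized."""
    patches = patchify(imgs)                                    # (B, N, P)
    B, N, _ = patches.shape
    mask = torch.rand(B, N, device=imgs.device) < mask_ratio    # True = masked
    pred = generator(patches, mask)                             # (B, N, P) reconstruction
    rec_loss = F.mse_loss(pred[mask], patches[mask])            # generator objective
    # corrupt input: visible patches kept, masked ones replaced by predictions
    corrupt = torch.where(mask[..., None], pred.detach(), patches)
    logits = disc_head(generator.backbone(corrupt))             # (B, N) per-patch logits
    disc_loss = F.binary_cross_entropy_with_logits(logits, mask.float())
    return rec_loss, disc_loss   # optimized in an alternating / adversarial fashion
```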
Geng_Learning_Neural_Volumetric_Representations_of_Dynamic_Humans_in_Minutes_CVPR_2023 | Abstract This paper addresses the challenge of efficiently recon-structing volumetric videos of dynamic humans from sparse multi-view videos. Some recent works represent a dynamic human as a canonical neural radiance field (NeRF) and a motion field, which are learned from input videos through differentiable rendering. But the per-scene optimization gen-erally requires hours. Other generalizable NeRF models leverage learned prior from datasets to reduce the optimiza-tion time by only finetuning on new scenes at the cost of visual fidelity. In this paper, we propose a novel method for learning neural volumetric representations of dynamic hu-mans in minutes with competitive visual quality. Specifically, we define a novel part-based voxelized human representa-tion to better distribute the representational power of the network to different human parts. Furthermore, we propose ∗Equal contribution.†Corresponding author.a novel 2D motion parameterization scheme to increase the convergence rate of deformation field learning. Experiments demonstrate that our model can be learned 100 times faster than previous per-scene optimization methods while being competitive in the rendering quality. Training our model on a512×512video with 100 frames typically takes about 5 minutes on a single RTX 3090 GPU. The code is available on our project page: https://zju3dv.github.io/instant nvr. | 1. Introduction Creating volumetric videos of human performers has many applications, such as immersive telepresence, video games, and movie production. Recently, some methods [58, 93] have shown that high-quality volumetric videos can be recovered from sparse multi-view videos by representing dynamic humans with neural scene representations. How-ever, they typically require more than 10 hours of training on This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 8759 a single GPU. The expensive time and computational costs limit the large-scale application of volumetric videos. Gener-alizable methods [34,100] utilize learned prior from datasets of dynamic humans to reduce the training time by only fine-tuning on novel human performers. These techniques could increase the optimization speed by a factor of 2-5 at the cost of some visual fidelity. To speed up the process of optimizing a neural represen-tation for view synthesis of dynamic humans, we analyze the structural prior of the human body and motion, and propose a novel dynamic human representation that achieves 100x speedup during optimization while maintaining competitive visual fidelity. Specifically, to model a dynamic human, we first transform world-space points to a canonical space using a novel motion parameterization scheme and inverse linear blend skinning (LBS) [35]. Then, the color and density of these points are estimated using the canonical human model. The innovation of our proposed representation is two-fold. First, we observe that human body parts have different levels of complexity in terms of both shape and texture. For exam-ple, the face of a human performer typically exhibits higher complexity than a flatly textured torso region, thus requiring more representational power to depict. 
Motivated by this, our method decomposes the canonical human body into multiple parts and represents the human body with a structured set of voxelized NeRF [47] networks to bring the convergence rate of these different parts to the same level. In contrast to a single-resolution representation, the part-based body model utilizes the human body prior to represent the human shape and texture efficiently, heuristically distributing variational representational power to human parts with spatially varying complexity. Second, we notice that non-rigid deformation of human geometry typically occurs around a surface instead of in a volume, that is, nearby surface points on a parametric human model tend to have similar motion behavior. Thus we propose a novel motion parameterization technique that models the 3D human deformation in a 2D domain, enabling modeling of the motion field using 3D voxelized represen-tation to accelerate the learning. This idea is similar to the displacement map and bump map [14,15] in traditional com-puter graphics to represent detailed deformation on a 2D texture domain. We extend the technique of displacement map [14, 15] to represent human motions by restricting the originally 3D deformation field [40, 56, 59] on the 2D sur-face of a parametric human model, such as SMPL [43]. This technique allows us to use hybrid representations [48, 98] to model non-rigid deformation by reducing the memory footprint, largely boosting the convergence of the field. Experiments demonstrate that our method significantly accelerates the optimization of neural human representations while being competitive with state-of-the-art human model-ing methods on rendering quality. As shown in Figure 1, ourmodel can be trained within 5 minutes to produce a volumet-ric video of a dynamic human from a 100-frame monocular video of a 512×512resolution on an RTX 3090 GPU. To summarize, our key contributions are: •A novel part-based voxelized human representation for more efficient human body modeling. •A 2D motion parameterization scheme for more effi-cient deformation field modeling. •100x speedup in optimization compared to previous neural human representations while maintaining com-petitive rendering quality. |
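A minimal sketch of parameterizing the deformation field on a 2D surface domain is shown below: a query point samples a learnable 2D feature grid at the UV coordinate of its nearest body-surface point and is shifted by an MLP-predicted displacement. The nearest-surface UV lookup (`query_uv`), grid resolution, and MLP size are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

class SurfaceDeformationField(torch.nn.Module):
    """Sketch of restricting non-rigid deformation to the 2-D (UV) domain of
    a parametric body model: a query point inherits the displacement stored
    at the UV coordinate of its nearest surface point."""
    def __init__(self, res=128, feat_dim=16):
        super().__init__()
        self.grid = torch.nn.Parameter(torch.zeros(1, feat_dim, res, res))
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(feat_dim, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, 3))                    # 3-D displacement

    def forward(self, pts, query_uv):
        # query_uv (assumed helper): nearest-surface UV coords in [-1, 1]
        uv = query_uv(pts)                             # (N, 2)
        feat = F.grid_sample(self.grid, uv[None, :, None, :],
                             align_corners=True)       # (1, C, N, 1)
        feat = feat[0, :, :, 0].t()                    # (N, C)
        return pts + self.mlp(feat)                    # deformed query points
```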
Balabanov_Bayesian_Posterior_Approximation_With_Stochastic_Ensembles_CVPR_2023 | Abstract We introduce ensembles of stochastic neural networks to approximate the Bayesian posterior , combining stochasticmethods such as dropout with deep ensembles. The stochas-tic ensembles are formulated as families of distributions andtrained to approximate the Bayesian posterior with varia-tional inference. We implement stochastic ensembles basedon Monte Carlo dropout, DropConnect and a novel non-parametric version of dropout and evaluate them on a toyproblem and CIF AR image classification. F or both tasks,we test the quality of the posteriors directly against Hamil-tonian Monte Carlo simulations. Our results show that stochastic ensembles provide more accurate posterior esti-mates than other popular baselines for Bayesian inference. | 1. Introduction Bayesian neural networks provide a principled way of reasoning about model selection and assessment with pos-terior distributions of model parameters [6, 8, 35, 49]. Al-though the analytical Bayesian posteriors can answer ques-tions about model parameter uncertainty, the immense com-putational challenge for commonly used neural network ar-chitectures makes them practically infeasible 1. Instead, we are forced to resign to approximation methods that makes a trade-off between posterior accuracy and computationalcomplexity [27]. A prominent method to approximate the Bayesian pos-terior is deep ensembles [20, 31, 55], that can be shown to correspond to a variational inference approximation with adelta distribution family [24]. This method is implementedby simply training an ensemble of models and treating themas samples from the model posterior. In the variational in-ference formulation, this corresponds to approximating the posterior by sampling from many sets of maximum a poste-riori parameters. 1There are examples of closed-form solutions for small architectures [56].To further reduce the computational effort in evaluat-ing the approximate posterior, stochastic methods such asMonte Carlo dropout [47] and DropConnect [51] inference have also been used extensively [13, 14, 40]. They benefitfrom computationally cheaper inference by virtue of sam-pling stochastically from a single model. Formulated as avariational approximation to the posterior, dropout samplesfrom a family of parameter distributions where parameterscan be randomly set to zero. Although this particular fam-ily of distributions might seem unnatural [11], it turns outthat the stochastic property can help to find more robust re-gions of the parameter space, a fact well-known from a longhistory of using dropout as a regularization method. Recently, there has been great progress towards under-standing the analytical posteriors of larger neural networksby means of direct Markov Chain Monte Carlo sampling of the parameter posterior [26]. Through impressive com-putational efforts, the posteriors for models as large asResNet-20 have been sampled using Hamiltonian Monte Carlo (HMC) simulations. The sampled posterior has beenshown to provide more accurate predictions that are surpris-ingly sensitive to data distribution shift as compared to stan-dard maximum likelihood estimation training procedures. These HMC computations have made it possible to compare approximation methods such as dropout inference and deep ensembles directly to the Bayesian posterior. 
Ensembling and stochastic methods such as dropout have been used suc-cessfully to find posterior approximations in many settings,but without a Bayesian formulation that includes both en-sembling and stochastic methods it is difficult to understandif and how the two approaches can complement each other. Recent work have shown that uncertainty quantification is subjective to neural network architectures, and that theaccuracy of posterior approximations depend non-triviallyon model architecture and dataset complexity [33]. To find computationally efficient methods that can accurately ap-proximate the Bayesian posterior, for different data domains and architectures, is therefore an important goal with practi-cal implications for applications that require accurate uncer-tainty quantification to assess network predictions [1, 17]. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 13701 In this paper we combine deep ensembles and stochastic regularization methods into stochastic ensembles of neu-ral networks. We formulate the stochastic ensemble con-struction within the Bayesian variational formalism, wheremultiple stochastic methods such as regular Monte Carlodropout, DropConnect, and others are combined with en-sembling into one general variational ansatz. We then con-duct a series of tests using a simple toy model (syntheticdata) and CIFAR (image classification), where stochasticdeep ensembles are found to provide more accurate poste-riors than MultiSWA [55] and regular deep ensembles in anumber of settings. In particular, for CIFAR we use a neu-ral network architecture evaluated by Izmailov et al. [26] in their large-scale experiments, allowing us to make a direct comparison of the ensemble methods to the “ground truth”HMC posterior. |
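The posterior predictive of such a stochastic ensemble can be sketched as follows: every ensemble member is evaluated several times with its stochastic layers (here, dropout) active, and the predictions are averaged over members and samples. This is a generic sketch of the idea rather than the paper's exact variational construction.

```python
import torch

@torch.no_grad()
def stochastic_ensemble_predict(models, x, n_samples=8):
    """Approximate the posterior predictive with a stochastic ensemble:
    each member keeps its dropout layers active at test time, so every
    forward pass draws a new stochastic mask; predictions are averaged
    over members and samples (illustrative sketch)."""
    probs = []
    for m in models:
        m.eval()
        for mod in m.modules():                  # re-enable only the stochastic layers
            if isinstance(mod, torch.nn.Dropout):
                mod.train()
        for _ in range(n_samples):
            probs.append(torch.softmax(m(x), dim=-1))
    probs = torch.stack(probs)                   # (members * samples, batch, classes)
    return probs.mean(dim=0), probs.var(dim=0)   # predictive mean and spread
```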
Jang_Difficulty-Based_Sampling_for_Debiased_Contrastive_Representation_Learning_CVPR_2023 | Abstract Contrastive learning is a self-supervised representation learning method that achieves milestone performance in various classification tasks. However, due to its unsuper-vised fashion, it suffers from the false negative sample prob-lem: randomly drawn negative samples that are assumed to have a different label but actually have the same label as the anchor. This deteriorates the performance of contrastive learning as it contradicts the motivation of contrasting se-mantically similar and dissimilar pairs. This raised the at-tention and the importance of finding legitimate negative samples, which should be addressed by distinguishing be-tween 1) true vs. false negatives; 2) easy vs. hard negatives. However, previous works were limited to the statistical ap-proach to handle false negative and hard negative samples with hyperparameters tuning. In this paper, we go beyond the statistical approach and explore the connection between hard negative samples and data bias. We introduce a novel debiased contrastive learning method to explore hard neg-atives by relative difficulty referencing the bias amplifying counterpart. We propose triplet loss for training a biased encoder that focuses more on easy negative samples. We theoretically show that the triplet loss amplifies the bias in self-supervised representation learning. Finally, we empiri-cally show the proposed method improves downstream clas-sification performance. | 1. Introduction The key idea of contrastive learning [4, 5, 31] is to learn the representation that projects samples from the same class to be closer to each other than samples from different classes in the embedding space. To ensure this property in an unsupervised manner, we randomly draw a sample (anchor, xa) and enforce it to stay closer to its own aug-mentations (positive samples, x+) and be apart from the other samples (negative samples, x−) also randomly drawn *Corresponding author.from the same training dataset. Such approach achieves superior performance over conventional supervised classi-fication methods in various tasks, such as object detection [12, 39, 43] and natural language processing [28]. Recent works study sampling methods to draw good-quality positive and negative samples to train effective self-supervised contrastive learning models. Various augmen-tation and positive sampling techniques are developed to boost the performance and generalization. For example, random noise perturbations [4, 7, 39] are adopted in the computer vision domain to preserve semantic information such as random cropping, random noise injection, and tilt-ing. Unlike positive sampling, finding legitimate negative samples is not a trivial problem. First, negative samples are not guaranteed to have a different class from the an-chor [8, 19] due to the unsupervised fashion of contrastive learning. Thus, the debiasing method [8] was proposed to address this false negatives problem by decomposing the marginal data distribution. Second, finding hard negative samples, i.e.,hard to distinguish from the anchor, is crucial as they are more informative [36]. Supervised contrastive learning [19] validated the importance of hard negative min-ing. However, this has been rarely studied in the literature. In supervised learning, e.g., classification, some works observed hard samples are related to data bias. 
Because some models tend to be misled by some correlation between biasing attributes and target labels, such as texture, color , andbackground in image classification [2,25], and race and gender in face recognition [17], samples against such cor-relation are likely to be hard samples. For instance, in the animal classification task, if most bird images in the train-ing set are assumed to have skyas a background instead of others, skywould be strongly correlated with the class bird. However, birds may also exist in other backgrounds, such as water, rock, etc. We can consider these birds in the background other than the sky as bias-conflicting sam-ples. Then, it is natural to emphasize bias-conflicting sam-ples (birds on water) more than bias-aligned ones (birds in the sky) for better performance and generalization as they are more informative. From the contrastive learning view-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 24039 point, these bias-conflicting samples are likely to be hard to distinguish from the anchor ( e.g.,frog on water) and are nat-urally linked to hard negatives in the representation space. To address this, some methods [2, 20] specified the bias based on empirical observations on the task. For exam-ple, CNN is known to be biased towards the texture [10]. Bahng et al. [2] proposed an adversary that focuses exclu-sively on texture and limits the size of the receptive field of the convolutional layer to predict the target. However, it is almost infeasible to pre-define the bias attributes for each task, and also, the debiasing would be limited to the speci-fied attributes. Recent studies [25, 26, 30] proposed to em-phasize bias-conflicting samples by up-weighting hard sam-ples without pre-defined bias information in the supervised classification task. Yet, this approach is limited to classifi-cation tasks. Despite the importance of finding hard nega-tives [36], little attention is paid to finding bias-conflicting samples in self-supervised representation learning methods. Unlike the previous studies, we delve into the question: what makes a sample hard negative or easy negative in self-supervised learning? To the best of our knowledge, few studies in contrastive learning have been done from this perspective. In this work, we propose a novel contrastive learning method to effectively find hard negative samples from the data bias perspective. We employ triplet loss [38] to learn bias-amplified representations in a self-supervised manner. In Section 5.2, we theoretically show that minimiz-ing triplet loss enforces a model to focus on easy samples and ignores hard samples. Along with the biased model, we train the debiased model based on the relative difficulty of each sample by measuring relative distance between the representation from two models and the anchor as the sur-rogate of sample difficulty. The contribution of this work is summarized as follows: 1. We propose a debiased contrastive learning method that addresses two types of biases: hard vs. easy nega-tives and true vs. false negatives. |
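One way to turn the relative-difficulty idea into a loss is sketched below: negatives that look hard to the debiased encoder but easy to the bias-amplified encoder receive larger weights in the InfoNCE denominator. The specific weighting scheme and normalization here are illustrative assumptions, not the paper's exact objective; all embeddings are assumed L2-normalized.

```python
import torch

def difficulty_weighted_infonce(anchor_d, pos_d, neg_d, anchor_b, neg_b, t=0.2):
    """Sketch: up-weight negatives whose similarity to the anchor is high for
    the debiased encoder (*_d) relative to the bias-amplified encoder (*_b),
    treating them as harder, more bias-conflicting negatives."""
    sim_d = anchor_d @ neg_d.t() / t                          # (B, N) debiased sims
    sim_b = anchor_b @ neg_b.t() / t                          # (B, N) biased sims
    w = torch.softmax(sim_d - sim_b, dim=-1) * sim_d.shape[-1]  # relative difficulty
    pos = (anchor_d * pos_d).sum(-1, keepdim=True) / t        # (B, 1) positive logit
    logits = torch.cat([pos, sim_d], dim=-1)                  # (B, N + 1)
    weights = torch.cat([torch.ones_like(pos), w.detach()], dim=-1)
    # weighted log-softmax over {positive, negatives}
    log_prob = pos - torch.logsumexp(torch.log(weights + 1e-8) + logits,
                                     dim=-1, keepdim=True)
    return -log_prob.mean()
```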
Hu_Physically_Realizable_Natural-Looking_Clothing_Textures_Evade_Person_Detectors_via_3D_CVPR_2023 | Abstract Recent works have proposed to craft adversarial clothes for evading person detectors, while they are either only ef-fective at limited viewing angles or very conspicuous to hu-mans. We aim to craft adversarial texture for clothes based on 3D modeling, an idea that has been used to craft rigid adversarial objects such as a 3D-printed turtle. Unlike rigid objects, humans and clothes are non-rigid, leading to diffi-culties in physical realization. In order to craft natural-looking adversarial clothes that can evade person detectors at multiple viewing angles, we propose adversarial camou-flage textures (AdvCaT) that resemble one kind of the typical textures of daily clothes, camouflage textures. We leverage the Voronoi diagram and Gumbel-softmax trick to param-eterize the camouflage textures and optimize the parame-ters via 3D modeling. Moreover, we propose an efficient augmentation pipeline on 3D meshes combining topologi-cally plausible projection (TopoProj) and Thin Plate Spline (TPS) to narrow the gap between digital and real-world ob-jects. We printed the developed 3D texture pieces on fabric materials and tailored them into T-shirts and trousers. Ex-periments show high attack success rates of these clothes against multiple detectors. | 1. Introduction Deep Neural Networks(DNNs) have been widely used in many real-world systems such as face recognition and ob-ject detection [31, 32, 36, 48]. However, it is well known Equal contribution.yCorresponding author. (a) (b) (e)(d)(c)Figure 1. Visualization of several adversarial clothes. (a) Adver-sarial patch [38]. (b) Adversarial T-shirt [43]. (c) Naturalistic patch [18]. (d) Adversarial Texture [19]. (e) Left: daily cam-ouflage texture; Right: our adversarial camouflage texture. that DNNs are vulnerable to adversarial examples [16, 35]. Adversarial examples can be crafted by adding small per-turbations to the clean inputs, rendering the DNNs’ outputs incorrect. Such vulnerabilities could result in severe safety problems when deploying DNN-based systems. This has become a hot research topic recently [7, 11, 23, 26–28]. Adversarial examples were first identified in the digi-tal world. However, adversarial examples also exist in the physical world, posing more risks in real-world scenarios. Recently, many works [1,12–14,33,39–41,45–47] have de-signed physical adversarial examples to deceive DNNs in the real world. Among them, hiding persons [18–20, 38, 42,43] from DNN-based object detectors is especially chal-lenging due to the difficulties of modeling non-rigid object surfaces (i.e., clothes). Most works [18, 20, 38, 42, 43] print This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 16975 Voronoi DiagramGumbel SoftmaxHuman ClothesRender Scene Dataset DetectorsImage Augmentation+ Control Points BProbability Map PTexture Map T Differentiable Texture Generation Non -rigid Mesh Augmentation Geometrically Plausible ProjTopologically Plausible Proj+ TPS Warp Detection Loss Figure 2. The training pipeline of the adversarial camouflage textures. adversarial patches on the front side of clothes to hide peo-ple from being detected. We call them patch-based adver-sarial examples . 
These patches are usually conspicuous to humans, making the clothes look strange and easily notice-able by human observers. Efforts have been put on making the adversarial patches more natural-looking [12, 18, 40]. However, these patch-based adversarial clothes can only at-tack object detectors at a narrow range of viewing angles (i.e., when the camera faces the front of the person). To attack the detector at a wider range of viewing angles, one may print the adversarial patches everywhere on the clothes, which would make the clothes unnatural-looking again. For example, a dog-head-like patch on the front of a T-shirt is natural, but putting this patch everywhere on the T-shirt would make the T-shirt look weird. Another way to craft physical adversarial examples is to design the textures on the surface of the target objects [1, 13, 19, 39, 44], i.e., crafting texture-base adversarial ex-amples . Unlike patch-based adversarial examples, texture-based ones are usually adversarially effective at multiple viewing angles. They are mostly optimized via 3D mod-eling or using clone networks, and printed on the surface of rigid objects such as turtles [1] and cars [13, 39, 44]. How-ever, it is much harder to realize the 3D textures of non-rigid objects like humans and clothes in the physical world while maintaining their adversarial effectiveness, since there is a huge gap between a 3D human model and a real-world per-son. To circumvent this difficulty, Hu et al. [19] propose to craft texture-based adversarial clothes by extending patches into textures with repetitive patterns, which does not require 3D modeling. However, their textures are very conspicuous to humans, and obtaining natural-looking textures can be difficult under the constraint of repetitive patterns.In this paper, we propose a 3D modeling pipeline to pro-duce natural-looking adversarial clothes that are physically realizable and can hide people at multiple viewing angles. Specifically, we craft adversarial camouflage texture (Adv-CaT) patterns and apply them on clothes. We choose cam-ouflage texture patterns mainly because they are typical tex-ture patterns widely used in daily clothes, therefore making the clothes more natural-looking In order to make the tex-ture patterns more generalizable when applied to deformed and unseen 3D models, we propose a novel 3D augmen-tation method combining topologically plausible projection (TopoProj) and thin plate spline (TPS) [3,10,37,43] for non-rigid objects such as clothes. We optimized several AdvCaT patterns to attack widely used detection models, including YOLOv3 [31], Faster RCNN [32], and deformable DETR [48], and applied the texture patterns on clothes in the physical world. See Fig. 1 for the visualization of our adversarial clothes com-pared with others. Experiments showed that our adversarial clothes could evade different detectors at multiple viewing angles. A subjective test experiment indicated that the nat-uralness score of our adversarial clothes covered with Ad-vCaT is significantly higher than other adversarial clothes and close to daily clothes. |
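A minimal sketch of a Voronoi-plus-Gumbel-softmax texture parameterization is given below: control points partition the UV plane into Voronoi cells, and each cell's color is a Gumbel-softmax mixture over a small palette, so gradients can flow to the color logits. Resolution, temperature, and the fixed (non-optimized) cell assignment are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def camouflage_texture(ctrl_xy, ctrl_logits, palette, res=256, tau=0.3):
    """Differentiable camouflage texture sketch.
    ctrl_xy: (P, 2) control points in [0, 1]; ctrl_logits: (P, K) colour
    logits; palette: (K, 3) RGB palette.  Gradients reach the colour logits
    (and palette) but not the hard nearest-cell assignment."""
    ys, xs = torch.meshgrid(torch.linspace(0, 1, res),
                            torch.linspace(0, 1, res), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).reshape(-1, 2)          # (res*res, 2)
    d = torch.cdist(grid, ctrl_xy)                               # (res*res, P)
    cell = d.argmin(dim=-1)                                      # hard Voronoi assignment
    probs = F.gumbel_softmax(ctrl_logits, tau=tau, hard=False)   # (P, K) soft colour pick
    colours = probs @ palette                                    # (P, 3) per-cell colour
    return colours[cell].reshape(res, res, 3)                    # (res, res, 3) texture
```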
Guo_ShadowDiffusion_When_Degradation_Prior_Meets_Diffusion_Model_for_Shadow_Removal_CVPR_2023 | Abstract Recent deep learning methods have achieved promising results in image shadow removal. However, their restored images still suffer from unsatisfactory boundary artifacts, due to the lack of degradation prior embedding and the de-ficiency in modeling capacity. Our work addresses these issues by proposing a unified diffusion framework that in-tegrates both the image and degradation priors for highly effective shadow removal. In detail, we first propose a shadow degradation model, which inspires us to build a novel unrolling diffusion model, dubbed ShandowDiffusion. It remarkably improves the model’s capacity in shadow re-moval via progressively refining the desired output with both degradation prior and diffusive generative prior, which by nature can serve as a new strong baseline for image restoration. Furthermore, ShadowDiffusion progressively refines the estimated shadow mask as an auxiliary task of the diffusion generator, which leads to more accurate and robust shadow-free image generation. We conduct exten-sive experiments on three popular public datasets, including ISTD, ISTD+, and SRD, to validate our method’s effective-ness. Compared to the state-of-the-art methods, our model achieves a significant improvement in terms of PSNR, in-creasing from 31.69dB to 34.73dB over SRD dataset.1 | 1. Introduction Shadow removal aims to enhance visibility of the im-age shadow regions, pursuing a consistent illumination dis-tribution between shadow and non-shadow regions. Deep learning-based methods [4, 7, 10, 20, 54] achieved superior performance recently by fully utilizing the power of large collections of data. While most of the existing methods fo-*Corresponding author: Bihan Wen. This work was carried out at ROSE Lab, supported in part by the MOE AcRF Tier 1 (RG61/22) and Start-Up Grant. Input GT Fu et al. BMNet (c) ShadowDif fusion PSNR: 29.20 SSIM: 0.813PSNR: 36.49 SSIM: 0.969 PSNR: 39.53 SSIM: 0.984 (b) Previous Methods (a) Input+GT ... ...... ...Figure 1. (a) Input shadow image and corresponding ground truth shadow-free image, (b) shadow removal results of two most recent competing methods Fu et al. [7] and BMNet [54], and (c) our pro-posed ShadowDiffusion iteratively ( T→0) restores the shadow-free image and refines the shadow mask, in which the x0andm0 are the final enhanced result and refined mask, respectively. cused on learning the discriminative models for shadow re-moval, modeling the underlying distribution of nature im-ages is overlooked in their restoration process. Conse-quently, the shadow removal results usually contain severe boundary artifacts and remaining shadow patterns, as shown in Figure 1(b). Though the adversarial loss can alleviate this issue, these approaches [19,38] require careful adjustment during train-ing, might overfit certain visual features or data distribution, and might hallucinate new content and artifacts. Very re-cently, various diffusion models, such as the popular diffu-sion denoising diffusion probability model (DDPM) [17], have gained wide interest in the field of low-level vi-sion [33, 34]. Comparing to other deep generative mod-1https://github.com/GuoLanqing/ShadowDiffusion 1 This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 
14049 els, diffusion models are more powerful for modeling image pixel distribution, which provides great potential for signif-icantly improving visual quality and benefits high-quality image restoration. However, no work to-date has exploited diffusion models for shadow removal tasks. Moreover, there are two major limitations in existing shadow removal methods: First, the shadow degradation prior that reflects its corresponding physical properties has not been well exploited in deep learning. Though recent work [25] attempted to incorporate simple shadow model as a linear and uniform degradation, such an assumption is too restrictive for restoring real shadow images subjec-tive to complicated lighting conditions. Second, most of the deep shadow removal methods requires an estimated shadow mask as the inputs, which are either provided by the benchmark datasets [38] or generated by a pre-trained shadow detector [5]. However, these mask estimates are usually inaccurate, e.g., wrong indicators near the boundary or small shadow objects. Even the carefully hand-crafted masks sometimes contain coarse boundaries. Since existing methods blindly rely on the estimated masks without ex-ploiting their correlation to the actual shadow images for re-finement, there are usually severe boundary artifacts in their shadow removal results [7, 54], as shown in Figure 1(b). To alleviate the challenges in shadow removal, we first introduce a general shadow model of spatially-variant degradation, by decomposing the degradation matrix into the shadow mask and shadow intensities. Based on the new shadow model, we propose a novel unrolling diffusion-based shadow removal framework, called ShadowDiffu-sion, which integrates both the generative and degradation priors. Specifically, we formulate the shadow removal prob-lem as to jointly pursue the shadow-free image and refined shadow mask. Mask refinement is designed as an auxiliary task of the diffusion generator to progressively refine the shadow mask along with shadow-free image restoration in an interactive manner as shown in Figure 1(c). After that, we further propose an unrolling-inspired diffusive sampling strategy to explicitly integrate the degradation prior into the diffusion framework. Experimental results show that Shad-owDiffusion can achieve superior performance consistently over the three widely-used shadow removal datasets and significantly outperform the state-of-the-art methods. Be-sides, our model can be applied to other image enhancement tasks, e.g., low-light image enhancement and exposure cor-rection. Our main contributions are summarized as follows: • We propose the first diffusion-based model for shadow removal. A novel dynamic mask-aware diffusion model (DMDM) is introduced to jointly pursue a shadow-free image and refined shadow mask, which leads to robust shadow removal even with an inaccu-rate mask estimate.• We further propose an unrolling-inspired diffusive sampling strategy to explicitly integrate the shadow degradation prior into the intrinsic iterative process of DMDM. • Extensive experimental results on the public ISTD, ISTD+, and SRD datasets show that the pro-posed ShadowDiffusion outperforms the state-of-the-art shadow removal methods by large margins. Be-sides, our method can be generalized to a series of im-age enhancement tasks. |
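To illustrate how a degradation prior can be folded into diffusion sampling, the sketch below uses a simple pixel-wise shadow model (the shadow-free image attenuated inside a soft mask) and, after each reverse step, nudges the current clean-image estimate toward consistency with the observed shadow image. The `predict_x0` and `posterior_sample` callbacks, the correction rule, and the step size `lam` are illustrative assumptions, not the paper's unrolling scheme.

```python
import torch

def shadow_degrade(x_free, mask, intensity):
    """Illustrative pixel-wise shadow model: keep the shadow-free image
    outside the (soft) mask and attenuate it by `intensity` inside it."""
    return x_free * (1.0 - mask) + x_free * intensity * mask

def guided_reverse_step(x_t, t, predict_x0, posterior_sample,
                        shadow_img, mask, intensity, lam=0.5):
    """One reverse step mixing generative and degradation priors: estimate
    the clean image with the diffusion model, pull that estimate toward
    consistency with the observed shadow image under the degradation model,
    then sample x_{t-1} from the corrected estimate (standard DDPM pieces
    are assumed to be provided by the two callbacks)."""
    x0_hat = predict_x0(x_t, t)                               # generative prior
    gain = (intensity * mask + (1.0 - mask)).clamp(min=1e-3)  # per-pixel degradation gain
    resid = shadow_img - shadow_degrade(x0_hat, mask, intensity)
    x0_hat = x0_hat + lam * resid / gain                      # data-consistency correction
    return posterior_sample(x_t, x0_hat, t)                   # x_{t-1}
```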
Bai_FFHQ-UV_Normalized_Facial_UV-Texture_Dataset_for_3D_Face_Reconstruction_CVPR_2023 | Abstract We present a large-scale facial UV-texture dataset that contains over 50,000 high-quality texture UV-maps with even illuminations, neutral expressions, and cleaned facial regions, which are desired characteristics for rendering re-alistic 3D face models under different lighting conditions. The dataset is derived from a large-scale face image dataset namely FFHQ, with the help of our fully automatic and ro-bust UV-texture production pipeline. Our pipeline utilizes the recent advances in StyleGAN-based facial image editing approaches to generate multi-view normalized face images from single-image inputs. An elaborated UV-texture extrac-tion, correction, and completion procedure is then applied to produce high-quality UV-maps from the normalized face images. Compared with existing UV-texture datasets, our dataset has more diverse and higher-quality texture maps. We further train a GAN-based texture decoder as the nonlin-ear texture basis for parametric fitting based 3D face recon-struction. Experiments show that our method improves the *Work done during an internship at Tencent AI Lab. †Corresponding author.reconstruction accuracy over state-of-the-art approaches, and more importantly, produces high-quality texture maps that are ready for realistic renderings. The dataset, code, and pre-trained texture decoder are publicly available at https://github.com/csbhr/FFHQ-UV . | 1. Introduction Reconstructing the 3D shape and texture of a face from single or multiple images is an important and challenging task in both computer vision and graphics communities. Since the seminal work by Blanz and Vetter [3] showed that the reconstruction can be effectively achieved by parametric fitting with a linear statistical model, namely 3D Morphable Model (3DMM), it has received active research efforts in the past two decades [14]. While most 3DMM-based recon-struction approaches focused on improving the shape esti-mation accuracy, only a few works addressed the problem on texture UV-map recovery [2, 15, 23, 24, 26, 35, 41]. There are two key aspects that deserve attention in the texture map recovery problem, which are the fidelity and the quality of the acquired texture maps. In order to recover a This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 362 high-fidelity texture map that better preserves the face iden-tity of the input image, the texture basis in a 3DMM needs to have larger expressive capacities. On the other hand, a higher-quality texture map requires the face region to be evenly illuminated and without undesired hairs or acces-sories, such that the texture map can be used as facial assets for rendering under different lighting conditions. The method GANFIT [15] trains a generative adversar-ial network (GAN) [19] from 10,000 UV-maps as a tex-ture decoder to replace the linear texture basis in 3DMM to increase the expressiveness. However, their UV-maps in the training dataset are extracted from unevenly illuminated face images, and the resulting texture maps contain obvious shadows and are not suitable for differently lighted render-ings. The same problem exists in another work [24] based on UV-GAN [11]. 
The work AvatarMe [23] combines a linear texture basis fitting with a super-resolution network trained from high-quality texture maps of 200 individuals under controlled conditions. HiFi3DFace [2] improves the expressive capacity of linear texture basis by introducing a regional fitting approach and a detail refinement network, which is also trained from 200 texture maps. The Nor-malized Avatar work [26] trains a texture decoder from a larger texture map dataset with over 5,000 subjects, consist-ing of high-quality scan data and synthetic data. Although the quality of the resulting texture maps of these methods is pretty high, the reconstruction fidelity is largely limited by the number of subjects in the training dataset. Besides, all these texture map datasets are not publicly available. A recent high-quality, publicly accessible texture map dataset is in the Facescape dataset [42], obtained in a controlled en-vironment. However, the dataset only has 847 identities. In this paper, we intend to contribute a large-scale, pub-licly available facial UV-texture dataset consisting of high-quality texture maps extracted from different subjects. To build such a large-scale dataset, we need a fully auto-matic and robust pipeline that can produce high-quality tex-ture UV-maps from large-scale “in-the-wild” face image datasets. For the produced texture map, we expect it to have even illumination, neutral expression, and complete facial texture without occlusions such as hair or accessories. This is not a trivial task, and there exist several challenges: 1) The uncontrolled conditions of the in-the-wild face images cannot provide high-quality normalized textures; 2) From a single-view face image, the complete facial texture cannot be extracted; 3) Imperfect alignment between the face im-age and the estimated 3D shape would cause unsatisfactory artifacts in the unwrapped texture UV-maps. To address these issues, we first utilize StyleGAN-based image editing approaches [1, 21, 37] to generate multi-view normalized faces from a single in-the-wild image. Then a UV-texture extraction, correction, and completion process is developed to fix unsatisfactory artifacts caused by imper-fect 3D shape estimation during texture unwrapping, so that high-quality texture UV-maps can be produced stably. With the proposed pipeline, we construct a large-scale normal-ized facial UV-texture dataset, namely FFHQ-UV , based on the FFHQ dataset [20]. The FFHQ-UV dataset inherits the data diversity of FFHQ, and consists of high-quality texture UV-maps that can directly serve as facial assets for realistic digital human rendering (see Fig. 1 for a few examples). We further train a GAN-based texture decoder using the pro-posed dataset, and demonstrate that both the fidelity and the quality of the reconstructed 3D faces with our texture de-coder get largely improved. In summary, our main contributions are: • The first large-scale, publicly available normalized fa-cial UV-texture dataset, namely FFHQ-UV , which con-tains over 50,000 high-quality, evenly illuminated fa-cial texture UV-maps that can be directly used as facial assets for rendering realistic digital humans. • A fully automatic and robust pipeline for producing the proposed UV-texture dataset from a large-scale, in-the-wild face image dataset, which consists of StyleGAN-based facial image editing, elaborated UV-texture ex-traction, correction, and completion procedure. 
• A 3D face reconstruction algorithm that outperforms state-of-the-art approaches in terms of both fidelity and quality, based on the GAN-based texture decoder trained with the proposed dataset. |
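As a rough illustration of parametric fitting with a GAN-based texture decoder, the texture latent code can be optimized directly against the input photo. The decoder `G`, its `latent_dim` attribute, and the differentiable `render` function below are placeholders for this sketch, not the released API.

```python
import torch

def fit_texture_code(G, render, target_img, steps=200, lr=0.01):
    """Optimise a latent texture code so the rendered face matches the photo.

    G      : pretrained texture decoder, latent code -> UV texture map (assumed)
    render : differentiable renderer, UV texture map -> image like target_img (assumed)
    """
    z = torch.zeros(1, G.latent_dim, requires_grad=True)   # assumed attribute name
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        uv = G(z)                                   # nonlinear texture "basis"
        pred = render(uv)
        loss = (pred - target_img).abs().mean() + 1e-3 * z.pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return G(z).detach()
```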
He_Camouflaged_Object_Detection_With_Feature_Decomposition_and_Edge_Reconstruction_CVPR_2023 | Abstract Camouflaged object detection (COD) aims to address the tough issue of identifying camouflaged objects visually blended into the surrounding backgrounds. COD is a chal-lenging task due to the intrinsic similarity of camouflaged objects with the background, as well as their ambiguous boundaries. Existing approaches to this problem have de-veloped various techniques to mimic the human visual sys-tem. Albeit effective in many cases, these methods still struggle when camouflaged objects are so deceptive to the vision system. In this paper, we propose the FEature De-composition and Edge Reconstruction (FEDER) model for COD. The FEDER model addresses the intrinsic similarity of foreground and background by decomposing the features into different frequency bands using learnable wavelets. It then focuses on the most informative bands to mine sub-tle cues that differentiate foreground and background. To achieve this, a frequency attention module and a guidance-based feature aggregation module are developed. To com-bat the ambiguous boundary problem, we propose to learn an auxiliary edge reconstruction task alongside the COD task. We design an ordinary differential equation-inspired edge reconstruction module that generates exact edges. By learning the auxiliary task in conjunction with the COD task, the FEDER model can generate precise prediction maps with accurate object boundaries. Experiments show that our FEDER model significantly outperforms state-of-the-art methods with cheaper computational and memory costs. The code will be available at https://github. com/ChunmingHe/FEDER . | 1. Introduction Camouflaged object detection (COD) aims to detect and segment objects “seamlessly” integrated into surrounding *Corresponding author. IS+ED ED (a)Origin (b)GT IS IS (c)SegMaR (d)FEDER Figure 1. Results of SegMaR [14] and our method under the in-trinsic similarity (IS) and edge disruption (ED) challenges. Our method better localizes the objects and produces clearer edges. environments. COD is a challenging task as it needs to com-bat against excellent camouflage strategies, including back-ground matching [41], disruptive coloration [33], etc., and distinguish the subtle differences between candidate objects and their backgrounds. Research in COD can simultane-ously facilitate the development of visual perception for nu-ance discrimination and promote various valuable real-life applications, ranging from concealed defect detection [17] in industry to pest monitoring [35] in agriculture. COD faces two main challenges. The first is the intrinsic similarity (IS) challenge, which occurs when camouflaged objects share similar colors and patterns with their back-grounds. This makes it difficult to even roughly localize those camouflaged objects. The second is the edge dis-ruption (ED) challenge, which arises from extremely am-biguous object boundaries. Even if a rough localization is achieved, precise segmentation can barely be obtained. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 22046 To tackle these challenges, most existing works aim to develop models that mimic the human visual system [6,28]. 
However, since camouflage strategies are designed by prey to confuse the predator’s visual system, and the intrinsic topological properties of candidate objects are not distinc-tive, such human perception-oriented attempts may struggle to identify subtle discriminative features and fail to effec-tively address the above challenges. For instance, as illus-trated in Fig. 1, the state-of-the-art human perception-based COD method can only generate inaccurate prediction maps, such as the vague caddisfly and incomplete dog (Row 2 and 4), or even fail to detect camouflaged objects like the bird and snake (Row 1 and 3). Therefore, a better COD method should compensate for the “flaw” in human perception by emphasizing subtle discriminative features. Based on the biological study [41], camouflaged objects often employ various camouflage strategies to conceal their discriminative differences, which mainly exist in texture de-tails and global information distribution, within surround-ing environments. Such a study inspires us to cope with the COD task by decomposing the camouflage scenario into different parts. This allows for the disentanglement of var-ious intricate connections, enabling each part to be sepa-rately handled to fully excavate subtle discriminative cues. With this inspiration, we propose the FEature Decom-position and Edge Reconstruction (FEDER) model for the COD task, which compensates for the deficiencies of hu-man perception by emphasizing subtle discriminative fea-tures and effectively addresses the IS and ED challenges. Specifically, to combat the intractable localization problem caused by the IS challenge, we design the deep wavelet-like decomposition (DWD) strategy, which decomposes the ex-tracted features into different frequency bands using learn-able wavelet-like modules. Then, we focus on the most informative bands by filtering out noteworthy parts where discriminative cues are most likely to exist by a novel fre-quency attention (FA) module. Moreover, a guidance-based feature aggregation (GFA) module is proposed to aggregate the multi-scale decomposed features with attentional guid-ance to further emphasize discriminative information. To address the ambiguous boundary problem of the ED challenge, we propose learning an auxiliary edge recon-struction task to encourage the network to excavate edge de-tails. We design the ordinary differential equation (ODE)-inspired edge reconstruction (OER) module to reconstruct accurate and complete edge prediction maps using a high-order ODE solver, specifically, the second-order Runge-Kutta. Incorporating this auxiliary task with the COD task can facilitate the generation of precise segmentation results with accurate object boundaries. Our contributions are summarized as follows: We propose the FEature Decomposition and Edge Re-construction (FEDER) model for the COD task. Tothe best of our knowledge, we are the first to approach COD from a decomposition perspective. To highlight the subtle discriminative features, we pro-pose frequency attention modules to filter out the note-worthy parts of corresponding features and design the Guidance-based Feature Aggregation module to aggre-gate the multi-scale features with attentional guidance. We propose to learn an auxiliary edge reconstruction task along with the COD task to help generate pre-cise segmentation maps with accurate object bound-aries and design the ODE-inspired edge reconstruction module for complete edge prediction. 
The proposed FEDER significantly outperforms the state-of-the-art methods on four datasets by a large margin with lower computational and memory costs. |
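A minimal sketch of a learnable wavelet-like split is given below: a depth-wise low-pass filter, initialised as an averaging kernel, produces the low-frequency band, and the high-frequency band is the residual. The actual module and the frequency attention that follows it are more elaborate; this only illustrates the decomposition idea.

```python
import torch
import torch.nn as nn

class LearnableWaveletSplit(nn.Module):
    """Split features into a learnable low-frequency band and a residual
    high-frequency band. Depth-wise so each channel keeps its own filter."""
    def __init__(self, channels, k=3):
        super().__init__()
        self.low_pass = nn.Conv2d(channels, channels, k, padding=k // 2,
                                  groups=channels, bias=False)
        nn.init.constant_(self.low_pass.weight, 1.0 / (k * k))  # start as a box filter

    def forward(self, x):
        low = self.low_pass(x)   # coarse structure and illumination
        high = x - low           # edges and texture, where subtle cues may hide
        return low, high
```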
Guo_ALOFT_A_Lightweight_MLP-Like_Architecture_With_Dynamic_Low-Frequency_Transform_for_CVPR_2023 | Abstract Domain generalization (DG) aims to learn a model that generalizes well to unseen target domains utilizing mul-tiple source domains without re-training. Most existing DG works are based on convolutional neural networks (CNNs). However, the local operation of the convolution kernel makes the model focus too much on local represen-tations (e.g ., texture), which inherently causes the model more prone to overfit to the source domains and hampers its generalization ability. Recently, several MLP-based meth-ods have achieved promising results in supervised learn-ing tasks by learning global interactions among different patches of the image. Inspired by this, in this paper, we first analyze the difference between CNN and MLP meth-ods in DG and find that MLP methods exhibit a better generalization ability because they can better capture the global representations (e.g ., structure) than CNN methods. Then, based on a recent lightweight MLP method, we ob-tain a strong baseline that outperforms most state-of-the-art CNN-based methods. The baseline can learn global structure representations with a filter to suppress structure-irrelevant information in the frequency space. Moreover, we propose a dynAmic LOw-Frequency spectrum Trans-form (ALOFT) that can perturb local texture features while preserving global structure features, thus enabling the fil-ter to remove structure-irrelevant information sufficiently. Extensive experiments on four benchmarks have demon-strated that our method can achieve great performance im-provement with a small number of parameters compared to SOTA CNN-based DG methods. Our code is available at https://github.com/lingeringlight/ALOFT/. *Corresponding authors: Yinghuan Shi and Lei Qi. Work supported by NSFC Program (62222604, 62206052, 62192783), CAAI-Huawei Mind-Spore (CAAIXSJLJJ-2021-042A), China Postdoctoral Science Founda-tion Project (2021M690609), Jiangsu Natural Science Foundation Project (BK20210224), and CCF-Lenovo Bule Ocean Research Fund. 
Figure 1. Comparison of the SOTA CNN-based methods, the latest MLP-like models, and our method on PACS. Among the SOTA CNN-based and MLP-based methods, our method can achieve the best performance with a relatively small-sized network. | 1. Introduction Most deep learning methods often degrade rapidly in performance if training and test data are from different distributions. Such performance degradation caused by distribution shift (i.e., domain shift [3]) hinders the applications of deep learning methods in the real world. To address this issue, unsupervised domain adaptation (UDA) assumes that the unlabeled target domain can be utilized during training to help narrow the potential distribution gap between source and target domains [12, 31, 57]. However, UDA methods cannot guarantee the performance of the model on unknown target domains that could not be observed during training [38, 48]. Since the target domain could not always be available in reality, domain generalization (DG) is proposed as a more challenging yet practical setting, which aims to learn a model from observed source domains that performs well on arbitrary unseen target domains without re-training. To enhance the robustness of the model to domain shifts, many DG methods intend to learn domain-invariant representations across source domains, mainly via adversarial learning [9, 63], meta-learning [5, 58], data augmentation [15, 24, 44], etc. Existing DG works are primarily built upon convolutional neural networks (CNNs). However, due to the local processing in convolutions, CNN models inherently learn a texture bias from local representations [2, 29], which inevitably leads to their tendency to overfit source domains and perform unsatisfactorily on unseen target domains. To tackle this drawback, some pioneers propose to replace the backbone architecture of DG with transformer or MLP-like models, which can learn global representations with attention mechanisms [18, 39, 61, 62]. Although these methods have achieved remarkable performance, few of them have analyzed how the differences between the MLP and CNN architectures affect the generalization ability of the model in the DG task. These methods also suffer from excessive network parameters and high computational complexity, which hinders their applications in real-world scenarios. In this paper, we first investigate the generalization ability of several MLP methods in the DG task and conduct the frequency analysis [2] to compare their differences with CNN methods. We observe that MLP methods are better at capturing global structure information during inference, hence they can generalize better to unseen target domains than CNN methods. Based on the observation, we propose an effective lightweight MLP-based framework for DG, which can suppress local texture features and emphasize global structure features during training.
Specifically, based on the conventional MLP-like architecture [10, 35], we explore a strong baseline for DG that performs better than most state-of-the-art CNN-based DG methods. The strong baseline utilizes a set of learnable filters to adaptively remove structure-irrelevant information in the frequency space, which can efficiently help the model learn domain-invariant global structure features. Moreover, since the low-frequency components of images contain the most domain-specific local texture information, we propose a novel dy-nAmic LOw-Frequency spectrum Transform (ALOFT) to further promote the ability of filters to suppress domain-specific features. ALOFT can sufficiently simulate potential domain shifts during training, which is achieved by model-ing the distribution of low-frequency spectrums in differ-ent samples and resampling new low-frequency spectrums from the estimated distribution. As shown in Fig. 1, our framework can achieve excellent generalization ability with a small number of parameters, proving its superiority in DG. Our contributions are summarized as follows: • We analyze how the MLP-like methods work in DG task from a frequency perspective. The results indicate that MLP-like methods can achieve better generaliza-tion ability because they can make better use of global structure information than CNN-based methods.• We propose a lightweight MLP-like architecture with dynamic low-frequency transform as a competitive al-ternative to CNNs for DG, which can achieve a large improvement from the ResNet with similar or even smaller network size as shown in Fig. 1. • For dynamic low-frequency transform, we design two variants to model the distribution of low-frequency spectrum from element-level and statistic-level, re-spectively. Both variants can enhance the capacity of the model in capturing global representations. We demonstrate the effectiveness of our method on four standard domain generalization benchmarks. The results show that compared to state-of-the-art domain generaliza-tion methods, our framework can achieve a significant im-provement with a small-sized network on all benchmarks. |
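A statistic-level sketch of the low-frequency transform described above follows. The mask size, noise scale, and the choice to operate on image-like tensors rather than intermediate features are illustrative assumptions; the idea is simply to resample low-frequency amplitudes from a batch-level Gaussian while keeping phase (structure) intact.

```python
import torch

def aloft_style_perturb(x, mask_ratio=0.1, noise_scale=0.5):
    """Perturb the low-frequency amplitude spectrum of a batch x (B, C, H, W).
    Assumes batch size > 1 so that a batch mean/std can be estimated."""
    freq = torch.fft.fft2(x, norm="ortho")
    amp = torch.fft.fftshift(freq.abs(), dim=(-2, -1))   # centre = low frequencies
    pha = freq.angle()                                   # phase is left untouched

    B, C, H, W = x.shape
    h, w = max(1, int(H * mask_ratio)), max(1, int(W * mask_ratio))
    cy, cx = H // 2, W // 2
    sl = (slice(None), slice(None), slice(cy - h, cy + h + 1), slice(cx - w, cx + w + 1))

    low = amp[sl]
    mu, sigma = low.mean(dim=0, keepdim=True), low.std(dim=0, keepdim=True)
    amp[sl] = (mu + (sigma + 1e-6) * torch.randn_like(low) * noise_scale).clamp(min=0)

    amp = torch.fft.ifftshift(amp, dim=(-2, -1))
    return torch.fft.ifft2(torch.polar(amp, pha), norm="ortho").real
```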
Gao_SurfelNeRF_Neural_Surfel_Radiance_Fields_for_Online_Photorealistic_Reconstruction_of_CVPR_2023 | Abstract Online reconstructing and rendering of large-scale in-door scenes is a long-standing challenge. SLAM-based methods can reconstruct 3D scene geometry progressively in real time but can not render photorealistic results. While NeRF-based methods produce promising novel view syn-thesis results, their long offline optimization time and lack of geometric constraints pose challenges to efficiently han-dling online input. Inspired by the complementary advan-tages of classical 3D reconstruction and NeRF , we thus in-vestigate marrying explicit geometric representation with NeRF rendering to achieve efficient online reconstruction and high-quality rendering. We introduce SurfelNeRF , a variant of neural radiance field which employs a flexible and scalable neural surfel representation to store geomet-ric attributes and extracted appearance features from input images. We further extend the conventional surfel-based fusion scheme to progressively integrate incoming input frames into the reconstructed global neural scene represen-tation. In addition, we propose a highly-efficient differen-tiable rasterization scheme for rendering neural surfel radi-ance fields, which helps SurfelNeRF achieve 10×speedups in training and inference time, respectively. Experimental results show that our method achieves the state-of-the-art 23.82 PSNR and 29.58 PSNR on ScanNet in feedforward inference and per-scene optimization settings, respectively.1 | 1. Introduction Large-scale scene reconstruction and rendering is a cru-cial but challenging task in computer vision and graphics with many applications. Classical visual simultaneous lo-calization and mapping (SLAM) systems [6, 12, 16, 24, 39, 41] can perform real-time 3D scene reconstruction. How-ever, they usually represent the scene geometry as solid sur-faces and appearance as vertex color or texture maps; thus, 1Project website: https://gymat.github.io/SurfelNeRF-web … … … … …t time t+k Online Input Stream … … … Scene Reconstruction Novel View SynthesisNovel View Renderin g Input View Novel ViewFigure 1. Examples to illustrate the task of online photorealistic reconstruction of an indoor scene. The online photorealistic re-construction of large-scale indoor scenes: given an online input image stream of a previously unseen scene, the goal is to progres-sively build and update a scene representation that allows for high-quality rendering from novel views. the reconstructed results fail to fully capture the scene con-tent and cannot be used for photorealistic rendering. Re-cently, neural radiance fields (NeRF) and its variants [2, 3, 22,27,35,42] have achieved unprecedented novel view syn-thesis quality on both object-centric and large-scale scenes. However, NeRFs suffer from long per-scene optimization time and slow rendering speed, especially for large-scale scenes. Although recent advances [7, 14, 19, 23, 33] achieve faster optimization and rendering via incorporating explicit representations, they still require gathering all input images in an offline fashion before optimizing each scene. In this paper, we target the challenging task of online photorealistic reconstruction of large-scale indoor scenes: given an online input image stream of a previously unseen scene, the goal is to progressively build and update a scene representation that allows for high-quality rendering from novel views. 
The online setting can unlock a variety of real-time interactive applications, providing crucial immediate feedback to users during 3D capture. However, this task brings multiple extra requirements, including the scalabil-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 108 Methods Representation Generalization Real-time Rendering Online Fusion Scalability DeepSurfel [21] Surfels ✗ ✓ ✗ ✗ Instant-NGP [23] Hash Grids ✗ ✓ ✗ ✗ PointNeRF [43] Point Clouds ✓ ✗ ✗ ✓ VBA [10] B+ Trees ✗ ✗ ✓ ✓ NeRFusion [50] V oxel Grids ✓ ✗ ✓ ✗ Ours Surfels ✓ ✓ ✓ ✓ Table 1. Comparison of representation and features with existing methods. ity of the underlying scene representation, the ability to per-form on-the-fly updates to the scene representation, and op-timizing and rendering at interactive framerates. Recently, NeRFusion [50] followed NeuralRecon [34] to unproject in-put images into local sparse feature volumes, fusing them to a global volume via Gated Recurrent Units (GRUs), and then generating photorealistic results from the global fea-ture volume via volume rendering. However, updating the sparse volumetric feature involves computationally heavy operations; the volume rendering is also very slow since it requires hundreds of MLP evaluations to render a pixel. Thus, although NeRFusion achieves efficient online scene reconstruction, it still needs dozens of seconds to render a frame. VBA [10] is another recent approach to online pho-torealistic scene capture, but it only applies to object-centric scenes. We compare with representation and key features used in online photorealistic rendering with existing meth-ods, which is shown in Tab. 1. We propose surfel-based neural radiance fields, Sur-felNeRF, for online photorealistic reconstruction and ren-dering of large-scale indoor scenes. Surfels ( surface elements) [25] are point primitives containing geometric at-tributes, e.g., position, color, normal, and radius. We ex-tend this representation to neural surfels , storing extra neu-ral features that encode the neural radiance field of the target scene. Compared with volumetric representations, neural surfels are more compact and flexible and can easily scale to large scenes. Besides, we further employ a fast and dif-ferentiable rasterization process to render neural surfel ra-diance fields, which produces a pixel with only a few MLP evaluations based on the rasterized neural surfels. Inspired by classical real-time surfel-based geometric re-construction methods [6, 15, 41], we propose an efficient neural radiance field fusion method to progressively build the scene representation by integrating neighboring neural surfels. Unlike point-based representations [1, 26, 30, 43] that are computationally heavy when finding neighboring points, it is easier to locate overlapping surfels and then merge neural features from multiview observations. By coupling the SurfelNeRF representation, the efficient neural surfel fusion approach, and the fast neural surfel rasteriza-tion algorithm, we achieve high-quality, photorealistic 3D scene reconstruction in an online manner. We conduct experiments on the large-scale indoor scenedataset ScanNet [11], which contains complex scene struc-tures and a large variety of scene appearances. 
We train the SurfelNeRF end-to-end across the scenes on the ScanNet, obtaining a generalizable model that enables both feedfor-ward inference on unseen data and per-scene fine-tuning. We demonstrate in experiments that the proposed Surfel-NeRF achieves favorably better rendering quality than the state-of-the-art approaches in both feedforward and fine-tuning settings while maintaining high training and render-ing efficiency. We believe the proposed online photorealis-tic reconstruction framework has great potential in practical applications. |
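The fusion step can be pictured with a toy, purely geometric rule: incoming surfels either update their nearest global surfel by a confidence-weighted average or are appended as new ones. The paper uses learned feature updates; the distance threshold, count-based weighting, and in-place update below are illustrative assumptions only.

```python
import torch

@torch.no_grad()
def fuse_surfels(pos, feat, weight, new_pos, new_feat, radius=0.05):
    """Merge incoming surfels into the global set (updated in place and returned).
    pos: (N, 3), feat: (N, F), weight: (N,); new_pos: (M, 3), new_feat: (M, F)."""
    if pos.numel() == 0:
        return new_pos, new_feat, torch.ones(new_pos.shape[0])

    d = torch.cdist(new_pos, pos)                       # (M, N) pairwise distances
    j = d.argmin(dim=1)                                 # nearest global surfel per new one
    close = d[torch.arange(len(new_pos)), j] < radius

    for i in close.nonzero(as_tuple=False).flatten().tolist():
        g = j[i]
        w = weight[g]
        pos[g] = (w * pos[g] + new_pos[i]) / (w + 1)    # weighted average of attributes
        feat[g] = (w * feat[g] + new_feat[i]) / (w + 1)
        weight[g] = w + 1

    pos = torch.cat([pos, new_pos[~close]])             # unmatched surfels are appended
    feat = torch.cat([feat, new_feat[~close]])
    weight = torch.cat([weight, torch.ones(int((~close).sum()))])
    return pos, feat, weight
```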
Banani_Learning_Visual_Representations_via_Language-Guided_Sampling_CVPR_2023 | Abstract Although an object may appear in numerous contexts, we often describe it in a limited number of ways. Language al-lows us to abstract away visual variation to represent and communicate concepts. Building on this intuition, we pro-pose an alternative approach to visual representation learn-ing: using language similarity to sample semantically sim-ilar image pairs for contrastive learning. Our approach diverges from image-based contrastive learning by sam-pling view pairs using language similarity instead of hand-crafted augmentations or learned clusters. Our approach also differs from image-text contrastive learning by relying on pre-trained language models to guide the learning rather than directly minimizing a cross-modal loss. Through a se-ries of experiments, we show that language-guided learning yields better features than image-based and image-text rep-resentation learning approaches. | 1. Introduction Consider the images in Fig. 1, is the center image more similar to its left or right neighbor? Despite the difference in background and pose, it is clear that the right pair cap-tures the same concept: a flying snow owl. Nevertheless, a self-supervised image model will judge the left pair as more similar. Human perception and language abstract away ap-pearance differences to capture conceptual similarity rather than just visual similarity. Ideally, we could learn visual features that capture conceptual similarity and generalize effectively to other visual tasks. In this work, we show how language can be a proxy for conceptual similarity; allowing us to sample better pairs for contrastive learning and train more generalizable visual models. Image-only contrastive learning uses visual similarity as a proxy for conceptual similarity. This is based on the ob-servation that discriminative approaches can discover inter-class similarity– e.g., cheetahs are similar to lions– without requiring explicit annotations [ 106]. The core idea is to train a discriminative model where each instance is treated as a separate class, and the model is trained to map augmentedℒVisual Embedding Snowy owl lifting offSnow owl taking offLanguage EmbeddingLate after post thunderstorm, north OhioFigure 1. Language allows us to find conceptually similar image pairs even if they are visually dissimilar. We use those pairs for contrastive learning to learn generalizable visual features. views of the same image to similar features [ 12–15,106]. While successful, instance discrimination ignores the simi-larity between different instances as it assumes all other im-ages are unrelated. Later work focused on inter-image rela-tionships by estimating clusters [ 3,9,10] or finding nearest neighbors [ 28]. However, those relationships are estimated using visual embeddings; resulting in visually, rather than conceptually, similar pairs. Language similarity is a strong proxy for semantic re-lationships. Consider the example in Fig. 1; images that depict the same concept are often described similarly. Rad-ford et al.[76] propose language-image contrastive learn-ing by mapping images and text to a shared representa-tion space and achieve impressive generalization capabili-ties. However, it is unclear whether forcing models to map onto a shared space is optimal for visual learning. Although linguistic and visual similarity might align for similar in-stances, it is unclear whether all distances in one space should map exactly to the other. 
Instead of learning a joint vision-and-language representations, we argue that it is bet-ter to use linguistic similarity to guide visual learning. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 19208 To this end, we propose language-guided contrastive learning : a simple adaptation to contrastive learning that uses language models to find conceptually-similar image pairs for visual learning. Our approach is motivated by the observation that language models, despite never train-ing on visual data, can still be used to sample caption pairs that belong to conceptually similar images, as seen in Fig. 2. Such sampled images exhibit desirable varia-tions in pose, lightning, and context which are very dif-ferent from hand-crafted augmentations which can be ill-suited to downstream tasks [ 108] or too focused on back-ground textures [ 81]. We use the sampled pairs instead of image augmentations within standard self-supervised visual learning approaches such as SimCLR [ 12], SimSiam [ 15], and SLIP [ 67]. Our approach departs from image-only con-trastive learning by relying on conceptually-similar image pairs rather than visually similar augmentations or cluster-assignment. We also depart from image-text pre-training by allowing the model to be guided by language similarity rather than learning a joint embedding space. We conduct a series of controlled experiments to ana-lyze our approach and compare it to commonly used rep-resentation learning paradigms on generalization to down-stream classification tasks. In controlled settings, our ap-proach outperforms all baselines on linear probe and few-shot classification on a range of downstream classification datasets. Our analysis suggests that while learning multi-modal joint embeddings can result in good representations, it is better to use one modality to guide the training of the other. Furthermore, we find that our approach is ro-bust to the specific choice of sampling strategy or language model. Our code and pre-trained models are available at https://github.com/mbanani/lgssl . |
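The sampling step itself is simple: given caption embeddings from any pretrained language model (assumed precomputed here), conceptually similar image pairs are just nearest neighbours in language space. The function below is a sketch of that selection, not the released training code.

```python
import numpy as np

def language_guided_pairs(caption_emb, k=1):
    """caption_emb: (N, D) array of caption embeddings for N image-caption pairs.
    Returns (i, j) index pairs whose captions are most similar; images i and j
    are then treated as two 'views' for contrastive learning."""
    z = caption_emb / np.linalg.norm(caption_emb, axis=1, keepdims=True)
    sim = z @ z.T                                  # cosine similarity matrix
    np.fill_diagonal(sim, -np.inf)                 # never pair an image with itself
    nn = np.argsort(-sim, axis=1)[:, :k]           # top-k nearest captions
    return [(i, int(j)) for i in range(len(z)) for j in nn[i]]
```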
He_Geometric_Visual_Similarity_Learning_in_3D_Medical_Image_Self-Supervised_Pre-Training_CVPR_2023 | Abstract Learning inter-image similarity is crucial for 3D medi-cal images self-supervised pre-training, due to their sharingof numerous same semantic regions. However , the lack ofthe semantic prior in metrics and the semantic-independentvariation in 3D medical images make it challenging to geta reliable measurement for the inter-image similarity, hin-dering the learning of consistent representation for samesemantics. We investigate the challenging problem of thistask, i.e., learning a consistent representation between im-ages for a clustering effect of same semantic features. Wepropose a novel visual similarity learning paradigm, Geo-metric Visual Similarity Learning, which embeds the priorof topological invariance into the measurement of the inter-image similarity for consistent representation of semanticregions. To drive this paradigm, we further construct anovel geometric matching head, the Z-matching head, tocollaboratively learn the global and local similarity of se-mantic regions, guiding the efficient representation learn-ing for different scale-level inter-image semantic features.Our experiments demonstrate that the pre-training withour learning of inter-image similarity yields more power-ful inner-scene, inter-scene, and global-local transferringability on four challenging 3D medical image tasks. Ourcodes and pre-trained models will be publicly available | 1. 1. Introduction Learning inter-image similarity [ 26,33,44,47] is crucial for 3D medical image (e.g., CT, MR) self-supervised pre-training (SSP) [ 20]. As shown in Fig. 1, different from nat-ural images which are widely researched in SSP, 3D med-ical images share numerous same semantic regions due tothe consistency of human anatomies [ 28] and the complete spatial information in 3D vision [ 35], bringing a strong prior for effective SSP. Therefore, it targets on constrain-∗Corresponding author: yang.list@seu.edu.cn 1https://github.com/YutingHe-list/GVSL Lung RV RAAoLV LA DoMyoMyo Lung SpineLung Lung Ao LALVRV RA DoMyo LungSpineMyo Lung b) Numerous same semantic regions between 3D medical imagesa) Large semantic difference between natural imagesxConsistent human anatomies xComplete spatial information in 3D vision on 3D medical image 3D medical image Figure 1. Learning inter-image similarity is crucial for 3D medical image SSP. a) Natural images have large semantic difference be-tween images whose inter-image similarity is weak. b) 3D medicalimages share numerous same semantic regions between images due to the consistent human anatomies and the complete spatialinformation in 3D vision, having large inter-image similarity. ing the pre-training network for a consistent representation of these same semantic regions between images without an-notations. Once successful, it will bring great clusteringeffect for same semantic features, powerful representabilityof pre-trained network, and effective transferring for poten-tial downstream tasks. Although the existing SSP works have achieved promis-ing results in their tasks, they are limited in the learning ofinter-image similarity in 3D medical images. 1) Clustering-based SSP methods [ 2,24] measure the features’ similarity between images for their clustering pattern in an embedding space, and learn to aggregate same cluster’s features. 
How-ever, they simply employ the Mahalanobis or Euclidean dis-tance as the measurement function which is extremely inter-fered by images’ semantics-independent variations (Fig. 2). 2) Contrastive learning works [ 3,4] directly learn to sep-arate their features for inter-image dissimilarity . This vi-olates the learning inter-image similarity which is crucial in 3D images and will make the network represent distinctfeatures for same semantic regions. Although some othercontrastive learning works [ 4,7,41] have removed the sep-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 9538 Image A Image B Image C b)Same semantic regions with dissimilar appearancea)Different semantic regions with similar appearance Similar RA RA Myo Dissimilar Figure 2. It is challenging to measure a reliable inter-image simi-larity. a) There is a large similarity between the Myo and the RAregions between images A and B. b) Due to the variation of thescanning protocol, RA regions are different in images B and C. aration learning, they are still unable to learn the consis-tency of inter-image same semantics. 3) Generation-basedmethods [ 23,25,40,48] construct pretext labels via designed transformation methods (e.g., rotation [ 23]) and train net-works to predict these labels. These methods implicitly im-pose a bias into SSP via manually designing the transfor-mation methods. However, the bias extremely relies on the manual design which makes pre-training networks focus onthe biased features of pretext labels and become sensitive tothe change of scenario [ 25]. Thinking the limitations in above existing works, the large-scale mis-measurement for inter-image similarity is the key challenge in 3D medical SSP, interfering the dis-covery of semantics’ correspondence and hindering thelearning of consistent representation for same semantic re-gions. Semantic-independent variations (Fig. 2) make the 3D medical images have different appearance. Differentsemantic regions have similar appearances and same se-mantic regions have different appearances between images.The direct measurement in the embedding space, like the clustering-based SSP methods [ 2,24], is sensitive due to lack of semantic prior in their metrics. Therefore, in thenon-supervision situation, once the features changed causedby the variations, these metrics will make mis-measurementof similarities for large-scale semantics, bringing their mis-correspondence. It will train network to aggregate the fea-tures with different semantic but similar appearance, caus-ing mis-representation. Topological invariance [ 18,27] of the visual semantics in 3D medical images provides a motivation to construct a re-liable measurement for inter-image similarity (Fig. 3). Due to the consistency of human anatomies [ 28], 3D medical images have consistent context topology between the visualsemantics in image space (e.g., the four chambers of humanhearts have a fixed space relationship), and the same seman-tic regions have similar shapes in different images (e.g., thevessels (AO) have a stable tubular structure), constructingan invariant topology for the visual semantics. 
Therefore,Consistent topology of visual semantics Topology of heart structures in image CTopology of heart structuresMyoLVLAPA RVRAAO MyoLVLAPA RVRAAO Topology of heart structuresMyoLVLAPA RVRAAO RVRA LAAO PA LVMyoRVRA LA AO PA LV Myo RV RV RA RA LAAO PA LVMyo Image A Image B Image C RARV Myo LV LAAo MyoLVLAPA RVRAAO Figure 3. The topological invariance of the visual semantics be-tween the 3D medical images provides a motivation to discovertheir inter-image correspondence. according to the semantic prior of topological invariance, the semantic regions are able to be transformed to align inthe image space via a topology-invariant mapping [ 10], thus discovering their reliable inter-image correspondence evenwith large variations in appearance. An intuitive strategy isto use the registration or geometric matching (GM) meth-ods [ 11,13,15,16,32] to discover correspondence indexes between images, and use these indexes to constrain the con-sistent representation for corresponding regions. However,the errors in these indexes will bring mis-correspondence. In this paper, we propose a novel SSP paradigm, Ge-ometric Visual Similarity Learning (GVSL), to learn theinter-image similarity in 3D medical images. It embeds theprior of topological invariance into the measurement of the similarities, and train network to estimate semantics’ cor-respondence from the represented features in GM. Due tothis effective semantic prior, the measurement will considerthe semantic-related topology similarity avoiding the largeinterference of semantic-independent variation. Therefore,when learning to enlarge this similarity between two images for more accurate estimation of correspondence, the gradi-ent in backpropagation will constrain the network to clus-ter the corresponding features in embedding space for moreconsistent representation. To drive the GM learning, we fur-ther propose a Z-Matching head to explore the global and local collaborative representation learning of inter-imagesimilarity in our GVSL paradigm. It constructs a collab-orative learning head with affine (global matching) and de-formable (local matching) transformations [ 13], thus em-bedding the pre-trained model with a powerful transferringability for potential downstream tasks. Our contributions are summarized as follows: 1) Our work advances the learning of inter-image similarity in 3Dmedical image SSP, and pre-trains the network to learn aconsistent representation for same visual semantics betweenimages without annotation, pushing the representability ofpre-trained models. 2) We propose the Geometric VisualSimilarity Learning (GVSL) that embeds the prior of topo-9539 logical invariance into the metric for a reliable measure-ment of inter-image similarity, learning a consistent repre-sentation for same semantic regions between images. 3) Wepresent a novel GM head, Z-Matching head, for simultane-ously powerful global and local representation. It collabora-tively learns the affine and deformable matching, realizingan effective optimization for the representation of differentsemantic granularity in our GVSL, and finally achieving apowerful transferring ability. |
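The core training signal can be sketched as follows: a matching head predicts a displacement field between the features of two scans, one scan's features are warped onto the other, and the shared backbone is rewarded for making corresponding anatomy similar. The 2D shapes and plain cosine-similarity loss below are simplifications of the 3D affine-plus-deformable Z-Matching head.

```python
import torch
import torch.nn.functional as F

def geometric_similarity_loss(feat_a, feat_b, flow):
    """feat_a, feat_b: (B, C, H, W) features of two images from the shared backbone.
    flow: (B, 2, H, W) displacement field in normalised [-1, 1] coordinates,
    predicted by a matching head (assumed given)."""
    B, C, H, W = feat_a.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H, device=feat_a.device),
                            torch.linspace(-1, 1, W, device=feat_a.device),
                            indexing="ij")
    base = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(B, H, W, 2)
    grid = base + flow.permute(0, 2, 3, 1)               # displaced sampling grid
    warped_b = F.grid_sample(feat_b, grid, align_corners=True)
    # higher similarity after warping = better estimated correspondence
    return 1.0 - F.cosine_similarity(feat_a, warped_b, dim=1).mean()
```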
Huang_Inverting_the_Imaging_Process_by_Learning_an_Implicit_Camera_Model_CVPR_2023 | Abstract Representing visual signals with implicit coordinate-based neural networks, as an effective replacement of the traditional discrete signal representation, has gained con-siderable popularity in computer vision and graphics. In contrast to existing implicit neural representations which focus on modelling the scene only, this paper proposes a novel implicit camera model which represents the physical imaging process of a camera as a deep neural network. We demonstrate the power of this new implicit camera model on two inverse imaging tasks: i) generating all-in-focus pho-*Work was done during an internship at Tencent AI Lab. †Corresponding authors.tos, and ii) HDR imaging. Specifically, we devise an im-plicit blur generator and an implicit tone mapper to model the aperture and exposure of the camera’s imaging process, respectively. Our implicit camera model is jointly learned together with implicit scene models under multi-focus stack and multi-exposure bracket supervision. We have demon-strated the effectiveness of our new model on a large num-ber of test images and videos, producing accurate and visu-ally appealing all-in-focus and high dynamic range images. In principle, our new implicit neural camera model has the potential to benefit a wide array of other inverse imaging tasks. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 21456 | 1. Introduction Using deep neural networks to learn an implicit repre-sentation of visual signal of a scene has received remarkable success ( e.g., NeRF [27]). It has been used to represent vi-sual signals ( e.g., images [10,35], videos [5,18], and volume density [27]) with many impressive results. Besides implicit scene modelling (e.g., modelling scene radiance field via an MLP), the physical imaging process of a camera is also im-portant for the image formation process ( i.e., from scene ra-diance field to RGB values of the sensor of a camera [36]). However, to the best of our knowledge, little in the lit-erature has ever tapped into the issue of finding an implicit representation to model the physical imaging process of a camera. Instead, most existing neural rendering methods assume that each pixel’s RGB values are precisely the cap-tured radiance field. In reality, before the light rays hit the imaging sensors, they need to pass through both the aper-ture and shutter, resulting in possible image blur caused by finite-sized aperture as well as varied dynamic range dic-tated by exposure time of the shutter. Moreover, the image signal processor (ISP) inside a dig-ital camera may also alter the obtained image, e.g., lumi-nance change, depth of field (DoF), as well as image noises. The above observation prompts us to address two questions in this paper: •Can we learn an implicit camera model to represent the imaging process and control camera parameters? •Can we invert the imaging process from inputs with varying camera settings and recover the raw scene content? Recently, learning-based methods simulating the map-ping from raw images to sRGB images have been presented [13, 30, 51]. They allow photo-realistic image generation controlled by the shutter or aperture, but inverse problems of raw image restoration are challenging to model. 
Al-though a few NeRF-based methods have simply simulated cameras, they still face many issues, e.g., either RawN-eRF [26] only models a camera forward mapping for con-trollable exposures or HDR-NeRF [14] only builds a tone-mapper module with the NeRF on static scenes to inversely recover the high-dynamic-range (HDR) radiance. It is not clear whether a unified coordinate-based MLP module of different implicit camera models can be applied to various implicit neural scene representations for inverting the imag-ing process in a self-supervised manner, especially for dy-namic scenes. To this end, this paper proposes a novel implicit neural camera model as a general implicit neural representation. Tested on two challenging tasks of inverse imaging, namely all-in-focus and HDR imaging, we have demonstrated the effectiveness of our new implicit neural camera model, as illustrated in Fig. 1.The key contributions of this paper are: 1. We propose an interesting component, an implicit neu-ral camera model including a blur generator module (Sec. 3.2) for the point spread function and a tone mapper module (Sec. 3.3) for the camera response function, to model the camera imaging process. |
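For instance, the tone-mapper part of such an implicit camera could be a small MLP that maps scene radiance plus an exposure value to an LDR colour. The layer sizes and the log-domain input below are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ImplicitToneMapper(nn.Module):
    """Map per-pixel HDR radiance and exposure time to an LDR colour in [0, 1]."""
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, radiance, exposure):
        # radiance: (N, 3) HDR values, exposure: (N, 1) positive shutter time
        x = torch.cat([torch.log(radiance.clamp(min=1e-5) * exposure),
                       torch.log(exposure)], dim=-1)
        return self.mlp(x)
```

Training such a module jointly with an implicit scene model on a multi-exposure bracket would, in principle, allow inverting the mapping to recover HDR radiance.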
Jin_Fast_Contextual_Scene_Graph_Generation_With_Unbiased_Context_Augmentation_CVPR_2023 | Abstract Scene graph generation (SGG) methods have historically suffered from long-tail bias and slow inference speed. In this paper, we notice that humans can analyze relationships between objects relying solely on context descriptions, and this abstract cognitive process may be guided by experience. For example, given descriptions of cup and table with their spatial locations, humans can speculate possible relationships <cup, on, table> or <table, near, cup>. Even without visual appearance information, some impossible predicates like flying in and looking at can be empirically excluded. Accordingly, we propose a contextual scene graph generation (C-SGG) method without using visual information and introduce a context augmentation method. We propose that slight perturbations in the position and size of objects do not essentially affect the relationship between objects. Therefore, at the context level, we can produce diverse context descriptions by using a context augmentation method based on the original dataset. These diverse context descriptions can be used for unbiased training of C-SGG to alleviate long-tail bias. In addition, we also introduce a context guided visual scene graph generation (CV-SGG) method, which leverages the C-SGG experience to guide vision to focus on possible predicates. Through extensive experiments on the publicly available dataset, C-SGG alleviates long-tail bias and omits the huge computation of visual feature extraction to realize real-time SGG. CV-SGG achieves a great trade-off between common predicates and tail predicates. | 1. Introduction SGG is a challenging technology that identifies triplet relationships <subject, predicate, object> between objects from images. With the development of artificial intelligence, SGG has gradually become a bridge from image recognition to image understanding. Scene graphs are an indispensable part of complex visual understanding tasks, such as visual question answering [24], visual grounding [21] and visual-language navigation [40]. However, research [3, 19, 36] on SGG suffers from two insurmountable obstacles. The first is the long-tail bias derived from datasets. More common predicates such as on and near have more samples than tail predicates such as from and above, causing the model to prefer to classify the common predicates. The second is the low-speed inference in practical applications. Analyzing the predicate between each object pair to generate a scene graph is a quadratic time complexity problem, making real-time inference difficult. On the one hand, some SGG methods [3, 26, 31, 34] are dedicated to solving the long-tail bias. Tang [26] and Chiou [2] introduce the causal graph and the label frequency to reason tail predicates and attempt unbiased SGG inference based on biased training. Li [11] and Desai [3] propose to optimize the label distribution and rebalance category sampling, which realizes unbiased SGG training. However, these methods increase the recall of tail predicates but inevitably reduce the recall of common predicates. Current SGG methods struggle to balance the recall of common predicates and tail predicates simultaneously. We think the internal reason is that there are not enough data samples for each predicate. On the other hand, some SGG methods [18, 33] focus on improving the inference speed of the SGG task. In detail, Yang [33] designs all objects in a fully connected graph structure and prunes the connections between objects, which can reduce the time complexity of SGG inference. Liu [18] transforms the predicate inference into an integral on relationship affinity fields. Although the time complexity is not reduced, the computation amount of the integral operation is much less than that of the deep-learning computation of visual features. These methods improve the inference speed but sacrifice recall performance. We reflect on the human cognitive process of predicate analysis between objects and discover two overlooked phenomena. First, humans can roughly infer the predicate between objects based on context descriptions including only categories and positions. In other words, humans can speculate and analyze possible relationships through context descriptions even without seeing objects. Figure 1. A. An example of humans speculating predicates based on the context description. B. (a) Use software to change objects in the image to produce fake images; (b) Project object pairs in the fake image to the context level. C. Examples of C-SGG outputs in different context descriptions. D. Examples of CV-SGG to further analyze high-confidence relationships and possible predicates. As shown in Fig. 1 A, when humans know the subject man and the object bike, they can speculate and analyze which predicates are possible ('riding', 'sitting on', 'near') and which predicates are impossible ('wears', 'flying in', 'eating') based on past experience. Second, when humans analyze the predicate between an object pair, the appearance features of the objects themselves are not important. As shown in Fig. 1 B (a), we use Adobe Photoshop software to move the human body and replace the style of the glasses, but the relationship <man, wears, glass> remains the same. Therefore, we argue that context features may be more important than visual features in rough predicate judgment. Based on this thought, we weaken the role of vision in the SGG task and propose a contextual SGG (C-SGG) method with context augmentation. As we did in Fig. 1 B (a), software such as Photoshop can be used to modify the image by moving the position and replacing like objects without changing the predicate. Different from traditional image augmentation, where HSV variation and size scaling change the entire image, this kind of image modification changes the shape and position of a certain object. However, using software to modify images is an extremely complex task. We instead project the modified image to the context level, as shown in Fig. 1 B (b), which amounts to a slight translation and scaling of the object position at a cheap cost, and we name it context augmentation. In our C-SGG method, we only use context descriptions to predict predicates, and context augmentation can increase the context description samples of any tail predicate for unbiased training. To some extent, through context augmentation, during our training of C-SGG there are no two identical context descriptions. In addition, since there are no visual image features, we do not need complex computational models. Although it is still a quadratic time complexity task, the computation amount per object pair is extremely cheap. Certainly, C-SGG lacks the analysis of the visual interaction information between objects. We therefore also propose a context guided visual SGG method (CV-SGG) to further confirm the true predicates between object pairs. As shown in Fig. 1 C, C-SGG can roughly analyze the confidence in the existence of relationships between object pairs and the possible types of predicates. Our CV-SGG focuses on those high-confidence relationships and the highly possible predicates. We use a simple visual model to extract visual features and fuse them with contextual features. During training, we apply a ReLuL1 function and only calculate the loss on highly possible predicates. In this way, CV-SGG only pays attention to possible predicates from C-SGG and ignores impossible predicates. As shown in Fig. 1 D, context guided visual SGG is used to boost the true predicate and suppress other possible but false predicates. We validate our methods on the most common SGG dataset VG [10] and the latest SGG dataset PSG [32]. Our methods achieve the best balance between common predicates and tail predicates, and accomplish real-time SGG. The contributions of this paper can be summarized as: 1) Inspired by the human cognitive process, we propose context augmentation to produce diverse context descriptions at the context level for unbiased training, which weakens the role of vision. 2) We propose two methods for SGG: C-SGG, which only uses context descriptions, and CV-SGG, which guides visual attention based on C-SGG results. 3) Based on extensive experiments on two SGG datasets, VG and PSG, our methods have obvious advantages in dealing with long-tail bias and inference speed. |
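A minimal sketch of the context-augmentation step described above is given below, assuming normalized (x, y, w, h) boxes and a dictionary-style sample layout; the function names, the 5% jitter range, and the clamping scheme are illustrative choices, not the authors' implementation.

```python
import random

def jitter_box(box, jitter=0.05):
    """Slightly translate and rescale a normalized box (x, y, w, h).

    The assumption is that such a small perturbation of position and size
    does not change the predicate between subject and object, so the
    jittered context description is a valid new training sample.
    """
    x, y, w, h = box
    nw = min(w * (1.0 + random.uniform(-jitter, jitter)), 1.0)
    nh = min(h * (1.0 + random.uniform(-jitter, jitter)), 1.0)
    nx = min(max(x + random.uniform(-jitter, jitter) * w, 0.0), 1.0 - nw)
    ny = min(max(y + random.uniform(-jitter, jitter) * h, 0.0), 1.0 - nh)
    return (nx, ny, nw, nh)

def augment_context(sample, jitter=0.05):
    """Produce a new context description: categories and predicates stay
    fixed, only the boxes are perturbed."""
    return {
        "labels": sample["labels"],
        "boxes": [jitter_box(b, jitter) for b in sample["boxes"]],
        "predicates": sample["predicates"],
    }
```

Sampling such augmented descriptions more often for tail predicates is what enables the unbiased training discussed in the abstract.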
Chen_Enhanced_Training_of_Query-Based_Object_Detection_via_Selective_Query_Recollection_CVPR_2023 | Abstract This paper investigates a phenomenon where query-based object detectors mispredict at the last decoding stage while predicting correctly at an intermediate stage. We review the training process and attribute the overlooked phenomenon to two limitations: lack of training emphasis and cascading errors from the decoding sequence. We design and present Selective Query Recollection (SQR), a simple and effective training strategy for query-based object detectors. It cumulatively collects intermediate queries as decoding stages go deeper and selectively forwards the queries to the downstream stages aside from the sequential structure. Such-wise, SQR places training emphasis on later stages and allows later stages to work with intermediate queries from earlier stages directly. SQR can be easily plugged into various query-based object detectors and significantly enhances their performance while leaving the inference pipeline unchanged. As a result, we apply SQR on Adamixer, DAB-DETR, and Deformable-DETR across various settings (backbone, number of queries, schedule), and it consistently brings a 1.4∼2.8 AP improvement. Code is available at https://github.com/Fangyi-Chen/SQR | 1. Introduction Object detection is a long-established topic in computer vision aiming to localize and categorize objects of interest. Previous methods [4, 7, 10, 11, 16, 18, 21, 25, 26, 29, 32, 33, 35–37] rely on dense priors tiled at feature grids so as to detect in a sliding-window paradigm, and have dominated object detection for the recent decade, but these methods fail to shake off many hand-crafted processing steps such as anchor generation or non-maximum suppression, which block end-to-end optimization. Recent research attention has been geared towards query-based object detection [3, 17, 20, 23, 28, 31, 38] since the thriving of the transformer [30] and DETR [3]. Figure 1. The inference speed and AP for various networks on the MS-COCO val set. The red stars are the results trained with SQR. The blue circles are the results of baselines without SQR. SQR enhances the training of query-based object detectors while leaving the inference pipeline unchanged. By viewing detection as a direct set prediction problem, the new archetype represents the set of objects using a set of learnable embeddings, termed queries, which are fed to a decoder consisting of a stack (typically six) of decoding stages. Each stage performs similar operations: (1) interacting queries with image features via an attention-like mechanism, so the queries are aggregated with valuable information that represents objects; (2) reasoning the relation among all queries so that global dependency on object co-occurrence and duplicates could be captured; (3) interpreting a bounding box and category from each query by a feed-forward network. Queries are sequentially processed stage-by-stage, and each stage is formulated to learn a residual function with reference to the former stage's output, aiming to refine queries in a cascaded style. As such, the decoding procedure implies that detection should be stage-by-stage enhanced in terms of IoU and confidence score. Indeed, monotonically improved AP is empirically achieved by this procedure. However, when visualizing the stage-wise predictions, we surprisingly observe that the decoder makes mistakes in a decent proportion of cases where the later stages degrade true positives and upgrade false positives from the former stages. Figure 2. Are query-based object detectors always enhancing predictions stage-by-stage? The traffic light at stage 1 gets a confidence score of 0.41, while from stage 2 to 5 the confidence gradually decreases to 0.21 (upper); the remote at stage 3 was wrongly classified as a cell phone, and from stage 3 to 6 the mistake was amplified from 0.26 to 0.42 (lower). The visualization is acquired from Adamixer-R50 (42.5 AP) tested on the COCO val set. As shown in Fig. 2, the traffic light at stage 1 gets a categorical confidence of 0.41, while from stage 2 to 5 the confidence gradually decreases to 0.21; the remote at stage 3 was wrongly classified as a cell phone, while from stage 3 to 6 the error was exacerbated from 0.26 to 0.42. We present a more detailed statistic in Section 3. This phenomenon inspires us to review the current training strategy and bring two conjectures. Firstly, the responsibility that each stage takes is unbalanced, while the supervision applied to them is analogous. An early stage could make mistakes without causing too much impact because it gets chances to be corrected later, and the later stages are more responsible for the final prediction. But during training, all of these stages are supervised in an equivalent manner, and there is no mechanism that places particular training emphasis on later stages. Secondly, due to the sequential structure of the decoder, an intermediate query refined by a stage, no matter whether this refinement brings positive or negative effects, will be cascaded to the following stages, while the query prior to the refinement never gets an opportunity to be propagated forward even though it emerges unscathed and might be more representative than the refined one. The cascading errors increase the difficulty of convergence, and the sequential structure impedes the later stages from seeing prior queries during training. Based on these intuitions, we present Query Recollection (QR) as a training strategy for query-based object detectors. It cumulatively collects intermediate queries as stages go deeper, and feeds the collected queries to the downstream stages aside from the sequential structure. At each stage, the new add-ins and the original inputs are treated independently of each other, so the attentions and losses are calculated individually. In such a manner, QR enjoys two key features: (1) The number of supervision signals per stage grows in geometric progression, so that later stages get more supervision than the former ones; for example, the sixth stage gets 32 times more supervision than the first. (2) Later stages get the chance to view outputs beyond their neighboring stage for training, which mitigates the potential impact due to cascading errors. We further discover that selectively forwarding queries to each stage, not with the entire query collection but only those from the prior two stages, can raise the number of supervision signals in a Fibonacci sequence, which halves the extra computing cost and brings even better results. We name it Selective Query Recollection (SQR). Our contributions are summarized in three folds: (1) We quantitatively investigate the phenomenon where query-based object detectors mispredict at the last decoding stage while predicting correctly at an intermediate one. (2) We attribute the overlooked phenomenon to two training limitations, and propose a simple and effective training strategy, SQR, that elegantly fits query-based object detectors. (3) We conduct experiments on Adamixer, DAB-DETR, and Deformable-DETR across various training settings that verify its effectiveness (Fig. 1). |
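The recollection scheme described above can be sketched in a few lines of Python. The list-of-query-sets interface, the seeding of the initial collection, and the exact selection rule are illustrative assumptions; the point is only the Fibonacci-style growth of independently supervised query sets.

```python
def sqr_forward(stages, init_queries):
    """Training-time decoder pass with Selective Query Recollection.

    `stages` is a list of callables; each maps one query set to a refined
    query set. Every forwarded set is refined and supervised independently,
    and each stage additionally receives the sets collected two stages
    earlier, so the number of supervised sets grows as 1, 2, 3, 5, 8, ...
    """
    collections = [[init_queries], [init_queries]]      # seed the recollection buffer
    outputs_per_stage = []
    for stage in stages:
        outputs = [stage(q) for q in collections[-1]]   # independent treatment
        outputs_per_stage.append(outputs)               # each output gets its own loss
        collections.append(outputs + collections[-2])   # recollect from two stages back
    return outputs_per_stage
```

At inference, only the plain sequential path is kept, which is why the pipeline and its speed are unchanged.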
Bandara_AdaMAE_Adaptive_Masking_for_Efficient_Spatiotemporal_Learning_With_Masked_Autoencoders_CVPR_2023 | Abstract Masked Autoencoders (MAEs) learn generalizable representations for image, text, audio, video, etc., by reconstructing masked input data from tokens of the visible data. Current MAE approaches for videos rely on random patch, tube, or frame based masking strategies to select these tokens. This paper proposes AdaMAE, an adaptive masking strategy for MAEs that is end-to-end trainable. Our adaptive masking strategy samples visible tokens based on the semantic context using an auxiliary sampling network. This network estimates a categorical distribution over spacetime-patch tokens. The tokens that increase the expected reconstruction error are rewarded and selected as visible tokens, motivated by the policy gradient algorithm in reinforcement learning. We show that AdaMAE samples more tokens from the high spatiotemporal information regions, thereby allowing us to mask 95% of tokens, resulting in lower memory requirements and faster pre-training. We conduct ablation studies on the Something-Something v2 (SSv2) dataset to demonstrate the efficacy of our adaptive sampling approach and report state-of-the-art results of 70.0% and 81.7% in top-1 accuracy on the SSv2 and Kinetics-400 action classification datasets with a ViT-Base backbone and 800 pre-training epochs. Code and pre-trained models are available at: https://github.com/wgcban/adamae.git . | 1. Introduction Self-supervised learning (SSL) aims to learn transferable representations from a large collection of unlabeled data for downstream applications (e.g., classification and detection). SSL is conducted in a two-stage framework [21], consisting of pre-training on an unlabeled dataset, and fine-tuning on a downstream task. Pre-training has been shown to improve performance [3, 11], convergence speed [11], and robustness [2, 18], and to reduce model overfitting [11, 14] on downstream tasks. Recently, masked autoencoders (MAEs) [6, 11, 16, 17, 20, 48, 57] and contrastive learning [21, 39, 47] approaches are mainly used for SSL. Figure 1. Comparison of our adaptive masking with existing random patch [11], tube [48, 52], and frame [27, 37, 42] masking for a masking ratio of 80%. Our adaptive masking approach selects more tokens from the regions with high spatiotemporal information and only a small number of tokens from the background. In MAEs, the input (image or video) is patchified and converted into a set of tokens. A small percentage (e.g., 5-10%) of these tokens, namely visible tokens, are passed through a Vision Transformer (ViT) [8]. The resulting token embeddings are then concatenated with a learnable representation for masked tokens, and are fed into a shallow decoder transformer to reconstruct masked patches. On the other hand, contrastive learning takes two augmented views of the same input and pulls them together in the embedding space, while embeddings of different inputs are pushed away [39]. MAEs have recently gained more attention over contrastive learning methods due to the inherent use of a high masking ratio, which enables simple and memory-efficient training. Mask sampling techniques are critical to the success of MAEs [11, 54]. Previous studies have investigated different sampling techniques that include random "patch" [11], "tube" [54], and "frame" [54] masking (see Fig. 1). Random patch sampling has been shown to work well compared to its counterparts in some cases [11]. However, since not all tokens have equal information, assuming a uniform probability distribution over all input tokens (for the selection of visible tokens) is sub-optimal. In other words, with these random masking strategies, the visible tokens are sampled from redundant or low-information regions instead of high-information ones, hence resulting in inaccurate reconstructions. This inhibits MAEs from learning meaningful representations, besides requiring a relatively larger number of training iterations compared to contrastive learning methods. In this paper, we propose an adaptive sampling approach that simultaneously optimizes an MAE and an adaptive token sampling network. Our approach selects patches based on their spatiotemporal information. Unlike uniform random sampling, we first estimate the categorical distribution over all input tokens using an auxiliary network, and then sample visible tokens from that distribution. Since sampling is a non-differentiable operation, we propose an auxiliary loss for optimizing the adaptive token sampling network. Our solution is motivated by the REINFORCE algorithm [56], which comes under the family of policy gradient algorithms in Reinforcement Learning (RL) [23]. We empirically show that our adaptive token sampling network leads to sampling more tokens from high spatiotemporal information regions compared to random masking techniques, as shown in Fig. 1. This efficient token allocation also enables high masking ratios (i.e., 95%) for pre-training MAEs. This ultimately reduces the GPU memory requirements and expedites pre-training while improving accuracy on downstream tasks. In summary, our contributions are: • We propose AdaMAE, a novel, adaptive, and end-to-end trainable token sampling strategy for MAEs that takes into account the spatiotemporal properties of all input tokens to sample fewer but informative tokens. • We empirically show that AdaMAE samples more tokens from high spatiotemporal information regions of the input, resulting in learning meaningful representations for downstream tasks. • We demonstrate the efficiency of AdaMAE in terms of performance and GPU memory against random "patch", "tube", and "frame" sampling by conducting a thorough ablation study on the SSv2 dataset. • We show that our AdaMAE outperforms the state-of-the-art (SOTA) by 0.7% and 1.1% (top-1) on SSv2 and Kinetics-400, respectively. |
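A compressed sketch of the adaptive sampling step and its REINFORCE-style surrogate loss is given below. The tensor shapes, the use of the mean masked-token reconstruction error as the reward, and the function names are assumptions for illustration rather than the released implementation.

```python
import torch
import torch.nn.functional as F

def sample_visible_tokens(token_logits, num_visible):
    """token_logits: (B, N) scores from the auxiliary sampling network over
    the N space-time patch tokens. Returns sampled visible-token indices and
    the categorical probabilities. Sampling itself carries no gradient."""
    probs = F.softmax(token_logits, dim=-1)               # (B, N)
    visible_idx = torch.multinomial(probs, num_visible)   # (B, V)
    return visible_idx, probs

def sampler_loss(probs, visible_idx, masked_recon_error):
    """Policy-gradient surrogate: reward the sampled tokens in proportion to
    the (detached) reconstruction error measured on the masked tokens.

    masked_recon_error: (B,) mean reconstruction error per clip."""
    log_p = torch.log(probs.gather(1, visible_idx) + 1e-8).sum(dim=1)   # (B,)
    return -(log_p * masked_recon_error.detach()).mean()
```

The MAE reconstruction loss and this sampler loss are optimized jointly, so the sampler gradually concentrates probability mass on high-information regions.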
Chen_Detecting_Human-Object_Contact_in_Images_CVPR_2023 | Abstract Humans constantly contact objects to move and perform tasks. Thus, detecting human-object contact is important for building human-centered artificial intelligence. However, there exists no robust method to detect contact between the body and the scene from an image, and there exists no dataset to learn such a detector. We fill this gap with HOT ("Human-Object conTact"), a new dataset of human-object contacts in images. To build HOT, we use two data sources: (1) We use the PROX dataset of 3D human meshes moving in 3D scenes, and automatically annotate 2D image areas for contact via 3D mesh proximity and projection. (2) We use the V-COCO, HAKE and Watch-n-Patch datasets, and ask trained annotators to draw polygons around the 2D image areas where contact takes place. We also annotate the involved body part of the human body. We use our HOT dataset to train a new contact detector, which takes a single color image as input, and outputs 2D contact heatmaps as well as the body-part labels that are in contact. This is a new and challenging task that extends current foot-ground or hand-object contact detectors to the full generality of the whole body. The detector uses a part-attention branch to guide contact estimation through the context of the surrounding body parts and scene. We evaluate our detector extensively, and quantitative results show that our model outperforms baselines, and that all components contribute to better performance. Results on images from an online repository show reasonable detections and generalizability. Our HOT data and model are available for research at https://hot.is.tue.mpg.de . | 1. Introduction Contact is an important part of people's everyday lives. We constantly contact objects to move and perform tasks. We walk by contacting the ground with our feet, we sit by contacting chairs with our buttocks, hips and back, we grasp and manipulate tools by contacting them with our hands. Therefore, estimating contact between humans and objects is useful for human-centered AI, especially for applications such as AR/VR [1, 15, 26, 30], activity recognition [22, 39, 44], affordance detection [13, 25, 34, 67], fine-grained human-object interaction detection [27, 37, 51, 57], imitation learning [38, 48, 63], populating scenes with avatars [18, 62, 64], and sanitization of spaces and objects. Figure 1. Our contact detector, trained on the HOT ("Human-Object conTact") dataset, estimates contact between humans and scenes from an image taken in the wild. Contact is important for interacting humans, yet standard in-the-wild datasets unfortunately lack such information. Our contact dataset and detector are a step towards providing this in the wild. Images are from pexels.com. In contrast to off-the-shelf detectors for segmenting humans in images, or estimating their 2D joints or 3D shape and pose, there exists no general detector of contact. Some work exists for detecting part-specific contact, e.g., hand-object [35, 45] or foot-ground [41, 49] contact, while other work estimates contact only in constrained environments [20, 46] with limited generalization. What we need, instead, is a contact detector for the entire body that estimates detailed, body-part-related contact maps in arbitrary images. To train this, we need data, but no suitable dataset exists at the moment. We address these limitations with a novel dataset and model for detecting contact between whole-body humans and objects in color images taken in the wild. Annotating contact is challenging, as contact areas are ipso facto occluded. Think of a person standing on the floor; the sole of the shoe, and the floor area it contacts, cannot be observed. A naive approach is to instrument a human with contact sensors; however, this is intrusive, cumbersome to set up and does not scale. Instead, we use two alternative data sources, with different but complementary properties: (1) We use the PROX [17] dataset, which has pseudo ground-truth 3D human meshes for real humans moving in 3D scanned scenes. We automatically annotate contact areas by computing the proximity between the 3D meshes. (2) We use the V-COCO [16], HAKE [27], and Watch-n-Patch [54] datasets, which contain images taken in the wild. We then hire professional annotators, and train them to annotate contact areas as 2D polygons in images. Although manual annotation is only approximate, 2D annotations are important because they allow scaling to large, varied, and natural datasets. This improves generalization. Note that in both cases we also annotate the body part that is involved in contact, corresponding to the body parts of the SMPL(-X) [32, 36] human model. We thus present HOT ("Human-Object conTact"), a new dataset of images with human-object contact; see examples in Fig. 2. The first part of HOT, called "HOT-Generated" (Fig. 2b), has automatic annotations, but lacks variety for human subjects and scenes. The second part, called "HOT-Annotated" (Fig. 2a), has manual annotations, but has a huge variety of people, scenes and interactions. HOT has 35,750 images with 163,534 contact annotations. We then train a new contact detector on our HOT dataset. Given a single color image as input, we want to know whether contact takes place in the image, the area in which it occurs, as well as the body part that is involved. Specifically, we detect 2D heatmaps in an image, encoding the contact location and likelihood, and classify each pixel in contact to one of SMPL(-X)'s body parts. However, training directly with HOT annotations leads to "bleeding" heatmaps and false detections. We observe that humans reason about contact by looking at body parts and their proximity to objects in their local vicinity. Therefore, we use a body-part-driven attention module that significantly boosts performance. We evaluate our detector on withheld parts of the HOT dataset. Quantitative evaluation and ablation studies show that our model outperforms the baselines, and that all components contribute to detection performance. Our body-part attention module is the key component; a visual analysis shows that it attends to meaningful image locations, i.e., on body parts and their vicinity. Qualitative results show reasonable detections on in-the-wild images. By applying our detector on datasets unseen during training, we show that the model generalizes reasonably well; see Fig. 1. Then, we show that our general-purpose full-body contact detector performs on par with existing part-specific contact detectors for the foot [41] or hand [35], meaning it could serve as a drop-in replacement for these. Moreover, we show that our contact detector helps contact-driven 3D human pose estimation on PROX data [17]. Finally, we show that our HOT dataset helps a state-of-the-art (SOTA) 3D body-scene contact estimator [20] generalize to in-the-wild images. Figure 2. Images and contact annotations for our HOT dataset: (a) "HOT-Annotated" examples; (b) "HOT-Generated" examples; (c) human body parts of SMPL-X. We show examples for both its parts, i.e., "HOT-Generated" (Sec. 3.2) and "HOT-Annotated" (Sec. 3.3). Contact annotations include the involved body part (c), shown color coded on a SMPL-X mesh. In summary, HOT takes a step towards automatic contact detection between humans and objects in color images, and our contributions are three-fold: (1) We introduce the task of full-body human-object contact detection in images. (2) To facilitate machine learning for this, we introduce the HOT dataset with 2D contact area heatmaps and the associated human part labels as annotations, using both auto-generated and manual annotations. (3) We develop a new contact detector that incorporates a body-part attention module. Experiments and ablations show the benefits of the proposed model and its components. Our data and code are available at https://hot.is.tue.mpg.de . |
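The automatic "HOT-Generated" labels come from mesh proximity, which can be sketched as below. The 2 cm threshold, the brute-force nearest-neighbor search, and the array layout are assumptions for illustration; the actual pipeline additionally projects the contact vertices into the image to obtain 2D contact areas.

```python
import numpy as np

def contact_vertex_mask(body_verts, scene_verts, threshold=0.02):
    """Mark body vertices lying within `threshold` meters of the scene mesh.

    body_verts: (Nb, 3), scene_verts: (Ns, 3), both in the same metric frame.
    Returns a boolean mask over body vertices; grouping the masked vertices
    by their SMPL(-X) part label gives per-part contact annotations."""
    dists = np.linalg.norm(
        body_verts[:, None, :] - scene_verts[None, :, :], axis=-1)  # (Nb, Ns)
    return dists.min(axis=1) < threshold
```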
Cao_CiaoSR_Continuous_Implicit_Attention-in-Attention_Network_for_Arbitrary-Scale_Image_Super-Resolution_CVPR_2023 | Abstract Learning continuous image representations is recently gaining popularity for image super-resolution (SR) because of its ability to reconstruct high-resolution images with arbitrary scales from low-resolution inputs. Existing methods mostly ensemble nearby features to predict the new pixel at any queried coordinate in the SR image. Such a local ensemble suffers from some limitations: i) it has no learnable parameters and it neglects the similarity of the visual features; ii) it has a limited receptive field and cannot ensemble relevant features in a large field which are important in an image. To address these issues, this paper proposes a continuous implicit attention-in-attention network, called CiaoSR. We explicitly design an implicit attention network to learn the ensemble weights for the nearby local features. Furthermore, we embed a scale-aware attention in this implicit attention network to exploit additional non-local information. Extensive experiments on benchmark datasets demonstrate CiaoSR significantly outperforms the existing single image SR methods with the same backbone. In addition, CiaoSR also achieves the state-of-the-art performance on the arbitrary-scale SR task. The effectiveness of the method is also demonstrated on the real-world SR setting. More importantly, CiaoSR can be flexibly integrated into any backbone to improve the SR performance. | 1. Introduction Single image super-resolution (SISR), which aims to reconstruct a high-resolution (HR) image from a low-resolution (LR) one, has been widely employed in many practical applications [24, 61, 91]. However, deep neural network (DNN)-based SISR methods are facing some limitations in real-world scenarios with arbitrary scales. For example, camera users may want to enhance the digital zoom quality by super-resolving a photo or a video to continuous arbitrary scales. Figure 1. Comparison of different backbones and implicit models. Our proposed implicit neural network on RDN [88] has better performance than SwinIR [40]. Check Section 5 for details. Most existing DNN-based SISR methods [40, 44, 86] need to train a series of models for all different scales separately. However, it can be impractical to store all these models on the device due to limited storage and computing power. Alternatively, arbitrary-scale image SR methods [13, 28, 39] aim to train a single network for all scales in a continuous manner. Most existing SISR methods [40, 44, 86] consist of a DNN and an upsampling module (e.g., pixel shuffling [60]) at a discrete scale. While substantial progress has been made on the DNN backbones for SR, there have been few attempts to study the upsampling module. A natural question to ask is: does pixel shuffling hinder the potential of SR models? One limitation of the pixel shuffling module is that it cannot synthesize SR images at large unseen and continuous scales. To tackle this, one can treat synthesizing different-scale SR images as a multi-task learning problem, and train a specific upsampling module for each scale [44]. However, these tasks are dependent and highly inter-related. Neglecting the correlation of different-scale SR tasks may lead to discrete representations and limited performance. Under a certain network capacity, training a model on multiple tasks may sacrifice the performance, or at best achieve comparable performance, on each task. These disadvantages limit its applicability and flexibility in real-world scenarios. To address these issues, most existing arbitrary-scale SR methods [13, 28, 39] replace the upsampling module with an implicit neural function and boost the performance. These methods predict an RGB value at the query point in an image by ensembling features within a local region. Figure 2. Comparisons of different attention mechanisms. (a) Self-attention can predict pixel features on the grid, but it cannot be directly used in arbitrary-scale SR without considering coordinates. (b) Most existing methods can be treated as coordinate-based implicit attention since they calculate the distance between a key and a query coordinate, and then use a function g to aggregate it with the value features. However, these methods ignore the distance between the features. (c) Our implicit attention not only considers the coordinate distance, but also the distance among features with visual information. However, the local ensemble methods have limitations in the ensemble weights and insufficient information (e.g., non-local information). The ensemble weights are often calculated by the area of the rectangle between the query point and each nearest point, which is equivalent to bilinear interpolation. Thus, those methods cannot adaptively ensemble local features since there is no trainable parameter. These weights are only related to the coordinates of the local features, but independent of the local features. Ignoring the visual content of the local features loses information and results in blurry artifacts. It is important and necessary to design a new implicit network to predict the weights and exploit more information in the local ensemble. In this paper, we propose a novel implicit attention model to enhance arbitrary-scale SR performance. Specifically, we use our attention to predict the ensemble weights by considering both the similarity and the coordinate distance of local features, as shown in Figure 2. Based on such learnable weights, the implicit model can adaptively aggregate local features according to different inputs. To enrich the information further, we introduce an attention in our implicit attention, which helps discover more features in a larger receptive field. Our contributions are summarized as follows: • We propose a novel continuous implicit attention-in-attention network for arbitrary-scale image SR, called CiaoSR. Different from most existing local ensemble methods, our method explicitly learns the ensemble weights and exploits scale-aware non-local information. • Our CiaoSR can be flexibly integrated into any backbone, allowing the network to super-resolve an image at arbitrary scales and improving the SR performance, as shown in Figure 1. • Extensive experiments demonstrate CiaoSR achieves state-of-the-art performance in both SISR and arbitrary-scale SR tasks. Besides, our CiaoSR generalizes well to both in-scale and out-of-scale distributions. Last, we extend our method to real-world SR settings to synthesize arbitrary-scale images. 2. Related Work Single image super-resolution (SISR). SISR aims to synthesize high-resolution (HR) images from low-resolution (LR) images. Compared with early DNN-based SR methods [6–8, 23, 27, 50, 66, 67], methods in recent years build on deep convolutional neural networks (CNNs) to improve the performance, such as SRCNN [19], SRResNet [38], EDSR [44], RDN [88] and RCAN [86]. To further improve SR performance, some methods design CNNs with residual blocks [9, 34, 81], dense blocks [75, 88, 89] and others [14, 16, 18, 20, 21, 25, 31, 32, 35, 37, 41–43, 59, 65, 70–72, 77, 83, 84, 87, 90]. In addition, some SR methods are built on attention mechanisms [68], such as channel attention [17, 56, 86], self-attention (IPT [11], SwinIR [40], HAT [12]), and non-local attention [45, 48]. However, most methods focus on one specific scale, which limits their applicability and flexibility at arbitrary scales. Arbitrary-scale super-resolution. To tackle this problem, the more practical setup of arbitrary-scale SR has very recently been considered, which aims to super-resolve images with arbitrary scales using a single model. MetaSR [28] makes the first attempt with an arbitrary-scale meta-upscale module. To improve the performance, many arbitrary-scale methods [64, 73] have been proposed. With the help of implicit neural representations [2, 10, 15, 22, 33, 51, 52, 55, 57, 58, 62, 63], LIIF [13] predicts the RGB value at an arbitrary query coordinate by taking an image coordinate and backbone features around the coordinate. The features are extracted by single image super-resolution methods, e.g., EDSR [44], RDN [88] and SwinIR [40]. To improve the performance, existing methods [39, 78] propose to integrate more features into SR models. For example, LTE [39] proposes a local texture estimator that characterizes image textures in the Fourier space. UltraSR [78] integrates spatial coordinates and periodic encoding in the implicit network. These methods use bilinear interpolation to ensemble nearby features. However, such an ensemble has no learnable parameters. Recently, ITSRN [79] learns the weights by feeding the coordinate distance and a scale token into a mapping. Most methods learn the ensemble without the feature similarity. Figure 3. The architecture of our continuous implicit attention-in-attention network. Given an LR image, the encoder extracts features as latent codes. For a query point, we have a query feature and key features close to the query point, and the scale-aware non-local attention module extracts non-local features as the value. Last, we use the triple of query, key and value to predict the RGB value at the query point. 3. Preliminary and Motivation Let $I$ be a continuous image, and $\mathbf{x}$ be a 2D coordinate of a pixel in the image $I$. Formally, given a 2D coordinate $\mathbf{x}$ in the continuous image domain and a latent code $\mathbf{Z}$ extracted by deep neural networks, the RGB value can be predicted by an implicit image function defined as follows, $I(\mathbf{x}) = f(\mathbf{Z}, \mathbf{x})$, (1) where the implicit image function $f$ can be parameterized by a multilayer perceptron (MLP). Note that this implicit image function is shared by all images. For recent SISR methods [40, 86], the implicit image function $f$ can be implemented as a PixelShuffle [60] with convolutions at a specific scale. However, these methods are independent of the coordinates, leading to the issue that they only adapt to the specific scale and cannot flexibly synthesize arbitrary-scale images. To predict the RGB value $I_q$ at an arbitrary query coordinate $\mathbf{x}_q$, most existing methods [13, 39, 79] propose to compute the RGB value at coordinate $\mathbf{x}_q$ by directly ensembling its neighborhood information, $I_q := I(\mathbf{x}_q) = \sum_{(i,j) \in \mathcal{I}} w_{i,j} \cdot f(\mathbf{Z}^{*}_{i,j}, \mathbf{x}_q - \mathbf{x}^{*}_{i,j})$, (2) where $\mathcal{I}$ is the local region centered at the query coordinate $\mathbf{x}_q$ (e.g., $\mathcal{I}$ can be the top-left, top-right, bottom-left and bottom-right coordinates), and $w_{i,j}$ is the weight of the neighboring pixel $\mathbf{x}^{*}_{i,j}$, which is calculated w.r.t. the area of the rectangle between $\mathbf{x}_q$ and $\mathbf{x}^{*}_{i,j}$, as shown in Figure 4. However, the performance improvement is limited because the ensemble weight $w_{i,j}$ is purely based on the coordinates. The visual similarities are completely ignored in the weight calculation. Besides, these methods only consider the nearest latent codes, leading to a limited receptive field. In this paper, our goal is to learn the weights adaptively by leveraging both visual information and coordinate information. 4. Proposed Method The above local ensemble in Eqn. (2) has a similar form to the attention mechanism. Specifically, the weights $w_{i,j}$ in Eqn. (2) can be modeled as an attention map, and the latter term $f$ can be the value in the attention mechanism. Such an attention map is related to the similarities of the latent code and coordinate, and it can be calculated using both query and key, which can integrate the latent code and coordinate information. In this sense, attention models can be used to mitigate the drawbacks of previous methods (i.e., weights calculated purely based on coordinates without considering any visual information) by learning soft weights from both visual and coordinate information. However, the use of attention in the implicit function is non-trivial because standard self-attention [68] and neighborhood attention [26] mechanisms are based on visual features and not conditioned on the coordinate information. To exploit continuous representation learning from both the coordinate information and the visual features, we propose a new attention for arbitrary-scale SR. Continuous implicit attention-in-attention. The architecture of the
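To make the contrast concrete, here is a toy version of a local ensemble whose weights come from both feature similarity and coordinate distance rather than from bilinear areas alone. The additive score, the temperature, and the tensor shapes are illustrative assumptions, not CiaoSR's actual attention module.

```python
import torch
import torch.nn.functional as F

def learned_ensemble_weights(q_feat, k_feats, q_coord, k_coords, temperature=1.0):
    """q_feat: (C,) query latent code; k_feats: (K, C) neighboring latent codes;
    q_coord: (2,) query coordinate; k_coords: (K, 2) neighbor coordinates.
    Returns softmax weights w_{i,j} over the K neighbors."""
    feat_sim = k_feats @ q_feat                            # visual similarity, (K,)
    coord_dist = ((k_coords - q_coord) ** 2).sum(dim=-1)   # squared distance, (K,)
    return F.softmax((feat_sim - coord_dist) / temperature, dim=0)

def local_ensemble(per_neighbor_rgb, weights):
    """Eqn. (2): weighted sum of the K per-neighbor RGB predictions (K, 3)."""
    return (weights[:, None] * per_neighbor_rgb).sum(dim=0)
```

Replacing `learned_ensemble_weights` with the bilinear-area rule recovers the coordinate-only ensemble criticized in the text.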
Frosio_The_Best_Defense_Is_a_Good_Offense_Adversarial_Augmentation_Against_CVPR_2023 | Abstract Many defenses against adversarial attacks (e.g. robust classifiers, randomization, or image purification) use countermeasures put to work only after the attack has been crafted. We adopt a different perspective to introduce A5 (Adversarial Augmentation Against Adversarial Attacks), a novel framework including the first certified preemptive defense against adversarial attacks. The main idea is to craft a defensive perturbation to guarantee that any attack (up to a given magnitude) towards the input in hand will fail. To this aim, we leverage existing automatic perturbation analysis tools for neural networks. We study the conditions to apply A5 effectively, analyze the importance of the robustness of the to-be-defended classifier, and inspect the appearance of the robustified images. We show effective on-the-fly defensive augmentation with a robustifier network that ignores the ground truth label, and demonstrate the benefits of robustifier and classifier co-training. In our tests, A5 consistently beats state-of-the-art certified defenses on MNIST, CIFAR10, FashionMNIST and Tinyimagenet. We also show how to apply A5 to create certifiably robust physical objects. Our code at https://github.com/NVlabs/A5 allows experimenting on a wide range of scenarios beyond the man-in-the-middle attack tested here, including the case of physical attacks. | 1. Introduction Since Deep Neural Networks (DNNs) have been found vulnerable to adversarial attacks [11, 30], researchers have studied various protection strategies [4, 12, 20, 42, 43]. For instance, adversarial training [11, 30] generates attacks while asking a DNN for the correct output in training; it is simple, partially effective and widely adopted. Certified methods (e.g., IBP [12], CROWN [43], CROWN-IBP [42]) go a step further by estimating correct (although often pessimistic) output bounds (Fig. 1, a) used for training. Adversarial training regularizes the classification landscape against the attacks (Fig. 1, b), but high protection often produces a loss in clean accuracy. Other partially effective defenses are based on randomness [35, 35] or removal of the adversarial signal [13, 18, 27, 40], by moving the input back to the space of natural (non-attacked) data before classification (Fig. 1, c). All the aforementioned strategies activate the defense mechanism only after the attack has been crafted. However, when dealing with adversarial attacks, the first actor to move has a significant advantage. For instance, perturbing an image can avoid online person identification and preserve privacy [6, 19, 26]. Acting first is particularly suitable against Man in the Middle (MitM) attacks that may practically arise in automotive [32], audio processing [13, 22, 38], or while communicating with a remote machine [39]. Figure 1. (a) An input x ∈ y* is correctly classified. The grey box (certified bounds [12, 42, 43]) shows that, under an attack δx_A, ||δx_A||_∞ < ϵ_A, misclassification ((x + δx_A) ∈ y_0) is possible. (b) Adversarial training [4, 20] creates robust DNNs with regular classification landscapes: misclassification is less likely. (c) Image purification [27, 40] moves the attacked input x + δx_A by δx_P back to correct classification. (d) A5 preemptively moves x into a non-attackable position. The original CIFAR10 image classified as airplane (69.8% confidence) can be misclassified under attack (ship, 100% confidence). Once robustified through A5 (right), misclassification does not occur anymore. Table 1. Notation for the data x that are the inputs of the classifier C; we adopt an equivalent notation for physical objects w: v, a vector; v_j, the j-th element of a vector; C, classifier; R, robustifier; x, data; δx_A, attacking perturbation; δx_D, defensive perturbation; x + δx_A, data under attack; x + δx_D, robustified data; x + δx_D + δx_A, robustified data under attack; ϵ_D, defense magnitude (||δx_D||_p < ϵ_D); ϵ^R_A, attack magnitude while training R (||δx_A||_p < ϵ^R_A); ϵ_A, attack magnitude while testing (||δx_A||_p < ϵ_A); ϵ^C_A, attack magnitude while training C (||δx_A||_p < ϵ^C_A). Our idea espouses this line of reasoning: we investigate a novel way to augment the data to preemptively certify that it cannot be attacked (Fig. 1, d). This fits in a recent research area that up to now has received less attention than adversarial training. Researchers explored image and real object augmentation to guarantee a high recognition rate in presence of noise or geometric distortions, but not in adversarial scenarios [24]. Encryption schemes coupled with deep learning [1, 28] or watermarking [16] only partially protect against MitM attacks. The few existing preemptive robustification methods [23] include a procedure [17] that first runs a classifier on clean data and achieves preemptive robustification against MitM attacks through iterative optimization; these are partially effective and do not provide any certification. Our novel framework encompasses most of the aforementioned cases while also introducing for the first time the concept of certified robustification. More in detail, our manifold contributions are: (i) We introduce A5 (Adversarial Augmentation Against Adversarial Attacks), a comprehensive framework for preemptive augmentation to make data and physical objects certifiably robust against MitM adversarial attacks. As far as we know, this is the first time certified defense is achieved in this way. Since we provide certified robustness, we guarantee protection against any form of white, grey or black box attack. (ii) We test different flavours of A5 on standard datasets. By doing so, we study the connection between the robustness of the legacy classifier, the magnitude of the defensive augmentation, and the protection level delivered by A5. We show A5 achieving state-of-the-art certified protection against MitM attacks by training a robustifier DNN coupled with the legacy classifier, and even better results for co-training of the robustifier and classifier. We perform a critical, visual inspection of the robustified images to answer an interesting theoretical question (how does a non-attackable image look like?) and potentially provide directions for the acquisition of inherently robust images. (iii) Using Optical Character Recognition (OCR) as an example, we show the application of A5 for the design of certifiably robust physical objects, which extends [24] to the case of certified defense against MitM attacks. (iv) We share our code at https://github.com/NVlabs/A5 , to allow replicating our results or testing A5 in scenarios not considered here, for instance on other datasets or for protection against physical adversarial attacks (e.g., adversarial patches). |
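The preemptive defense can be summarized as optimizing a bounded defensive perturbation so that a certified bound still predicts the correct class under any admissible attack. The sketch below treats the bound computation as a black-box placeholder (`certified_margin_lower_bound`), and the hyper-parameters, interface and loss form are assumptions; real implementations rely on tools such as IBP/CROWN-style analysis.

```python
import torch

def robustify_input(x, y, classifier, certified_margin_lower_bound,
                    eps_d=0.1, eps_a=8 / 255, steps=50, lr=0.05):
    """Optimize a defensive perturbation delta_d with ||delta_d||_inf < eps_d
    so that the certified lower bound on the true-class margin, taken over all
    attacks ||delta_a||_inf < eps_a, becomes positive."""
    delta_d = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta_d], lr=lr)
    for _ in range(steps):
        x_def = (x + delta_d.clamp(-eps_d, eps_d)).clamp(0.0, 1.0)
        margins = certified_margin_lower_bound(classifier, x_def, y, eps_a)  # (B, n_cls-1)
        loss = torch.relu(-margins).mean()   # push every certified margin above zero
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta_d.detach().clamp(-eps_d, eps_d)).clamp(0.0, 1.0)
```

A robustifier network amounts to amortizing this per-input optimization into a single forward pass.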
Dou_GaitGCI_Generative_Counterfactual_Intervention_for_Gait_Recognition_CVPR_2023 | Abstract Gait is one of the most promising biometrics and aims to identify pedestrians from their walking patterns. However, prevailing methods are susceptible to confounders, resulting in networks that hardly focus on the regions that reflect effective walking patterns. To address this fundamental problem in gait recognition, we propose a Generative Counterfactual Intervention framework, dubbed GaitGCI, consisting of Counterfactual Intervention Learning (CIL) and Diversity-Constrained Dynamic Convolution (DCDC). CIL eliminates the impact of confounders by maximizing the likelihood difference between factual/counterfactual attention, while DCDC adaptively generates sample-wise factual/counterfactual attention to efficiently perceive the sample-wise properties. With matrix decomposition and a diversity constraint, DCDC guarantees the model to be efficient and effective. Extensive experiments indicate that the proposed GaitGCI: 1) could effectively focus on the discriminative and interpretable regions that reflect the gait pattern; 2) is model-agnostic and could be plugged into existing models to improve performance with nearly no extra cost; 3) efficiently achieves state-of-the-art performance in arbitrary scenarios (in-the-lab and in-the-wild). | 1. Introduction Gait recognition aims to utilize walking patterns to identify pedestrians without explicit cooperation, and is thus drawing rising attention. Current gait recognition research focuses on in-the-lab [53, 69] and in-the-wild scenarios [73, 76] for theoretical analysis and practical application, respectively. The key to addressing gait recognition is to fully capture the effective visual cues of the gait patterns, i.e., the regions close to the body boundary [39, 60], for both in-the-lab and in-the-wild scenarios. Figure 1. Network attention comparison. From top to down: silhouette, existing method, and proposed GaitGCI. The confounders make the existing model collapse into suboptimal attention regions. By contrast, GaitGCI could effectively focus on the discriminative and interpretable regions (i.e., close to the boundary [39, 60]) that could represent walking patterns. However, the attention analysis [4, 59, 68] on prevailing methods in Fig. 1 indicates that the existing methods hardly capture the effective gait patterns and tend to collapse into suboptimal attention regions, which would deteriorate the gait representation. We argue that this phenomenon is caused by the network's susceptibility to confounders [21, 32], which may provide shortcuts [21, 32] for the models rather than the valid gait-related patterns. For example, the attention regions of prevailing methods are related to viewpoints [64] or walking conditions [30]. As shown in Fig. 1, the prevailing network tends to focus on the head under the front view and the head/feet under the side view. However, the majority of the gait-related information close to the boundary is neglected. Therefore, how to alleviate the impact of confounders is a fundamental problem for modeling discriminative and interpretable gait representations. Motivated by this, we propose a generative counterfactual intervention framework, named GaitGCI, consisting of Counterfactual Intervention Learning (CIL) and Diversity-Constrained Dynamic Convolution (DCDC). The core idea of CIL is to leverage counterfactual-based causal inference to alleviate the impact of confounders and mine the direct causality link between factual attention and prediction. Figure 2. GaitGCI could achieve state-of-the-art performance under arbitrary scenarios, including in-the-lab scenarios [53, 69] and in-the-wild scenarios [75, 76]. Specifically, we first construct a causal analysis tool (i.e., a Structural Causal Model [47]) to formulate the causality links among the input, attention, and prediction. Then, the training objective is modified from maximizing the original likelihood that contains confounders to maximizing the likelihood difference between the factual/counterfactual attention, which forces the network to focus on the direct causality between the factual attention and the prediction instead of collapsing into the confounders. Further, the previous network to produce factual attention is static, and the mainstream counterfactual is a pre-defined distribution [11, 49] (e.g., random or normal distribution), which limits the ability of the network to perceive the sample-wise properties. Therefore, we propose a Diversity-Constrained Dynamic Convolution (DCDC) to efficiently produce the sample-adaptive kernel, which aims to generate factual/counterfactual attention. Specifically, we first decouple the dynamic convolution [57, 67] into a sample-agnostic convolution and a sample-adaptive convolution. Then, to improve the efficiency, we apply matrix decomposition to decompose the sample-adaptive convolution into two bases and a generative affinity matrix, which transforms dense convolution integration in a high-dimensional space into the aggregation of bases in a low-dimensional space. Besides, to guarantee the representation power, we propose a rank-based diversity constraint on the two bases of the sample-adaptive convolution. By alleviating the impact of confounders, the proposed method: (1) could effectively focus on the discriminative and interpretable regions instead of collapsing into the confounders; (2) is model-agnostic and could boost the performance of prevailing methods; (3) could efficiently achieve state-of-the-art performance under arbitrary scenarios (in-the-lab and in-the-wild), as shown in Fig. 2. The main contributions are summarized as follows: • We present counterfactual intervention learning (CIL) to alleviate the impact of confounders. CIL could effectively force the model to focus on the regions that reflect gait patterns by maximizing the likelihood difference between factual/counterfactual attention. • We present diversity-constrained dynamic convolution (DCDC) to generate factual/counterfactual attention in a sample-adaptive manner. Matrix decomposition and the diversity constraint guarantee efficiency and representation power, respectively. • Extensive experiments demonstrate that the proposed framework efficiently achieves state-of-the-art performance in arbitrary scenarios. Besides, the proposed methods could serve as a plug-and-play module to boost the performance of prevailing models. |
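A compact sketch of the counterfactual-intervention objective is given below: the prediction made with the generated counterfactual attention is subtracted from the factual prediction before the classification loss, so confounder effects common to both branches cancel. The element-wise attention, the function names, and the plain cross-entropy formulation are assumptions for illustration only.

```python
import torch.nn.functional as F

def counterfactual_intervention_loss(features, factual_attn, counterfactual_attn,
                                     labels, classifier_head):
    """Supervise the *difference* between factual and counterfactual logits,
    i.e., the effect attributable to the factual attention itself."""
    logits_factual = classifier_head(features * factual_attn)
    logits_counterfactual = classifier_head(features * counterfactual_attn)
    effect = logits_factual - logits_counterfactual
    return F.cross_entropy(effect, labels)
```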
Chen_Understanding_and_Improving_Visual_Prompting_A_Label-Mapping_Perspective_CVPR_2023 | Abstract We revisit and advance visual prompting ( VP), an input prompting technique for vision tasks. VP can reprogram a fixed, pre-trained source model to accomplish downstream tasks in the target domain by simply incorporating univer-sal prompts (in terms of input perturbation patterns) into downstream data points. Yet, it remains elusive why VP stays effective even given a ruleless label mapping ( LM) between the source classes and the target classes. Inspired by the above, we ask: How is LM interrelated with VP? And how to exploit such a relationship to improve its accuracy on target tasks? We peer into the influence of LM on VP and provide an affirmative answer that a better ‘quality’ of LM (assessed by mapping precision and explanation) can consistently improve the effectiveness of VP . This is in con-trast to the prior art where the factor of LM was missing. To optimize LM, we propose a new VP framework, termed ILM-VP (iterative l abel m apping-based v isual p rompting), which automatically re-maps the source labels to the target labels and progressively improves the target task accuracy of VP . Further, when using a contrastive language–image pretrained (CLIP) model for VP , we propose to integrate an LM process to assist the text prompt selection of CLIP and to improve the target task accuracy. Extensive exper-iments demonstrate that our proposal significantly outper-forms state-of-the-art VP methods. As highlighted below, we show that when reprogramming an ImageNet-pretrained ResNet-18 to 13 target tasks, ILM-VP outperforms base-lines by a substantial margin, e.g., 7.9% and 6.7% accuracy improvements in transfer learning to the target Flowers102 and CIFAR100 datasets. Besides, our proposal on CLIP-based VP provides 13.7% and 7.1% accuracy improvements on Flowers102 and DTD respectively. Code is available at https://github.com/OPTML-Group/ILM-VP . | 1. Introduction When learning new knowledge, humans typically start to compare and connect it with the knowledge that they were familiar with. The same idea is also applied in ML. For ex-ample, in the ‘pretraining + finetuning’ paradigm, an ML model ( e.g., deep neural network or DNN) is first trained on a (usually large) source dataset. When a relevant down-Fig. 1. Overview of VP pipelines (prior art [1,2] and our proposal termed ILM-VP) and accuracy improvement achieved by ILM-VP on target image classification tasks at-a-glance. Generally speaking, VP aims to generate a universal input perturbation template ( i.e., ‘visual prompt’) and lever-age a source-target LM (label mapping) in order to drive the fixed source model ( e.g., pretrained on ImageNet-1K) to conduct a target task ( e.g., Flowers102 image classification). Compared to the prior art, our proposal (ILM-VP) couples the design of LM with VP training. The resulting LM-VP co-design improves target task accuracy across a variety of target image classification tasks using a fixed ImageNet-pretrained source model. stream task is present, the pre-trained model is then fine-tuned over the target dataset. This learning paradigm has been predominant in the classical transfer learning [3–8] as well as in the recent deep representation learning [9–13]. However, finetuning the pre-trained model requires ei-ther partial or entire model modifications. If the pre-trained model is of large size, then it becomes too costly to store a modified copy of the pre-trained model for each downstream task. 
In contrast, visual prompting ( VP) (see Fig. 1 ), also known as model reprogramming or adversar-ial reprogramming, provides a new alternative to finetun-ing [1, 2, 14–17]. Instead of directly modifying the pre-trained source model, VP integrates an input transforma-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 19133 tionand/or an output transformation to reprogram the fixed source model to accomplish a new target task; see an il-lustration of existing VP framework in Fig. 1 . The input transformation is typically realized by incorporating (data-agnostic) input perturbations ( i.e., prompts) into input sam-ples, and the output transformation is given by a function that maps source labels to target labels, known as label map-ping ( LM). Recently, VP has shown great promise in var-ious applications of foundation models, ranging from pre-trained vision models [1, 14, 15, 17–20] to language-vision models [2, 21–23]. The idea of prompt learning originated from in-context learning or prompting in natural language processing (NLP) [24–26]. However, when it is introduced to the vision do-main [1, 2], new questions arise. First , the recent work [1,14,27] showed that VP remains powerful even if the tar-get task largely deviates from the source domain. For exam-ple, a new performance record on target medical datasets is achieved in [1] when using VP to reprogram the fixed, ImageNet pre-trained source model. The ‘mystery’ in this example is that LM is conducted between two seemingly irrelevant source and target domains. Despite the lack of interpretability, VP can still leverage such connected source labels and the source model to effectively predict target data points. This raises the first open question: What is the ratio-nality behind LM and how to explore its influence on VP? Second , unlike prompt learning in the NLP domain, input prompts in the vision domain are typically given by ‘noisy’ perturbations to image pixels; see illustration in Fig. 1 . To-gether with the lack of interpretability of LM, the second open question is: How to interpret LM and the seemingly random perturbation pattern in VP? As mentioned above, the lack of understanding of LM and the poor interpretability of VP drive our stud-ies in this work. We develop a new visual prompting framework, termed ILM-VP (iterative l abel m apping-based visual p rompting), which provides an interactive and ex-plainable design between LM and prompt learning ( i.e., input prompt generation); see Fig. 1 for the schematic overview. Our proposal can automatically adjust LM be-tween the source domain and the target domain by taking both mapping precision and explanation into consideration, and can leverage the optimized LM to further improve the accuracy and the explainability of prompt learning. Al-though some prior work [1,17,27] attempted to improve the quality of LM as well as the overall performance of VP, they are different from our proposal in two major aspects. First , none of the prior work co-designed LM and VP. For exam-ple, the prior art [1] used a pre-prompt prediction frequency to determine the LM function. However, we find signifi-cant inconsistency between the pre-prompt and post-prompt prediction frequency of the same source model, which ex-plains the sub-optimality of the current VP methods due tothe lack of mapping precision. 
Second, to the best of our knowledge, VP is still treated as a 'black box' in the prior work. Yet, our design can provide graceful visual explanations to the underlying mechanisms of VP. Third, we for the first time show that LM can provide a unified solution to improving the accuracy of VP to re-purpose both vision and language-vision source models. Our contributions are unfolded below. (1) We revisit the LM problem in VP and uncover the deficiencies of existing LM methods: the lack of mapping precision and the lack of explanation. (2) Given the importance of LM, we propose the first LM-VP co-design framework, termed ILM-VP, through a novel bi-level optimization viewpoint. (3) Beyond LM for vision models, we show that LM can also be generalized to assist the text prompt selection of CLIP (contrastive language–image pretraining) and to improve the target task accuracy of VP using the CLIP model. (4) We empirically demonstrate the accuracy and explanation merits of our proposal across multiple source models and target datasets. |
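As a rough illustration of the iterative label mapping described above, the following sketch alternates a frequency-based LM update with a visual-prompt update. It is not the authors' released code; the names (`source_model`, `prompt`, `remap_labels`, `prompt_step`) are hypothetical placeholders for a generic PyTorch setup.

```python
import torch
import torch.nn.functional as F

def remap_labels(source_model, prompt, loader):
    """One LM update: map each target class to the source class it is most
    often predicted as when the current prompt is applied."""
    counts = {}  # target class -> histogram over source classes
    with torch.no_grad():
        for images, targets in loader:
            src_pred = source_model(images + prompt).argmax(dim=1)
            for t, s in zip(targets.tolist(), src_pred.tolist()):
                counts.setdefault(t, {}).setdefault(s, 0)
                counts[t][s] += 1
    return {t: max(hist, key=hist.get) for t, hist in counts.items()}

def prompt_step(source_model, prompt, optimizer, images, targets, mapping):
    """One visual-prompt update under the current label mapping."""
    mapped = torch.tensor([mapping.get(int(t), 0) for t in targets])
    loss = F.cross_entropy(source_model(images + prompt), mapped)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Interleaving the two steps (re-estimating the mapping periodically while the prompt is trained) is what gives the alternating, bi-level flavor described in the text.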
Inoue_Towards_Flexible_Multi-Modal_Document_Models_CVPR_2023 | Abstract Creative workflows for generating graphical documents involve complex inter-related tasks, such as aligning ele-ments, choosing appropriate fonts, or employing aestheti-cally harmonious colors. In this work, we attempt at build-ing a holistic model that can jointly solve many different design tasks. Our model, which we denote by FlexDM , treats vector graphic documents as a set of multi-modal elements, and learns to predict masked fields such as ele-ment type, position, styling attributes, image, or text, using a unified architecture. Through the use of explicit multi-task learning and in-domain pre-training, our model can better capture the multi-modal relationships among the different document fields. Experimental results corroborate that our single FlexDM is able to successfully solve a multitude of different design tasks, while achieving performance that is competitive with task-specific and costly baselines.1 | 1. Introduction Vector graphic documents are composed of diverse multi-modal elements such as text or images and serve as the dominant medium for visual communication today. The graphical documents are created through many different design tasks, e.g., filling in a background image, chang-ing font and color, adding a decoration, or aligning texts. While skilled designers perform tasks based on their de-sign knowledge and expertise, novice designers often strug-gle to make decisions to create an effective visual presen-tation. To assist such novice designers, interactive frame-works equipped based on models that learn design knowl-edge from completed designs have been proposed [12, 38]. Our present work proposes models that can be used in such systems, with a particular focus on developing holistic mod-els that can flexibly switch between design tasks. Design tasks are characterized by 1) the variety of 1Please find the code and models at: https://cyberagentailab.github.io/flex-dm .possible actions and 2) the complex interaction between multi-modal elements. As discussed above, a designer can make almost any edit to the appearance of a vector graphic document, ranging from basic layout to nuanced font styling. While there have been several studies in solv-ing specific tasks of a single modality, such as layout gen-eration [3,13,23,26,30], font recommendation [56], or col-orization [22,40,54], in realistic design applications, we be-lieve it is essential to build a flexible model that can consider multiple design tasks in a principled manner to make auto-mated decisions on creative workflow. In this work, we refer to a certain attribute of an element as a field and formulate the various design tasks as a uni-fiedmasked field prediction , which is inspired by the recent masked autoencoders [9,15] and multi-task models [19,36]. The key idea is to utilize masking patterns to switch among different design tasks within a single model; e.g., element filling can be formulated as predicting all the fields of the newly added element. Our flexible document model, de-noted by FlexDM , consists of an encoder-decoder architec-ture with a multi-modal head dedicated to handling different fields within a visual element. After pre-training with ran-dom masking strategy, we train FlexDM by explicit multi-task learning where we randomly sample tasks in the form of masking patterns corresponding to the target design task. We illustrate in Figs. 
1 and 2 an overview of FlexDM, with emphasis on the correspondence between design tasks and masking patterns. Through our carefully designed experiments, we show that our proposed FlexDM performs favorably against baselines in five design tasks using the Rico [7] and Crello [52] datasets. We also study how different modeling approaches affect the final task performance in the ablation study. Finally, we apply our framework to several previously studied design tasks with minimal modifications and show that the performance matches or even surpasses the current task-specific approaches. Figure 1. Examples of the design tasks that can be solved by our proposed FlexDM model, which is designed to process a vector graphic document consisting of an arbitrary number of elements (e.g., text). Each element is composed of multi-modal fields indicating its attribute properties (e.g., text content, position, font color, etc.). Our contributions can be summarized in the following. • We formulate multiple design tasks for vector graphic documents by masked multi-modal field prediction in a set of visual elements. • We build a flexible model to solve various design tasks jointly in a single Transformer-based model via multi-task learning. • We empirically demonstrate that our model constitutes a strong baseline for various design tasks. |
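To make the "design task = masking pattern" idea concrete, here is a minimal sketch (not the released FlexDM code) of how a task could be expressed as a set of masked fields over a list of element dictionaries; the task and field names are illustrative.

```python
# A design task is just a masking pattern over per-element fields; the model
# then predicts the masked fields.
MASK = "[MASK]"

def apply_task_mask(document, task):
    """Return a copy of `document` (a list of element dicts) with the fields
    required by `task` replaced by a mask token."""
    task_to_fields = {
        "layout_generation": ["pos", "size"],
        "text_filling": ["text"],
        "font_color_styling": ["font", "color"],
        "image_filling": ["image"],
    }
    masked = []
    for element in document:
        element = dict(element)
        if task == "element_filling":
            fields = [k for k in element if k != "type"]  # mask every field but type
        else:
            fields = task_to_fields[task]
        for f in fields:
            if f in element:
                element[f] = MASK
        masked.append(element)
    return masked

doc = [{"type": "Text", "pos": (150, 30), "size": (200, 90),
        "text": "Happy\nHolidays!", "font": "Arial", "color": (210, 220, 100)}]
print(apply_task_mask(doc, "text_filling"))
```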
Cote_The_Differentiable_Lens_Compound_Lens_Search_Over_Glass_Surfaces_and_CVPR_2023 | Abstract Most camera lens systems are designed in isolation, sepa-rately from downstream computer vision methods. Recently, joint optimization approaches that design lenses alongside other components of the image acquisition and process-ing pipeline—notably, downstream neural networks—have achieved improved imaging quality or better performance on vision tasks. However, these existing methods optimize only a subset of lens parameters and cannot optimize glass materials given their categorical nature. In this work, we develop a differentiable spherical lens simulation model that accurately captures geometrical aberrations. We propose an optimization strategy to address the challenges of lens design—notorious for non-convex loss function landscapes and many manufacturing constraints—that are exacerbated in joint optimization tasks. Specifically, we introduce quan-tized continuous glass variables to facilitate the optimization and selection of glass materials in an end-to-end design con-text, and couple this with carefully designed constraints to support manufacturability. In automotive object detection, we report improved detection performance over existing de-signs even when simplifying designs to two-or three-element lenses, despite significantly degrading the image quality. | 1. Introduction The prevailing design paradigm for typical optical sys-tems is to conceive them in isolation by use of simplified im-age quality metrics such as spot size [ 28]. However, achiev-ing ideal imaging properties or optimal performance on com-puter vision tasks generally requires a more comprehensive approach that includes the remaining parts of the image acquisition and processing chain, in particular the sensor, image signal processing, and downstream neural networks. Over the years, many works have addressed the joint de-sign of simple optical systems such as diffractive optical elements (DOEs) [ 3,20,27,31]. These works approach joint optics design by simplifying the design to a single phase plate that allows for a differentiable paraxial Fourier image formation model, optimizable via stochastic gradi-Scene [mm]S-LAL12 S-LAH92 [mm]S-TIM1 S-LAH96 0° 5° 10° 15° 20° 25° 0° 5° 10° 15° 20° 25° (a) Baseline Lens (b) Optimized LensFigure 1. We introduce a differentiable lens simulation model and an optimization method to optimize compound lenses specifically for downstream computer vision tasks, and apply them to automo-tive object detection. Here, although the optimized two-element lens has a worse average spot size than the baseline lens ( 136 µm vs80 µm ), it achieves a better mean average precision (AP) on the BDD100K dataset (32.0 vs 30.3). The optimized lens sacrifices optical performance near the corners for better performance in the small and medium field values where most of the objects are located. In lens layout plots, dashed lines represent the baseline/optimized counterpart and annotations indicate the optimized glass materials. ent descent (SGD) variants. More recently, several differ-entiable lens simulation models have been introduced to address the more complex compound lens systems present in most commodity-type cameras. Tseng et al. [35] build such a model by training a proxy neural network, whereas other works [ 11,17,32] directly implement differentiable ray-tracing operations in automatic differentiation frame-works [ 1,22], an idea also discussed in [ 6,40,41]. 
However, all relevant previous works [ 11,17,32,35] optimize over only a subset of possible surface profiles and spacings, and ignore the optimization of glass materials altogether. Yet, allowing alllens variables to be freely optimized—that is, without predefined boundaries—provides an opportunity for increased performance on downstream tasks. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 20803 Unfortunately, lens design optimization is no trivial pro-cess. Even optimizing for traditional optical performance metrics presents significant difficulties, notably: harsh loss function landscapes with abundant local minima and saddle points [ 30,36,39], restrictive manufacturing con-straints [ 2,28], and risk of ray-tracing failures. Optimizing a lens jointly on vision tasks only exacerbates these pitfalls due to the noisy gradients of SGD when applied to complex vision models [ 35]. Moreover, joint optimization does not naturally allow external supervision from lens designers and, as such, does not necessarily result in a manufacturable lens. In this work, we introduce a computationally efficient and differentiable pipeline for simulating and differentiating through compound spherical refractive lenses with respect to all design parameters in an end-to-end manner. Our for-ward model integrates exact optical ray tracing, accurate ray aiming, relative illumination, and distortion. Furthermore, we develop an optimization strategy to facilitate the end-to-end design of refractive lenses using SGD-based optimizers while strongly encouraging manufacturable outcomes. To this end, we carefully define losses to handle design con-straints, and introduce quantized continuous glass variables to facilitate the process of selecting the best glass materials among glass catalogs that contain dozens of candidates—a challenge unmet in prior joint optimization methods. We apply our simulation and optimization pipeline to the task of object detection (OD). We find that even simple two-element lenses such as the ones in Fig. 1 can be compelling candidates for low-cost automotive OD despite a noticeably worse image quality. Then, we validate the proposed method by demonstrating that optimizing the lens jointly with the OD model leads to consistent improvements in detection performance. We make the following contributions: •We introduce a novel method for simulating and opti-mizing compound optics with respect to glass materials, surface profiles, and spacings. •We validate the method on the end-to-end optimization of an OD downstream loss, with lenses specifically optimized for intersection over union (IoU) of bounding boxes predicted from a jointly trained detector. •We demonstrate that the proposed method results in improved OD performance even when reducing the number of optical elements in a given lens stack. In addition, we release our code and designs1in the hope of enabling further joint design applications. Limitations In end-to-end optics design, the inherent reso-lution of the dataset used to represent real-world scenes—a result of the pixel count, imaging quality, and compression artifacts—needs to be discernibly superior to the modeled optics if meaningful conclusions are to be drawn. Hence, we focus on simple lenses with strong geometrical aberrations, 1https : / / github . 
com/princeton-computational-imaging/joint-lens-design namely refractive lenses with two to four spherical elements whose combination of aperture and field of view (FOV) exceeds the capabilities of the lens configuration. Incidentally, our method does not completely alleviate the need for human supervision; as in most lens design problems, a suitable lens design starting point is required for best performance. Table 1. Comparison of related work on the joint optimization of refractive compound optics, where each criterion is fully met (✓), partially met ((✓)), or not met (✗); see text for explanations. Columns: Tseng [35], Sun [32], Hale [11], Li [17], Ours. Differentiable lens model: Hands-free ✗ ✓ ✓ ✓ ✓; Efficient ✓ ✗ ✓ ✓ ✓; Accurate PSFs ✓ ✓ ✗ (✓) ✓; Distortion (✓) ✓ (✓) (✓) ✓; Aspherics ✓ ✓ (✓) ✓ ✗. Optimized lens variables: No boundaries ✗ ✓ ✓ ✓ ✓; Spacings ✓ ✗ (✓) ✗ ✓; Surface profiles ✓ (✓) ✗ (✓) ✓; Glass materials ✗ ✗ ✗ ✗ ✓. |
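The "quantized continuous glass variables" idea lends itself to a short sketch: keep a continuous glass descriptor per element, snap it to the nearest catalog entry in the forward pass, and let gradients flow through the continuous variable. This is one plausible reading of the mechanism rather than the authors' implementation, and the catalog values below are placeholders, not an actual vendor catalog.

```python
import torch

# Illustrative glass catalog: (refractive index n_d, Abbe number V_d).
CATALOG = torch.tensor([[1.5168, 64.2],   # "BK7-like"
                        [1.7550, 52.3],   # "LAL12-like"
                        [1.9229, 18.9]])  # "SF11-like"

class QuantizedGlass(torch.nn.Module):
    """Continuous glass variable snapped to the nearest catalog entry,
    with a straight-through gradient estimator."""
    def __init__(self):
        super().__init__()
        self.g = torch.nn.Parameter(CATALOG[0].clone())  # continuous (n_d, V_d)

    def forward(self):
        # nearest catalog glass (normalize columns so both terms contribute)
        scale = CATALOG.std(dim=0)
        dist = ((CATALOG - self.g) / scale).pow(2).sum(dim=1)
        snapped = CATALOG[dist.argmin()]
        # straight-through: the forward pass uses the snapped glass,
        # the backward pass sees the continuous variable `g`
        return self.g + (snapped - self.g).detach()
```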
Goudreault_LiDAR-in-the-Loop_Hyperparameter_Optimization_CVPR_2023 | Abstract LiDAR has become a cornerstone sensing modality for 3D vision. LiDAR systems emit pulses of light into the scene, take measurements of the returned signal, and rely on hardware digital signal processing (DSP) pipelines to construct 3D point clouds from these measurements. The resulting point clouds output by these DSPs are input to downstream 3D vision models – both, in the form of train-ing datasets or as input at inference time. Existing LiDAR DSPs are composed of cascades of parameterized opera-tions; modifying configuration parameters results in signif-icant changes in the point clouds and consequently the out-put of downstream methods. Existing methods treat LiDAR systems as fixed black boxes and construct downstream task networks more robust with respect to measurement fluctua-tions. Departing from this approach, the proposed method directly optimizes LiDAR sensing and DSP parameters for downstream tasks. To investigate the optimization of LiDAR system parameters, we devise a realistic LiDAR simulation method that generates raw waveforms as input to a LiDAR DSP pipeline. We optimize LiDAR parameters for both 3D object detection IoU losses and depth error metrics by solv-ing a nonlinear multi-objective optimization problem with a 0th-order stochastic algorithm. For automotive 3D object detection models, the proposed method outperforms manual expert tuning by 39.5% mean Average Precision (mAP). | 1. Introduction Environment perception for autonomous drones [12, 53] and vehicles [72] requires precise depth sensing for safety-critical control decisions. Scanning LiDAR sensors have been broadly adopted in autonomous driving [3, 7, 56] as they provide high temporal and spatial resolution, and re-cent advances in MEMS scanning [67] and photodiode tech-nology [64] have reduced their cost and form factor. The 3D LiDAR point cloud (PC) data that existing 3D detection methods take as input is produced by a LiDAR and digital signal processor (DSP) pipeline with many mea-surement and processing steps. Typical LiDAR sensors op-erate by sending out a laser pulse and measuring the tem-poral response through an Avalanche Photo Diode (APD). This temporal wavefront signal is fed to a DSP that extractspeaks corresponding to echos from candidate targets within the scene [70]. As such, DSP processing results in a 1000-fold data reduction for a single emitted beam, producing single or multiple 3D points per beam. Compressing the waveform into points in 3D space with minimal informa-tion loss is challenging because of object discontinuities, sub-surface scattering, multipath reflections, and scattering media, see Fig. 1. In particular, significant scattering occurs in adverse weather conditions like fog [3,4,6,20,23,25,66], rain [6,18,66] and snow [19,25,27,34]. LiDAR manufactur-ers currently handle such complications by manually adjust-ing internal sensing and DSP parameters in controlled envi-ronments and restricted real-world scenarios using a com-bination of visual inspection and depth quality metrics. Generally, LiDAR production units are black boxes with configuration parameters hidden from the user. To ac-count for noisy point cloud measurements with spurious ar-tifacts, existing work has explored methods that add sim-ulated adverse effects and point cloud degradations that model rain [18, 26, 58], fog [3, 20, 50] and snow [19] to real black-box LiDAR datasets. 
Augmented point clouds in hand, downstream vision models are retrained for pre-dictions more robust to point cloud data corruption. An-other approach consists of generating synthetic measure-ments from 3D scenes using rendering engines [24, 57, 69]. Such existing methods typically avoid simulating transient light propagation and signal processing by converting 3D scene depth directly into a point cloud, thus lack physi-cally realistic modeling of fluctuations arising from mul-tipath effects or measurement noise. Notably, existing sim-ulation methods that alter measurements or generate syn-thetic point clouds do not optimize sensing or DSP param-eters for downstream vision performance. In this work, we directly optimize LiDAR pulse con-figuration and DSP hyperparameters for end-to-end down-stream 3D object detector losses and PC depth quality met-rics, a challenging task because hyperparameter space in-volves tens to hundreds of categorical, discrete and effec-tively continuous parameters affecting downstream tasks in complex nonlinear ways via the intermediate point cloud. Examples of categorical hyperparameters are Velodyne LiDAR sensor return modes that configure internal wave-front peak selection algorithms for point cloud formation; This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 13404 LiDARLiDAR a) Cluttered Surfaces b) Strong Retroreflectors RangeIntensity Intensity Range Point Cloud DSP Point Cloud DSP LiDARc) Ambient Light Intensity Range Point Cloud DSP Ground Truth Measurement Emitted Pulse Received Echo Figure 1. LiDAR Point Cloud Formation. Typical LiDAR sensor PC measurements are produced by a multi-stage measurement and signal processing chain: The LiDAR emits a laser pulse. This signal travels through the scene and returns to the detector after single or multiple reflections. Cluttered surfaces (a), strong retroreflectors (b) and ambient light are introduced (c) in the returned signals. Thus, the full transient waveform read by sensors is the superposition of multiple return paths. The DSP, itself a chain of processing blocks, processes all temporal waveforms and extracts a continuous stream of 3D points that forms the final point cloud (bottom). rotation velocity, which impacts angular resolution, is an example of a continuous hyperparameter [4]. Grid search optimization is impractical here because of combinatorial explosion. Orthogonally to LiDAR, it was recently shown that 0th order solvers can find camera DSP hyperparameters that improve downstream 2D object de-tectors [36, 48]. We propose an optimization method for LiDAR sensing and DSP hyperparameters that minimizes end-to-end domain-specific losses such as RMSE of the measured depth against ground truth and IoU measured on downstream 3D object detection. To assess the proposed method, we devised a LiDAR simulation method based on the CARLA engine [13] that models a LiDAR DSP as well as the full transient noisy waveform formed by multiple laser echoes. We optimize sensing and DSP hyperparam-eters by solving a Multi-Objective black-box Optimization (MOO) problem with a novel CMA-ES (Covariance Matrix Adaptation-Evolution Strategy [21]) that relies on a max-rank multi-objective scalarization loss [36] to dynamically improve scale matching between different loss components. 
In combination with a novel champion selection method, it finds a balanced Pareto-optimal solution for which no loss component has a comparatively poor value for LiDAR op-timization with multiple objectives. We validate the pro-posed optimization method for 3D object detection and point cloud depth estimation, both in simulation and using an off-the-shelf experimental LiDAR sensor. Specifically, we make the following contributions: • We introduce a LiDAR wavefront simulation for the CARLA simulation environment that models realistic transient scene responses. • We devise a novel multi-objective optimization method for balanced MOO of LiDAR parameters. • We validate end-to-end LiDAR sensing and DSP opti-mization for 3D object detection and depth estimation through simulation and with a real system. For all ap-plications, our approach outperforms existing state-of-the-art methods, including expert tuning. Simulator code and DSP models are published here1. Limitations Because commercial LiDAR units are IP-protected black boxes, interfacing their DSP hyperparam-eters is not straightforward. While the off-the-shelf LiDAR system used in this work makes some DSP hyperparame-ters accessible, most LiDAR systems are completely closed. We hope that these findings spur LiDAR vendors to follow the lead of digital camera and ISP (Image Signal Processor) vendors and open their processing pipelines. |
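A minimal sketch of the max-rank scalarization used to compare candidates whose loss components live on very different scales. This is an illustration, not the authors' code; the resulting scores would then drive a CMA-ES update (e.g., via an off-the-shelf CMA-ES library).

```python
import numpy as np

def max_rank_scalarize(losses):
    """losses: (population, n_objectives) array of per-candidate losses.
    Rank each objective independently across the population, then score a
    candidate by its worst (maximum) rank, so no objective is neglected."""
    ranks = np.argsort(np.argsort(losses, axis=0), axis=0)  # 0 = best per column
    return ranks.max(axis=1)

# toy population of 5 candidates evaluated on (depth RMSE, 1 - detection IoU)
losses = np.array([[0.9, 0.20],
                   [0.5, 0.60],
                   [0.6, 0.25],
                   [1.2, 0.10],
                   [0.7, 0.30]])
scores = max_rank_scalarize(losses)
print(scores, "-> candidate", scores.argmin(), "is the most balanced")
```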
Hyung_Local_3D_Editing_via_3D_Distillation_of_CLIP_Knowledge_CVPR_2023 | Abstract 3D content manipulation is an important computer vi-sion task with many real-world applications (e.g., prod-uct design, cartoon generation, and 3D Avatar edit-ing). Recently proposed 3D GANs can generate diverse photorealistic 3D-aware contents using Neural Radiance fields (NeRF). However, manipulation of NeRF still remains a challenging problem since the visual quality tends to degrade after manipulation and suboptimal control han-dles such as 2D semantic maps are used for manipula-tions. While text-guided manipulations have shown po-tential in 3D editing, such approaches often lack local-ity. To overcome these problems, we propose Local Edit-ing NeRF (LENeRF), which only requires text inputs for fine-grained and localized manipulation. Specifically, we present three add-on modules of LENeRF , the Latent Resid-†This work was done during an internship at Kakao Enterprise Corp.ual Mapper, the Attention Field Network, and the Deforma-tion Network, which are jointly used for local manipulations of 3D features by estimating a 3D attention field. The 3D attention field is learned in an unsupervised way, by distill-ing the zero-shot mask generation capability of CLIP to the 3D space with multi-view guidance. We conduct diverse ex-periments and thorough evaluations both quantitatively and qualitatively.1 | 1. Introduction 3D content editing has many real-world applications in-cluding but not limited to product design, cartoon gener-ation, and 3D Avatar editing. However, it often necessi-tates the use of sophisticated tools with complex interfaces, which can be difficult for novice users and labor-intensive even for seasoned professionals. While explicit 3D repre-1We will make our code publicly available. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 12674 Figure 2. Concept figure of LENeRF. Our method enables local editing of 3D assets by generating the target feature and estimating a 3D mask which guides the model on where to make changes at the feature level. Note that the mask is estimated for tri-plane features, not for raw RGB outputs. sentations such as voxels and meshes are commonly used for 3D generation and editing [17,31,54], they are memory-intensive and lack photorealism. In contrast, recent ad-vances in Neural Radiance Fields (NeRF) [33] have shown promising progress in representing 3D environments using implicit representations [14, 21, 33, 37] combined with vol-ume rendering techniques that enable high-quality novel view synthesis. NeRF-based 3D GANs [4, 5, 11, 16, 36, 43, 50,52] have made further progress towards generating a cat-egory of 3D aware contents with a single model, extending the per-scene optimization scheme of NeRF. Several studies [29,44,45,48] have attempted to address the challenges of NeRF editing, yet certain limitations per-sist. Works such as Edit-NeRF [29] and CLIP-NeRF [48] have pioneered NeRF manipulations, but they are con-strained to low-resolution synthetic datasets and lack the capability to perform localized editing. Opposed to trans-lation [9, 53] or style transfer [13] tasks, editing typicallydemands a certain degree of localization. However, achiev-ing this with text-only control proves to be a challenging objective. 
Alternative methods [44, 45] that rely on seman-tic masks for editing face their own limitations: 2D guid-ance is not ideal for 3D editing and lacks the descriptive-ness required for fine-grained editing. Furthermore, these approaches require inversion steps and are difficult to gener-alize across different domains, as they depend on the avail-ability of labeled semantic masks. To overcome the existing limitations, we propose Lo-cal Editing NeRF (LENeRF), which focuses on the impor-tant aspects of 3D editing: photorealism, multi-view con-sistency, usability, diversity, and locality. With LENeRF, high-resolution photo-realistic radiance fields can be edited while maintaining their quality and multi-view consistency. One notable advantage of LENeRF is its text-only editing, making it more usable than other methods. This allows our approach to be applied to any domain by leveraging the multi-modal embedding space of Contrastive Language Image Pre-training (CLIP) [39]. Additionally, our method achieves real-time editing as it does not require any test-time optimization process. Our proposed approach exhibits particularly robust per-formance in local 3D editing. This is achieved through a unique method of editing features in the 3D space indepen-dently by granting position-wise freedom to the features. The naive approach of directly manipulating the latent code often results in global changes to the 3D content, because features in the 3D space are spatially entangled with each other as the entire radiance field is conditioned with a single latent code. To address this issue, we propose to generate a 3D mask on the region of interest with a masking prompt (e.g., ”hair”) and manipulate the features inside the region while leaving the rest unchanged. Inspired by the previ-ous approach which introduces the explanation method for capturing the regions of the interest [25], we estimate 3D masks in an unsupervised fashion by using 3D distillation of the 2D CLIP model. Although the CLIP model is not 3D-aware and the 3D GAN lacks text-conditioned mask genera-tion capability, our method enables the collaboration of two pre-trained models to generate a text-conditioned 3D mask, as demonstrated in Figure 1 (c). LENeRF comprises three add-on modules, namely La-tent Residual Mapper (LRM), Attention Field Network (AFN), and Deformation Network (DN) as depicted in Fig-ure 3. LRM generates a latent code that produces a target feature field. AFN generates a soft 3D mask indicating our region of interest. The source feature field is distorted using DN and subsequently interpolated with the target field to synthesize the final feature field. LENeRF is trained with CLIP guidance [38, 39], and AFN is additionally trained with CLIP-generated zero-shot pseudo labels. The main contributions of our paper are as follows: 12675 • We introduce Local Editing NeRF (LENeRF), a 3D content editing framework capable of localized, photo-realistic editing using a convenient real-time text-based interface. • Our method consists of add-on modules and does not require any domain-specific labels, allowing the method to be generalized to other models and domains. • Our proposed technique involves a novel 3D distilla-tion of CLIP knowledge, specifically an unsupervised approach that utilizes the 3D GAN and CLIP models jointly to generate 3D masks. • We present diverse quantitative and qualitative results, along with various applications such as sequential edit-ing, real image editing, and out-of-distribution editing. |
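The core local-editing step, blending the target feature field into the source field only where the 3D attention mask is active, can be sketched as follows. The shapes and names are hypothetical; in the full method the source features would additionally be warped by the Deformation Network before blending.

```python
import torch

def blend_features(source_feat, target_feat, mask):
    """source_feat, target_feat: (N, C) per-point features sampled from the
    source and edited radiance fields; mask: (N, 1) soft 3D attention in [0, 1].
    Only regions with high mask values take on the edited features."""
    return mask * target_feat + (1.0 - mask) * source_feat

N, C = 4096, 32
source_feat = torch.randn(N, C)
target_feat = torch.randn(N, C)
mask = torch.sigmoid(torch.randn(N, 1))      # stand-in for the AFN output
edited = blend_features(source_feat, target_feat, mask)
```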
Deng_3D-Aware_Conditional_Image_Synthesis_CVPR_2023 | Abstract We propose pix2pix3D , a 3D-aware conditional gener-ative model for controllable photorealistic image synthesis. Given a 2D label map, such as a segmentation or edge map, our model learns to synthesize a corresponding image from different viewpoints. To enable explicit 3D user control, we extend conditional generative models with neural radiance fields. Given widely-available posed monocular image and label map pairs, our model learns to assign a label to every 3D point in addition to color and density, which enables it to render the image and pixel-aligned label map simulta-neously. Finally, we build an interactive system that allows users to edit the label map from different viewpoints and generate outputs accordingly. | 1. Introduction Content creation with generative models has witnessed tremendous progress in recent years, enabling high-quality,user-controllable image and video synthesis [19, 20, 24, 34]. In particular, image-to-image translation methods [29,56,84] allow users to interactively create and manipulate a high-resolution image given a 2D input label map. Unfortunately, existing image-to-image translation methods operate purely in 2D, without explicit reasoning of the underlying 3D struc-ture of the content. As shown in Figure 1, we aim to make conditional image synthesis 3D-aware, allowing not only 3D content generation but also viewpoint manipulation and attribute editing (e.g., car shape) in 3D. Synthesizing 3D content conditioned on user input is challenging. For model training, it is costly to obtain large-scale datasets with paired user inputs and their desired 3D outputs. During test time, 3D content creation often requires multi-view user inputs, as a user may want to specify the details of 3D objects using 2D interfaces from different viewpoints. However, these inputs may not be 3D-consistent, providing conflicting signals for 3D content creation. To address the above challenges, we extend conditional generative models with 3D neural scene representations. To This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 4434 enable cross-view editing, we additionally encode semantic information in 3D, which can then be rendered as 2D label maps from different viewpoints. We learn the aforemen-tioned 3D representation using only 2D supervision in the form of image reconstruction and adversarial losses. While the reconstruction loss ensures the alignment between 2D user inputs and corresponding 3D content, our pixel-aligned conditional discriminator encourages the appearance and labels to look plausible while remaining pixel-aligned when rendered into novel viewpoints. We also propose a cross-view consistency loss to enforce the latent codes to be con-sistent from different viewpoints. We focus on 3D-aware semantic image synthesis on the CelebAMask-HQ [38], AFHQ-cat [16], and shapenet-car [10] datasets. Our method works well for various 2D user inputs, including segmentation maps and edge maps. Our method outperforms several 2D and 3D baselines, such as Pix2NeRF variants [6], SofGAN [11], and SEAN [87]. We further ablate the impact of various design choices and demonstrate applications of our method, such as cross-view editing and explicit user control over semantics and style. Please see our website for more results and code. 
Please check out the full version of our paper at arXiv. |
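One way to read "assign a label to every 3D point ... which enables it to render the image and pixel-aligned label map simultaneously" is that per-point label logits are composited with the same volume-rendering weights as color. The sketch below illustrates only that idea; it is not the pix2pix3D implementation, and all shapes are made up.

```python
import torch

def composite_ray(density, color, label_logits, deltas):
    """Volume-render color and a semantic label map along a batch of rays.
    density: (R, S), color: (R, S, 3), label_logits: (R, S, K), deltas: (R, S)."""
    alpha = 1.0 - torch.exp(-density * deltas)                        # (R, S)
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:, :1]),
                                     1.0 - alpha + 1e-10], dim=1), dim=1)[:, :-1]
    weights = alpha * trans                                           # (R, S)
    rgb = (weights.unsqueeze(-1) * color).sum(dim=1)                  # (R, 3)
    seg = (weights.unsqueeze(-1) * label_logits).sum(dim=1)           # (R, K)
    return rgb, seg

R, S, K = 1024, 64, 19
rgb, seg = composite_ray(torch.rand(R, S), torch.rand(R, S, 3),
                         torch.randn(R, S, K), torch.full((R, S), 0.02))
```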
Han_ABCD_Arbitrary_Bitwise_Coefficient_for_De-Quantization_CVPR_2023 | Abstract Modern displays and contents support more than 8bits image and video. However, bit-starving situations such as compression codecs make low bit-depth (LBD) images (<8bits), occurring banding and blurry artifacts. Previous bit depth expansion (BDE) methods still produce unsatis-factory high bit-depth (HBD) images. To this end, we pro-pose an implicit neural function with a bit query to recover de-quantized images from arbitrarily quantized inputs. We develop a phasor estimator to exploit the information of the nearest pixels. Our method shows superior performance against prior BDE methods on natural and animation im-ages. We also demonstrate our model on YouTube UGC datasets for de-banding. Our source code is available at https://github.com/WooKyoungHan/ABCD | 1. Introduction The bit depth in digital contents means a number of bi-nary digits representing pixel values. As humans recognize a wide range of color and luminance, modern display de-vices and cameras support more than the 8-bit depth of im-age and video [21, 28]. Regardless of these efforts, the im-age and video codecs enforce HBD images to be quantized into LBD images due to bit starvation. Thus, most contents are under 8 bits leading to false contours and blurry arti-facts. Bit depth expansion, a.k.a. de-quantization, aims to recover missing bits caused by such quantizations. Conventional methods such as [6,10,19,24,36–38] have been proposed for the de-quantization problem. How-ever, these methods suffer from blurry artifacts resulting in distortions of details or false contours in extreme BDE. Recently, learning-based approaches, a.k.a. deep neural network, have shown remarkable performances in BDE [4, 9, 18, 26, 32, 40, 43]. Most learning-based approaches [4, 9, 32, 40, 43] reconstruct HBD images in an end-to-end manner. Recent methods [18, 26] recover residual compo-nents corresponding to missing bits from LBD images. In particular, the method called D16 [26], with the best per-*Corresponding author. Figure 1. Overview of arbitrary bit-depth expansion (dequan-tization) using ABCD. Our ABCD estimates dominant phasors of images and calculates the bit-query of LBD. Then, an MLP takes the estimated phasor information and bit-wise query ( s) to predict the bit-wise coefficient ( bC) of HBD images. formance so far, conducts a binary classification per each bit plane. However, D16 requires multiple deep neural net-works models for every bit-planes. Recently, an implicit neural representation (INR) which maps coordinates into signal values [25,29], shows promis-ing performances in various tasks [5, 11, 15, 22, 25, 29]. The implicit neural networks have a spectral bias prob-lem toward low frequencies, which makes INR hard to represent high-frequency components [27]. Fortunately, several solutions are developed to relax the spectral bias [15, 23, 30, 33, 41]. However, there is no INR approach for bit depth expansion problems. In this paper, we propose a novel model, the Arbitrary Bit-wise Coefficient model for De-quantization (ABCD), to recover missing bits from the randomly quantized LBD im-age to any HBD image. The proposed model addresses the spectral bias of INR and improves de-quantization quality through the use of an encoder to estimate the dominant pha-sors in the ground truth. As shown in Fig. 1, our encoder es-timates the dominant phasors to mitigate the spectral bias of INR. 
Then, the model utilizes an INR to achieve arbitrary-bit reconstructions in the amplitude domain. Finally, a bit decoding step converts bit coefficients into HBD images by multiplying the bit-basis. The proposed model represents a significant advancement over previous de-quantization techniques, providing high flexibility and accuracy as it effectively recovers missing bits from randomly quantized inputs. Figure 2. Visual demonstration of 3-bit to 8-bit de-quantization: input, deep network approach (D16 [26]), and our method. The neural network method [26] reduces severe false contours of the input, but false contours still remain. Our ABCD removes such artifacts clearly. In summary, our main contributions are as follows: • We propose a bit depth expansion algorithm using an implicit neural representation with a bit query in arbitrarily quantized bit levels and demonstrate that our method achieves state-of-the-art performance. • We show that the proposed phasor estimator predicts the dominant phasors of the ground-truth coefficients in the Fourier domain. • We validate our pre-trained model not only on five image datasets for de-quantization but also on the YouTube-UGC dataset for de-banding. |
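The bit-decoding step, turning predicted bit-wise coefficients into an amplitude by multiplying with a bit basis, can be illustrated in a few lines. This is an assumption-laden sketch, not the released ABCD code; the exact normalization between LBD and HBD values may differ in the paper.

```python
import torch

def decode_bits(bit_coeffs):
    """bit_coeffs: (..., B) predicted coefficients for the B missing bit planes,
    most-significant first. Multiplying by the bit basis (1/2, 1/4, ...) and
    summing turns them into a fractional residual added to the LBD value."""
    B = bit_coeffs.shape[-1]
    basis = 2.0 ** -torch.arange(1, B + 1, dtype=bit_coeffs.dtype)
    return (bit_coeffs * basis).sum(dim=-1)

coeffs = torch.tensor([[1.0, 0.0, 1.0, 0.0, 1.0]])   # hard bits 10101 -> 0.65625
print(decode_bits(coeffs))
```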
Asokan_Spider_GAN_Leveraging_Friendly_Neighbors_To_Accelerate_GAN_Training_CVPR_2023 | Abstract Training Generative adversarial networks (GANs) stably is a challenging task. The generator in GANs transform noise vectors, typically Gaussian distributed, into realistic data such as images. In this paper, we propose a novel ap-proach for training GANs with images as inputs, but without enforcing any pairwise constraints. The intuition is that images are more structured than noise, which the generator can leverage to learn a more robust transformation. The process can be made efficient by identifying closely related datasets, or a “friendly neighborhood” of the target distribu-tion, inspiring the moniker, Spider GAN. To define friendly neighborhoods leveraging proximity between datasets, we propose a new measure called the signed inception distance (SID), inspired by the polyharmonic kernel. We show that the Spider GAN formulation results in faster convergence, as the generator can discover correspondence even between seemingly unrelated datasets, for instance, between Tiny-ImageNet and CelebA faces. Further, we demonstrate cas-cading Spider GAN, where the output distribution from a pre-trained GAN generator is used as the input to the subse-quent network. Effectively, transporting one distribution to another in a cascaded fashion until the target is learnt – a new flavor of transfer learning. We demonstrate the efficacy of the Spider approach on DCGAN, conditional GAN, PG-GAN, StyleGAN2 and StyleGAN3. The proposed approach achieves state-of-the-art Fréchet inception distance (FID) values, with one-fifth of the training iterations, in compari-son to their baseline counterparts on high-resolution small datasets such as MetFaces, Ukiyo-E Faces and AFHQ-Cats. | 1. Introduction Generative adversarial networks (GANs) [1] are designed to model the underlying distribution of a target dataset (with Siddarth Asokan is funded by the Qualcomm Innovation Fellowship, and the Robert Bosch Center for Cyber-Physical Systems Ph.D. Fellowship.underlying distribution pd) through a min-max optimiza-tion between the generator Gand the discriminator Dnet-works. The generator transforms an input zpz, typically Gaussian or uniform distributed, into a generated sample G(z)pg. The discriminator is trained to classify samples drawn from pgorpdas real or fake. The optimal generator is the one that outputs images that confuse the discriminator. Inputs to the GAN generator: The input distribution plays a definitive role in the quality of GAN output. Low-dimensional latent vectors have been shown to help dis-entangle the representations and control features of the tar-get being learnt [2, 3]. Prior work on optimizing the latent distribution in GANs has been motivated by the need to improve the quality of interpolated images. Several works have considered replacing the Gaussian prior with Gaussian mixtures, Gamma, non-parametric distributions, etc [4 –9]. Alternatively, the GAN generator can be trained with the latent-space distribution of the target dataset, as learnt by variational autoencoders [10,11]. However, such approaches are not in conformity with the low-dimensional manifold structure of real data. Khayatkhoei et al. [12] attributed the poor quality of the interpolates to the disjoint structure of data distribution in high-dimensions, which motivates the need for an informed choice of the input distribution. 
GANs and image-to-image translation: GANs that accept images as input fall under the umbrella of image translation. Here, the task is to modify particular features of an image, either within domain (style transfer) or across domains (domain adaptation). Examples for in-domain translation include changing aspects of face images, such as the expression, gender, accessories, etc. [13–15], or modifying the illumination or seasonal characteristics of natural scenes [16]. On the other hand, domain adaptation tasks aim at transforming the image from one style to another. Common applications include simulation to real-world translation [17–20], or translating images across styles of artwork [21–23]. While the supervised Pix2Pix framework [22] originally proposed training GANs with pairs of images drawn from the source and target domains, semi-supervised and unsupervised
extensions [23–28] tackle the problem in an unpaired setting, and introduce modifications such as cycle-consistency or the addition of regularization functionals to the GAN loss to maintain a measure of consistency between images. Existing domain-adaptation GANs [29, 30] enforce cross-domain consistency to retain visual similarity. Ultimately, these approaches rely on enforcing some form of coupling between the source and the target via feature-space mapping. Figure 1. (Color online) A comparison of design philosophies of the standard GANs and Spider GAN. (a) A prototypical GAN transforms high-dimensional Gaussian data, which is concentrated at the surface of hyperspheres in n-D, into an image distribution comprising a union of low-dimensional manifolds embedded in a higher-dimensional space. (b) The Spider GAN generator aims to learn a simpler transformation between two closely related data manifolds in an unconstrained manner, thereby accelerating convergence. |
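The cascading idea, feeding images from a friendly neighborhood (or the outputs of a frozen, previously trained generator) into the next generator instead of Gaussian noise, reduces to a very small sketch. The toy generator below is a stand-in, not the DCGAN/PGGAN/StyleGAN backbones used in the paper.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Placeholder image-to-image generator used only for illustration."""
    def __init__(self, in_ch=3, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, out_ch, 3, padding=1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

# Cascaded Spider GAN sketch: the frozen stage-1 generator maps a friendly
# neighborhood dataset toward the target; stage 2 continues from its output.
stage1 = TinyGenerator().eval()
for p in stage1.parameters():
    p.requires_grad_(False)
stage2 = TinyGenerator()

neighbor_images = torch.rand(8, 3, 64, 64) * 2 - 1   # stand-in for e.g. Tiny-ImageNet
fake = stage2(stage1(neighbor_images))               # samples fed to the discriminator
```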
Chen_ScaleDet_A_Scalable_Multi-Dataset_Object_Detector_CVPR_2023 | Abstract Multi-dataset training provides a viable solution for ex-ploiting heterogeneous large-scale datasets without extra annotation cost. In this work, we propose a scalable multi-dataset detector (ScaleDet) that can scale up its generaliza-tion across datasets when increasing the number of train-ing datasets. Unlike existing multi-dataset learners that mostly rely on manual relabelling efforts or sophisticated optimizations to unify labels across datasets, we introduce a simple yet scalable formulation to derive a unified se-mantic label space for multi-dataset training. ScaleDet is trained by visual-textual alignment to learn the label as-signment with label semantic similarities across datasets. Once trained, ScaleDet can generalize well on any given upstream and downstream datasets with seen and unseen classes. We conduct extensive experiments using LVIS, COCO, Objects365, OpenImages as upstream datasets, and 13 datasets from Object Detection in the Wild (ODinW) as downstream datasets. Our results show that ScaleDet achieves compelling strong model performance with an mAP of 50.7 on LVIS, 58.8 on COCO, 46.8 on Objects365, 76.2 on OpenImages, and 71.8 on ODinW, surpassing state-of-the-art detectors with the same backbone. | 1. Introduction Major advances in computer vision have been driven by large-scale datasets, such as ImageNet [9] and Open-Images [22] for image classification, or Kinetics [6] and ActivityNet [2] for video recognition. Large-scale datasets are crucial for training recognition models that generalize well. However, the collection of massive annotated datasets is costly and time-consuming. This is especially promi-nent in detection and segmentation tasks that require de-tailed annotations at the bounding box or pixel level. To exploit more training data without extra annotation cost, re-cent works unify multiple datasets to learn from more vi-sual categories and more diverse visual domains for detec-tion [38, 40, 47, 51] and segmentation [25, 37]. To train an object detector across multiple datasets, we need to tackle several challenges. First , multi-dataset train-test on any upstream or downstream dataset Objects365: person, sneakers,…OpenImages: person, footwear,…… Thermal: dog, peopleAquarium: fish,…OpenImages: sandwich,……satchelpersonfootwearsneakerselephantbottletrain across multiple upstream datasets ScaleDet: learn in a unified semantic label spaceFigure 1. Our scalable multi-dataset detector (ScaleDet) learns across datasets in a unified semantic label space by visual-textual alignment with label semantic similarities. At test time, ScaleDet can generalize on any given upstream and downstream dataset. ing requires unifying the heterogeneous label spaces across datasets, as label definitions are dataset-specific. The labels from two datasets may indicate the same or similar objects. For example, “footwear” and “sneakers” are two different labels in OpenImages [24] and Objects365 [34], but refer to the same type of objects (see Figure 1). Second , the train-ing setups may be inconsistent among datasets, as different data sampling strategies and learning schedules are often re-quired for datasets of different sizes. Third , a multi-dataset model should perform better than single-dataset models on individual datasets. This is challenging due to the heteroge-neous label spaces, the domain discrepancy across datasets, and the risk of overfitting to the larger datasets. 
To resolve the above challenges, existing work resorts to manually relabeling class labels [25], or training multiple dataset-specific classifiers with constraints to relate labels across datasets [51]. However, these methods lack scalability. The manual relabeling effort and the model complexity of training multiple classifiers grow rapidly as the number of datasets increases. We overcome this limitation with ScaleDet: a scalable multi-dataset detector (Figure 1). We propose two innovations: a scalable formulation to unify multiple label spaces, and a novel loss formulation to learn hard label and soft label assignments across datasets. While hard label assignment serves to disambiguate class labels in probability space, soft label assignment works as a regularizer to relate class labels in semantic similarity space. Unlike existing multi-dataset methods [25,37,38,40,47,51] that mostly generalize on seen datasets or seen classes, our method exploits vision-language learning to attain good generalization on both upstream and downstream datasets, where the downstream datasets can contain unseen classes and new domains. Our contributions are: • We propose a novel scalable multi-dataset training recipe for object detection. Our method utilizes text embeddings to unify and relate labels with semantic similarities across datasets, and trains a single classifier via visual-textual alignment to learn hard label and soft label assignments. • We conduct extensive experiments to demonstrate the compelling scalability and generalizability of ScaleDet in multi-dataset training. We show that ScaleDet can boost its performance as we increase the number of training datasets: LVIS [14], COCO [27], Objects365 [34] and OpenImages [24] (Sec 4.2). Furthermore, we show that ScaleDet achieves state-of-the-art performance on multiple benchmarks when compared to recent advanced detectors, e.g., Detic [49], UniDet [51] (Sec 4.3, Sec 4.4). • We evaluate the transferability of ScaleDet on the challenging "Object Detection in the Wild" benchmark (which contains 13 datasets) [26] to demonstrate its competitive generalizability on downstream datasets (Sec 4.5). |
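A minimal sketch of the hard/soft label assignment described above: region features are aligned to text embeddings of the unified label space, the hard loss is a cross-entropy over that space, and the soft loss matches the distribution of label semantic similarities. The exact loss form (e.g., the KL divergence used here) is an assumption; this is not the released ScaleDet code.

```python
import torch
import torch.nn.functional as F

def scaledet_losses(visual_feat, text_emb, labels, temperature=0.01):
    """visual_feat: (N, D) region features; text_emb: (C, D) embeddings of all
    class names in the unified label space; labels: (N,) indices into that space."""
    v = F.normalize(visual_feat, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature                      # visual-textual alignment
    hard = F.cross_entropy(logits, labels)              # hard label assignment
    # soft label assignment: match the semantic similarity of the ground-truth
    # class to every other class (e.g. "sneakers" should stay close to "footwear")
    label_sim = F.softmax((t @ t.T)[labels] / temperature, dim=-1)
    soft = F.kl_div(F.log_softmax(logits, dim=-1), label_sim, reduction="batchmean")
    return hard, soft

hard, soft = scaledet_losses(torch.randn(16, 512), torch.randn(80, 512),
                             torch.randint(0, 80, (16,)))
```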
Dumont_Modular_Memorability_Tiered_Representations_for_Video_Memorability_Prediction_CVPR_2023 | Abstract The question of how to best estimate the memorability of visual content is currently a source of debate in the mem-orability community. In this paper, we propose to explore how different key properties of images and videos affect their consolidation into memory. We analyze the impact of several features and develop a model that emulates the most important parts of a proposed “pathway to memory”: a simple but effective way of representing the different hur-dles that new visual content needs to surpass to stay in mem-ory. This framework leads to the construction of our M3-S model, a novel memorability network that processes input videos in a modular fashion. Each module of the network emulates one of the four key steps of the pathway to mem-ory: raw encoding, scene understanding, event understand-ing and memory consolidation. We find that the different representations learned by our modules are non-trivial and substantially different from each other. Additionally, we ob-serve that certain representations tend to perform better at the task of memorability prediction than others, and we in-troduce an in-depth ablation study to support our results. Our proposed approach surpasses the state of the art on the two largest video memorability datasets and opens the door to new applications in the field. Our code is available athttps://github.com/tekal-ai/modular-memorability . | 1. Introduction The human brain is optimized to remember important content and forget irrelevant information. Research has shown that in the world of visual imagery, the brain’s recall ability is influenced by the content itself: certain images and videos tend to stay in memory for longer, no matter the au-dience it is shown to or the context it appears in [3, 9]. The property of visual content that makes it more or less mem-*Corresponding author. V ideo Low-lev el understanding module Mid-lev el understanding module High-lev el understanding module Contextual similarity moduleMemorability regression + mFusion V ideoFigure 1. Our proposed modular framework. Our framework predicts memorability by extracting low-level, mid-level and high-level memorability-aware representations. These representations are compared to a predefined visual context to extract features measuring similarity with this given context. Our M3-S model utilizes four modules to obtain these representations: a low-level understanding module composed of traditional feature extractors, a mid-level understanding module focused on scene and object properties, a high-level understanding module that extracts tem-poral patterns and actions, and a contextual similarity module that computes features through clustering. The feature vectors pro-duced by the modules are fused and fed to a regression module to produce memorability scores. orable is referred to as memorability, and current research studies this phenomenon as an intrinsic property. Memo-rability has been shown to be highly consistent across ob-servers [10,13,28,40], uncorrelated with aesthetics [28,29] and highly unintuitive [29]. Some studies are proposing that it might be a proxy to the utility of the information carried by visual content, as measured by the human brain [9]. Given its consistency, many previous works have tried to develop systems to predict memorability scores from vi-sual media directly. 
Some developments attempt to use low-level image features and specific semantic information [29, 30], while more recent work has focused on deep neu-ral networks, leveraging their ability to learn rich represen-tations through regressing ground-truth scores directly from the pixel-level visual input [31]. Here, we argue that current This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 10751 DNN approaches are trying to solve the problem with black-box predictors that do not leverage the underlying structure governing memorability. Indeed, previous work [42] shows that our brain processes visual stimuli by first aggregating low level patterns (early visual areas V1 and V2), then un-derstanding the contents of the scene (higher visual areas V3A, V4v, V7), and finally integrating the meaning of the event being witnessed and linking with previous knowledge (prefrontal cortex). Although some of the existing systems work on different dimensions of the input (optical flow, raw pixels, text descriptions), they tend to overlook predictive patterns that can be acquired through a specific modeling of low-level, mid-level and high-level representations. Specif-ically, it has been shown that memorability is sensitive to a set of specific properties [29] (that we define and expand on in this work), such as clutter, camera movement, distinc-tiveness of objects, and other semantic and cognitive dimen-sions. Some of these properties are considered low-level: they correspond to simple transformations of the raw pixel input, photometric properties, clarity of image, or proper-ties of the capturing process (blurriness, camera movement, etc). Other properties can be considered mid-level, such as the composition of the scene, the type of objects in it, the general setting, etc. Finally, high-level properties are usu-ally related to the action depicted, emotion transmitted by the content, or general goals of the actors. We propose a new memorability framework that explic-itly models these three categories by instantiating modules that are specifically designed to extract representations that are relevant for memorability, and representative of each category. We call these representations tiered , as each rep-resentation captures information from a different tier (low, mid and high) of memorability properties. Our modular memorability model additionally introduces a fourth mod-ule that computes representations capturing the similarity of a given input with its most likely visual context: mod-eling this final property is key to understand contextual ef-fects on memorability. To define these modules, we per-form an in-depth analysis over the factors that influence memorability. We show that each of these modules con-tribute to memorability in their own way, that the repre-sentations they yield are more interpretable than black box counterparts, and that combining the information from these representations yields competitive models on the two main datasets for video memorability: VideoMem [13] and Me-mento10k [40]. To summarize, our key contributions are: 1. We introduce a comprehensive analysis of the factors that influence memorability, leading to a categoriza-tion in tiers that we leverage to propose a new modular framework to learn representations which capture the essence of each tier; |
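The modular design above (low-, mid-, high-level and contextual modules whose features are fused and regressed to a score) can be summarized with a small skeleton. The layer choices are placeholders rather than the published M3-S architecture.

```python
import torch
import torch.nn as nn

class ModularMemorability(nn.Module):
    """Illustrative skeleton: each module maps video features to a vector;
    the fused representation is regressed to a memorability score."""
    def __init__(self, dims=(64, 128, 256, 32)):
        super().__init__()
        self.low = nn.LazyLinear(dims[0])      # raw-encoding / low-level cues
        self.mid = nn.LazyLinear(dims[1])      # scene and object properties
        self.high = nn.LazyLinear(dims[2])     # events, actions, temporal patterns
        self.context = nn.LazyLinear(dims[3])  # similarity to a visual context
        self.head = nn.Sequential(nn.Linear(sum(dims), 128), nn.ReLU(),
                                  nn.Linear(128, 1), nn.Sigmoid())

    def forward(self, video_feat):
        fused = torch.cat([self.low(video_feat), self.mid(video_feat),
                           self.high(video_feat), self.context(video_feat)], dim=-1)
        return self.head(fused).squeeze(-1)    # memorability score in (0, 1)

model = ModularMemorability()
scores = model(torch.randn(4, 2048))           # e.g. pooled backbone features
```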
Das_Weakly-Supervised_Domain_Adaptive_Semantic_Segmentation_With_Prototypical_Contrastive_Learning_CVPR_2023 | Abstract There has been a lot of effort in improving the perfor-mance of unsupervised domain adaptation for semantic seg-mentation task, however, there is still a huge gap in perfor-mance when compared with supervised learning. In this work, we propose a common framework to use different weak labels, e.g., image, point and coarse labels from the target domain to reduce this performance gap. Specifically, we propose to learn better prototypes that are representa-tive class features by exploiting these weak labels. We use these improved prototypes for the contrastive alignment of class features. In particular, we perform two different fea-ture alignments: first, we align pixel features with proto-types within each domain and second, we align pixel fea-tures from the source to prototype of target domain in an asymmetric way. This asymmetric alignment is beneficial as it preserves the target features during training, which is essential when weak labels are available from the tar-get domain. Our experiments on various benchmarks show that our framework achieves significant improvement com-pared to existing works and can reduce the performance gap with supervised learning. Code will be available at https://github.com/anurag-198/WDASS . | 1. Introduction Semantic segmentation requires pixel level annotation, which is expensive and time consuming. For real world ur-ban scenes [5, 9, 32, 42], this becomes more challenging as there are far too many objects to annotate in the scene. For example, it takes around 90 min to annotate an image for Cityscapes [5]. To reduce this annotation effort, the task of Unsupervised Domain Adaptative Semantic Segmentation (UDASS) [31, 44, 47, 48] proposes to learn from photore-alistic synthetic images [26, 28, 38] with relatively cheap labels. However, due to the domain gap between the real (target domain) and synthetic (source domain) images, this *Currently with Google. This work was done at ETH Z ¨urich. Road, Sidewalk, building, traffic sign, car, rider, sky, vegetation Image label Point label Coarse label Synthetic label Target Source Figure 1. We propose a common framework for different weak la-bel (image, point and coarse labels) for the task of Weakly Super-vised Domain Adaptative Semantic Segmentation(WDASS). Our proposed framework, utilising cheaper weak labels bridges the gap between UDA (CorDA [35], ProDA [43], DASS [16]) and super-vised learning. Notably, for coarse annotation, our method outper-forms supervised learning, exhibiting the potential of weak labels for domain adaptation task. problem becomes more challenging. There have been many efforts [16, 29, 31, 41, 43, 44, 44, 48] to improve the perfor-mance on the UDASS task, yet there is a big performance gap compared to supervised learning. In this work, we pro-pose to exploit additional weak labels (image [13,22], point and [1] coarse labels [5]) for the real images to improve over the UDASS performance and reduce the performance gap with supervised learning. Weakly supervised Domain Adaptive Semantic Segmen-tation (WDASS) relaxes the problem of UDASS by allow-ing weak labels from the target domain. However, it is non trivial to optimally use the weak labels. [25] works with im-age and point labels and focuses on pixel level adversar-ial alignment between source and target domains. [6] works with coarse labels and uses self-training for feature align-ment. 
Both these methods use the weak labels only as addi-tional supervision signal from the target domain and do not use them for aligning features between the source and tar-get domain. Moreover, these works do not have a common framework that works with different weak labels. We pro-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 15434 pose a common framework that works with different weak labels (e.g., image, point and coarse labels) and use these weak labels for feature alignment between source and target domains, improving domain adaptation for semantic seg-mentation task. Inspired by the recent success of prototype based learn-ing for semantic segmentation [8, 45], few shot learn-ing [7,34,40] and UDASS [16,20,43], we propose to extend prototype learning for the WDASS task. For the UDASS task, prototypes are constructed on the target domain by av-eraging features from noisy pseudo labels [16,43], resulting in noisy prototypes. With the guidance of additional weak labels from the target domain, we improve the quality of the prototypes. Specifically, we use the pixel labeled weak labels (point or coarse labels) to correct the prototypes and image labels to further improve the features by penalising the category features not present in an image. Next, we perform contrastive alignment of features using the proto-types. We propose two alignments, namely intra domain alignment and inter domain alignment. Intra domain align-ment aligns pixel features with prototypes within individual domains (source and target). This helps in learning compact and better features. On the other hand, inter domain align-ment aligns features from the source to prototypes from the target domain in an asymmetric manner, reducing the do-main gap between source and target domains. This asym-metric alignment only changes the source features and pre-serves the target features during training. This type of align-ment is essential when we have weak labels from the target domain. Overall our proposed framework uses weak labels and improves the performance of UDASS task substantially. We summarise our main contributions as: • We propose a new framework for WDASS task that works seamlessly with image, point and coarse labels from the target domain. Our method constructs better prototypes using different weak labels. Further, we in-troduce intra and inter domain contrastive alignment of features with prototypes for source and target domains for WDASS task. • Our framework using different weak labels (image, point and coarse labels) is able to bridge the gap be-tween UDASS and supervised learning, showing the effectiveness of the weak labels. Distinctly, with coarse labels, our framework even outperforms super-vised learning. • We show the tradeoff between annotation cost vs. se-mantic segmentation performance for different weak labels. Notably, point annotation achieves better per-formance in lower annotation budget scenarios than coarse and image label. • Our framework sets a new state of the art for WDASS on standard benchmarks for different weak labels, with significant improvement over prior works.2. Related Work Unsupervised Domain Adaptative Semantic Segmenta-tionThe task of UDASS aims to learn from a labeled source domain and unlabeled target domain and improve perfor-mance on the target domain [31, 47]. 
The inherent domain gap between the source and target domains makes this challenging. Prior works use adversarial training for distribution alignment [25, 31], contrastive alignment of source-target features [16, 19, 33] and self-training with pseudo labels [29, 41, 44, 47, 48]. Recently, [16, 43] used prototype-based self-training to further improve performance on the UDASS task. However, even with these various works, the performance gap between UDASS and supervised learning remains high [16, 43]. In this work, we propose to utilize cheaper weak labels from the target domain to improve on UDASS and reduce the gap with supervised learning. Weakly Supervised Domain Adaptive Semantic Segmentation Weakly Supervised Domain Adaptive Semantic Segmentation (WDASS) eases the task of UDASS by allowing weak labels from target data. [25] makes use of image and point labels and proposes adversarial alignment of features at the pixel level to solve WDASS. [36] uses bounding boxes as weak labels and uses adversarial learning for domain-invariant features. [6] uses self-training and a boundary loss for improving performance on WDASS with coarse labels. Despite the benefits of weak labels, the WDASS task has not been properly explored by |
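The prototype construction and pixel-to-prototype contrastive alignment outlined in this introduction can be sketched roughly as follows. The sketch builds class prototypes by averaging (weakly) labeled pixel features and applies an InfoNCE-style loss; the shapes, temperature and the simple detach-based asymmetry are assumptions for illustration and do not reproduce the paper's exact intra-/inter-domain formulation.

```python
# Simplified sketch: class prototypes from labeled pixel features and a
# pixel-to-prototype contrastive (InfoNCE-style) loss. Shapes and the single
# symmetric loss form are illustrative; the paper additionally distinguishes
# intra-domain and asymmetric inter-domain alignment.
import torch
import torch.nn.functional as F


def class_prototypes(feats, labels, num_classes):
    """feats: (N, C) pixel features; labels: (N,) class ids (weak/pseudo labels)."""
    protos = torch.zeros(num_classes, feats.size(1), device=feats.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = feats[mask].mean(dim=0)
    return F.normalize(protos, dim=1)


def pixel_to_prototype_loss(feats, labels, protos, temperature=0.1):
    """Pull each pixel feature toward the prototype of its (weak) label."""
    feats = F.normalize(feats, dim=1)
    logits = feats @ protos.t() / temperature      # (N, num_classes)
    return F.cross_entropy(logits, labels)


# Toy usage with random features standing in for encoder outputs.
num_classes, dim = 19, 256
tgt_feats, tgt_labels = torch.randn(1024, dim), torch.randint(0, num_classes, (1024,))
src_feats, src_labels = torch.randn(1024, dim), torch.randint(0, num_classes, (1024,))

protos_t = class_prototypes(tgt_feats, tgt_labels, num_classes)     # target-domain prototypes
loss_intra = pixel_to_prototype_loss(tgt_feats, tgt_labels, protos_t)
# "Asymmetric" direction: source pixels are aligned to *target* prototypes,
# while the target prototypes themselves are treated as fixed (detached).
loss_inter = pixel_to_prototype_loss(src_feats, src_labels, protos_t.detach())
print(loss_intra.item(), loss_inter.item())
```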
Bulat_LASP_Text-to-Text_Optimization_for_Language-Aware_Soft_Prompting_of_Vision__CVPR_2023 | Abstract Soft prompt learning has recently emerged as one of the methods of choice for adapting V&L models to a down-stream task using a few training examples. However, cur-rent methods significantly overfit the training data, suffer-ing from large accuracy degradation when tested on un-seen classes from the same domain. To this end, in this paper, we make the following 4 contributions: (1) To alle-viate base class overfitting, we propose a novel Language-Aware Soft Prompting (LASP) learning method by means of a text-to-text cross-entropy loss that maximizes the proba-bility of the learned prompts to be correctly classified with respect to pre-defined hand-crafted textual prompts. (2) To increase the representation capacity of the prompts, we pro-pose grouped LASP where each group of prompts is opti-mized with respect to a separate subset of textual prompts. (3) We identify a visual-language misalignment introduced by prompt learning and LASP , and more importantly, pro-pose a re-calibration mechanism to address it. (4) We show that LASP is inherently amenable to including, during train-ing, virtual classes, i.e. class names for which no visual samples are available, further increasing the robustness of the learned prompts. Through evaluations on 11 datasets, we show that our approach (a) significantly outperforms all prior works on soft prompting, and (b) matches and sur-passes, for the first time, the accuracy on novel classes ob-tained by hand-crafted prompts and CLIP for 8 out of 11 test datasets. Code will be made available here. | 1. Introduction Large-scale pre-training of neural networks has recently resulted in the construction of a multitude of foundation models for Language [7,25] and Vision & Language (V&L) understanding [1, 13, 24, 34]. Unlike the previous genera-tion of neural networks, such models can better capture the distribution of the world from which new favorable prop-erties and characteristics emerge. Of particular interest to this work are V&L models trained with contrastive learn-ing (i.e. CLIP-like models [13, 18, 24, 33, 34]), which haveenabled seamless few-shot and even zero-shot adaptation to new downstream tasks and datasets. Specifically, this pa-per proposes a simple yet highly effective way to drastically improve soft prompt learning for the few-shot adaptation of the V&L model to a given downstream task. Similarly to their NLP counterparts [16, 17, 24], prompt engineering and learning has emerged as one of the most powerful techniques for adapting a V&L to new tasks. Initially, in [24], a set of manually-defined hand-engineered templates (or prompts) like a photo of a {clsname}, or a black and white photo of a{clsname}were passed through the text encoder of the V&L model to create class-specific weights for category clsname that can be used for zero-shot recognition. Fol-lowing research in NLP [16, 17], subsequent work [35, 36] has proposed replacing the manually picked templates with a sequence of learnable vectors, also coined soft prompts , which are fed as input to the text encoder along with the class name clsname . The soft prompts are learned from a few training examples with the entire V&L model kept frozen. The whole process can be seen as parameter effi-cient fine-tuning of the model on a small training dataset. 
However, a clearly identifiable problem with prompt learning is base class overfitting: while the accuracy on the classes used for training (base classes) significantly in-creases, the accuracy on unseen, during training, (novel) classes significantly drops. This is to some extent expected, as soft prompts are learned from few examples belonging to the base classes. Notably, on novel classes, direct, zero-shot recognition using hand-engineered prompts outperforms all existing soft prompt learning methods. Key idea: To alleviate base class overfitting, in this work, we propose a solution motivated by the following obser-vation: since prompt learning improves the accuracy on base classes, but prompt engineering is significantly bet-ter on novel classes, we propose to learn the soft prompts by adding a cross entropy text-to-text loss that enforces the learned prompts to be close, in embedding space, to the textual ones, thus exploiting the intrinsic information cap-tured by the text encoder. The proposed text-to-text loss en-ables language-only optimization for V&L model adaption This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 23232 for the first time. This is in contrast with prior soft-prompt learning methods that only capture V&L interactions. Key contributions: Based on the above, we propose a novel framework for soft prompt learning which we call Language-Aware Soft Prompting (LASP). Our main con-tributions within the LASP framework are as follows: • We propose, for the first time, language-only optimiza-tion for V&L model adaption. Specifically, we propose a novel text-to-text cross-entropy loss that maximizes the probability of the learned prompts to be correctly classi-fied with respect to the hand-engineered ones and show its effectiveness in terms of alleviating base-class overfitting. • To increase the representation capacity of the prompts, and inspired by grouped convolution and multi-head at-tention, we propose a grouped language-aware prompt representation where each group of prompts specializes to a different subset of the pre-defined manual templates. • We identify a visual-language misalignment introduced by prompt learning and LASP which impacts the gener-alization. More importantly, we propose a re-calibration mechanism based on (a) Layer Normalization fine-tuning and (b) learning a class-agnostic bias to address it. • Thanks to our language-only learning framework, we pro-pose training LASP with virtual classes by including, dur-ing training, class names for which no visual samples are available. Importantly, we show that this further increases the robustness of the learned prompts. Main results: Our methods set a new state-of-the-art for few-shot and zero-shot image classification on 11 datasets, significantly outperforming all soft prompting prior works. Importantly, we present, for the first time, a prompt learn-ing method that outperforms, for the majority of the test datasets (8 out of 11), the very strong baseline based on hand-crafted prompts and CLIP for the recognition of novel classes (i.e. zero-shot setting). |
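The text-to-text cross-entropy at the core of LASP, as described above, classifies the text embedding obtained from the learned soft prompt of each class against the embeddings of the hand-crafted prompts. A minimal sketch of that loss is shown below; the random tensors stand in for outputs of a frozen CLIP text encoder, and the temperature and dimensions are illustrative assumptions.

```python
# Minimal sketch of a language-aware (text-to-text) prompt loss: the text
# embedding produced from the learned soft prompt of class c should be
# classified as class c against the hand-crafted prompt embeddings.
import torch
import torch.nn.functional as F

num_classes, dim, temperature = 10, 512, 0.07

# Stand-ins for frozen CLIP text-encoder outputs:
# hand-crafted: E_text("a photo of a {clsname}") for every class (fixed targets),
# learned:      E_text([soft prompt tokens; clsname]) for every class.
handcrafted = F.normalize(torch.randn(num_classes, dim), dim=1)
soft_prompt_embeds = torch.randn(num_classes, dim, requires_grad=True)
learned = F.normalize(soft_prompt_embeds, dim=1)

# Text-to-text cross-entropy: the learned prompt of class c should be
# recognized as class c among all hand-crafted prompts.
logits = learned @ handcrafted.t() / temperature        # (num_classes, num_classes)
targets = torch.arange(num_classes)
text_to_text_loss = F.cross_entropy(logits, targets)

text_to_text_loss.backward()                            # gradients reach only the soft-prompt side
print(text_to_text_loss.item(), soft_prompt_embeds.grad.shape)
```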
Dong_Implicit_Identity_Leakage_The_Stumbling_Block_to_Improving_Deepfake_Detection_CVPR_2023 | Abstract In this paper, we analyse the generalization ability of bi-nary classifiers for the task of deepfake detection. We find that the stumbling block to their generalization is caused by the unexpected learned identity representation on images. Termed as the Implicit Identity Leakage, this phenomenon has been qualitatively and quantitatively verified among various DNNs. Furthermore, based on such understand-ing, we propose a simple yet effective method named the ID-unaware Deepfake Detection Model to reduce the influence of this phenomenon. Extensive experimental results demon-strate that our method outperforms the state-of-the-art in both in-dataset and cross-dataset evaluation. The code is available at https://github.com/megvii-research/CADDM. | 1. Introduction Recently, face-swap abusers use different face manip-ulation methods [17, 29, 29, 31, 65] to generate fake im-ages/videos. Those images/videos are then used to spread fake news, make malicious hoaxes, and forge judicial ev-idence, which have caused severe consequences. In order to alleviate such situations, an increasing number of deep-fake detection methods [12,14,42,59,60,63] have been pro-posed to filter out manipulated images/videos from massive online media resources, ensuring the filtered images/videos are genuine and reliable. Previous methods usually dealt with the task of deep-fake detection with binary classifiers [1,2,10,46,51]. These methods have achieved great detection accuracy in detect-ing the seen attacks learned in the training datasets ( i.e.the in-dataset evaluations). However, when confronted with media generated from newly-proposed deepfake methods (i.e.the cross-dataset evaluations), these methods often suf-fered from significant performance drops. Though plenty of researchers have designed effective methods [32, 74, 75] to *Equal contribution †Corresponding author Genuine IdentitiesFake Identities Target (ID-2) Source (ID-1) Genuine (Unseen)Fake (Unseen)Fake (ID-3) Genuine Fake Cross DatasetIn Dataset Identity Boundary ID-1 InformationID-2 InformationSource Image Target Image Fake Image New T arget Image (a) (b)Figure 1. The Implicit Identity Leakage phenomenon. Since the fake image retains some features of its source image, its identity should not be completely regarded as its target image. As a conse-quence, there exists an implicit gap between genuine identities and fake identities in the training set, which is unintentionally captured by binary classifiers. When confronted with images manipulated by unseen face-swap methods, the classifier tends to misuse iden-tity information and make false predictions. improve the generalization of deepfake detection models, it still lacks a thorough analysis of why binary classifiers fail to perform well on the cross-dataset evaluation. In this paper, given well-trained binary classifiers of deepfake detection, we find that the stumbling block for their generalization ability is caused by the mistakenly learned identity representation on images. As shown in Fig. 1 (a), a deepfake image is usually generated by replacing the face of the source image with the face of the target im-age. However, we notice that the procedure of synthesizing the fake image [5, 17, 29] may cause the information loss of ID representations. The identity of the fake image can not be considered as the same as either its target image or its source image. 
In particular, when the face of the target image is swapped back with the face of the fake image, it is noticeable that the identity of the target image is altered. In this way, as shown in Fig. 1 (b), when learning a This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 3994 deepfake detection model, there exists an implicit decision boundary between fake images and genuine images based on identities. During the training phase, binary classifiers may accidentally consider certain groups of identities as genuine identities and other groups of identities as fake identities. When tested on the cross-dataset evaluation, such biased representations may be mistakenly used by binary classifiers, causing false judgments based on the facial ap-pearance of images. In this paper, we have qualitatively and quantitatively verified this phenomenon (termed as the Implicit Identity Leakage) in binary classifiers of various backbones. Please see Sec. 3 and Sec. 5.2 for analyses. Furthermore, based on such understanding, we propose a simple yet effective method named the ID-unaware Deep-fake Detection Model to reduce the influence of Implicit Identity Leakage. Intuitively, by forcing models to only fo-cus on local areas of images, less attention will be paid to the global identity information. Therefore, we design an anchor-based detector module termed as the Artifact Detec-tion Module to guide our model to focus on the local artifact areas. Such a module is expected to detect artifact areas on images with multi-scale anchors, each of which is assigned a binary label to indicate whether the artifact exists. By lo-calizing artifact areas and classifying multi-scale anchors, our model learns to distinguish the differences between lo-cal artifact areas and local genuine areas at a finer level, thus reducing the misusage of the global identity information. Extensive experimental results show that our model accurately predicted the position of artifact areas and learned generalized artifact features in face manipulation algorithms, successfully outperforming the state-of-the-art. Contributions of the paper are summarized as follows: • We discover that deepfake detection models super-vised only by binary labels are very sensitive to the identity information of the images, which is termed as the Implicit Identity Leakage in this paper. • We propose a simple yet effective method termed as the ID-unaware Deepfake Detection Model to reduce the influence of the ID representation, successfully outperforming other state-of-the-art methods. • We conduct extensive experiments to verify the Im-plicit Identity Leakage phenomenon and demonstrate the effectiveness of our method. |
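A rough sketch of the anchor-based idea behind the Artifact Detection Module described above: per-scale convolutional heads emit one binary "artifact present" logit per anchor per location, trained with a binary cross-entropy loss. The channel sizes, anchor count and labeling scheme are illustrative assumptions, and the localization (box-regression) branch of the actual module is omitted.

```python
# Rough sketch of an anchor-based artifact detection head: per-scale conv
# heads predict a binary "artifact present" score for every anchor at every
# location. Channel sizes, anchor counts and the loss form are illustrative,
# not the paper's configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ArtifactDetectionHead(nn.Module):
    def __init__(self, in_channels=(256, 512), anchors_per_loc=3):
        super().__init__()
        self.cls_heads = nn.ModuleList(
            nn.Conv2d(c, anchors_per_loc, kernel_size=3, padding=1) for c in in_channels
        )

    def forward(self, feature_maps):
        # One binary logit per anchor: "does this local region contain a blending artifact?"
        return [head(f) for head, f in zip(self.cls_heads, feature_maps)]


head = ArtifactDetectionHead()
feats = [torch.randn(2, 256, 28, 28), torch.randn(2, 512, 14, 14)]   # multi-scale backbone features
logits = head(feats)

# Per-anchor binary labels would come from overlapping anchors with the
# ground-truth artifact areas (e.g. the blended face-swap region).
labels = [torch.randint(0, 2, l.shape).float() for l in logits]
loss = sum(F.binary_cross_entropy_with_logits(l, y) for l, y in zip(logits, labels))
print(loss.item())
```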
Feng_Learning_Federated_Visual_Prompt_in_Null_Space_for_MRI_Reconstruction_CVPR_2023 | Abstract Federated Magnetic Resonance Imaging (MRI) recon-struction enables multiple hospitals to collaborate dis-tributedly without aggregating local data, thereby protect-ing patient privacy. However, the data heterogeneity caused by different MRI protocols, insufficient local training data, and limited communication bandwidth inevitably impair global model convergence and updating. In this paper, we propose a new algorithm, FedPR, to learn federated vi-sual prompts in the null space of global prompt for MRI reconstruction. FedPR is a new federated paradigm that adopts a powerful pre-trained model while only learning and communicating the prompts with few learnable param-eters, thereby significantly reducing communication costs and achieving competitive performance on limited local data. Moreover, to deal with catastrophic forgetting caused by data heterogeneity, FedPR also updates efficient feder-ated visual prompts that project the local prompts into an approximate null space of the global prompt, thereby sup-pressing the interference of gradients on the server perfor-mance. Extensive experiments on federated MRI show that FedPR significantly outperforms state-of-the-art FL algo-rithms with <6%of communication costs when given the limited amount of local training data. | 1. Introduction Federated Magnetic Resonance Imaging (MRI) recon-struction enables multiple hospitals to train a powerful global model in a distributed manner without sharing pri-vate data [7, 9, 13]. In federated MRI, each client ( i.e., hospital) uses its local computing power, memory, and pri-vate data to train local models independently, while the server aggregates all the local models in each communica-*Corresponding author. Hospital1Hospital2HospitalKServer Insufficientlocal trainingdataLimited communication bandwidth ❶❷❸Catastrophic forgetting Local1Local2LocalKForgettingKnowledge PreservationareaGlobal Globalzz+1……Figure 1. Illustration of the three key issues in federated MRI. tion round and distributes the global model to each client again [26]. Existing federated MRI reconstruction techniques usu-ally improve federated learning (FL) by enhancing the ag-gregation process [9] and reducing the local parameter’s variance [5, 13]. Such techniques require a large commu-nication bandwidth and sufficient local training data. How-ever, federated MRI often faces two issues, ❶insufficient amount of local training data due to the difficulty of ac-quiring the ground-truth of MRI reconstruction [10, 11, 12] and❷limited communication bandwidth due to unbalanced regional development (see Fig. 1). To cope with the is-sue❶,pre-trained models have exhibited superior perfor-mance, and have shown to close the gap between feder-ated and centralized performance [3, 28]. However, since the model parameters need to be shared between the client and the server for updating, the large number of parame-ters of pre-trained models will result in a huge communi-cation cost . On contrary, prompt tuning has recently been suggested as a new fine-tuning paradigm by freezing the models and only learning a small number of learnable pa-rameters in the input space [15, 31, 32]. 
Benefited from pre-trained models, only parameter-efficient prompts are re-quired in learning and communication, and prompt tuning can be conducted with a limited number of samples, mak-ing it very appealing in tackling the above two issues ( i.e., This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 8064 ❶and❷) in federated MRI. Besides, there is another critical issue for federated MRI, i.e.,❸catastrophic forgetting , caused by data heterogene-ity due to the different imaging protocols of MRI scanners adopted by different clients [9, 39] (see Fig. 1). In the local update, the global model from the prior round tends to be overfitted to the local training data, leading to catas-trophic forgetting. Analogous to continual learning, sev-eral techniques have been presented by introducing proper regularization terms in local models [4, 33, 39] or retain-ing previously acquired knowledge by knowledge distilla-tion [14, 35, 38]. However, these strategies require seek-ing a balance between multiple losses and relying on proxy datasets. Instead, Adam-NSC mitigates catastrophic forget-ting in continual learning with a new network parameter update rule, i.e., forces the network update to lie in the ap-proximate null space of the input features of previous tasks at each layer [36]. However, the null space at feature level is built upon large local training data, which generally can-not be satisfied in federated MRI, i.e., the issue ❶. Taking the issues ❶and❸into account, we suggest to optimize lo-cal prompts in the approximate null space of global prompts instead of input features, thereby preventing the loss of pre-viously gained global knowledge. In a nutshell, we explore a new FL paradigm, i.e., FedPR, to learn federated visual prompts for MRI reconstruction. To begin with, we pre-train the model on a large-scale pub-lic dataset. Given limited amount of local training data, vi-sual prompt tuning is adopted to learn local prompts in a distributed manner. For the issues ❶and❷,federated vi-sual prompts are introduced to learn a strong global model, where only local and global prompts are learnable and com-municated. As for the issue ❸, we perform singular value decomposition (SVD) on the uncentered covariance matrix of global prompts to obtain the approximate null space . FedPR tunes only the local parameters in the null space of global prompts, thereby well preserving the prior knowl-edge of previous rounds and resulting in low communica-tion costs . In particular, FedPR achieves a >4.5dB gain in PSNR with less than 6%of communication costs. To sum up, our contributions are as follows: • We propose a federated visual prompt algorithm, FedPR, to solve the three key issues in federated MRI. By lever-aging powerful pre-trained models and freezing backbone networks in FL, only a small amount of parameters in the input space are trainable, thereby reducing communica-tion costs. • We explore how to alleviate catastrophic forgetting in FL while reducing communication costs. By optimiz-ing local parameters only in the null space of global prompts, FedPR well preserves the previously acquired global knowledge in each round, maintaining competitive performance with only a few local data.• We evaluate the performance of FedPR for federated MRI reconstruction. 
In comparison to the state-of-the-art FL methods, FedPR achieves superior performance in complex scenarios, e.g., less local data, lower communication costs, and faster convergence. |
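The null-space update described in this introduction — SVD of the uncentered covariance of the global prompt, followed by restricting local prompt updates to the resulting approximate null space — can be illustrated with the short sketch below. The singular-value threshold, prompt shapes and the way the projector is applied to a local update are assumptions for illustration.

```python
# Illustrative sketch of updating local prompts only in the approximate null
# space of the global prompt: take the SVD of the uncentered covariance of the
# global prompt, keep directions with (near-)zero singular values, and project
# the local update onto that subspace.
import torch


def null_space_projector(global_prompt, eps=1e-3):
    """global_prompt: (num_tokens, dim). Returns a (dim, dim) projector P
    that maps any update into the approximate null space."""
    cov = global_prompt.t() @ global_prompt            # uncentered covariance, (dim, dim)
    U, S, _ = torch.linalg.svd(cov)
    null_basis = U[:, S < eps * S.max()]               # directions the global prompt barely uses
    return null_basis @ null_basis.t()


dim = 64
global_prompt = torch.randn(8, dim)                     # 8 prompt tokens -> rank <= 8 << dim
P = null_space_projector(global_prompt)

local_update = torch.randn(8, dim)                      # e.g. a local gradient step on the prompts
projected_update = local_update @ P                     # constrained to the approximate null space

# Sanity check: the projected update is (numerically) orthogonal to the
# subspace spanned by the global prompt rows, so it does not disturb the
# previously acquired global knowledge.
print((global_prompt @ projected_update.t()).abs().max().item())
```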
Han_AutoAD_Movie_Description_in_Context_CVPR_2023 | Abstract The objective of this paper is an automatic Audio De-scription (AD) model that ingests movies and outputs AD in text form. Generating high-quality movie AD is challeng-ing due to the dependency of the descriptions on context, and the limited amount of training data available. In this work, we leverage the power of pretrained foundation mod-els, such as GPT and CLIP , and only train a mapping net-work that bridges the two models for visually-conditioned text generation. In order to obtain high-quality AD, we make the following four contributions: (i) we incorporate context from the movie clip, AD from previous clips, as well as the subtitles; (ii) we address the lack of training data by pretraining on large-scale datasets, where visual or con-textual information is unavailable, e.g. text-only AD with-out movies or visual captioning datasets without context; (iii) we improve on the currently available AD datasets, by removing label noise in the MAD dataset, and adding char-acter naming information; and (iv) we obtain strong results on the movie AD task compared with previous methods. | 1. Introduction That of all the arts, the most important for us is the cinema. Vladimir Lenin One of the long-term aims of computer vision is to un-derstand long-form feature films. There has been steady progress towards this aim with the identification of charac-ters by their face and voice [12, 15, 25, 29, 79], the recogni-tion of their actions and inter-actions [38,50,60,85], of their relationships [37], and 3D pose [61]. However, this is still a long way away from story understanding. Movie Audio De-scription (AD) , the narration describing visual elements in movies, provides a means to evaluate current movie under-standing capabilities. AD was developed to aid visually im-paired audiences, and is typically generated by experienced annotators. The amount of AD on the internet is growing ∗: equal contribution. †: also at Google Research Subtitles: > Can I buy you a drink? > Yeah I'd love one. Sit down. Target AD: He takes the seat opposite, then places his lighter on the table Context AD: As Karen stares groomly out of the window, a man approaches toying with a lighter. She turns her head, and finds Jack standing beside her. Figure 1. Movie audio description (AD) consists of sentences describing movies for the visually impaired. Note how it is heav-ily influenced by various types of context – the visual frames, the previous AD, and the subtitles of the movie. due to more societal support for visually impaired commu-nities and its inclusion is becoming an emerging legal re-quirement. AD differs from image or video captioning in several sig-nificant respects [67], bringing its own challenges. First, AD provides dense descriptions of important visual ele-ments over time . Second, AD is always provided on a sep-arate soundtrack to the original audio track and is highly complementary to it. It is complementary in two ways: it does not need to provide descriptions of events that can be understood from the soundtrack alone (such as dialogue and ambient sounds), and it is constrained in time to intervals that do not overlap with the dialogue. Third, unlike dense video captioning, AD aims at storytelling ; therefore, it typi-cally includes factors like a character’s name, emotion, and action descriptions. In this work, our objective is automatic AD generation – a model that takes continuous movie frames as input and outputs AD in text form. 
Specifically, we generate text given a temporal interval of an AD, and evaluate its qual-ity by comparing with the ground-truth AD. This is a rela-tively unexplored task in the vision community with previ-ous work targeting ActivityNet videos [88], a very different domain to long-term feature films with storylines, and the LSMDC challenge [68], where the descriptions and charac-ter names are treated separately. As usual, one of the challenges holding back progress This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 18930 is the lack of suitable training data. Paired image-text or video-text data that is available at scale, such as alt-text [63,72] or stock footage with captions [7], does not gen-eralize well to the movie domain [8]. However, collecting high-quality data for movie understanding is also difficult. Researchers have tried to hire human annotators to describe video clips [21, 36, 90] but this does not scale well. Movie scripts, books and plots have also been used as learning sig-nals [12, 75, 97] but they do not ground on vision closely and are limited in number. In this paper we address the AD and training data chal-lenges by – Spoiler Alert – developing a model that uses temporal context together with a visually conditioned gen-erative language model, while providing new and cleaner sources of training data. To achieve this, we leverage the strength of large-scale language models (LLMs), like GPT [64], and vision-language models, like CLIP [63], and integrate them into a video captioning pipeline that can be effectively trained with AD data. Our contributions are the following: (i) inspired by Clip-Cap [52] we propose a model that is effectively able to leverage both temporal context (from previously generated AD) and dialogue context (in particular the names of char-acters) to improve AD generation. This is done by bridg-ing foundation models with lightweight adapters to inte-grate both types of context; (ii) we address the lack of large-scale training data for AD by pretraining components of our model on partially missing data which are typically avail-able in large quantities e.g. text-only AD without movie frames, or visual captioning datasets without multiple sen-tences as context; (iii) we propose an automatic pipeline for collecting AD narrations at scale using speaker-based sep-aration; and finally (iv) we show promising results on au-tomatic AD, as seen from both qualitative and quantitative evaluations, and also achieve impressive zero-shot results on the LSMDC multi-description benchmark comparable to the finetuned state-of-the-art. |
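AutoAD, as described above, bridges frozen CLIP and GPT with a lightweight mapping network in the spirit of ClipCap. The sketch below shows one plausible form of such a visual-prefix mapper; the architecture, dimensions and prefix length are assumptions, and the frozen language model, context tokens and training loss are only indicated in comments.

```python
# Minimal sketch of a ClipCap-style mapping network: frozen CLIP frame
# features are mapped to a short sequence of "prefix" embeddings in the
# language model's input space, which would then be prepended to the token
# embeddings of the context (previous AD, subtitles, character names) before
# being fed to a frozen GPT-style decoder. Dimensions are illustrative.
import torch
import torch.nn as nn


class VisualPrefixMapper(nn.Module):
    def __init__(self, clip_dim=512, lm_dim=768, prefix_len=10, num_layers=2):
        super().__init__()
        self.prefix_len = prefix_len
        self.proj = nn.Linear(clip_dim, lm_dim)
        self.prefix_queries = nn.Parameter(torch.randn(prefix_len, lm_dim) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model=lm_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, clip_feats):            # clip_feats: (B, num_frames, clip_dim)
        x = self.proj(clip_feats)             # (B, num_frames, lm_dim)
        q = self.prefix_queries.expand(x.size(0), -1, -1)
        # Jointly encode learnable queries with frame features; keep the query slots as the prefix.
        out = self.encoder(torch.cat([q, x], dim=1))
        return out[:, : self.prefix_len]      # (B, prefix_len, lm_dim)


mapper = VisualPrefixMapper()
prefix = mapper(torch.randn(2, 8, 512))        # 2 clips, 8 CLIP frame features each
print(prefix.shape)                            # torch.Size([2, 10, 768])
# `prefix` would be concatenated with the LM's token embeddings of the textual
# context and trained with the usual next-token prediction loss on the AD text.
```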
Chen_MammalNet_A_Large-Scale_Video_Benchmark_for_Mammal_Recognition_and_Behavior_CVPR_2023 | Abstract Monitoring animal behavior can facilitate conservation efforts by providing key insights into wildlife health, popula-tion status, and ecosystem function. Automatic recognition of animals and their behaviors is critical for capitalizing on the large unlabeled datasets generated by modern video devices and for accelerating monitoring efforts at scale. However, the development of automated recognition systems is cur-rently hindered by a lack of appropriately labeled datasets. Existing video datasets 1) do not classify animals according to established biological taxonomies; 2) are too small to fa-cilitate large-scale behavioral studies and are often limited to a single species; and 3) do not feature temporally local-∗Equal contribution.ized annotations and therefore do not facilitate localization of targeted behaviors within longer video sequences. Thus, we propose MammalNet , a new large-scale animal behav-ior dataset with taxonomy-guided annotations of mammals and their common behaviors. MammalNet contains over 18K videos totaling 539 hours, which is ∼10 times larger than the largest existing animal behavior dataset [36]. It covers 17 orders, 69 families, and 173 mammal categories for animal categorization and captures 12 high-level animal behaviors that received focus in previous animal behavior studies. We establish three benchmarks on MammalNet: standard animal and behavior recognition, compositional low-shot animal and behavior recognition, and behavior detection. Our dataset and code have been made available at:https://mammal-net.github.io . This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 13052 | 1. Introduction Animal species are a core component of the world’s ecosystems. Through their behavior, animals drive diverse ecological processes, including seed dispersal, nutrient cy-cling, population dynamics, speciation, and extinction. Thus, understanding and monitoring the behaviors of animals and their interactions with their physical and social environ-ments is key to understanding the complexities of the world’s ecosystems, an objective that is especially critical now given the ongoing biodiversity crisis [12]. Modern sensors, including camera traps, drones, and smartphones, allow wildlife researchers, managers, and cit-izen scientists to collect video data of animal behavior on an unprecedented scale [43]. However, processing this data to generate actionable, timely insights remains a major chal-lenge. Manual human review and annotation of footage to identify and locate species and behavioral sequences of in-terest is time-intensive and does not scale to large datasets. Thus, methods for automated animal and behavioral recogni-tion could open the door to large-scale behavioral monitoring and speed up the time to produce usable data, thereby reduc-ing the time to implement management directives. The first essential step to creating such an AI system for animal and behavior recognition is curating a diverse, repre-sentative dataset that allows us to formalize these challenges as computer vision tasks and benchmark potential solutions. 
Most previous datasets either only cover a limited number of animal and behavior types [4, 38], or do not implement animal labeling [36], or include a small number of videos with insufficient environmental diversity [4,38,48]. Recently, a dataset named “Animal Kingdom” [36] was proposed to study animal actions and is currently the largest existing behavioral dataset, to the best of our knowledge. However, it only contains 4,310 videos totaling 50 hours, which might be insufficient for large-scale animal behavior studies consider-ing its diversity. Furthermore, the authors only focus on the recognition of atomic actions such as yawning, swimming, and flying. These basic actions cannot be easily matched to the higher-order behavioral states that are of primary interest to end users in animal management and conservation [6]. For example, a cheetah that is running may either be hunting, escaping, or playing. Finally, and most importantly, they do not support some important tasks such as animal recogni-tion and behavior detection which are essential for animal behavior understanding. To overcome the limitations of previous datasets, we pro-pose a new dataset called MammalNet . We specifically focus on mammals since they, unlike other animal classes such as birds or insects, usually have more diverse and distin-guishable behavior statuses. MammalNet is comprised of 539 hours of annotated videos, which is ∼10 times longer than that of the largest available animal behavior dataset. It contains 18,346 videos depicting 12 fundamental high-levelbehaviors from hundreds of mammal species. Importantly, it focuses on 12 higher-order animal behaviors that are the fo-cus of previous animal behavior literature [3,8,17,33], rather than atomic actions. MammalNet also categorizes animals according to the scientific taxonomy available in Wikipedia, as we show in Fig. 1; hence the dataset can be flexibly ex-panded in the future by following the same protocols. It includes videos of approximately 800 mammal species in 173 mammal categories. We establish three benchmarks inspired by ecological research needs -standard animal & behavior classification, compositional low-shot animal & behavior recognition, and behavior detection – to promote future study in animal behavior understanding. Through our experiments, we find that: (1) Correctly rec-ognizing the animals and behaviors is a challenging task even for the state-of-the-art models, especially for less-frequent animals. The top-1 per-class accuracy is 32.5 for animal recognition, 37.8 for behavior recognition, and 17.8 for their joint recognition in our best-performing model. (2) Behavior recognition for unseen animals can be transferred from ob-servations of other seen animals due to their similar features such as appearance and movement style, which can help in studies of animals with less available data. However, to achieve more accurate behavior recognition, having access to videos of the target animals and behaviors is still crucial. |
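The accuracies quoted above are per-class (macro) top-1 accuracies for animal, behavior and joint recognition. The generic sketch below illustrates that metric, encoding the joint case as (animal, behavior) pairs over the 12 high-level behaviors mentioned above; it is illustrative only and not the benchmark's official evaluation code.

```python
# Illustrative helper for macro (per-class) top-1 accuracy, applied to animal,
# behavior and joint (animal, behavior) recognition by encoding each pair as
# its own class. This is a generic sketch, not the official evaluation code.
import torch


def per_class_top1(pred, target):
    """pred, target: (N,) integer class ids. Returns the mean of per-class accuracies."""
    accs = []
    for c in target.unique():
        mask = target == c
        accs.append((pred[mask] == c).float().mean())
    return torch.stack(accs).mean().item()


# Toy example: 6 videos with ground-truth and predicted animal/behavior ids.
animal_gt = torch.tensor([0, 0, 1, 1, 2, 2])
behav_gt = torch.tensor([3, 1, 3, 0, 1, 1])
animal_pred = torch.tensor([0, 1, 1, 1, 2, 0])
behav_pred = torch.tensor([3, 1, 0, 0, 1, 1])

num_behaviors = 12                                      # the 12 high-level behaviors
joint_gt = animal_gt * num_behaviors + behav_gt         # encode (animal, behavior) pairs
joint_pred = animal_pred * num_behaviors + behav_pred

print(per_class_top1(animal_pred, animal_gt))   # animal recognition
print(per_class_top1(behav_pred, behav_gt))     # behavior recognition
print(per_class_top1(joint_pred, joint_gt))     # joint composition recognition
```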
Chen_Hand_Avatar_Free-Pose_Hand_Animation_and_Rendering_From_Monocular_Video_CVPR_2023 | Abstract We present HandAvatar, a novel representation for hand animation and rendering, which can generate smoothly compositional geometry and self-occlusion-aware texture. Specifically, we first develop a MANO-HD model as a high-resolution mesh topology to fit personalized hand shapes. Sequentially, we decompose hand geometry into per-bone rigid parts, and then re-compose paired geometry encod-ings to derive an across-part consistent occupancy field. As for texture modeling, we propose a self-occlusion-aware shading field (SelF). In SelF , drivable anchors are paved on the MANO-HD surface to record albedo information under a wide variety of hand poses. Moreover, directed soft occupancy is designed to describe the ray-to-surface relation, which is leveraged to generate an illumination field for the disentanglement of pose-independent albedo and pose-dependent illumination. Trained from monocu-lar video data, our HandAvatar can perform free-pose hand animation and rendering while at the same time achiev-ing superior appearance fidelity. We also demonstrate thatHandAvatar provides a route for hand appearance editing. Project website: https://seanchenxy.github. io/HandAvatarWeb . | 1. Introduction Human avatars [5,16,19,20,27,74] have been vigorously studied for years. However, there has been limited research that particularly focuses on hand avatars [9]. Due to the nature of distinctive properties ( e.g., serious self-occlusion and contact) between the hand and the rest of the human parts ( i.e., face, head, and body), it is essential to investigate a specialized representation tailored for modeling both the hand geometry and texture. Traditional pipeline tends to adopt texture maps and col-ored mesh for hand appearance modeling [7, 11, 12, 24, 43], but developing an elaborate personalized hand mesh and texture map usually requires expensive scan data [54] and artistic knowledge. Recently, the neural rendering tech-nique has gained raising attention, where neural radiance field (NeRF) [32] has been adapted to represent humans by predicting geometry and texture properties for an arbitrary This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 8683 3D point query [9, 10, 20, 25, 37, 39, 40, 48, 51, 58, 62–64, 69, 72, 75] . Compared to the conventional mesh-texture pipeline, NeRF is cheap in training data collection and su-perior in rendering fidelity. Despite the huge success of hu-man body and face modeling, neural rendering-based hand representation [9] remains much less explored. The hand is highly articulated such that the complex hand motion brings difficulties for neural rendering. Firstly, the deformation of hand geometry is hard to model. When coping with large and complex hand deformations ( e.g., self-contact), previ-ous skinning-based methods can hardly find accurate skin-ning weights for an arbitrary query [3, 6, 18, 19, 31, 35, 39, 47, 62, 74], while part-aware methods usually suffer from across-part inconsistency issue [17, 21, 30, 60]. Secondly, hand texture is hard to model because of the highly ar-ticulated structure. For example, articulated hand motion induces serious self-occlusion so that different hand poses lead to noticeable variations in illumination and shadow pat-terns. 
Illumination is important for realistic rendering, but we are not aware of any prior work in estimating illumina-tion caused by articulated self-occlusion. Motivated by the above challenges, we propose Han-dAvatar for animatable realistic hand rendering. Consid-ering different difficulties in geometry and texture model-ing, we follow the idea of inverse graphics [76] to disen-tangle hand geometry, albedo, and illumination. At first, we employ explicit mesh to depict hand shapes. However, the popular hand mesh model, i.e., MANO [44], only pro-vides a coarse mesh with 778 vertices, whose shape fitting capacity is limited. Therefore, we design a super-resolution version of MANO with 12,337 vertices and 24,608 faces, namely MANO-HD, which can fit personalized hand shapes with per-vertex displacements. Additionally, massive exist-ing MANO-annotated data can be seamlessly represented by MANO-HD. For introducing mesh-based hand shape to the volume rendering pipeline [32], we propose a local-pair occupancy field (PairOF), where every two part-level geometry encodings are reassembled according to physi-cal connections to yield an across-part consistent field. As for hand texture, we propose a self-occlusion-aware shad-ing field (SelF). SelF is comprised of an albedo field and an illumination field. The albedo field resorts to anchors that are uniformly paved on MANO-HD surfaces, each of which holds positional and albedo encodings to model a small hand region. The illumination field is to cope with articulated self-occlusion, where directed soft occupancy is designed to estimate illumination and shadow patterns. MANO-HD and PairOF are pre-trained with MANO pa-rameter annotations, then they cooperate with SelF in end-to-end training on monocular video data. Finally, with hand pose as the input, our HandAvatar can perform hand an-imation and rendering. We evaluate our approach on the InterHand2.6M dataset [34] and achieve high-fidelity ge-ometry and texture for free-pose hand animation. We also demonstrate that it is convenient to edit hand appearance in HandAvatar as shown in Fig. 1. Therefore, our main contri-butions are summarized as follows: • We propose a HandAvatar framework, the first method for neural hand rendering with self-occluded illumination. • We develop MANO-HD and a local-pair occupancy field that fit hand geometry with personalized shape details. • We propose a self-occlusion-aware shading field that can render hand texture with faithful shadow patterns. • Our framework is end-to-end developed for free-pose re-alistic hand avatars. Extensive evaluations indicate our method outperforms prior arts by a large margin. |
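A heavily simplified sketch of the local-pair occupancy idea described above: each per-bone part carries a geometry encoding, and occupancy at a query point is predicted from the re-composed encodings of a pair of physically connected parts together with the point's local coordinates. All sizes, the pairing inputs and the MLP are illustrative placeholders rather than the PairOF implementation.

```python
# Highly simplified sketch of a "local-pair" occupancy field: occupancy at a
# query point is predicted from the encodings of two physically connected
# per-bone parts plus the point's local coordinates. All sizes and the MLP are
# illustrative placeholders.
import torch
import torch.nn as nn


class PairOccupancyField(nn.Module):
    def __init__(self, num_parts=16, enc_dim=32):
        super().__init__()
        self.part_encodings = nn.Parameter(torch.randn(num_parts, enc_dim) * 0.01)
        self.mlp = nn.Sequential(
            nn.Linear(2 * enc_dim + 3, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),          # occupancy in [0, 1]
        )

    def forward(self, local_xyz, part_a, part_b):
        # local_xyz: (N, 3) query points in the local frame of the assigned part pair;
        # part_a, part_b: (N,) indices of two physically connected parts.
        pair = torch.cat([self.part_encodings[part_a], self.part_encodings[part_b]], dim=-1)
        return self.mlp(torch.cat([pair, local_xyz], dim=-1)).squeeze(-1)


field = PairOccupancyField()
xyz = torch.randn(5, 3)
occ = field(xyz, torch.tensor([0, 1, 2, 3, 4]), torch.tensor([1, 2, 3, 4, 5]))
print(occ)   # per-point occupancy values
```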
Cheng_VindLU_A_Recipe_for_Effective_Video-and-Language_Pretraining_CVPR_2023 | Abstract The last several years have witnessed remarkable progress in video-and-language (VidL) understanding. However, most modern VidL approaches use complex and specialized model architectures and sophisticated pretrain-ing protocols, making the reproducibility, analysis and com-parisons of these frameworks difficult. Hence, instead of proposing yet another new VidL model, this paper conducts a thorough empirical study demystifying the most important factors in the VidL model design. Among the factors that we investigate are (i) the spatiotemporal architecture design, (ii) the multimodal fusion schemes, (iii) the pretraining ob-jectives, (iv) the choice of pretraining data, (v) pretraining and finetuning protocols, and (vi) dataset and model scal-ing. Our empirical study reveals that the most important de-sign factors include: temporal modeling, video-to-text mul-timodal fusion, masked modeling objectives, and joint train-ing on images and videos. Using these empirical insights, we then develop a step-by-step recipe, dubbed VINDLU, for effective VidL pretraining. Our final model trained us-ing our recipe achieves comparable or better than state-of-the-art results on several VidL tasks without relying on ex-ternal CLIP pretraining. In particular, on the text-to-video retrieval task, our approach obtains 61.2% on DiDeMo, and 55.0% on ActivityNet, outperforming current SOTA by 7.8% and 6.1% respectively. Furthermore, our model also obtains state-of-the-art video question-answering re-sults on ActivityNet-QA, MSRVTT-QA, MSRVTT-MC and TVQA. Our code and pretrained models are publicly avail-able at: https://github.com/klauscc/VindLU . | 1. Introduction Fueled by the growing availability of video-and-text data [2, 8, 9, 24, 41, 43, 48] and advances in the Transformer model design [12, 54], the last few years have witnessed incredible progress in video-and-language (VidL) under-standing [26,31,40,64,75,80]. Since the initial transformer-based models for VidL, such as ClipBERT [26], the text-to-video retrieval accuracy has improved from 22.0%,22.4%, and 21.3%on MSR-VTT [65], DiDeMo [1], and Activi-A Recipe for Video-Language PretrainingStep 1: Add temporal attention to an image transformer.Step 2: Add a multimodal fusion encoder.Video EncoderText EncoderMultimodal Fusion EncoderStep 3: Add video-text matching, masked video modeling, and masked language modeling pretraining objectives.Step 4: Add images for joint image-video pretraining.Step 5: Add more frames for finetuning & inference.VideosImagesMVMMLMVTCVTM Step 6: Scale up the pretraining data and the model size.……Input Video FramesDataModelStarting Ingredients:VideosImage EncoderText EncoderVTCInput Video FramesLegend:Existing ComponentsNewly Added ComponentsImage EncoderTemporal Attention+Video EncoderFigure 1. We present a recipe for effective video-language pre-training. Our recipe starts with image and text transformer en-coders trained on video-text pairs using a contrastive objective (VTC). We then progressively add more components to our frame-work while also studying the importance of each component along the way. Our final recipe includes the steps for (1) adding temporal attention, (2) injecting a multimodal fusion encoder, (3) incorpo-rating masked modeling pretraining objectives, (4) jointly training on images and videos, (5) using more frames during fine-tuning and inference, and lastly, (6) scaling up the data and the model. 
tyNet [23] to >45% R@1 accuracy on all three of these datasets, thus, marking an extraordinary relative improve-ment of more than 100% in less than 2years. At the same time, the model architectures and pretrain-ing/finetuning protocols used by modern VidL approaches This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 10739 MethodModel Design Pretraining Data #Frames Temporal ModelingMultimodal FusionPretraining ObjectivesDataset Size Modality PT FT Eval UniVL [39] Joint Att. [5] 2-layer TR VTC+VTM+MLM+MFM+LM HT 136M V 48 48 48 VideoCLIP [64] 1D-Conv+TR ✗ VTC HT 136M V 32 32 32 ClipBert [26] Mean Pooling BERT MLM+VTM COCO+VG 0.2M I 1 16 16 Frozen [2] Temp. Attn [5] ✗ ITC C5M 5M I+V 1→44 4 MERLOT [75] Joint Attn RoBERTa VTC+MLM+FOM YT 180M V 16 16 16 VIOLET [16] Window Attn [37] BERT VTC+VTM+MLM+MVM YT+C5M 185M I+V 4 5 5 MV-GPT [47] Joint Attn 2-layer TR MLM+LM HT 136M V ---ALL-in-one [55] Token Rolling [55] ViT VTC+VTM+MLM HT+W2 172M V 3 3 9 Singularity [25] Late Temp. Attn 3-layer TR VTC+VTM+MLM C17M 17M I+V 1→44 12 LA VENDER [32] Window Attn [37] BERT MLM C17M+IN 30M I+V 4 5 5 OmniVL [57] Temp. Attn 2×BERT VTC+VTM+LM C17M 17M I+V 1→88 8 ATP [6] ✗ ✗ VTC CLIP 400M I 1 16 16 CLIP4Clip [40] Late TR ✗ VTC CLIP 400M I 1 12 12 ECLIPSE [34] Late TR ✗ VTC CLIP 400M I+A 1 32 32 CLIP2TV [18] CLIP 4-layer TR VTC+VTM CLIP 400M I 1 12 12 CLIP-Hitchhiker [3] Late Attn ✗ VTC CLIP 400M I 1 16 120 CLIP-ViP [66] Prompt Attn [66] ✗ VTC CLIP 500M I+V 1→1212 12 TR: Transformer; Late : Late fusion; Attn : Attention. V: Video; I: Image; A: Audio; 1→4:1frame for stage-1 training and 4frames for stage-2. VTC : Video-text contrastive; VTM : Video-text matching; MLM : Masked language modeling; MFM : Masked frame modeling; LM: Language modeling. HT: HowTo100M [41]; C5M, C17M : see supplementary; YT: YT-Temporal [75]; W2: WebVid-2M [2]; COCO : [33], VG: Visual Genome [24]; IN: An internal dataset. Table 1. An overview of the existing VidL methods. Significant differences exist among these methods, making it challenging to reproduce, analyze and compare these methods. This motivates us to answer the question “What are the key steps to build a highly performant VidL framework” by investigating various components in the VidL framework design. have become significantly more complex and specialized over the last several years. As a result, it is increasingly dif-ficult to reproduce, analyze and compare most recent VidL frameworks. For example, several recent approaches [25, 32, 66] propose new architectures, new initialization strate-gies, pretraining objectives, pretraining datasets, and opti-mization protocols. Due to the large computational cost of ablating all these factors, it is difficult to understand which components are critical to the success of the pro-posed frameworks. Similarly, the key success factors of many other recent VidL approaches [6, 16, 32, 57] are also often obfuscated, which hinders future research. In Table 1, we illustrate the complexity of modern VidL frameworks by dissecting them along multiple dimensions, including temporal modeling schemes, multimodal fusion modules, pretraining objectives, the source of the pretrain-ing data, and the number of frames for pretraining, finetun-ing and inference. Based on this analysis, we observe that there exist significant differences among these VidL meth-ods. 
Unfortunately, it’s not clear which differences are im-portant for the overall VidL performance and which are not. The recent METER [13] work studies a subset of these components in the context of image-language modeling. However, their analysis is limited to images and, thus, ig-nores various aspects related to video modeling, such as spatiotemporal architecture design, video pretraining ob-jectives, video pretraining data, and video-specific finetun-ing/evaluation protocols such as the number of frames. As we will show in our experimental section, many of the find-ings presented in the image-based studies [13] do not hold for video. Beyond image-based analysis, we note that the concurrent work in [17] conducts an empirical study of VidL transformers. However, unlike our work, which cov-ers a broad range of VidL design factors, their analysis is focused predominantly on masked visual modeling objec-tives, which we also study in this work. Our main objective in this work is to answer the ques-tion “What are the key steps needed to build a highly per-formant VidL framework?” To do this, we conduct a thor-ough empirical study that demystifies the importance of var-ious VidL design choices and ultimately leads to a VidL framework that achieves state-of-the-art results on various VidL benchmarks. Using our empirical insights, we then develop a step-by-step recipe for effective VidL pretrain-ing. Our recipe, dubbed V INDLU (VIdeo aND Language Understanding), starts from a standard Vision Transformer (ViT) [12] and uses a simple progressive expansion scheme where at each step, we investigate a particular aspect of VidL framework design (e.g., architecture, pretraining ob-jective, pretraining data, etc.), and choose the best perform-ing option. In particular, we study the following VidL de-sign components: (i) the spatiotemporal architecture design, 10740 70.471.273.270.2Image TransformerTemporal ModelingMean PoolingLate Temp. Att.Temp. Att.Temp. Conv50.4MultimodalFusionText-to-VideoBidirectionalVideo-to-Text49.850.254.656.755.460.360.362.1MVMMLM66.567.567.5168.570.270.269.669.514 frames148 frames48MVM + MLM64.8 30507072.425M17M73.6+ 6.3%+ 3.6%+ 7.2% + 2.2%+ 1.2%PretrainingObjectivesPretrainingData# Pretraining FramesImageVideoImage + VideoMulti-stagePretraining+ 2.7% Scaling Up Data (ViTbase + BERTbase ) Averaged Acc (%) of R{1,5,10}on MSR-VTT, DiDeMo, ActivityNet-CaptionsScaling Up Model(5M Corpus)BERTlargeViTlarge+ 1.0%+ 3.0%48 framesFigure 2. We progressively expand an image transformer baseline (e.g., ViT) to a performant video-and-language (VidL) model. We do so by investigating the importance of many VidL design choices such as (i) temporal modeling, (ii) multimodal fusion modules, (iii) pretraining objectives, (iv) the source of the pretraining data, (v) the number of pre-training frames, (vi) multi-stage pretraining, and (vii) scaling of the data and model. Each bar depicts an aver-age text-to-video retrieval Recall@1,5,10 accuracy across MSR-VTT [65], DiDeMo [65], ActivityNet [23]. The red bars denote the best-performing design choice in each subgroup. Our final VidL framework, dubbed V INDLU, outperforms our initial image Transformer baseline by 23.2% . The figure was inspired by [36]. (ii) the multimodal fusion schemes, (iii) the pretraining ob-jectives, (iv) the source of the pretraining data, (v) fine-tuning/inference protocols, and (vi) scaling of the data and model. We present our recipe in Fig. 1. 
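Step 1 of the recipe — adding temporal attention to an image transformer — can be sketched as below: a temporal self-attention layer attends across frames at each spatial location and is added residually on top of per-frame ViT tokens. The dimensions and the zero-initialized output projection are illustrative choices, not necessarily those used in the paper.

```python
# Minimal sketch of "adding temporal attention to an image transformer":
# given per-frame patch tokens from a ViT, a temporal self-attention layer
# attends across frames at each spatial location, with a residual connection;
# the spatial ViT blocks then run on each frame as before.
import torch
import torch.nn as nn


class TemporalAttention(nn.Module):
    def __init__(self, dim=768, num_heads=12):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)
        nn.init.zeros_(self.proj.weight)      # start as identity so the pretrained image model is preserved
        nn.init.zeros_(self.proj.bias)

    def forward(self, x):                      # x: (B, T, N, D) = batch, frames, patches, dim
        B, T, N, D = x.shape
        t = x.permute(0, 2, 1, 3).reshape(B * N, T, D)   # attend over the T frames at each patch location
        t = self.norm(t)
        t, _ = self.attn(t, t, t)
        t = self.proj(t).reshape(B, N, T, D).permute(0, 2, 1, 3)
        return x + t                           # residual connection


temporal = TemporalAttention()
tokens = torch.randn(2, 4, 196, 768)           # 2 clips, 4 frames, 14x14 patch tokens
print(temporal(tokens).shape)                  # torch.Size([2, 4, 196, 768])
```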
The key findings of our empirical study include: • Contrary to the conclusions of several prior works [6,25] that a single frame is sufficient for VidL modeling, we discover that temporal modeling using multiple frames leads to a significant improvement over the spatial-only baselines ( +6% averaged video retrieval accuracy on MSR-VTT, DiDeMo, and ActivityNet). • Multimodal fusion module incorporating video features into text is critical for good VidL performance ( +3.6% ). Conversely, adding text features to the video representa-tion is not useful.• Masked language modeling objective significantly im-proves performance ( +6.2% ) while masked video mod-eling objective brings an additional +1% improvement. • Pretraining jointly on images and videos is beneficial (+2.7% ). Also, contrary to prior methods [2,57], we find multi-stage training unnecessary. • Pretraining with a small number of frames (e.g., 4) is suf-ficient and it can significantly reduce the computational cost of large-scale pretraining. Pretraining with more frames does not lead to a substantial performance boost. • Compared to many recent CLIP-based [45] VidL ap-proaches [3, 40, 66], our recipe achieves comparable or even better performance with 20×less pretraining data. Our final model, trained using our V INDLU recipe, achieves state-of-the-art results on several VidL bench-marks. Specifically, on the video retrieval task, our method achieves 46.5%, 61.2%, 55.0% R@1 accuracy on MSR-VTT, DiDeMo, and ActivityNet outperforming the state-of-the-art by 7.8% and6.1% on the latter two datasets. Also, our approach obtains state-of-the-art video question-answering results on ActivityNet-QA, MSRVTT-QA, MSRVTT-MC and TVQA, where we achieve top-1 ac-curacy of 44.7%, 44.6%, 97.1%, and 79.0% respectively. We want to make it clear that, in this paper, we do not claim technical novelty behind any of the individual de-sign choices (i.e., different subsets of these design choices were already used by prior VidL methods as shown in Ta-ble 1). Instead, our main contribution, which we believe might be equally if not more important than proposing yet another specialized or obfuscated VidL model, is to inves-tigate these components collectively and validate their im-portance. We also do not claim superiority over previous methods (despite better results). Due to the implementa-tion complexities of each method, fair and complete com-parisons are difficult and not our intent. Instead, we hope that our recipe for building an effective VidL framework will provide useful insights for future research on VidL un-derstanding. To enable the VidL community to build on our work, we release our code and pretrained models. |
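One of the key findings above is that fusing video features into the text stream matters. The sketch below shows a generic video-to-text fusion layer in which text tokens self-attend and then cross-attend to video tokens; layer sizes and the pre-norm arrangement are illustrative assumptions rather than the exact fusion encoder used in VINDLU.

```python
# Sketch of video-to-text multimodal fusion: text tokens cross-attend to video
# tokens inside a fusion layer, so video evidence is injected into the text
# representation (and not the other way around). Layer sizes are illustrative.
import torch
import torch.nn as nn


class VideoToTextFusionLayer(nn.Module):
    def __init__(self, dim=768, num_heads=12):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.n1, self.n2, self.n3 = nn.LayerNorm(dim), nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, text, video):
        # Text self-attention.
        t = self.n1(text)
        text = text + self.self_attn(t, t, t)[0]
        # Cross-attention: queries come from text, keys/values from video tokens.
        t = self.n2(text)
        text = text + self.cross_attn(t, video, video)[0]
        return text + self.ffn(self.n3(text))


fusion = VideoToTextFusionLayer()
text_tokens = torch.randn(2, 32, 768)           # e.g. token embeddings of the caption
video_tokens = torch.randn(2, 4 * 196, 768)     # flattened spatiotemporal tokens from the video encoder
print(fusion(text_tokens, video_tokens).shape)  # torch.Size([2, 32, 768])
```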