title (stringlengths 28–135) | abstract (stringlengths 0–12k) | introduction (stringlengths 0–12k) |
---|---|---|
Huang_Tri-Perspective_View_for_Vision-Based_3D_Semantic_Occupancy_Prediction_CVPR_2023 | Abstract Modern methods for vision-centric autonomous driving perception widely adopt the bird's-eye-view (BEV) representation to describe a 3D scene. Despite its better efficiency than the voxel representation, it has difficulty describing the fine-grained 3D structure of a scene with a single plane. To address this, we propose a tri-perspective view (TPV) representation which accompanies BEV with two additional perpendicular planes. We model each point in the 3D space by summing its projected features on the three planes. To lift image features to the 3D TPV space, we further propose a transformer-based TPV encoder (TPVFormer) to obtain the TPV features effectively. We employ the attention mechanism to aggregate the image features corresponding to each query in each TPV plane. Experiments show that our model trained with sparse supervision effectively predicts the semantic occupancy for all voxels. We demonstrate for the first time that using only camera inputs can achieve comparable performance with LiDAR-based methods on the LiDAR segmentation task on nuScenes. Code: https://github.com/wzzheng/TPVFormer. | 1. Introduction Perceiving the 3D surroundings accurately and comprehensively plays an important role in the autonomous driving system. Vision-based 3D perception recently emerges as a promising alternative to LiDAR-based perception to effectively extract 3D information from 2D images. Though lacking direct sensing of depth information, vision-based models empowered by surrounding cameras demonstrate promising performance on various 3D perception tasks such as depth estimation [17, 42], semantic map reconstruction [1, 19, 48], and 3D object detection [27, 30, 46]. The core of 3D surrounding perception lies in how to effectively represent a 3D scene. Conventional methods split the 3D space into voxels and assign each voxel a vector to represent its status. Despite its accuracy, the vast number of voxels poses a great challenge to computation and requires specialized techniques like sparse convolution [13]. As the information in outdoor scenes is not isotropically distributed, modern methods collapse the height dimension and mainly focus on the ground plane (bird's-eye-view) where information varies the most [20, 26, 28, 31, 35, 46, 48]. They implicitly encode the 3D information of each object in the vector representation in each BEV grid. Though more efficient, BEV-based methods perform surprisingly well on the 3D object detection task [28, 31]. This is because 3D object detection only demands predictions of coarse-level bounding boxes for commonly seen objects such as cars and pedestrians. [Figure 2. An overview of our method for 3D semantic occupancy prediction. Taking camera images as inputs, the proposed TPVFormer only uses sparse LiDAR semantic labels for training but can effectively predict the semantic occupancy for all voxels.]
However, objects with various 3D structures can be encountered in real scenes and it is difficult (if not impossible) to encode all of them using a flattened vector. Therefore, a safer and more robust vision-centric autonomous driving system requires a more comprehensive and fine-grained understanding of the 3D surroundings. Still, it remains unknown how to generalize BEV to model fine-grained 3D structures while preserving its efficiency and detection performance. In this paper, we advance in this direction and propose a tri-perspective view (TPV) representation to describe a 3D scene. Motivated by recent advances in explicit-implicit hybrid scene representations [7, 8], we generalize BEV by accompanying it with two perpendicular planes to construct three cross-planes perpendicular to each other. Each plane models the 3D surroundings from one view, and combining them provides a comprehensive description of the 3D structure. Specifically, to obtain the feature of a point in the 3D space, we first project it into each of the three planes and use bilinear interpolation to obtain the feature for each projected point. We then sum the three projected features as the comprehensive feature of the 3D point. The TPV representation is thus able to describe the 3D scene at an arbitrary resolution and produces different features for different points in the 3D space. We further propose a transformer-based encoder (TPVFormer) to effectively obtain the TPV features from 2D images. We first perform image cross-attention between TPV grid queries and the corresponding 2D image features to lift 2D information to the 3D space. We then perform cross-view hybrid-attention among the TPV features to enable interactions among the three planes. To demonstrate the superiority of TPV, we formulate a practical yet challenging task for vision-based 3D semantic occupancy prediction, where only sparse LiDAR semantic labels are provided for training and predictions for all voxels are required for testing, as shown in Figure 2. However, as no benchmark is provided for this challenging setting, we only perform a qualitative analysis on it, but provide a quantitative evaluation on two proxy tasks: LiDAR segmentation (sparse training, sparse testing) on nuScenes [4] and 3D semantic scene completion (dense training, dense testing) on SemanticKITTI [2]. For both tasks, we only use RGB images as inputs. For LiDAR segmentation, our model uses the LiDAR data only for point queries to compute evaluation metrics. Visualization results show that TPVFormer produces consistent semantic voxel occupancy predictions with only sparse point supervision during training, as shown in Figure 1. We also demonstrate for the first time that our vision-based method achieves comparable performance with LiDAR-based methods on LiDAR segmentation. 2. Related Work Voxel-based Scene Representation: Obtaining an effective representation for a 3D scene is the basic procedure for 3D surrounding perception. One direct way is to discretize the 3D space into voxels and assign a vector to represent each voxel [49, 51]. The ability to describe fine-grained 3D structures makes the voxel-based representation favorable for 3D semantic occupancy prediction tasks including LiDAR segmentation [12, 29, 40, 44, 45, 51] and 3D scene completion [5, 10, 23, 38, 43]. Though they have dominated the 3D segmentation task [44], they still lag behind BEV-based methods on 3D detection performance [26].
Despite the success of voxel-based representations in LiDAR-centric surrounding perception, only a few works have explored voxel-based representations for vision-centric autonomous driving [5, 25]. MonoScene [5] first backprojects image features to all possible positions in the 3D space along the optical ray to obtain the initial voxel representation and further processes it using a 3D UNet. However, it is still challenging to generalize it to 3D perception with multi-view images due to the inefficiency of voxel representations. This motivates us to explore more efficient and expressive ways to describe the fine-grained 3D structure of a scene. BEV-based Scene Representation: The vast number of voxels poses a great challenge to the computational efficiency of voxel-based methods. Considering that the height dimension contains less information than the other two dimensions, BEV-based methods implicitly encode the height information in each BEV grid for a more compact representation of scenes [22]. Recent studies in BEV-based perception focus on how to effectively transform features from the image space to the BEV space [20, 26, 27, 35, 36, 48]. One line of works explicitly predicts a depth map for each image and utilizes it to project image features into the 3D space, followed by BEV pooling [20, 26, 28, 31, 35, 36, 48]. Another line of works employs BEV queries to implicitly assimilate information from image features using the cross-attention mechanism [21, 27]. BEV-based perception achieves great success on vision-centric 3D detection from multi-view images [26], demonstrating comparable performance to LiDAR-centric methods. Yet, it is difficult to apply BEV to 3D semantic occupancy prediction, which requires a more fine-grained description of the 3D space. Implicit Scene Representation: Recent methods have also explored implicit representations to describe a scene. They learn a continuous function that takes as input the 3D coordinate of a point and outputs the representation of this point [32–34]. Compared with explicit representations like voxels and BEV, implicit representations usually share the advantages of arbitrary-resolution modeling and computation-efficient architectures [6, 11, 37]. These advantages enable them to scale to larger and more complex scenes with more fine-grained descriptions. In particular, our work is inspired by recent advances in hybrid explicit-implicit representations [7, 8]. They explicitly inject spatial information into the continuous mapping of implicit representations. Therefore, they share the computation-efficient architecture of implicit representations and the better spatial awareness of explicit representations. Still, they mainly focus on small-scale complex scenes for 3D-aware image rendering. To the best of our knowledge, we are the first to use implicit representations to model outdoor scenes for 3D surrounding perception in autonomous driving. 3. Proposed Approach 3.1. Generalizing BEV to TPV Autonomous driving perception requires both an expressive and an efficient representation of the complex 3D scene. Voxel representation [25, 40, 45] describes a 3D scene with dense cubic features $V \in \mathbb{R}^{H \times W \times D \times C}$, where $H, W, D$ are the spatial resolution of the voxel space and $C$ denotes the feature dimension. A random point located at $(x, y, z)$ in the real world maps to its voxel coordinates $(h, w, d)$ through a one-to-one correspondence.
Therefore, the voxel representation preserves the dimensionality of the real world and offers sufficient expressiveness with appropriate $H, W, D$. However, the storage and computation complexity of voxel features is proportional to $O(HWD)$, making it challenging to deploy them in real-time onboard applications. As a popular alternative, the BEV representation [21, 26, 27, 31] uses a 2D feature map $B \in \mathbb{R}^{H \times W \times C}$ to encode the top view of a scene. Different from the voxel counterpart, the point at $(x, y, z)$ is projected to its BEV coordinates $(h, w)$ using only the positional information from the ground plane, regardless of the z-axis. Each feature sampled from $B$ corresponds to a pillar region covering the full range of the z-axis in the real world. [Figure 3. Comparisons of the proposed TPV representation with the voxel and BEV representations. While BEV is more efficient than the voxel representation, it discards the height information and cannot comprehensively describe a 3D scene.] Although BEV greatly reduces the storage and computation burden to $O(HW)$, completely omitting the z-axis has an adverse effect on its expressiveness. To address this, we propose a Tri-Perspective View (TPV) representation which is capable of modeling the 3D space at full scale without suppressing any axes and avoiding cubic complexity, as illustrated in Figure 3. Formally, we learn three axis-aligned orthogonal TPV planes:

$$T = [T^{HW}, T^{DH}, T^{WD}], \quad T^{HW} \in \mathbb{R}^{H \times W \times C},\ T^{DH} \in \mathbb{R}^{D \times H \times C},\ T^{WD} \in \mathbb{R}^{W \times D \times C}, \tag{1}$$

which represent the top, side and front views of a 3D scene, respectively. $H, W, D$ denote the resolution of the three planes and $C$ is the feature dimension. Intuitively, a complex scene, when examined from different perspectives, can be better understood because these perspectives may provide complementary clues about the scene. Point Querying Formulation. Given a query point at $(x, y, z)$ in the real world, the TPV representation aggregates its projections on the top, side and front views in order to get a comprehensive description of the point. To elaborate, we first project the point onto the TPV planes to obtain the coordinates $[(h, w), (d, h), (w, d)]$, sample the TPV planes at these locations to retrieve the corresponding features $[t_{h,w}, t_{d,h}, t_{w,d}]$, and aggregate them to generate the final $f_{x,y,z}$:

$$t_{i,j} = \mathcal{S}(T, (i, j)) = \mathcal{S}(T, \mathcal{P}(x, y)), \tag{2}$$
$$f_{x,y,z} = \mathcal{A}(t_{h,w}, t_{d,h}, t_{w,d}), \tag{3}$$

where the sampling function $\mathcal{S}$ and the aggregation function $\mathcal{A}$ are implemented with bilinear interpolation and summation, respectively, and each projection function $\mathcal{P}$ simply performs scaling on the two relevant coordinates, since the TPV planes are aligned with the real-world axes. Voxel Feature Formulation. The TPV planes, when repeated along their respective orthogonal directions and summed up, construct a full-scale 3D feature space similar to the voxel feature space, but with a storage and computation complexity of only $O(HW + DH + WD)$, which is an order of magnitude lower than the voxel counterpart. Compared with BEV, as the three planes in TPV are perpendicular to each other, point features along the orthogonal direction of one plane are diversified by features sampled from the other two planes. Moreover, a grid feature in each TPV plane is only responsible for view-specific information of the corresponding pillar region rather than encoding the complete information as in BEV.
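The point-querying step in Eqs. (2)-(3) reduces to three bilinear lookups and a sum. The PyTorch-style sketch below illustrates it; the tensor layouts, the `grid_sample` usage, and the normalization of coordinates to [-1, 1] are our assumptions for illustration, not the authors' released implementation.

```python
# Hypothetical sketch of TPV point querying (Eqs. (2)-(3)): project a 3D point onto the
# three planes, bilinearly sample each plane, and sum the three features.
# Tensor shapes and coordinate normalization are illustrative assumptions.
import torch
import torch.nn.functional as F

def query_tpv(t_hw, t_dh, t_wd, points, ranges):
    """t_hw: (1, C, H, W), t_dh: (1, C, D, H), t_wd: (1, C, W, D) TPV planes.
    points: (N, 3) real-world (x, y, z); ranges: ((x_min, x_max), (y_min, y_max), (z_min, z_max))."""
    (x0, x1), (y0, y1), (z0, z1) = ranges
    # Scale each coordinate to [-1, 1] as required by grid_sample (projection P is just scaling).
    u = 2 * (points[:, 0] - x0) / (x1 - x0) - 1  # maps to the H axis
    v = 2 * (points[:, 1] - y0) / (y1 - y0) - 1  # maps to the W axis
    w = 2 * (points[:, 2] - z0) / (z1 - z0) - 1  # maps to the D axis

    def sample(plane, grid_x, grid_y):
        # grid_sample expects a (1, N, 1, 2) grid with (x, y) ordering = (width, height).
        grid = torch.stack([grid_x, grid_y], dim=-1).view(1, -1, 1, 2)
        return F.grid_sample(plane, grid, align_corners=False).squeeze(-1).squeeze(0).t()  # (N, C)

    f_hw = sample(t_hw, v, u)  # top view: indexed by (h, w) <- (x, y)
    f_dh = sample(t_dh, u, w)  # side view: indexed by (d, h) <- (z, x)
    f_wd = sample(t_wd, w, v)  # front view: indexed by (w, d) <- (y, z)
    return f_hw + f_dh + f_wd  # aggregation A = summation
```

Because the three lookups are independent, the same routine can serve points at any continuous resolution, which is exactly what lets TPV describe the scene at arbitrary granularity.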
To sum up, the TPV representation generalizes BEV from a single top view to complementary and orthogonal top, side and front views, and is able to offer a more comprehensive and fine-grained understanding of the 3D surroundings while remaining efficient. 3.2. TPVFormer For vision-centric autonomous driving perception, a 2D backbone is often employed to obtain image features before feeding them into a 3D encoder. We present a transformer-based TPV encoder (TPVFormer) to lift image features to TPV planes through the attention mechanism. Overall Structure: In TPVFormer, we introduce TPV queries, image cross-attention (ICA) and cross-view hybrid-attention (CVHA) to enable effective generation of TPV planes, as shown in Fig. 4. In fact, TPV queries and TPV planes refer to the same set of feature vectors defined in (1). Each TPV query $t \in T$ is a grid-cell feature belonging to one of the three planes and is used to encode view-specific information from the corresponding pillar region. Cross-view hybrid-attention enables direct interactions among TPV queries from the same or different TPV planes in order to gather contextual information. Inside image cross-attention, TPV queries aggregate visual information from image features through deformable attention. We further construct two kinds of transformer blocks: the hybrid-cross-attention block (HCAB) and the hybrid-attention block (HAB). Composed of both CVHA and ICA attention, the HCAB block is employed in the first half of TPVFormer to effectively query visual information from image features. Following the HCAB blocks, the HAB block contains only CVHA attention and specializes in contextual information encoding. Finally, we build TPVFormer by stacking $N_1$ HCAB blocks and $N_2$ HAB blocks. TPV Queries: Although referring to the same list of 2D features defined in (1), TPV queries and TPV planes are used in the attention and 3D-representation contexts, respectively. Each TPV query maps to a 2D grid-cell region of size $s \times s\ \mathrm{m}^2$ in the corresponding view, and further to a 3D pillar region extending from the view in the perpendicular direction. In our pipeline, TPV queries are first enhanced with raw visual information from image features in HCAB blocks, and then refined with contextual clues from other queries in HAB blocks. As for implementation, we initialize TPV queries as learnable parameters. Image Cross-Attention: In TPVFormer, we use image cross-attention to lift multi-scale and possibly multi-camera image features to the TPV planes. Considering the high-resolution nature of TPV queries ($\sim 10^4$ queries) and multiple image feature maps ($\sim 10^5$ pixels each), it is unfeasible to compute full-scale vanilla cross-attention between them. We thus employ efficient deformable attention [14, 27, 50] to implement image cross-attention. We take the local receptive field as an inductive bias when sampling the reference points. Specifically, for a TPV query $t_{h,w}$ located at $(h, w)$ in the top plane, we first calculate its coordinates $(x, y)$ in the top view in the real world through the inverse projection function $\mathcal{P}^{-1}_{HW}$. Then we uniformly sample $N^{\mathrm{ref}}_{HW}$ reference points for the query $t_{h,w}$ along the orthogonal direction of the plane:

$$(x, y) = \mathcal{P}^{-1}_{HW}(h, w) = \Big(\big(h - \tfrac{H}{2}\big) \times s,\ \big(w - \tfrac{W}{2}\big) \times s\Big), \tag{4}$$
$$\mathrm{Ref}^{w}_{h,w} = \{(x, y, z_i)\}_{i=1}^{N^{\mathrm{ref}}_{HW}}, \tag{5}$$

where $\mathrm{Ref}^{w}_{h,w}$ denotes the set of reference points in world coordinates for query $t_{h,w}$. A similar procedure is repeated for all TPV queries; note that the number of reference points $N^{\mathrm{ref}}$ may change across planes because of the different ranges of the axes.
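As a concrete illustration of Eqs. (4)-(5), the snippet below generates the world-frame reference points for a single top-plane query; the grid-cell size, z-range, and number of reference points are illustrative choices, not the paper's values. Their projection into the images follows in Eq. (6).

```python
# Hypothetical sketch of reference-point generation for a top-plane TPV query (Eqs. (4)-(5)).
# The cell size s, the z-range, and n_ref are illustrative choices, not the paper's values.
import numpy as np

def top_plane_reference_points(h, w, H, W, s, z_min, z_max, n_ref):
    """Return an (n_ref, 3) array of world-frame reference points for query t_{h,w}."""
    # Inverse projection P^{-1}_{HW}: recover the (x, y) center of the query's pillar (Eq. (4)).
    x = (h - H / 2) * s
    y = (w - W / 2) * s
    # Sample n_ref heights uniformly along the pillar (the direction orthogonal to the plane).
    z = np.linspace(z_min, z_max, n_ref)
    return np.stack([np.full(n_ref, x), np.full(n_ref, y), z], axis=-1)

# Example: a 200 x 200 top plane with 0.5 m cells, 4 reference heights between -3 m and 3 m.
ref_w = top_plane_reference_points(h=120, w=80, H=200, W=200, s=0.5, z_min=-3.0, z_max=3.0, n_ref=4)
```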
After deriving the reference points for $t_{h,w}$, we need to project them into pixel coordinates in order to sample the image feature maps:

$$\mathrm{Ref}^{p}_{h,w} = \mathcal{P}_{\mathrm{pix}}(\mathrm{Ref}^{w}_{h,w}) = \mathcal{P}_{\mathrm{pix}}(\{(x, y, z_i)\}), \tag{6}$$

where $\mathrm{Ref}^{p}_{h,w}$ is the set of reference points in pixel coordinates for query $t_{h,w}$, and $\mathcal{P}_{\mathrm{pix}}$ is the perspective projection function determined by the camera extrinsics and intrinsics. Note that we may have multiple cameras in different directions, which will generate a set $\{\mathrm{Ref}^{p,j}_{h,w}\}_{j=1}^{N_c}$, where $N_c$ denotes the number of cameras. Since not all cameras can capture the reference points of query $t_{h,w}$, we can further reduce computation by removing invalid sets from $\{\mathrm{Ref}^{p,j}_{h,w}\}_{j=1}^{N_c}$ if none of the reference points falls onto the image captured by the corresponding camera. The final step is to generate offsets and attention weights through two linear layers applied on $t_{h,w}$ and produce the updated TPV queries by summing up the sampled image features weighted by their attention weights:

$$\mathrm{ICA}(t_{h,w}, I) = \frac{1}{|\mathcal{N}^{\mathrm{val}}_{h,w}|} \sum_{j \in \mathcal{N}^{\mathrm{val}}_{h,w}} \mathrm{DA}(t_{h,w}, \mathrm{Ref}^{p,j}_{h,w}, I_j), \tag{7}$$

where $\mathcal{N}^{\mathrm{val}}_{h,w}$, $I_j$, $\mathrm{DA}(\cdot)$ denote the index set of valid cameras, the image features from the jt |
Du_Minimizing_the_Accumulated_Trajectory_Error_To_Improve_Dataset_Distillation_CVPR_2023 | Abstract Model-based deep learning has achieved astounding successes due in part to the availability of large-scale real-world data. However, processing such massive amounts of data comes at a considerable cost in terms of computation, storage, training and the search for good neural architectures. Dataset distillation has thus recently come to the fore. This paradigm involves distilling information from large real-world datasets into tiny and compact synthetic datasets such that processing the latter ideally yields similar performance as the former. State-of-the-art methods primarily rely on learning the synthetic dataset by matching the gradients obtained during training between the real and synthetic data. However, these gradient-matching methods suffer from the so-called accumulated trajectory error caused by the discrepancy between the distillation and subsequent evaluation. To mitigate the adverse impact of this accumulated trajectory error, we propose a novel approach that encourages the optimization algorithm to seek a flat trajectory. We show that the weights trained on synthetic data are robust against the accumulated error perturbations with the regularization towards the flat trajectory. Our method, called Flat Trajectory Distillation (FTD), is shown to boost the performance of gradient-matching methods by up to 4.7% on a subset of images of the ImageNet dataset with higher-resolution images. We also validate the effectiveness and generalizability of our method with datasets of different resolutions and demonstrate its applicability to neural architecture search. Code is available at https://github.com/AngusDujw/FTD-distillation. | 1. Introduction Modern deep learning has achieved astounding successes in achieving ever better performances in a wide range of [Figure 1 residue: ConvNet on CIFAR-100, IPC=10; test loss curves L_Test(f), L_Test(f*) over epochs 0–50; legend: MTT in Distillation, MTT in Evaluation, Ours in Evaluation.] |
Ci_GFPose_Learning_3D_Human_Pose_Prior_With_Gradient_Fields_CVPR_2023 | Abstract Learning a 3D human pose prior is essential to human-centered AI. Here, we present GFPose, a versatile framework to model plausible 3D human poses for various applications. At the core of GFPose is a time-dependent score network, which estimates the gradient on each body joint and progressively denoises the perturbed 3D human pose to match a given task specification. During the denoising process, GFPose implicitly incorporates pose priors in gradients and unifies various discriminative and generative tasks in an elegant framework. Despite its simplicity, GFPose demonstrates great potential in several downstream tasks. Our experiments empirically show that 1) as a multi-hypothesis pose estimator, GFPose outperforms existing SOTAs by 20% on the Human3.6M dataset; 2) as a single-hypothesis pose estimator, GFPose achieves comparable results to deterministic SOTAs, even with a vanilla backbone; 3) GFPose is able to produce diverse and realistic samples in pose denoising, completion and generation tasks.1 | 1. Introduction Modeling the 3D human pose is a fundamental problem in human-centered applications, e.g., augmented reality [34, 43], virtual reality [1, 42, 69], and human-robot collaboration [9, 15, 36]. Considering the biomechanical constraints, natural human postures lie on a low-dimensional manifold of the physical space. Learning a good prior distribution over valid human poses not only helps to discriminate the infeasible ones but also enables sampling of rich and diverse human poses. The learned prior has a wide spectrum of use cases with regard to recovering the 3D human pose under different conditions, e.g., monocular images with depth ambiguities and occlusions [8, 29, 63], inertial measurement unit (IMU) signals with noise [72], or even partial sensor inputs [23, 65]. (1) Project page: https://sites.google.com/view/gfpose/ [Figure 1. GFPose learns the 3D human pose prior from 3D human pose datasets and represents it as gradient fields for various applications, e.g., multi-hypothesis 3D pose estimation from 2D keypoints, correcting noisy poses, completing missing joints, and generating natural poses from noise.] Previous works explore different ways to model human pose priors. Pioneers [16, 27] attempt to explicitly build joint-angle limits based on biomechanics. Unfortunately, the complete configuration of pose-dependent joint-angle constraints for the full body is unknown. With the recent advances in machine learning, a rising line of works seeks to learn human pose priors from data. Representative methods include modeling the distribution of plausible poses with a GMM [5], VAE [46], GAN [12] or neural implicit functions [61]. These methods learn an independent probabilistic model or energy function to characterize the data distribution $p_{\mathrm{data}}(x)$. They usually require an additional optimization process to introduce specific task constraints when applied to downstream tasks. Therefore, extra efforts such as balancing prior terms and different task objectives are inevitable. Some methods jointly learn the pose priors and downstream tasks via adversarial training [24, 26] or explicit task conditions [35, 48, 50], i.e., $p_{\mathrm{data}}(x|c)$. These methods seamlessly integrate priors into learning-based frameworks, but limit their use to a single given task.
In this work, we take a new perspective to learn a versatile 3D human pose prior model for general purposes. Different from previous works that directly model the plausible pose distribution $p_{\mathrm{data}}(x)$, we learn the score (the gradient of a log-likelihood) of a task-conditional distribution $\nabla_x \log p_{\mathrm{data}}(x|c)$, where $c$ is the task-specific condition, e.g., for 3D human pose estimation, $c$ could be 2D images or detected 2D poses, and $x$ represents plausible 3D human poses. In this way, we can jointly encode the human pose prior and the task specification into the score, instead of treating the learned prior model as an ad-hoc plugin in an optimization process. To further enhance flexibility and versatility, we introduce a condition-masking strategy, where task conditions are randomly masked to varying degrees during training. Different masks correspond to different task specifications. Thus we can handle various pose-related tasks in a unified learning-based framework. We present GFPose, a general framework for pose-related tasks. GFPose learns a time-dependent score network $s_\theta(x, t \mid c)$ to approximate $\nabla_x \log p_{\mathrm{data}}(x|c)$ on a large-scale 3D human pose dataset [18] via Denoising Score Matching (DSM) [17, 54–57, 59, 62]. Specifically, for any valid human pose $x \in \mathbb{R}^{J \times 3}$ in Euclidean space, we sample a time-dependent noise $z(t)$ from a prior distribution, perturb $x$ to get the noisy pose $\tilde{x}$, then train $s_\theta(\tilde{x}, t \mid c)$ to learn the score towards the valid pose (a minimal sketch of this training step is given after the contribution list below). Intuitively, the score points in the direction of increasing pose plausibility. To handle a wider range of downstream tasks, we adopt a hierarchical condition-masking strategy in training. Concretely, we randomly mask out the task condition $c$ by sampling masks from a hierarchy of candidate masks. The candidate masks cover different levels of randomness, including the human level, body-part level, and joint level. This helps the model to build the spatial relation between different body joints and parts, and makes GFPose directly applicable to different task settings at test time (Figure 1), e.g., recovering the 3D pose from severe occlusions when $c$ is a partially masked 2D pose, or unconditional pose generation when $c$ is fully masked ($c = \varnothing$). We evaluate GFPose on various downstream tasks, including monocular 3D human pose estimation, pose denoising, completion, and generation. Empirical results on the H3.6M benchmark [18] show that: 1) GFPose outperforms SOTA in both multi-hypothesis and single-hypothesis pose estimation tasks [63] and demonstrates stronger robustness to severe occlusions in pose completion [30]. Notably, under the single-hypothesis setting, GFPose can achieve comparable pose estimation performance to previous deterministic SOTA methods [11, 47, 70] that learn a one-to-one mapping. To the best of our knowledge, this is the first time that a probabilistic model has achieved such performance. 2) As a pose generator, GFPose can produce diverse and realistic samples that can be used to augment existing datasets. We summarize our contributions as follows: • We introduce GFPose, a novel score-based generative framework to model plausible 3D human poses. • We design a hierarchical condition-masking strategy to enhance the versatility of GFPose and make it directly applicable to various downstream tasks.
• We demonstrate that GFPose outperforms SOTA on multiple tasks under a simple unified framework. |
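The denoising-score-matching objective described above can be written compactly: perturb a clean pose with time-dependent Gaussian noise and regress the score of that perturbation under a randomly masked condition. The sketch below is a generic DSM training step with joint-level condition masking; the noise schedule, network interface, and masking granularity are illustrative assumptions, not GFPose's exact design.

```python
# Hypothetical denoising-score-matching step with condition masking (not GFPose's exact code).
# Assumes a score network s_theta(x_noisy, t, c) -> score with the same shape as x.
import torch

def dsm_step(score_net, x, c, sigma_min=0.01, sigma_max=1.0):
    """x: (B, J, 3) clean 3D poses; c: (B, J, 2) conditions (e.g., detected 2D keypoints)."""
    B = x.shape[0]
    t = torch.rand(B, device=x.device)                           # t ~ U(0, 1)
    sigma = sigma_min * (sigma_max / sigma_min) ** t              # geometric noise schedule (assumed)
    sigma = sigma.view(B, 1, 1)
    z = torch.randn_like(x)
    x_noisy = x + sigma * z                                       # perturb the pose
    # Randomly mask the condition at the joint level (one of several possible granularities).
    keep = (torch.rand(B, x.shape[1], 1, device=x.device) > 0.3).float()
    c_masked = c * keep
    target = -z / sigma                                           # score of the Gaussian perturbation
    pred = score_net(x_noisy, t, c_masked)
    # Weight by sigma^2 so every noise level contributes comparably.
    loss = ((sigma ** 2) * (pred - target) ** 2).mean()
    return loss
```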
Chahine_An_Image_Quality_Assessment_Dataset_for_Portraits_CVPR_2023 | Abstract Year after year, the demand for ever-better smartphone photos continues to grow, in particular in the domain of portrait photography. Manufacturers thus use perceptual quality criteria throughout the development of smartphone cameras. This costly procedure can be partially replaced by automated learning-based methods for image quality assessment (IQA). Due to its subjective nature, it is necessary to estimate and guarantee the consistency of the IQA process, a characteristic lacking in the mean opinion scores (MOS) widely used for crowdsourcing IQA. In addition, existing blind IQA (BIQA) datasets pay little attention to the difficulty of cross-content assessment, which may degrade the quality of annotations. This paper introduces PIQ23, a portrait-specific IQA dataset of 5116 images of 50 predefined scenarios acquired by 100 smartphones, covering a high variety of brands, models, and use cases. The dataset includes individuals of various genders and ethnicities who have given explicit and informed consent for their photographs to be used in public research. It is annotated by pairwise comparisons (PWC) collected from over 30 image quality experts for three image attributes: face detail preservation, face target exposure, and overall image quality. An in-depth statistical analysis of these annotations allows us to evaluate their consistency over PIQ23. Finally, we show through an extensive comparison with existing baselines that semantic information (image context) can be used to improve IQA predictions. The dataset along with the proposed statistical analysis and BIQA algorithms are available: https://github.com/DXOMARK-Research/PIQ2023 | 1. Introduction Social media has made smartphones a vital tool for connecting with people worldwide. Visual media, particularly portrait photography, has become a crucial aspect of sharing content on these platforms. Portrait photography serves numerous applications (e.g., advertisements, social media) and use cases (e.g., anniversaries, weddings). Capturing a high-quality portrait is a complex exercise that demands careful consideration of multiple factors, such as scene semantics, compositional rules, image quality, and other subjective properties [46]. Smartphone manufacturers strive to deliver the best visual quality while minimizing production costs to rival professional photography. Achieving this requires implementing complex tuning and optimization protocols to calibrate image quality in smartphone cameras. These cameras introduce sophisticated non-linear processing techniques such as multi-image fusion or deep learning-based image enhancement [55], resulting in a combination of authentic (realistic) camera distortions. This makes traditional objective quality assessment [4, 16, 32, 40], which models digital cameras as linear systems, unreliable [9]. Therefore, in addition to objective measurements, the tuning process also includes perceptual evaluations where cameras are assessed by image quality experts. This procedure requires shooting and evaluating thousands of use cases, which can be costly, time-consuming, and challenging to reproduce. Automatic image quality assessment (IQA) methods that try to mimic human perception of quality have been around for many years, in order to help in the tuning process [14, 36, 37, 39, 48, 57, 60, 64].
Blind IQA (BIQA), in particular, is a branch of IQA where image quality is evaluated without the need for undistorted reference images. Learning-based BIQA methods [15, 24, 25, 27, 52, 59, 62, 67, 69] have shown good performance on authentic camera distortion datasets [9, 13, 21, 56, 61, 70], annotated by subjective assessment of image quality. Annotating these datasets is considered an ill-posed problem, as the subjective opinions are not deterministic, making it challenging to use BIQA methods as accurate quality measures. Therefore, there is a need to develop a quantitative and formal framework to evaluate and compare subjective judgments in an objective manner. In this paper, we rely on pairwise comparisons performed by image quality experts along a fixed and relevant set of attributes. Multiple attributes, including target exposure, dynamic range, color, sharpness, noise, and artifacts, define image quality [3]. [Figure 1. (a) Scenes from the PIQ23 dataset. (b) Examples of the region of interest (ROI) used for different attribute comparisons. Top: overall quality; we use a resized version of the full image. Bottom: details & target exposure; we use an upscaled face area.] Portrait images require additional considerations, such as skin tone, bokeh effect, face detail rendering, and target exposure on the face, which fall under the scope of portrait quality assessment (PQA) [40]. To the best of our knowledge, the problem of assessing the quality of a portrait image has received limited attention. Most of the work on face IQA [49] has been directed towards improving face recognition systems and not treated as an independent topic. As far as we know, our paper introduces the first-of-its-kind smartphone portrait quality dataset. We hope to create a new domain of application for IQA and to push forward smartphone portrait photography. Our contributions are the following: • A new dataset, PIQ23, consisting of 5116 single-portrait images, taken using 100 smartphone devices from 14 brands, and distributed across 50 different natural scenes (scene = fixed visual content). We have addressed the ethical challenges involved in creating such a dataset by obtaining from each individual depicted in the dataset a signed and informed agreement, making it the only IQA dataset with such legal and ethical characteristics, as far as we know. • A large IQA experiment controlled in a laboratory environment with fixed viewing conditions. Using pairwise comparisons (PWC) and following carefully designed guidelines, we gather opinions for each scene, from over 30 image quality experts (professional photographers and image quality experts), on three attributes related to portrait quality: face detail preservation, face target exposure, and overall portrait image quality. • An in-depth statistical analysis method that allows us to evaluate the precision and consistency of the labels as well as the difficulty of the IQA task (a generic example of scoring pairwise comparisons is sketched after this list). This is particularly important given the fact that image quality labels are heavily affected by subjectivity, disagreement between observers, and the number of annotations.
• An extensive comparison between multiple BIQA models and a simple new method combining scene semantic information with quality features to strengthen image quality prediction on PIQ23. |
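As context for how pairwise comparisons can be turned into per-image scores, here is a minimal Bradley-Terry-style sketch; this is a generic illustration of PWC scaling, not necessarily the statistical analysis used for PIQ23, and the comparison counts are made up.

```python
# Generic Bradley-Terry scaling of pairwise comparisons (illustrative; not PIQ23's exact method).
# wins[i, j] = number of times image i was preferred over image j within one scene.
import numpy as np

def bradley_terry(wins, n_iter=200, eps=1e-9):
    n = wins.shape[0]
    scores = np.ones(n)
    total = wins + wins.T                     # number of comparisons between each pair
    for _ in range(n_iter):
        # Standard MM update: s_i <- (total wins of i) / sum_j total_ij / (s_i + s_j)
        denom = (total / (scores[:, None] + scores[None, :] + eps)).sum(axis=1)
        scores = wins.sum(axis=1) / (denom + eps)
        scores /= scores.sum()                # fix the arbitrary scale
    return np.log(scores + eps)               # log-scores are convenient as quality estimates

# Example with three images: image 0 is preferred most often.
wins = np.array([[0, 8, 9], [2, 0, 6], [1, 4, 0]], dtype=float)
print(bradley_terry(wins))
```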
Agro_Implicit_Occupancy_Flow_Fields_for_Perception_and_Prediction_in_Self-Driving_CVPR_2023 | Abstract A self-driving vehicle (SDV) must be able to perceive its surroundings and predict the future behavior of other traffic participants. Existing works either perform object detection followed by trajectory forecasting of the detected objects, or predict dense occupancy and flow grids for the whole scene. The former poses a safety concern as the number of detections needs to be kept low for efficiency reasons, sacrificing object recall. The latter is computationally expensive due to the high dimensionality of the output grid, and suffers from the limited receptive field inherent to fully convolutional networks. Furthermore, both approaches employ many computational resources predicting areas or objects that might never be queried by the motion planner. This motivates our unified approach to perception and future prediction that implicitly represents occupancy and flow over time with a single neural network. Our method avoids unnecessary computation, as it can be directly queried by the motion planner at continuous spatio-temporal locations. Moreover, we design an architecture that overcomes the limited receptive field of previous explicit occupancy prediction methods by adding an efficient yet effective global attention mechanism. Through extensive experiments in both urban and highway settings, we demonstrate that our implicit model outperforms the current state of the art. For more information, visit the project website: https://waabi.ai/research/implicito. | 1. Introduction The goal of a self-driving vehicle is to take sensor observations of the environment and offline evidence such as high-definition (HD) maps and execute a safe and comfortable plan towards its destination. Meanwhile, it is important to produce interpretable representations that explain why the vehicle performed a certain maneuver, particularly if a dangerous event were to occur. To satisfy this, traditional autonomy stacks [2, 6, 9, 14, 15, 20, 32, 38, 39] break down the problem into 3 tasks: perception, motion forecasting and motion planning. Perception leverages sensor data to localize the traffic participants in the scene. Motion forecasting outputs the distribution of their future motion, which is typically multimodal. Finally, motion planning is tasked with deciding which maneuver the SDV should execute. [Figure 1. Left: Explicit approaches predict whole-scene occupancy and flow on a spatio-temporal grid. Right: Our implicit approach only predicts occupancy and flow at queried continuous points, focusing on what matters for downstream planning.] Most autonomy systems are object-based, which involves detecting the objects of interest in the scene. To do so, object detectors threshold predicted confidence scores to determine which objects are in the scene, a trade-off between precision and recall. Furthermore, object-based motion forecasting methods are limited to predicting only a handful of sample trajectories or parametric distributions with closed-form likelihoods for tractability, as they scale linearly with the number of objects and must run online in the vehicle. This causes information loss that could result in unsafe situations [30], e.g., if a solid object is below the detection threshold, or the future behavior of the object is not captured by the simplistic future trajectory estimates.
In recent years, object-free approaches [3, 12, 29, 30] that model the presence, location and future behavior of all agents in the scene via a non-parametric distribution have emerged to address the shortcomings of object-based models. Object-free approaches predict occupancy probability and motion for each cell in a spatio-temporal grid, directly from sensor data. More concretely, the spatio-temporal grid is a 3-dimensional dense grid with two spatial dimensions representing the bird's-eye view, and a temporal dimension from the current observation time to a future horizon of choice. All dimensions are quantized at regular intervals. In this paradigm, no detection confidence thresholding is required and the distribution over future motion is much more expressive, enabling the downstream motion planner to plan with consideration of low-probability objects and futures. Unfortunately, object-free approaches are computationally expensive as the grid must be very high-dimensional to mitigate quantization errors. However, most of the computation and memory employed in object-free methods is unnecessary, as motion planners only need to cost a set of spatio-temporal points around candidate trajectories, and not a dense region of interest (RoI). We refer the reader to Fig. 1 for an illustration. This motivates our approach, ImplicitO, which utilizes an implicit representation to predict both occupancy probability and flow over time directly from raw sensor data and HD maps. This enables downstream tasks such as motion planning to efficiently evaluate a large collection of spatio-temporal query points in parallel, focusing on areas of interest where there are potential interactions with the self-driving vehicle. We design an architecture that overcomes the limited receptive field of fully convolutional explicit architectures [12, 24, 29, 30] by adding an efficient yet effective global attention mechanism. In particular, we leverage deformable convolutions [8] and cross attention [37] to focus on a compact set of distant regions per query, giving the predictions a global context. This is useful as dynamic objects can move at very high speeds, particularly on the highway. For instance, when predicting in-lane occupancy 3 seconds into the future on a road where the speed limit is 30 m/s, the attention can look approximately 90 meters back along the lane to find the corresponding sensor evidence. Extensive experiments in both urban and highway scenarios show that our object-free implicit approach outperforms the two prevalent paradigms in the literature on the task of occupancy-flow prediction: (i) object-based methods that first perform object detection to localize a finite set of objects in the scene, and then predict their future trajectory distribution; (ii) object-free explicit methods that predict dense spatio-temporal grids of occupancy and motion. |
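To make the contrast with dense grids concrete, here is a minimal sketch of an implicit occupancy-flow decoder that is evaluated only at continuous (x, y, t) query points; the BEV feature-map interpolation and the small MLP head are illustrative assumptions, not the paper's architecture.

```python
# Hypothetical implicit occupancy/flow decoder queried at continuous (x, y, t) points.
# A BEV feature map stands in for the scene encoding; the MLP head is an illustrative choice.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImplicitOccFlowHead(nn.Module):
    def __init__(self, feat_dim=128, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),               # [occupancy logit, flow_x, flow_y]
        )

    def forward(self, bev_feats, queries):
        """bev_feats: (1, C, H, W) scene features; queries: (N, 3) of (x, y, t) in [-1, 1]."""
        grid = queries[None, :, None, :2]                       # sample features at the (x, y) location
        feats = F.grid_sample(bev_feats, grid, align_corners=False)
        feats = feats.squeeze(-1).squeeze(0).t()                # (N, C)
        out = self.mlp(torch.cat([feats, queries], dim=-1))     # condition on (x, y, t) as well
        occ = torch.sigmoid(out[:, :1])                         # occupancy probability
        flow = out[:, 1:]                                       # 2D flow in BEV
        return occ, flow

# Usage: evaluate only the points a motion planner cares about, not a dense grid.
head = ImplicitOccFlowHead()
occ, flow = head(torch.randn(1, 128, 200, 200), torch.rand(500, 3) * 2 - 1)
```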
Bhatia_CCuantuMM_Cycle-Consistent_Quantum-Hybrid_Matching_of_Multiple_Shapes_CVPR_2023 | Abstract Jointly matching multiple, non-rigidly deformed 3D shapes is a challenging, NP-hard problem. A perfect matching is necessarily cycle-consistent: following the pairwise point correspondences along several shapes must end up at the starting vertex of the original shape. Unfortunately, existing quantum shape-matching methods do not support multiple shapes, let alone cycle consistency. This paper addresses these open challenges and introduces the first quantum-hybrid approach for 3D shape multi-matching; in addition, it is also cycle-consistent. Its iterative formulation is admissible to modern adiabatic quantum hardware and scales linearly with the total number of input shapes. Both these characteristics are achieved by reducing the N-shape case to a sequence of three-shape matchings, the derivation of which is our main technical contribution. Thanks to quantum annealing, high-quality solutions with low energy are retrieved for the intermediate NP-hard objectives. On benchmark datasets, the proposed approach significantly outperforms multi-shape-matching extensions of a previous quantum-hybrid two-shape matching method and is on par with classical multi-matching methods. Our source code is available at 4dqv.mpi-inf.mpg.de/CCuantuMM/. | 1. Introduction Recently, there has been a growing interest in applying quantum computers in computer vision [3, 20, 32]. Such quantum computer vision methods rely on quantum annealing (QA), which allows solving NP-hard quadratic unconstrained binary optimisation problems (QUBOs). While having to formulate a problem as a QUBO is rather inflexible, QA is widely expected to eventually solve QUBOs at speeds not achievable with classical hardware. Thus, casting a problem as a QUBO promises to outperform more unrestricted formulations in terms of tractable problem sizes and attainable accuracy through sheer speed. A recent example of such a problem is shape matching, where the goal is to estimate correspondences between two shapes. [Figure 1. Our quantum-hybrid method matches all 100 shapes of the FAUST collection [4] with guaranteed cycle consistency (white arrows). Here, we visualise the matchings via texture transfer between all shapes. Our method scales linearly in the number of shapes. See the full figure in the supplement.] Accurate shape matching is a core element of many computer vision and graphics applications (i.e., texture transfer and statistical shape modelling). If non-rigid deformations are allowed, even pairwise matching is NP-hard, leading to a wide area of research that approximates this problem, as a recent survey shows [15]. Matching two shapes is one of the problems that was shown to benefit from quantum hardware: Q-Match [39] iteratively updates a subset of point correspondences using QA. Specifically, its cyclic α-expansion allows parametrising changes to permutation matrices without relaxations. The question we ask in this work is: how can we design a multi-shape matching algorithm in the style of Q-Match that has the same benefits? As we show in the experiments, where we introduce several naïve multi-shape extensions of Q-Match, this is a highly non-trivial task. Despite tweaking them, our proposed method significantly outperforms them. If N > 2 shapes have to be matched, the computational complexity of naïve exhaustive pairwise matching increases quadratically with N, which does not scale to large N.
Furthermore, these pairwise matchings can easily turn out to be inconsistent with each other, thereby violating cycle consistency. [Figure 2. We match N shapes by iteratively matching triplets.] For example, chaining the matchings $P_{XY}$ from shape $X$ to $Y$ and $P_{YZ}$ from $Y$ to $Z$ can give very different correspondences between $X$ and $Z$ than the direct, pairwise matching $P_{XZ}$ of $X$ and $Z$: $P_{XZ} \neq P_{XY} P_{YZ}$. (We apply the permutation matrix $P_{XY}$ to the one-hot vertex index vector $x \in X$ as $x^\top P_{XY} = y \in Y$.) Thus, how can we achieve cycle consistency by design? A simple solution would be to match a few pairs in the collection to create a spanning tree covering all shapes and infer the remaining correspondences by chaining along the tree. Despite the high accuracy of methods for matching two shapes, this correspondence-aggregation policy is prone to error accumulation [36]. A special case of this policy is pairwise matching against a single anchor shape, which also guarantees cycle-consistent solutions by construction [19]. We build on this last option in our method, as it avoids error accumulation. This paper, in contrast to purely classical methods, leverages the advantages of quantum computing for multi-shape matching and introduces a new method for simultaneous alignment of multiple meshes with guaranteed cycle consistency; see Fig. 1. It makes a significant step forward compared to Q-Match and other methods utilising adiabatic quantum computing (AQC), the basis for QA. Our cycle-consistent quantum-hybrid multi-shape matching (CCuantuMM; pronounced "quantum") approach relies on the computational power of modern quantum hardware. Thus, our main challenge lies in casting our problem in QUBO form, which is necessary for compatibility with AQC. To that end, two design choices are crucial: (1) Our method reduces the N-shape case to a series of three-shape matchings; see Fig. 2. Thus, CCuantuMM is iterative and hybrid, i.e., it alternates in every iteration between preparing a QUBO problem on the CPU and sampling a QUBO solution on the AQC. (2) It discards negligible higher-order terms, which makes mapping the three-shape objective to quantum hardware possible. In summary, the core technical contributions of this paper are as follows: • CCuantuMM, i.e., a new quantum-hybrid method for shape multi-matching relying on cyclic α-expansion. CCuantuMM produces cycle-consistent matchings and scales linearly with the number of shapes N. • A new formulation of the optimisation objective for the three-shape case that is mappable to modern QA. • A new policy in shape multi-matching to address the N-shape case, relying on a three-shape formulation and an adaptive choice of an anchor shape. Our experiments show that CCuantuMM significantly outperforms several variants of the previous quantum-hybrid method Q-Match [39]. It is even competitive with several non-learning-based classical state-of-the-art shape-matching methods [19, 33] and can match more shapes than them. In a broader sense, this paper demonstrates the very high potential of applying (currently available and future) quantum hardware in computer vision. |
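The cycle-consistency requirement above is easy to state in code: chaining permutations around any cycle must return the identity. A minimal NumPy sketch of that check, and of the anchor-based construction that guarantees it, follows; it illustrates the concept only and is not CCuantuMM's quantum-hybrid optimiser.

```python
# Illustrative check of cycle consistency for permutation-based correspondences
# (not CCuantuMM's optimiser). Permutations are stored as index arrays:
# perm[i] = index in the target shape matched to vertex i of the source shape.
import numpy as np

def chain(p_xy, p_yz):
    """Compose X->Y with Y->Z to get X->Z."""
    return p_yz[p_xy]

def is_cycle_consistent(p_xy, p_yz, p_zx):
    """Following X -> Y -> Z -> X must map every vertex back to itself."""
    round_trip = chain(chain(p_xy, p_yz), p_zx)
    return np.array_equal(round_trip, np.arange(len(p_xy)))

# Anchor-based construction: matching every shape only against an anchor A and defining
# P_XY as the chain X -> A -> Y yields cycle-consistent correspondences by construction.
def pairwise_from_anchor(p_xa, p_ya):
    p_ay = np.argsort(p_ya)          # invert Y->A to get A->Y (valid for a permutation)
    return chain(p_xa, p_ay)         # X->A followed by A->Y
```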
Jiang_Robust_Outlier_Rejection_for_3D_Registration_With_Variational_Bayes_CVPR_2023 | Abstract Learning-based outlier (mismatched correspondence) rejection for robust 3D registration generally formulates the outlier removal as an inlier/outlier classification problem. The core for this to be successful is to learn discriminative inlier/outlier feature representations. In this paper, we develop a novel variational non-local network-based outlier rejection framework for robust alignment. By reformulating the non-local feature learning with variational Bayesian inference, Bayesian-driven long-range dependencies can be modeled to aggregate discriminative geometric context information for inlier/outlier distinction. Specifically, to achieve such Bayesian-driven contextual dependencies, each query/key/value component in our non-local network predicts a prior feature distribution and a posterior one. Embedded with the inlier/outlier label, the posterior feature distribution is label-dependent and discriminative. Thus, pushing the prior to be close to the discriminative posterior in the training step enables the features sampled from this prior at test time to model high-quality long-range dependencies. Notably, to achieve effective posterior feature guidance, a specific probabilistic graphical model is designed over our non-local model, which lets us derive a variational lower bound as our optimization objective for model training. Finally, we propose a voting-based inlier searching strategy to cluster high-quality hypothetical inliers for transformation estimation. Extensive experiments on the 3DMatch, 3DLoMatch, and KITTI datasets verify the effectiveness of our method. Code is available at https://github.com/Jiang-HB/VBReg. | 1. Introduction Point cloud registration is a fundamental but challenging 3D computer vision task, with many potential applications such as 3D scene reconstruction [1, 39], object pose estimation [13, 44], and LiDAR SLAM [15, 51]. It aims to align two partially overlapping point clouds by estimating their relative rigid transformation, i.e., 3D rotation and 3D translation. A popular approach to address the large-scale scene registration problem consists of extracting point descriptors [11, 14, 17, 37, 38, 50] and establishing correspondences between the two point clouds, from which the transformation can be obtained geometrically. In this context, much effort has been dedicated to designing traditional and deep learning-based descriptors [3, 11, 21, 41, 50]. However, the resulting correspondences inevitably still suffer from outliers (wrong matchings), particularly in challenging cases, such as low overlap, repetitive structures, or noisy point sets, leading to registration failure. To address this, many outlier filtering strategies have been developed to robustify the registration process. These include traditional rejection methods using random sample consensus [16], point-wise descriptor similarity [7, 32] or group-wise spatial consistency [46]. Deep learning methods have also been proposed, focusing on learning correspondence features used to estimate inlier confidence values [2, 10, 33].
In particular, the current state-of-the-art method, PointDSC [2], relies on a spatial consistency-driven non-local network to capture long-range context in its learned correspondence features. While effective, PointDSC still yields limited registration robustness, particularly for scenes with a high outlier ratio, where the spatial consistency constraints may become ambiguous [36], thereby degrading the quality of the correspondence features. In this paper, we propose to explicitly account for the ambiguities arising from high outlier ratios by developing a probabilistic feature learning framework. To this end, we introduce a variational non-local network based on an attention mechanism to learn discriminative inlier/outlier feature representations for robust outlier rejection. Specifically, to capture the ambiguous nature of long-range contextual dependencies, we inject a random feature into each query, key, and value component in our non-local network. The prior/posterior distributions of such random features are predicted by prior/posterior encoders. To encourage the resulting features to be discriminative, we make the posterior feature distribution label-dependent. During training, we then push the prior distribution close to the label-dependent posterior, thus allowing the prior encoder to also learn discriminative query, key, and value features. This enables the features sampled from this prior at test time to model high-quality long-range dependencies. To achieve effective variational inference, we customize a probabilistic graphical model over our variational non-local network to characterize the conditional dependencies of the random features. This lets us derive a variational lower bound as the optimization objective for network training. Finally, we propose a voting-based deterministic inlier searching mechanism for transformation estimation, where the correspondence features learned from all non-local iterations jointly vote for high-confidence hypothetical inliers for SVD-based transformation estimation. We theoretically analyze the robustness of our deterministic inlier searching strategy compared to RANSAC, which also motivates us to design a conservative seed selection mechanism to improve robustness in sparse point clouds. To summarize, our contributions are as follows: • We propose a novel variational non-local network for outlier rejection, learning discriminative correspondence features with Bayesian-driven long-range contextual dependencies. • We customize the probabilistic graphical model over our variational non-local network and derive the variational lower bound for effective model optimization. • We introduce a Wilson score-based voting mechanism to search for high-quality hypothetical inliers, and theoretically demonstrate its superiority over RANSAC. Our experimental results on extensive benchmark datasets demonstrate that our framework outperforms state-of-the-art registration methods. 2. Related Work End-to-end Registration Methods. With the advances of deep learning in the 3D vision field [34], learning-based end-to-end registration models have attracted increasing research attention. DCP [42] uses feature similarity to establish pseudo correspondences for SVD-based transformation estimation.
RPM-Net [47] exploits the Sinkhorn layer and annealing for discriminative matching map generation. [22, 23] integrate the cross-entropy method into the deep model for robust registration. RIENet [40] uses the structural difference between the source neighborhood and the pseudo-target one for inlier confidence evaluation. With the powerful feature representation of Transformers, RegTR [48] effectively aligns large-scale indoor scenes in an end-to-end manner. [13] propose a match-normalization layer for robust registration in the real-world 6D object pose estimation task. More end-to-end models such as [10, 18, 28–30, 33, 53] also present impressive precision. Learning-based Feature Descriptors. To align complex scenes, a popular pipeline is to exploit feature descriptors for 3D matching. Compared to hand-crafted descriptors such as [17, 37, 38], deep feature descriptors present superior registration precision and have attracted much more attention in recent years. The pioneering 3DMatch [50] exploits a Siamese 3D CNN to learn local geometric features via a contrastive loss. FCGF [11] exploits a fully convolutional network for dense feature extraction in a one-shot fashion. Furthermore, D3feat [3] jointly learns a dense feature descriptor and a detection score for each point. By integrating an overlap-attention module into D3feat, Predator [21] largely improves registration reliability in low-overlapping point clouds. YOHO [41] utilizes group-equivariant feature learning to achieve rotation invariance and shows great robustness to point density and noise interference. [35] develops a geometric transformer to learn geometric context for robust super-point matching. Lepard [31] embeds relative 3D positional encoding into the transformer for discriminative descriptor learning. Outlier Rejection Methods. Despite significant progress in learning-based feature descriptors, generating mismatched correspondences (outliers) in some challenging scenes remains unavoidable. Traditional outlier filtering methods, such as RANSAC [16] and its variants [4, 24, 27], use repeated sampling and verification for outlier rejection. However, these methods tend to have a high time cost, particularly in scenes with a high outlier ratio. Instead, FGR [52] and TEASER [45] integrate robust loss functions into the optimization objective to weaken the interference from outliers. Recently, Chen et al. [9] developed second-order spatial compatibility for robust consensus sampling. With the rise of deep 3D vision, most learnable outlier rejection models [10, 33] formulate outlier rejection as a binary classification task and reject correspondences with low confidence. Yi et al. [49] proposed a context normalization-embedded deep network for inlier evaluation, while Brachmann et al. [6] enhanced classical RANSAC with neural-guided prior confidence. As our baseline, PointDSC [2] proposes exploiting a spatial consistency-guided non-local inlier classifier for inlier evaluation, followed by neural spectral matching for robust registration. However, under high outlier ratios, spatial consistency can be ambiguous (as shown in Fig. 1), misleading non-local feature aggregation. Instead, we propose exploiting Bayesian-driven long-range dependencies for discriminative non-local feature learning. 3. Approach 3.1. Background Problem Setting.
In the pairwise 3D registration task, given a source point cloud X={xi∈R3|i= 1, ...,|X|} and a target point cloud Y={yj∈R3|j= 1, ...,|Y|}, we aim to find their optimal rigid transformation consisting of a rotation matrix R∗∈SO(3)and a translation vector t∗∈R3to align their overlapping region precisely. In this work, we focus on the descriptor-based pipeline for large-scale scene registration. Based on the feature-level near-est neighbor, we construct a set of putative correspondences C= ci= (xi,yi)∈R6|i= 1, ...,|C| . The inlier (cor-rectly matched correspondence) is defined as the correspon-dence satisfying ∥R∗xi+t∗−yi∥< ε, where εindicates the inlier threshold. Vanilla Non-local Feature Embedding. Given the putative correspondence set C, [2] leverages the spatial consistency-guided non-local network ( SCNonlocal ) for their feature embedding. The injected geometric compatibility matrix can effectively regularize the long-range dependencies for discriminative inlier/outlier feature learning. In detail, it contains Literations and the feature aggregation in l-th it-eration can be formulated as: Fl+1 i=Fl i+ MLP|C|X j=1softmax j(αlβ)Vl j , (1) where Fl i∈Rdindicates the feature embedding of corre-spondence ciinl-th iteration (the initial feature F0 iis ob-tained via linear projection on ci) andVl i=fl v(Fl i)∈Rd is the projected value feature. αl∈R|C|×|C|is the non-local attention map whose entry αl i,jreflects the feature similar-ity between the projected query feature Ql i=fl q(Fl i)∈Rd and the key feature Kl i=fl k(Fl j)∈Rd.β∈R|C|×|C| represents the geometric compatibility matrix of correspon-dences, where the compatibility between ciandcjis: βi,j= max 0,1−d2 ij ε2 , dij=|∥xi−xj∥ − ∥yi−yj∥|. (2) Based on the fact that the geometric distance di,jof inliers ciandcjtend to be minor, Eq. 2 will assign a high compat-ibility value on the inlier pair, thereby promoting the non-local network to effectively cluster the inlier features for discriminative inlier/outlier feature learning. 3.2. Variational Non-local Feature Embedding While effective, SCNonlocal still suffers from ambigu-ous long-range dependencies, especially in some challeng-250 500 1000 2500 5000 Number of points01020304050Compat. error (%)3DLoMatch (Predator) 3DLoMatch (FCGF)Figure 1. The ratio of inlier-outlier pairs with positive compatibil-ities in 3DLoMatch [21] using FCGF and Predator descriptors. ing scenes (e.g., the low-overlapping case). Two essential reasons are: (i)Wrong geometric compatibility. As shown in Fig. 1, for 3DLoMatch dataset with Predator and FCGF descriptors, almost 30% and 17% of inlier-outlier pairs own the positive compatibility values, respectively, which poten-tially misleads the attention weight for wrong feature clus-tering. (ii)Lack of uncertainty modeling. In symmetric or repetitive scenes, the inlier/outlier prediction contains sig-nificant uncertainty. Therefore, it’s necessary to design a robust feature representation to capture such uncertainty. To overcome them, we develop a variational non-local network, a probabilistic feature learning framework, for discriminative correspondence embedding. Our core idea is to inject random features into our model to capture the ambiguous nature of long-range dependencies, and then leverage the variational Bayesian inference to model the Bayesian-driven long-range dependencies for discrimina-tive feature aggregation. 
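As a concrete illustration of the spatial-consistency compatibility of Eq. (2) above and one compatibility-modulated aggregation step in the spirit of Eq. (1), here is a minimal NumPy sketch; the MLP and the learned projections f_q, f_k, f_v of the paper are replaced by random linear maps purely for illustration, so this is a sketch of the mechanism, not the released implementation.

```python
import numpy as np

def compatibility_matrix(src, tgt, eps=0.1):
    """Spatial-consistency compatibility of Eq. (2).

    src, tgt: (N, 3) arrays holding the two endpoints of N putative
    correspondences. Inlier pairs preserve pairwise distances, so their
    length difference d_ij is small and beta_ij is close to 1.
    """
    d_src = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    d_tgt = np.linalg.norm(tgt[:, None, :] - tgt[None, :, :], axis=-1)
    d = np.abs(d_src - d_tgt)
    return np.maximum(0.0, 1.0 - d ** 2 / eps ** 2)

def nonlocal_step(F, beta, Wq, Wk, Wv, Wo):
    """One aggregation step in the spirit of Eq. (1) (MLP -> linear map Wo)."""
    Q, K, V = F @ Wq, F @ Wk, F @ Wv
    attn = Q @ K.T / np.sqrt(F.shape[1])         # alpha^l in the text
    logits = attn * beta                          # compatibility-modulated attention
    logits -= logits.max(axis=1, keepdims=True)   # numerically stable softmax over j
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)
    return F + (w @ V) @ Wo                       # residual update giving F^{l+1}

rng = np.random.default_rng(0)
N, d = 64, 32
src = rng.normal(size=(N, 3))
tgt = src + rng.normal(scale=0.01, size=(N, 3))   # nearly consistent correspondences
F = rng.normal(size=(N, d))
Ws = [rng.normal(scale=0.1, size=(d, d)) for _ in range(4)]
beta = compatibility_matrix(src, tgt)
print(nonlocal_step(F, beta, *Ws).shape)          # (64, 32)
```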
Specifically, we first introduce the random feature variables zl k,i,zl q,iandzl v,iinto the key Kl i, query Ql iand value Vl icomponents in our non-local module to capture their potential uncertainty in the long-range dependency modeling. Then, the prior/posterior en-coders are constructed to predict their prior feature distribu-tion and the posterior one, respectively. Embedded with the inlier/outlier label, the posterior feature distribution is label-dependent and discriminative. Thus, by pushing the prior close to the discriminative posterior in the training phase, this prior at test time also tends to sample discriminative query, key, and value features for high-quality long-range dependency modeling. Probabilistic Graphical Model over Variational Non-local Network. To achieve effective variational Bayesian inference, we need to first characterize the conditional de-pendencies of the injected random features so that the varia-tional lower bound can be derived as the optimization objec-tive for model training. As shown in Fig. 2, we customize the probabilistic graphical model over our non-local net-work to clarify the dependencies of random features (the circles). The solid line denotes the label prediction pro-cess, while the dashed line represents our label-dependent posterior encoder ( i.e., inference model). Notably, the de-terministic hidden query/key/value features hl k,i∈Rd′, 1150 hl q,i∈Rd′, and hl v,i∈Rd′are also introduced to sum-marize the historical information for better feature updating in each iteration. Inlier/outlier Prediction Process. Based on the defined conditional dependencies in Fig. 2, the prediction process of correspondence labels b= b1, b2, ..., b|C||bi∈ {0,1} (1 indicates inlier and 0 outlier) is formulated as follows. Beginning with the initial linear projection ˜F0∈R|C|×d of correspondences C={ci}, we iteratively perform the probabilistic non-local aggregation for feature updating. In thel-th iteration, we first employ a Gated Recurrent Unit (GRU) [12] to predict the hidden query/key/value features which summarize the historical query/key/value features (sampled from the prior distributions) and the correspon-dence features in previous iterations: hl q,i= GRU q(hl−1 q,i,[zl−1 q,i,˜Fl−1 i]), hl k,i= GRU k(hl−1 k,i,[zl−1 k,i,˜Fl−1 i]), hl v,i= GRU v(hl−1 v,i,[zl−1 v,i,˜Fl−1 i]),(3) where [·,·]denotes the feature concatenation and ˜Fl−1 iis the learned correspondence features of ciin iteration l−1. Then, with as input the predicted hidden features, the prior encoder pθ(·)is utilized to predict the prior feature distri-bution for query/key/value, respectively. Furthermore, we sample features zl q,i∈R˜d,zl k,i∈R˜dandzl v,i∈R˜d from the predicted prior query/key/value distribution and combine them with the hidden features to predict the corre-sponding query ˜Ql i∈Rd, key ˜Kl i∈Rdand value ˜Vl i∈Rd through a neural network fq,k,v θ:Rd′+˜d→Rd: zl q,i∼pθ(zl q,i|hl q,i),˜Ql i=fq θ( zl q,i,hl q,i ), zl k,i∼pθ(zl k,i|hl k,i),˜Kl i=fk θ( zl k,i,hl k,i ), zl v,i∼pθ(zl v,i|hl v,i),˜Vl i=fv θ( zl v,i,hl v,i ),(4) where the prior feature distribution is the Gaussian distribu-tion with the mean and the standard deviation parameterized by a neural network. Finally, with the learned ˜Ql i,˜Kl iand ˜Vl i, the correspondence feature ˜Fl iinl-th iteration can be aggregated with the same non-local operation in Eq. 1. 
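A minimal sketch of the sampling step in Eq. (4) for the query branch (the key and value branches are analogous): the GRU of Eq. (3), the prior encoder p_theta, and f^q_theta are neural networks in the paper and are replaced here by random linear maps, so the names and shapes below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_head(h, W_mu, W_logvar):
    """Diagonal Gaussian parameters predicted from a hidden feature h."""
    return h @ W_mu, h @ W_logvar

def sample(mu, logvar, rng):
    """Reparameterized draw z = mu + sigma * eps."""
    return mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

# Toy dimensions: N correspondences, hidden size d_h, latent size d_z, feature size d.
N, d_h, d_z, d = 64, 32, 16, 32
h_q = rng.normal(size=(N, d_h))                        # hidden query features (from the GRU)
W_mu = rng.normal(scale=0.1, size=(d_h, d_z))
W_lv = rng.normal(scale=0.1, size=(d_h, d_z))
W_out = rng.normal(scale=0.1, size=(d_h + d_z, d))     # stands in for f^q_theta

mu, logvar = gaussian_head(h_q, W_mu, W_lv)            # prior p_theta(z^l_q | h^l_q)
z_q = sample(mu, logvar, rng)                          # z^l_q ~ prior, first line of Eq. (4)
Q = np.concatenate([z_q, h_q], axis=1) @ W_out         # query features fed to Eq. (1)
print(Q.shape)                                         # (64, 32)
```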
Af-terLfeature iterations, we feed the correspondence feature ˜FL iin the last iteration into a label prediction model yθto predict the inlier/outlier labels bi∼yθ(bi|˜FL i), where the label prediction model outputs a scalar Gaussian distri-bution with the mean parameterized by the neural network and the unit variance. Variational Posterior Encoder. Due to the nonlinear-ity of our variational non-local model, we cannot di-rectly derive the precise posterior distribution for random query/key/value features using the standard Bayes’ theo-rem. Taking inspiration from the Variational Bayesian in-ference, we construct a label-dependent posterior encoder C ˜F0 h0 kh0 q h0 vz0 kz0 q z0 v˜K0˜Q0 ˜V0˜F1 h1 kh1 q h1 vz1 kz1 q z1 v˜K1˜Q1 ˜V1˜F2 Variational Posteriorb 1Figure 2. Probabilistic graphical model for our variational non-local network. For simplicity, we just demonstrate two itera-tions. The white circles indicate the random features and the white squares denote the deterministic hidden features. The solid line represents the inlier/outlier prediction process and the dashed line denotes the label-dependent variational posterior encoder. We just show the variational posterior for z1 k. qϕ(·)to to approximate the feature posterior: zl q,i∼qϕ(zl q,i|[hl q,i,[bi]×k]) zl k,i∼qϕ(zl k,i|[hl k,i,[bi]×k]) zl v,i∼qϕ(zl v,i|[hl v,i,[bi]×k]),(5) where [bi]×kindicates a label vector generated by tiling the scalar label ktimes. The output of each posterior encoder is a diagonal Gaussian distribution with parameterized mean and standard deviation. Variational Lower Bound. Finally, we derive the op-timization objective ELBO( θ, ϕ), the variational (evi-dence) lower bound of log-likelihood correspondence la-belslnyθ(b| C), to train our variational non-local network (Please refer to Appendix A for the detailed derivation): ELBO( θ, ϕ) =EQL−1 l=0qϕ(zl q,k,v|hl q,k,v,b)h lnyθ(b|˜FL)i − L−1X l=0Eqϕh DKL qϕ(zl q,k,v|hl q,k,v,b)||pθ(zl q,k,v|hl q,k,v)i (6) where for clarity, we utilize the subscript q, k, v to denote the same operator performed on query/key/value. DKL(·||·) denotes the Kullback–Leibler (KL) divergence between two distributions. By maximizing the variational lower bound above, we can optimize the network parameters to in-directly maximize the log-likelihood value of correspon-dence labels. Eq. 6 indicates that the discriminative, label-dependent feature posterior explicitly constrains the prior by reducing their KL divergence in the training phase, which promotes the query, key, and value features sampled from the prior to model the high-quality long-term depen-dencies at test time. 3.3. Voting-based Inlier Searching With the learned correspondence features above, we then propose a voting-based sampling strategy to search the de-sired inlier subset from the entire putative correspondences 1151 for optimal transformation estimation. Our sampling mech-anism is deterministic and efficient. We first select a set of high-confidence seeds CseedfromCbased on their inlier confidence (predicted in § 3.2) and the Non-Maximum Sup-pression (as performed in [2]), where the number of seeds |Cseed|=⌊|C| ∗v⌋(vis a fixed seed ratio). Then, for each seed, we cluster its most compatible correspondences into it to form the hypothetical inliers . Ideally, if the seed is an inli |
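To make the KL term of the variational lower bound in Eq. (6) above concrete: both the prior and the label-dependent posterior are diagonal Gaussians, so the divergence has a closed form; the reconstruction term of Eq. (6) is the log-likelihood of the labels under y_theta (a unit-variance Gaussian in the text, i.e. a squared-error term). The sketch below is only the KL piece, with random toy parameters.

```python
import numpy as np

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, diag(exp(logvar_q))) || N(mu_p, diag(exp(logvar_p))) ),
    summed over latent dimensions and averaged over correspondences."""
    kl = 0.5 * (logvar_p - logvar_q
                + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
                - 1.0)
    return kl.sum(axis=-1).mean()

rng = np.random.default_rng(2)
N, d_z = 64, 16
mu_q, lv_q = rng.normal(size=(N, d_z)), rng.normal(scale=0.1, size=(N, d_z))
mu_p, lv_p = rng.normal(size=(N, d_z)), rng.normal(scale=0.1, size=(N, d_z))
print(kl_diag_gaussians(mu_q, lv_q, mu_p, lv_p))
```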
Huang_Contrastive_Semi-Supervised_Learning_for_Underwater_Image_Restoration_via_Reliable_Bank_CVPR_2023 | Abstract Despite the remarkable achievement of recent underwa-ter image restoration techniques, the lack of labeled data has become a major hurdle for further progress. In this work, we propose a mean-teacher based Semi -supervised Underwater Image Restoration ( Semi-UIR ) framework to incorporate the unlabeled data into network training. How-ever, the naive mean-teacher method suffers from two main problems: (1) The consistency loss used in training might become ineffective when the teacher’s prediction is wrong. (2) Using L1 distance may cause the network to over-fit wrong labels, resulting in confirmation bias. To ad-dress the above problems, we first introduce a reliable bank to store the “best-ever” outputs as pseudo ground truth. To assess the quality of outputs, we conduct an empirical analysis based on the monotonicity property to select the most trustworthy NR-IQA method. Besides, in view of the confirmation bias problem, we incorporate con-trastive regularization to prevent the overfitting on wrong labels. Experimental results on both full-reference and non-reference underwater benchmarks demonstrate that our algorithm has obvious improvement over SOTA methods quantitatively and qualitatively. Code has been released at https://github.com/Huang-ShiRui/Semi-UIR. | 1. Introduction Due to light refraction, absorption and scattering in un-derwater scenes, images taken in the water usually suffer severely from color distortion, low contrast and blur. Im-ages with these defects tend to be less visually appealing and can potentially hinder the well-functioning of underwa-ter robotic systems. Recently, many deep learning based methods [5–7, 24, 51] have been proposed to address im-age restoration problems. Numerous efforts have also been devoted to the specific domain of underwater image restora-1This work is supported by the Nature Science Foundation of Shaanxi Province of China (2021JM-125). (a) UIEBD (c)EUVP(b) UWCNN Figure 1. Examples from different benchmarks. (a) shows real-world underwater images from UIEB [22] with degraded images (first and second row). (b) shows the UWCNN training set [21] (synthesized based on the image formation model) and (c) shows the EUVP dataset [17] (synthesized by GAN). The ambient light and color cast of (b) and (c) are quite different from that of (a). tion [11, 17, 20, 22, 49]. Compared with traditional methods that mostly rely on hand-crafted priors, deep learning based solutions are able to deliver superior restoration results due to their data-driven nature. Despite their success, most of deep learning based meth-ods are designed to learn the restoration mapping on paired datasets in a supervised manner. As is known, it is ex-tremely hard, if not impossible, to acquire paired underwa-ter images in real scenes. The existing datasets for under-water image restoration have several non-negligible issues: (1)Lack of real data. A popular way to construct paired datasets is to synthesize underwater images using some physical model [21] or GAN [17, 48]. However, there is a significant discrepancy between synthesized and real data. As is shown in Fig. 1, the ambient light and color cast of synthetic data are quite different from the real counterparts. Due to domain shift, models trained on synthetic datasets often exhibit poor generalization in real scenes. 
Another way [22] is to manually construct pseudo labels by select-ing the best results among those produced by traditional al-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 18145 gorithms. Inevitably, the quality of the pseudo ground truth is restricted by the restoration capability of traditional algo-rithms. (2) Limited data size. The current benchmarks only provide a very limited amount of paired data. For example, in UIEB [22], there are only 890 paired underwater images. Models learned on such a small dataset may run the risk of overfitting. In comparison, the standard datasets for image classification, such as ImageNet [35], are several orders of magnitude larger in size. On the other hand, unlabeled underwater images are rel-atively easy to collect. The challenge is how to make ef-fective use of these unlabeled data. Semi-supervised learn-ing, which capitalizes on both labeled and unlabeled data for model training, is best suited in this kind of scenarios. This motivates us to propose a semi-supervised scheme with the goal of improving the generalization of the resulting model on real-world underwater images. To be specific, we adopt the mean teacher method [40] as the basis. The mean teacher method finds a way to obtain pseudo labels for un-labeled data and utilizes a consistency loss to improve the accuracy and robustness of the network. Specifically, it con-structs a teacher model with improved performance from a student model via the exponential moving average (EMA) strategy. The teacher’s prediction serves as the pseudo la-bel to guide the training of the student. However, it is a non-trivial task to tailor the mean teacher method to the un-derwater image restoration problem. The reasons are as fol-lows: (1) There is no guarantee that the teacher can con-sistently outperform the student. Wrong pseudo labels may jeopardize the training of the student network. (2) The com-monly used consistency loss is based on L1 distance. The “strict” L1 loss can easily make the model overfit wrong predictions, resulting in confirmation bias. To address the first issue, we construct a reliable bank to archive the best-ever outputs from the teacher as pseudo la-bels. The main challenge here is how to determine what are the “best-ever” outputs? Intuitively, non-reference image quality assessment (NR-IQA) can be leveraged to evaluate the quality of each output. However, as noted in [3, 10, 22], the current NR-IQA metrics for underwater images are, to some extent, inconsistent with human visual perception. To identify the right one for our purpose, we compare several NR-IQA metrics using the monotonicity property as the re-liability criterion. Our empirical analysis suggests MUSIQ [19] best meets the criterion. For the second issue, we in-troduce contrastive learning as a supplementary regulariza-tion to alleviate overfitting. Unlike those conventional loss functions that are only concerned with how close the out-puts and ground truths are, contrastive loss provides addi-tional supervision to prevent the degradation of the outputs. In this sense, contrastive regularization is ideally suited to our semi-supervised learning framework since we only have access to the degraded images in the unlabeled dataset. Itenables the model to take advantage of unlabeled data. 
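A minimal sketch of the two mechanisms just described: the mean-teacher EMA update and the reliable bank that keeps the best-ever teacher output per unlabeled image as pseudo ground truth. Here `nr_iqa_score` is a stand-in for a no-reference quality predictor such as MUSIQ, and the parameter dictionaries stand in for real network weights; the names are illustrative assumptions, not the released code's API.

```python
import numpy as np

def ema_update(teacher_params, student_params, momentum=0.999):
    """Mean-teacher update: every teacher weight is an EMA of the student's."""
    for name, w_s in student_params.items():
        teacher_params[name] = momentum * teacher_params[name] + (1.0 - momentum) * w_s

def update_reliable_bank(bank, image_id, teacher_output, nr_iqa_score):
    """Keep the best-ever teacher prediction per unlabeled image as pseudo-GT."""
    score = nr_iqa_score(teacher_output)
    best = bank.get(image_id)
    if best is None or score > best["score"]:
        bank[image_id] = {"pseudo_label": teacher_output, "score": score}
    return bank[image_id]["pseudo_label"]

# Toy usage with random "weights" and a contrast-based proxy quality score.
rng = np.random.default_rng(0)
student = {"conv1": rng.normal(size=(3, 3))}
teacher = {k: v.copy() for k, v in student.items()}
bank = {}
for step in range(3):
    student["conv1"] += 0.01 * rng.normal(size=(3, 3))      # pretend gradient step
    ema_update(teacher, student)
    fake_output = rng.random((8, 8, 3))                     # pretend restored image
    update_reliable_bank(bank, "img_0001", fake_output, lambda x: float(x.std()))
print(bank["img_0001"]["score"])
```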
In summary, our main contributions are as follows: (1) We propose a mean teacher based semi-supervised underwater image restoration framework named Semi-UIR, which effectively leverages the knowledge from unlabeled data to improve the generalization of the trained model on real-world data. (2) We evaluate teacher outputs by a judiciously chosen NR-IQA metric and build a reliable bank to store best-ever teacher outputs, which ensures the reliability of pseudo-labels. (3) We adopt contrastive loss as a form of regularization to alleviate confirmation bias. (4) Extensive experimental results demonstrate the effectiveness of our proposed methods. |
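Contribution (3) above relies on a contrastive regularizer. One common form for restoration tasks pulls the output toward a positive (here, the pseudo-label from the reliable bank) and pushes it away from negatives (here, degraded inputs) in some feature space. The sketch below assumes an L1 feature distance and an identity feature extractor `phi` purely for illustration; it is the general form of such a regularizer, not necessarily the exact loss used by the authors.

```python
import numpy as np

def contrastive_regularization(phi, output, positive, negatives, eps=1e-8):
    """Ratio of the distance to the positive over the summed distances to the
    negatives in the feature space of phi; smaller is better."""
    f_o, f_p = phi(output), phi(positive)
    num = np.abs(f_o - f_p).mean()                              # pull toward pseudo-label
    den = sum(np.abs(f_o - phi(n)).mean() for n in negatives) + eps
    return num / den                                            # push away from degraded inputs

rng = np.random.default_rng(0)
phi = lambda x: x.reshape(-1)                                   # placeholder "features"
restored = rng.random((16, 16, 3))
pseudo_gt = restored + 0.01 * rng.normal(size=restored.shape)
degraded = [rng.random((16, 16, 3)) for _ in range(2)]
print(contrastive_regularization(phi, restored, pseudo_gt, degraded))
```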
Huang_Rethinking_Federated_Learning_With_Domain_Shift_A_Prototype_View_CVPR_2023 | Abstract Federated learning shows a bright promise as a privacy-preserving collaborative learning technique. However , prevalent solutions mainly focus on all private data sampledfrom the same domain. An important challenge is that whendistributed data are derived from diverse domains. The pri-vate model presents degenerative performance on other do-mains (with domain shift). Therefore, we expect that theglobal model optimized after the federated learning pro-cess stably provides generalizability performance on mul-tiple domains. In this paper , we propose Federated Proto-types Learning (FPL) for federated learning under domainshift. The core idea is to construct cluster prototypes and unbiased prototypes, providing fruitful domain knowledgeand a fair convergent target. On the one hand, we pull thesample embedding closer to cluster prototypes belongingto the same semantics than cluster prototypes from distinct classes. On the other hand, we introduce consistency reg-ularization to align the local instance with the respective unbiased prototype. Empirical results on Digits and Office Caltech tasks demonstrate the effectiveness of the proposed solution and the efficiency of crucial modules. | 1. Introduction Federated learning is a privacy-preserving paradigm [ 47, 83], which reaches collaborative learning without leaking privacy. The cornerstone solution, FedAvg [ 47], aggregates parameters from participants and then distributes the globalmodel (averaged parameters) back for further training,which aims to learn a high-quality model without central-izing private data. However, an inherent challenge in feder-ated learning is data heterogeneity [ 26,39,69,87]. Specifi-cally, the private data is collected from distinct sources with diverse preferences and presents non-iid (independently and *Corresponding Author: Mang Ye, Bo Du Figure 1. Illustration of heterogeneous federated learning . The feature visualization on inter domains ( →represents testing on tar-get domain i.e., M→SV means that local dataset is from MNIST and test model on SVHN). The toprow indicates that local train-ing results in domain shift. The bottom row shows that our method acquires generalizable performance on different domains. identically distributed) distribution [ 87]. Each participant optimizes toward the local empirical risk minimum, whichis inconsistent with the global direction. Therefore, the av-eraged global model unavoidably faces a slow convergence speed [ 40] and achieves limited performance improvement. A mainstream of subsequent efforts delves into introduc-ing a variety of global signals to regulate private model [13,28,38,40,51,66,70]. These methods focus on la-bel skew, where distributed data are from the same do-main , and simulate data heterogeneity via imbalanced sam-pling, e.g., Dirichlet strategy [ 32] to generate different la-bel distributions. Nonetheless, another noticeable data het-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 16312 erogeneous property in federated learning is domain shift [21,22,41,55,57]. In particular, private data is derived from various domains , leading to distinct feature distributions. In this scenario, we argue that naive learning on private databrings poor generalizable ability in Fig. 1. 
Specifically, the private model fails to provide discrimination on other do-mains because it overfits local domain distribution. Theaforementioned methods mainly regulate the private modelvia global knowledge ( i.e., the average signals from partici-pants). Therefore, these algorithms share a common weak-ness: the global information is insufficient to describe di-verse domain knowledge , which is magnified under the do-main shift and thus hinders the improvement of generaliz-ability. An intuitive solution is to preserve multiple modelsfor distilling respective domain knowledge. However, it in-curs a high cost of both communication and computation. Taking into account both the effectiveness and efficiency, we rethink the prototype [ 11,36,67,82,91], which is the mean value of features with identical semantics. It rep-resents class-wise characteristics and is vector type [ 90]. Given the enormous participant scale in federated learning, it is not efficient and feasible to maintain all prototypes.However, directly averaging all prototypes to get global pro-totypes would arise the same impediment as global mod-els because averaging operation weakens the domain diver-sity. Besides, global prototypes probably yield biased to the dominant domain due to the unknown of private domainsproportion, which results in disadvantageous performance on minority domains. Driven by these two issues, on the one hand, we find representative prototypes by clustering all prototypes. Therefore, each class is abstracted by a setof diverse prototypes, capturing rich domain variance. On the other hand, we generate unbiased prototypes based oncluster prototypes to construct fair and stable global signals, which avoid optimizing toward the underlying primary do-main and thus ensure stability on different domains. Com-pared with original feature vectors, cluster and unbiased prototypes are privacy-friendly because it experiences twiceand third times averaging operation [ 70]. Hence, it is less feasible to disentangle each raw representation and subse-quently reconstruct private data. We analyze the superiority of cluster prototypes and unbiased prototypes in Sec. 3.2. In this paper, we propose Federated Prototype Learning (FPL), which consists of two components. First ,i no r -der to improve the generalizability on the premise of dis-criminability. We introduce Cluster Prototypes ContrastiveLearning (CPCL), which leverages cluster prototypes to construct contrastive learning [ 7,19,79,84,85]. CPCL adap-tively enforces the query embedding to be more similar to cluster prototypes from the same class than other prototypes with different semantics. In particular, such an objective en-courages instance feature to be close to representative proto-types in the same semantic and separates it away from otherclass prototypes, which incorporates diverse domain knowl-edge and maintains a clear decision boundary. Second ,w e utilize unbiased prototypes to provide a fair and stable con-vergence point and propose Unbiased Prototypes Consis-tent Regularization (UPCR). Specifically, we average clus-ter prototypes to acquire unbiased prototypes. The localinstance is required to minimize the feature-level distancewith the corresponding unbiased prototype. Therefore, thelocal model would not be biased toward dominant domainsand exhibits stable performance on inferior domains. Weconjecture that these two components together make FPLa competitive method for federated learning with domainshift. The main contributions are summarized below. 
• We focus on heterogeneous federated learning with domain shift and identify the inherent limitation of existing methods: the global regularization signal is insufficient to depict diverse domain knowledge and is biased toward the dominant domain among participants. • We propose a simple yet effective strategy to learn a well generalizable global model in federated learning with domain shift. Inspired by the success of prototype learning, we introduce cluster prototypes to provide rich domain knowledge and further construct unbiased prototypes, based on the average of cluster prototypes, to offer a fair and stable objective signal. • We conduct extensive experiments on Digits [23,33,52,61] and Office Caltech [16] tasks. Accompanied by a set of ablative studies, the promising results validate the efficacy of FPL and the indispensability of each module. |
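A minimal sketch of the prototype machinery behind CPCL and UPCR described above: per-class cluster prototypes obtained by clustering client-side class means, unbiased prototypes as their average, an InfoNCE-style loss toward same-class cluster prototypes, and an L2 consistency to the unbiased prototype. The choice of k-means (via scikit-learn), cosine similarity, temperature, and L2 distance are assumptions for illustration; the paper's exact clustering and loss design may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_prototypes(client_protos, num_clusters=3):
    """client_protos: dict class_id -> (num_clients, d) array of local class means.
    Returns per-class cluster prototypes (k, d) and the unbiased prototype (d,)."""
    cluster_protos, unbiased_protos = {}, {}
    for c, protos in client_protos.items():
        k = min(num_clusters, len(protos))
        centers = KMeans(n_clusters=k, n_init=10, random_state=0).fit(protos).cluster_centers_
        cluster_protos[c] = centers
        unbiased_protos[c] = centers.mean(axis=0)     # average of cluster prototypes
    return cluster_protos, unbiased_protos

def cpcl_loss(z, y, cluster_protos, tau=0.1):
    """Pull feature z toward same-class cluster prototypes, push from other classes."""
    def sim(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    pos = [np.exp(sim(z, p) / tau) for p in cluster_protos[y]]
    neg = [np.exp(sim(z, p) / tau) for c, ps in cluster_protos.items() if c != y for p in ps]
    return -np.log(sum(pos) / (sum(pos) + sum(neg)))

def upcr_loss(z, y, unbiased_protos):
    """Consistency to the fair, unbiased prototype (feature-level L2)."""
    return float(((z - unbiased_protos[y]) ** 2).mean())

rng = np.random.default_rng(0)
client_protos = {c: rng.normal(loc=c, size=(8, 16)) for c in range(3)}   # 8 clients, d=16
cp, up = build_prototypes(client_protos)
z = rng.normal(loc=1, size=16)
print(cpcl_loss(z, 1, cp), upcr_loss(z, 1, up))
```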
Feng_Self-Supervised_Video_Forensics_by_Audio-Visual_Anomaly_Detection_CVPR_2023 | Abstract Manipulated videos often contain subtle inconsistencies between their visual and audio signals. We propose a video forensics method, based on anomaly detection, that can identify these inconsistencies, and that can be trained solely using real, unlabeled data. We train an autoregressive model to generate sequences of audio-visual features, using feature sets that capture the temporal synchronization between video frames and sound. At test time, we then flag videos that the model assigns low probability. Despite being trained entirely on real videos, our model obtains strong performance on the task of detecting manipulated speech videos. Project site: https://cfeng16.github.io/audio-visual-forensics. | 1. Introduction Supervised learning underlies today’s most successful methods for image and video forensics. However, the diffi-culty of collecting large, labeled datasets that fully capture all of the possible manipulations that one might encounter in the wild places significant limitations on this approach. A longstanding goal of the forensics community has been to design methods that, instead, learn to detect manipulations using cues discovered by analyzing large amounts of real data through self-supervision [27, 47]. We propose a method that identifies manipulated video through anomaly detection . Our model learns how audio and visual data temporally co-occur by training on large amounts of real, unlabeled video. At test time, we can then flag videos that our model assigns low probability, such as those whose video and audio streams are inconsistent. One might expect that this problem could be posed as simply detecting out-of-sync examples, such as by finding cases in which a speaker’s mouth does not open precisely at the onset of a spoken word. Unfortunately, videos in the wild are often “naturally” misaligned due to errors in encoding or recording, such as by having a single, consistent shift by a few frames [2, 23]. Instead, we pose the problem as detecting anomalies in what we call synchronization features : audio-visual features Input video Time (frame)Inputaudio Ziyang’s version(remove this text)Time delayFigure 1. Audio-visual anomaly detection. We identify fake videos by finding anomalies in their audio-visual features, using generative models trained entirely on realvideos. In one variation of our model (shown here), we use the time delay between the two modalities as our feature set, i.e., temporal misalignment between each video frame and the audio stream. We learn the distribution of these sequences, then flag sequences with low probability. that are designed to convey the temporal alignment between vision and sound. We evaluate several feature sets, each extracted from a model that has been trained to temporally align audio and visual streams of a video [18, 23, 78]. In Figure 1, we show one such feature set: the amount of time that each video frame appears to be temporally offset from its corresponding sound. To detect anomalies, we fit an autoregressive generative model [84, 99] to sequences of synchronization features extracted from real videos, and identify low probability examples. A key advantage of our formulation is that it does not require any manipulated examples for training. It also does not require the speakers in the test set to already be present in the training set. 
This is in contrast to previous audio-visual forensics approaches, which either require finetuning on datasets of manipulated video [40], or which are based on verifying that the speaker's voice matches previously observed examples [25]. We evaluate our model on videos that have manipulated a person's speech and face, using datasets of lip-synced and audio-driven face reenactment videos, some of which are also manipulated by faceswap techniques. Our model obtains strong performance on the FakeAVCeleb [52] and KoDF [61] datasets, despite the fact that it is trained entirely on real examples obtained from other video datasets. Our model generalizes to other spoken languages without retraining and obtains robustness to a variety of postprocessing operations, such as compression and blurring. We show through our experiments that: • Video forensics can be posed as an audio-visual anomaly detection problem. • Synchronization features convey information about video manipulations. • Our model can successfully detect fake videos, while training solely on real videos. • Our model generalizes to many types of image postprocessing operations and to speech videos from spoken languages not observed during training. |
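The detection recipe described above (fit a generative model to sequences of synchronization features on real video, flag low-probability sequences) can be illustrated with a toy stand-in. The sketch below estimates per-frame audio-visual delay features by nearest-offset matching and scores the delay sequence under a first-order autoregressive model; the real system uses learned synchronization embeddings and a learned autoregressive network, so everything here (feature choice, uniform model, threshold) is a simplifying assumption.

```python
import numpy as np

def delay_features(vis_feats, aud_feats, max_offset=5):
    """Per-frame delay: the audio-frame offset (within +/- max_offset) whose
    feature is most similar to the current visual feature (dot product)."""
    T = len(vis_feats)
    delays = []
    for t in range(T):
        offsets = range(max(0, t - max_offset) - t, min(T - 1, t + max_offset) - t + 1)
        scores = {o: vis_feats[t] @ aud_feats[t + o] for o in offsets}
        delays.append(max(scores, key=scores.get))
    return np.array(delays)

def sequence_nll(delays, transition_logp, prior_logp):
    """Average negative log-likelihood under a first-order (Markov) AR model."""
    nll = -prior_logp[delays[0]]
    for a, b in zip(delays[:-1], delays[1:]):
        nll -= transition_logp[a, b]
    return nll / len(delays)

# Toy usage: real videos hover near delay 0; a high average NLL flags a fake.
rng = np.random.default_rng(0)
vis = rng.normal(size=(20, 8))
aud = vis + 0.05 * rng.normal(size=(20, 8))
delays = delay_features(vis, aud)                   # values in [-5, 5]
K = 11                                              # number of possible delays
prior = np.log(np.full(K, 1.0 / K))
trans = np.log(np.full((K, K), 1.0 / K))            # uniform model, illustration only
score = sequence_nll(delays + 5, trans, prior)      # shift delays to index range [0, K)
is_fake = score > 2.5                               # threshold from held-out real videos
print(score, is_fake)
```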
Cho_itKD_Interchange_Transfer-Based_Knowledge_Distillation_for_3D_Object_Detection_CVPR_2023 | Abstract Point-cloud based 3D object detectors recently have achieved remarkable progress. However, most studies are limited to the development of network architectures for im-proving only their accuracy without consideration of the computational efficiency. In this paper, we first propose an autoencoder-style framework comprising channel-wise compression and decompression via interchange transfer-based knowledge distillation. To learn the map-view feature of a teacher network, the features from teacher and student networks are independently passed through the shared au-toencoder; here, we use a compressed representation loss that binds the channel-wised compression knowledge from both student and teacher networks as a kind of regulariza-tion. The decompressed features are transferred in opposite directions to reduce the gap in the interchange reconstruc-tions. Lastly, we present an head attention loss to match the 3D object detection information drawn by the multi-head self-attention mechanism. Through extensive experiments, we verify that our method can train the lightweight model that is well-aligned with the 3D point cloud detection task and we demonstrate its superiority using the well-known public datasets; e.g., Waymo and nuScenes.1 | 1. Introduction Convolutional neural network (CNN)-based 3D object detection methods using point clouds [13] [35] [36] [43] [49] have attracted wide attention based on their outstand-ing performance for self-driving cars. Recent CNN-based works have required more computational complexity to achieve higher precision under the various wild situation. Some studies [23] [36] [43] have proposed methods to im-prove the speed of 3D object detection through which the non-maximum suppression (NMS) or anchor procedures are removed but the network parameters are still large. 1Our code is available at https://github.com/hyeon-jo/interchange-transfer-KD. Figure 1. Performance comparison between teacher and stu-dent networks for a point-cloud based 3D object detection. The top example images are qualitatively compared between the results of teacher, student and our networks. Specifically, the first row im-ages are an input sample with labels and the center heatmap head of the teacher network. The second row examples are responses of teacher, student, and ours for the yellow circle on the heatmap (or the blue dash circle on the input). The bottom image quantita-tively shows the computational complexity and the corresponding accuracy of teacher, student and our networks, respectively. Best viewed in color. Knowledge distillation (KD) is one of the parameter compression techniques, which can effectively train a com-pact student network through the guidance of a deep teacher network, as shown in the example images of Fig. 1. Starting with Hinton’s work [9], many KD studies [10] [20] [28] [44] have transferred the discriminative teacher knowledge to the student network for classification tasks. From the viewpoint of the detection task, KD should be extended to the regres-sion problem, including the object locations, which is not easy to straight-forwardly apply the classification-based KD methods to the detection task. To alleviate this problem, KD methods for object detection have been developed for mim-icking the output of the backbone network [15] ( e.g., region This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. 
Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 13540 proposal network) or individual detection head [2] [32]. Nevertheless, these methods have only been studied for de-tecting 2D image-based objects, and there is a limit to ap-plying them to sparse 3D point cloud-based data that have not object-specific color information but only 3D position-based object structure information. Taking a closer look at differences between 2D and 3D data, there is a large gap in that 2D object detection usually predicts 2D object locations based on inherent color infor-mation with the corresponding appearances, but 3D object detection estimates 3D object boxes from inputs consist-ing of only 3D point clouds. Moreover, the number of the point clouds constituting objects varies depending on the distances and presence of occlusions [42]. Another chal-lenge in 3D object detection for KD is that, compared to 2D object detection, 3D object detection methods [4] [6] [43] [21] have more detection head components such as 3D boxes, and orientations. These detection heads are highly correlated with each other and represent different 3D char-acteristics. In this respect, when transferring the detection heads of the teacher network to the student network using KD, it is required to guide the distilled knowledge under the consideration of the correlation among the multiple detec-tion head components. In this paper, we propose a novel interchange transfer-based KD (itKD) method designed for the lightweight point-cloud based 3D object detection. The proposed itKD comprises two modules: (1) a channel-wise autoencoder based on the interchange transfer of reconstructed knowl-edge and (2) a head relation-aware self-attention on multi-ple 3D detection heads. First of all, through a channel-wise compressing and decompressing processes for KD, the in-terchange transfer-based autoencoder effectively represents the map-view features from the viewpoint of 3D representa-tion centric-knowledge. Specifically, the encoder provides an efficient representation by compressing the map-view feature in the channel direction to preserve the spatial po-sitions of the objects and the learning of the student net-work could be regularized by the distilled position infor-mation of objects in the teacher network. For transferring the interchange knowledge to the opposite networks, the decoder of the student network reconstructs the map-view feature under the guidance of the teacher network while the reconstruction of the teacher network is guided by the map-view feature of the student network. As a result, the student network can effectively learn how to represent the 3D map-view feature of the teacher. Furthermore, to refine the teacher’s object detection results as well as its repre-sentation, our proposed head relation-aware self-attention gives a chance to learn the pivotal information that should be taught to the student network for improving the 3D de-tection results by considering the inter-head relation among the multiple detection head and the intra-head relation ofthe individual detection head. In this way, we implement a unified KD framework to successfully learn the 3D representation and 3D detection results of the teacher network for the lightweight 3D point cloud object detection. We also conduct extensive ablation studies for thoroughly validating our approach in Waymo and nuScenes datasets. 
The results reveal the outstanding potential of our approach for transferring distilled knowledge that can be utilized to improve the performance of 3D point cloud object detection models. Our contributions are summarized as follows: • For learning the 3D representation-centric knowledge from the teacher network, we propose the channel-wise autoencoder regularized in the compressed domain and the interchange knowledge transfer method wherein the reconstructed features are guided by the opposite networks. • For detection head-centric knowledge of the teacher, we suggest the head relation-aware self-attention which can efficiently distill the detection properties under the consideration of the inter-head relation and intra-head relation of the multiple 3D detection heads. • Our work is the best attempt to reduce the parameters of point cloud-based 3D object detection using KD. Additionally, we validate its superiority using two large datasets that reflect real-world driving conditions, e.g., Waymo and nuScenes. |
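A minimal sketch of the interchange transfer idea in the first contribution above: a shared channel-wise autoencoder compresses the teacher and student map-view features, the compressed codes are tied together as a regularizer, and each reconstruction is supervised by the opposite network's feature. The 1x1-conv-like linear maps, feature shapes, and L1 distances below are illustrative assumptions, not the authors' exact layers or losses.

```python
import numpy as np

def channel_compress(feat, W_enc):
    """Compress the map-view feature along channels; spatial layout is preserved."""
    C, H, W = feat.shape
    return (W_enc @ feat.reshape(C, H * W)).reshape(-1, H, W)

def channel_decompress(code, W_dec):
    Cc, H, W = code.shape
    return (W_dec @ code.reshape(Cc, H * W)).reshape(-1, H, W)

def interchange_transfer_losses(f_teacher, f_student, W_enc, W_dec):
    """Shared autoencoder on both features; reconstructions are matched to the
    opposite network's map-view feature, codes are matched to each other."""
    c_t = channel_compress(f_teacher, W_enc)
    c_s = channel_compress(f_student, W_enc)
    r_t = channel_decompress(c_t, W_dec)
    r_s = channel_decompress(c_s, W_dec)
    loss_compressed = np.abs(c_t - c_s).mean()           # compressed-representation loss
    loss_interchange = (np.abs(r_t - f_student).mean()   # teacher recon -> student feature
                        + np.abs(r_s - f_teacher).mean())  # student recon -> teacher feature
    return loss_compressed, loss_interchange

rng = np.random.default_rng(0)
C, Cc, H, W = 64, 16, 32, 32
f_t, f_s = rng.normal(size=(C, H, W)), rng.normal(size=(C, H, W))
W_enc = rng.normal(scale=0.1, size=(Cc, C))
W_dec = rng.normal(scale=0.1, size=(C, Cc))
print(interchange_transfer_losses(f_t, f_s, W_enc, W_dec))
```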
Bucarelli_Leveraging_Inter-Rater_Agreement_for_Classification_in_the_Presence_of_Noisy_CVPR_2023 | AbstractIn practical settings, classification datasets are obtainedthrough a labelling process that is usually done by humans.Labels can be noisy as they are obtained by aggregating thedifferent individual labels assigned to the same sample bymultiple, and possibly disagreeing, annotators. The inter-rater agreement on these datasets can be measured whilethe underlying noise distribution to which the labels aresubject is assumed to be unknown. In this work, we: (i)show how to leverage the inter-annotator statistics to esti-mate the noise distribution to which labels are subject; (ii)introduce methods that use the estimate of the noise distri-bution to learn from the noisy dataset; and (iii) establishgeneralization bounds in the empirical risk minimizationframework that depend on the estimated quantities. We con-clude the paper by providing experiments that illustrate ourfindings.1. IntroductionSupervised learning has seen enormous progress in thelast decades, both theoretical and practical. Empirical riskminimization is used as a learning framework [23], whichrelies on the assumption that the model is trained with iid(independent and identically distributed) sampled data fromthe joint distribution between features and labels. As a con-sequence of generalization bounds, when this assumption issatisfied any desired performance can be achieved as longas enough training data is available. However in many real-world applications, due to flaws during the data collectionand labeling process, the assumption that the training datais sampled from the true feature-label joint distribution doesnot hold. Training data is often annotated by human raterswho have some non-zero probability of making mistakes. It*This work was done during Maria Sofia Bucarelli’s and Federico Si-ciliano’s internship at Amazon.has been reported in [21] that the ratio of corrupted labelsin some real-world datasets is between8.0%and,38.5%.As a consequence of the presence of incorrect labels in thetraining dataset, the aforementioned assumption is violatedand hence performance guarantees based on generalizationbounds no longer hold.This gap between theory and practice raises the questionwhether it is possible to learn from datasets with noisy la-bels while still having performance guarantees. This ques-tion has received a lot of attention lately and has alreadybeen answered in the positive in some cases [15,16]. In-deed multiple works have introduced learning algorithmsthat can cope with datasets with incorrect labels while guar-anteeing desirable performance through provable general-ization bounds. However, these solutions do not solve theentirety of the problem due to the fact that they rely onprecise knowledge of the error rate to which the labelsare subject, which is often unknown in practice. Severalworks [16,26,27] attempt to address this issue by introduc-ing techniques to estimate such error rate.Some of thesemethods have the drawback of relying on assumptions thatdo not always hold in practice, such as the existence of an-chor samples [16]. Ideally, it would be desirable to designlearning algorithms that are both robust to noisy labels, andfor which performance guarantees can be provided.An approach, often used in industry to reduce the im-pact of errors made by human raters, is to label the samedataset multiple times by different annotators. 
Then the in-dividual labels are combined to reduce the probability oferroneous labels in the dataset, two popular approaches aremajority vote or soft labeling. In these cases inter-annotatoragreement (IAA) scores (like Cohen’s kappa [1] and Fleiss’kappa [5]) provide measurable metrics that are directly re-lated to the probability of error present in the labels.Since the IAA holds a direct relationship with the errorrate associated with the human raters, one could potentiallyestimate the error rate and leverage this estimate to modify This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 3439 the learning algorithms with the objective of making themrobust to the resulting noise in the labels. This is the maindirection we explore in this work.Motivation and Contributions:This work is motivatedby two main points: i-to the best of our knowledge thereare no published results that indicate how to leverage theIAA statistics to estimate the label noise distribution; andii-the generalization bounds of existing noise tolerant train-ing methods often rely onunknownquantities (like the truenoise distribution) instead of on quantities that can be mea-sured (like the IAA statistics).Our contributions are the following: (i) we provide amethodology to estimate the label noise distribution basedon the IAA statistics; (ii) we show how to leverage thisestimate to learn from the noisy dataset; and (iii) we pro-vide generalization bounds for our methods that depend onknownquantities.2. Related worksOur work is related to literature on three main topics: (i)robust loss function design, (ii) label aggregating and (iii)noise rate estimation.Robust loss functionsIn classification tasks, the goal isto obtain the lowest probability of classification error. The0 1loss counts how many errors a classifier makes ona given dataset and is often used in the evaluation of theclassifier. However, it is rarely used in optimization proce-dures because it is non-differentiable and non-continuous.To overcome this, many learning strategies use some con-vexsurrogatesof the0 1loss function (e.g. hinge loss,squared error loss, cross-entropy).It was proved ( [6], [7]) thatsymmetricloss functions,that are functions for which the sum of the risks over all cat-egories is equivalent to a constant for each arbitrary exam-ple, are robust to label noise. Examples of symmetric lossfunctions include the0 1loss, the Ramp Loss and (soft-max) Mean Absolute Error (MAE). In [29] authors showthat even if MAE is noise tolerant and cathegorical crossentropy (CCE) is not, MAE can perform poorly when usedto train DNN in challenging domains. They also propose aloss function that can be seen as a generalization of MAEand CCE. Several other loss functions that do not strictlysatisfy the symmetry condition have also been proposed tobe robust against label noise when training deep neural net-works [4,13,24].[15] presents two methods to modify the surrogate lossin the presence of class-conditional random label noise. Thefirst method introduces a new loss that is an unbiased esti-mator for a given surrogate loss, and the second methodintroduces a label-dependent loss. 
The paper provides gen-eralization bounds for both methods, which depend on thenoise rate of the dataset and the complexity of the hypothe-sis space.Labels aggregationWhen constructing datasets for su-pervised learning, data is often not labeled by a single an-notator, rather multiple imperfect annotators are asked to as-sign labels to documents. Typically, separate labels are ag-gregated into one before learning models are applied [3,20].In our work, we propose to exploit a measure of the agree-ment between annotators to explicitly calculate the noiseof the dataset. Recently some works revisited the choiceof aggregating labels. In [19] authors explore how to trainLETOR models with relevance judgments distributions in-stead of single-valued relevance labels. They interpret theoutput of a LETOR model as a probability value or distri-bution and define different KL divergence-based loss func-tions to train a model. The loss they proposed can be used totrain any ranking model that relies on gradient-based learn-ing (in particular they focused on transformer-based neu-ral LETOR models and on the decision tree-based GBMmodel). However, the authors do not directly estimate thenoise rates in the annotations or study how learning fromthese noisy labels affects the generalization error of themodels trained with the methods they introduce. In [25]the authors analyze the performance of both label aggrega-tion and non-aggregation approaches in the context of em-pirical risk minimization for a number of popular loss func-tions, including those designed specifically for the noisy la-bel learning problem. They conclude that label separationis preferable to label aggregation when noise rates are highor the number of labelers/annotations is insufficient. [17]and [22] exploit the availability of multiple human anno-tations to construct soft labels and concludes that this in-creases performance in terms of generalization to out-of-training-distribution test datasets, and robustness to adver-sarial attacks. [2] focus on efficiently eliciting soft labelsfrom individual annotators.Noise rate estimationA number of approaches have beenproposed for estimating the noise transition matrix (i.e. theprobabilities that correct labels are changed for incorrectones) [12,16,31]. Usually these methods use a small num-ber of anchor points (that are samples that belong to a spe-cific class with probability one) [8]. In particular, [16] pro-posed a noise estimation method based on anchor points,with the intent to provide an ‘end-to-end’ noise-estimation-and-learning method. Due to the lack of anchor points inreal data, some works focused on a way to detect anchorpoints in noisy data, [26,27]. In [27] the authors proposeto introduce an intermediate class to avoid directly estimat-ing the noisy class posterior. [28] also propose an iterativenoise estimation heuristic that aims to partly correct the er-ror and pointed out that the methods introduced by [16] 3440 and [27] have an error in computing anchor points, and pro-vide conditions on the noise under which the methods workor fail. [26] provides a solution that can infer the transitionmatrix without anchor points. Indeed they use the instanceswith the highest class posterior probabilities for noisy dataas anchor points. 
Our work differs from the mentioned workthat use anchor points because we do not need to assumethe existence of anchor points or to have a validation set tolearn the noise rate and we only use noisy data to train ourmodel, moreover we neither aim to detect anchor points inthe noisy data. Also most of these works do not study thegeneralization properties of the proposed models,while wealso address this problem and find bound that depend on theestimated noise transition matrix.Another approach is based on the clusterability condi-tion, that is an example belongs to the same true class of itsnearest-neighbors representations. [30] presented a methodthat relies on statistics of high-order consensuses among the2 nearest-neighbors noisy labels.3. Problem formulation3.1. NotationIn this paper we follow the following notation. Matricesand sets are denoted by upper-case and calligraphic letters,respectively. The space ofd-dimensional feature vectors isdenoted byX⇢Rd.We denote byCthe number of classes and byejthej-th standard canonical vector inRC, namely the vector thathas1in thej-th position and zero in all the other positions.Y={e1,...,eC}⇢{0,1}Cis the label set. Feature vec-tors and labels are denoted byxandy, respectively.Disthe joint distribution of the feature vectors and labels, i.e.(x, y)⇠D. The sampled dataset of sizenis denoted b | ybD={(xi,yi)}ni=1.f(x)denotes the output of the classifierffor feature vectorxand is aCdimensional vector. Allvectors are column vectors.We denote by`(t, y)a generic loss function for the clas-sification task that takes as inputCdimensional vectorstandy. In practicetwill contain the prediction of the modelandywill be the ground-truth label as a one-hot encodedvector. Namely`:[ 0,1]C⇥Y!R.3.2. BackgroundWe consider the classification problem within the super-vised learning framework, where the ultimate goal is to min-imizethe`-riskR`,D(f)=E(x,y)⇠D[`(f(x),y)], for someloss function`. We denote byDthe joint distribution of fea-ture vectorsxand labelsy. In practice, since the distributionis unknown instead of minimizingR`,D(f)we minimize anempirical risk over some sampled datasetbD:bR`,bD(f)=1nnXi=1`(f(xi),yi)=E(x,y)⇠bD[`(f(x),y)].(1)In this work we assume that the true labelsyiare un-known and consider two scenarios, both of which rely onHannotators.3.2.1 Scenario IIn this scenario we have access to theHlabels providedby the annotators for each sample, whereyi,arefers to thelabel provided by thea-th annotator for thei-th sample. Fora given feature vectorxithe distribution of labels providedby annotatorais given by its noise transition matrixTa,which is defined as follows:(Ta)i,j:=P(ya=j|y=i)(2)Assumption 1.We assume that all annotators have thesame noise transition matrix (i.e.Ta=Tfor alla), thatTis symmetric and that its diagonal elements are largerthan0.5(i.e.P(ya=i|y=i)>0.5,8i2{1,...C}).Note that by definitionTis right stochastic and hencealso doubly stochastic. It is also strictly diagonally domi-nant and therefore non-singular.Proposition 3.0.1.Tis positive definite.Proof.SinceTis symmetric it follows that all eigenvaluesare real. 
Combining the fact that it is strictly diagonallydominant with Gershgorin’s theorem we conclude that alleigenvalues lie in the range(0,1]and henceTis positivedefinite.Assumption 2.We assume that the annotators are condi-tionally independent on the true labely:P(ya,yb|y)=P(ya|y)P(yb|y).(3)We now define the IAA matrixMabbetween annotatorsaandbas follows:(Mab)i,j:=P(ya=i, yb=j)(4)Proposition 3.0.2.Leveraging Assumption2the agreementmatrixMa,bcan be written as follows:Ma,b=TaTDTb(5)D:=diag{⌫}(6)⌫:=[P(y= 1),···,P(y=C)]T.(7)Due to Proposition3.0.1and the fact thatDis positive def-inite it follows that all matricesMa,bare invertible. 3441 Assumption 3.We assume that the class probabilities (andhenceD) are known.Due to Assumption1all annotators share the same noisetransition matrixT. ThereforeMabis independent ofaandband from now on we remove this dependencyin the no-tation(i.e. we getM=TTDT). Furthermore, sinceTisinvertible andDdiagonal and positive definite it followsthatMis also positive definite.Note that since we have access to all the labels providedby theHannotators for all the samples we can obtain anestimate ofMwhich we denotecM.Assumption 4.We assume thatcMis a consistent estimator.For the case of two annotators, one possible consistentestimator[Ma,bthat exploits its symmetry condition is givenby:([Ma,b)i,j=nXk=11(ya,k=i, yb,k=j)+1(ya,k=j, yb,k=i)2n(8)If the annotators have the same transition matrix,Mwillbe the same for all pairs of annotators. So we can estimateM, in the case ofH 2by averaging the estimatorscMabobtain by Eq. (8) for all possible pairs of annotators. Theestimator in this case can be written as(cM)i,j=1H(H 1)HXa=1HXb=1b6=anXh=11(ya,h=i, yb,h=j)n.(9)3.2.2 Scenario IIIn the second scenario, for eachi-th sample we are given aunique label˜yithat is produced by aggregating theHin-dividual labels according to some known aggregating pol-icy (like majority vote). In this case, since we do not haveaccess to the individual annotations we assume thatcMisprovided.The probability that labelyiis corrupted to some otherlabel˜yiis given by theaggregated noise transition matrix 2[0,1]C⇥C, where ij:=P(˜y=j|y=i)is the proba-bility of the true labelibeing flipped into a corrupted labeljandCis the number of classes. Note that by definition is aright stochastic matrix that is determined byT, the amountof annotatorsHand the aggregating policy. We will studyboth the case where =T, and the case in which thereexists a generic Lipschitz function so that 1= (T).There are different policy choices to construct the datasetthat lead to =T. If we decide to use only one annotator,for instancea, to build the final dataset, namely for eachsample˜yi=yiawe have =Ta. Or if annotators arehomogeneous, i.e. 
they have the same noise transition ma-trixT, and to build the final dataset we decide to randomlyselect the label of one of the annotators we have that =T.Even restricting ourselves to the case of homogeneousannotators, depending on the rule with which we build thedataset we can have a more complex relationship betweenthe matrixTand .We also obtain generalization bounds in the case werean estimate of the agreement matrixMis not available andwe only have access to a scalar representation of the inter-annotator agreement, in particular we consider the casewhere the Cohen’sis given.3.2.3 ObjectiveThe objective in both scenarios is to: i) usecMto estimatethe noise transition matrices (Tand ); ii) leverage theseestimates to be able to learn from the noisy dataset in a morerobust manner; and iii) obtain generalization bounds for theresulting learning methods.4. Main resultsWe divide the main contributions in three sections. Inthe first section we show how to estimate the noise matri-cesTNext we indicate how to leverage these estimates tolearn for the datasets with noisy labels. Finally we obtainbounds,depending on the Rademacher complexity of theclass of functions,on the generalization gap for aboundedand Lipschitzloss function4.1. Estimation of the noise transition matricesWe start stating the following Lemma that allows us towrite the unknown matrixT(and its inverse), as a functionofDandM.Lemma 4.1.IfD12commutes withTwe have that:T=U⇤12UT(10)T 1=U⇤ 12UT(11)D 12MD 12=U⇤UT(12)whereU⇤UTis the eigenvalue decomposition ofD 12MD 12(i.e.Uis some orthogonal matrix and⇤is a diagonal positive definite matrix).A detailed discussion of when the commutativity as-sumption is satisfied is included in AppendixB. The proofof the previous Lemma can be find in AppendixC.1.Note that we could use Lemma4.1to estimateTas fol-lows:bT=bUb⇤12MbUT(13) 3442 wherebUb⇤MbUTis the eigenvalue decomposition ofD 12cMD 12. However such estimate can result in matri-ces that are not doubly stochastic, or diagonally dominantdue to estimation errors. A more accurate estimate ofTcould be obtained asbT=⇡(bUb⇤12MbUT)where⇡is a projec-tion operator to the set of doubly stochastic, positive definitematrices with diagonal elements greater than 0.5 and non-negative entries (which is a convex set). We can obtain suchprojection by solving the following optimization problem:bT=⇡(bUb⇤12MbUT) = argminB||B bUb⇤12MbUT||22(14)s.t.B=BTXjBi,j=18iBi,j 08i, jBi,i 0.58iNote that this optimization problem is convex becausethe constraints are linear and for symmetric matrices it holdsthat||bT bUb⇤12MbUT||22= max(bT bUb⇤12MbUT), which is aconvex function ofbT.To summarize,Tcan be estimated as follows.First,obtain an estimate ofM. Then obtain the eigenvalue de-composition ofD 12cMD 12=bUb⇤bUT(note that this de-composition always exists becauseD 12cMD 12is sym-metric). Finally obtain the estimate as:bT:=⇡(bUb⇤12bUT).Note that once the estimate ofbTis obtained,b can beobtained since we assumed the label aggregating policy tobe known.Lemma 4.2.LetMa,bbe the agreement matrix for anno-tatorsaandbdefined in Eq. (4) and[Ma,bbe the esti-mated agreement matrix defined in Eq. (8) and let||.||pbethe matrix norm induced by thepvector norm. For everyp2[1,1]and for every >0, with probability at least1 ||Ma,b [Ma,b||prC22nln2C2 .(15)wherePndenotes the probability according to which thentraining samples are distributed, i.e. 
Proof. The proof can be found in Appendix C.2.

From Lemma 4.2 it follows that if $\hat{M}$ is estimated as in Eq. (9), since $\hat{M}$ is an average of the $\hat{M}_{ab}$, it also holds that for every $p \in [1,\infty]$ and for every $\delta > 0$, with probability at least $1-\delta$:
$$\|M - \hat{M}\|_p \le \sqrt{\frac{C^2}{2n} \ln\frac{2C^2}{\delta}}. \tag{16}$$

Theorem 4.3. Let $T$ be the noise transition matrix defined as in Eq. (2) and $\hat{T}$ its estimate (defined as in Eq. (14)). With probability at least $1-\delta$:
$$\|T - \hat{T}\|_2 \le \frac{C(\sqrt{C}+1)\,\lambda_{\max}(D)}{\lambda_{\min}(\hat{T})} \sqrt{\frac{1}{2n} \ln\frac{2C^2}{\delta}}, \tag{17a}$$
$$\|T^{-1} - \hat{T}^{-1}\|_2 \le \frac{9\,C(\sqrt{C}+1)\,\lambda_{\max}(D)}{\lambda_{\min}(\hat{T})^2} \sqrt{\frac{1}{2n} \ln\frac{2C^2}{\delta}}, \tag{17b}$$
for $n > \dfrac{C^2 (\sqrt{C}+1)^2 \ln(2C^2/\delta)}{2\,\lambda_{\min}(\hat{T})^2}$.

Proof. The proof can be found in Appendix C.3.

From the previous theorem we can notice that the error in the estimation of $T$ decays as $1/\sqrt{n}$ as a function of $n$.

4.2. Learning from noisy labels

In this section we show how to leverage the estimates of the error rates to train the models.

4.2.1 Posterior distribution of true labels as soft-labels

It is noteworthy that if we have access to the labels provided by all annotators, the posterior probabilities of the true labels can be calculated leveraging $T$ and Bayes' Theorem as follows:
$$p_{c,i} := P(y_i = c \mid y_{1,i},\dots,y_{H,i}) \;\propto\; \nu_c \prod_{h=1}^{H} \underbrace{P(y_{h,i} \mid y_i = c)}_{=\,T_{c,\,y_{h,i}}}. \tag{18}$$
We recall that $\nu_c = P(y_i = c)$ and that the conditional probabilities on the r.h.s. are given by $T$. In our case we can use our noise transition estimates to estimate the posterior probabilities of the true labels, and afterwards we can use these posteriors to train the classifier.

Lemma 4.4. For infinitely many annotators, the posterior distribution over every sample calculated using the true $T$ converges to the Dirac delta distribution centered on the true label almost surely (i.e. $\lim_{H\to\infty} p_{c,i} \overset{a.s.}{=} \mathbb{1}(y_i = c)$).

Proof. See Appendix C.5.

We can use the posterior distributions as soft-labels, defining the following loss for the $i$-th sample:
$$\ell(f(x_i), y_{1,i},\dots,y_{H,i}) = \ell(f(x_i), \bar{p}_i), \tag{19}$$
where $\bar{p}_i = [p_{1,i},\cdots,p_{C,i}]^T$. Or we can use the posterior distributions to weight the loss function at the $i$-th sample evaluated at each of the possible labels:
$$\ell(f(x_i), y_{1,i},\dots,y_{H,i}) = \sum_{c=1}^{C} p_{c,i}\,\ell(f(x_i), e_c), \tag{20}$$
where $e_c$ is the vector in $\mathbb{R}^C$ with $1$ in the $c$-th position. Notice that for the categorical cross-entropy loss the two functions defined above coincide, but in general they define two different loss functions.

Note that these soft-labels are different from the ones obtained by averaging the annotators' labels as is done in [25]. The method using the posteriors exploits the $T$ matrix and thus more information than the simple mean of the values of the losses among annotators. We therefore expect this to yield better results than the aggregation using the mean proposed in [25]. These considerations are supported by the empirical results we obtained on synthetic datasets (see Sec. 6).

4.2.2 Robust loss functions

Another way to leverage the estimate of $T$ is to use robust loss functions, like the forward and backward loss functions presented in [15, 16]. Let $\ell(t, y)$ be a generic loss function for the classification task; with a little abuse of notation we define $\ell(t) = [\ell(t, e_1),\dots,\ell(t, e_C)]^T$. The backward and forward loss functions are defined in Eq. (21a) and Eq. (21b), respectively:
$$l_b(t, y) = (\hat{\Gamma}^{-1} \ell(t))_y, \tag{21a}$$
$$l_f(t, y) = (\ell(\hat{\Gamma}^T t))_y. \tag{21b}$$
To explain the notation in Eq. (21a): we first take the dot product between the matrix $\hat{\Gamma}^{-1}$ and the vector $\ell(t)$, and then the dot product of the resulting vector with $y$. These losses leverage aggregated labels and therefore different aggregating techniques can be used, like majority vote. Another possible aggregating technique
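To make Eqs. (18) and (21b) concrete, here is a short NumPy illustration (not the paper's code) that computes the posterior soft-labels from an estimated transition matrix and evaluates a forward-corrected cross-entropy; the label layout and the helpers T_hat, Gamma_hat and nu follow the assumptions of the earlier sketches.

```python
import numpy as np

def posterior_soft_labels(labels_i, T_hat, nu):
    """Posterior over the true class for one sample, Eq. (18).

    labels_i: length-H vector of annotator labels for sample i.
    T_hat:    (C, C) estimated transition matrix, T[c, j] = P(noisy j | true c).
    nu:       (C,) class prior.
    """
    log_p = np.log(nu)
    for y_h in labels_i:
        log_p += np.log(T_hat[:, y_h] + 1e-12)   # multiply P(y_h | y = c) over annotators
    p = np.exp(log_p - log_p.max())
    return p / p.sum()

def forward_cross_entropy(probs, y_tilde, Gamma_hat):
    """Forward-corrected cross-entropy, Eq. (21b), for one sample.

    probs:     (C,) predicted class probabilities f(x).
    y_tilde:   aggregated noisy label (int).
    Gamma_hat: (C, C) estimated aggregated transition matrix.
    """
    corrected = Gamma_hat.T @ probs              # mix the prediction through Gamma^T
    return -np.log(corrected[y_tilde] + 1e-12)
```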
Ding_Exploring_Structured_Semantic_Prior_for_Multi_Label_Recognition_With_Incomplete_CVPR_2023 | Abstract Multi-label recognition (MLR) with incomplete labels is very challenging. Recent works strive to explore the image-to-label correspondence in the vision-language model, i.e., CLIP [22], to compensate for insufficient annotations. In spite of promising performance, they generally overlook the valuable prior about the label-to-label correspondence. In this paper, we advocate remedying the deficiency of label supervision for MLR with incomplete labels by deriving a structured semantic prior about the label-to-label correspondence via a semantic prior prompter. We then present a novel Semantic Correspondence Prompt Network (SCPNet), which can thoroughly explore the structured semantic prior. A Prior-Enhanced Self-Supervised Learning method is further introduced to enhance the use of the prior. Comprehensive experiments and analyses on several widely used benchmark datasets show that our method significantly outperforms existing methods on all datasets, well demonstrating the effectiveness and the superiority of our method. Our code will be available at https://github.com/jameslahm/SCPNet . | 1. Introduction Multi-label recognition (MLR) aims to describe the image content with various semantic labels [5, 26, 29, 30]. It encodes the visual information into structured labels, which can benefit the indexing and fast retrieval of images in broad practical applications, such as the search engine [24, 27] and the recommendation system [2, 33]. Benefiting from the development of deep learning, MLR has achieved remarkable progress in recent years. *Equal contributions. †Corresponding author. Figure 1. Overview of CNN-based, DualCoOp [26] and our SCPNet. Like DualCoOp, our SCPNet adopts CLIP as the base model. Differently, our SCPNet aims to enhance the MLR with the prior about the label-to-label correspondence. MC means multi-class. CL denotes contrastive learning. However, collecting high-quality full annotations becomes very challenging when the label set scales up, which greatly hinders the wide usage of MLR in real scenarios. Recently, researchers explore more feasible solutions for MLR. For example, the full label setting is relaxed with a partial label setting in [3, 21], which merely annotates a few labels for each training image. One more extreme setting with solely one single positive label is tackled in [8, 16]. These settings can be unified into a common issue of incomplete labels, which relieves the burden of the full annotation and considerably reduces the annotation cost. Therefore, it draws increasing attention from both academia and industry. Compared with the full label setting, the incomplete label setting encounters a dilemma of poor supervision, resulting in severe performance drops for MLR. Existing methods strive to regain supervision from missing labels by exhaustively exploring the image-to-label correspondence via semantic-aware modules [4, 21] or loss calibration methods [8, 16, 32].
A convolutional neural network (CNN) pretrained on ImageNet is usually leveraged to construct the MLR model. Its multi-class softmax layer is often replaced by a multi-label sigmoid layer (Fig. 1 (a)). Such a replacement wipes out prior knowledge about the correspondence between images and labels, although it is necessary and inevitable. Recently, vision-language pretrained models have obtained remarkable success in various vision tasks [26, 34, 35]. Thanks to their large-scale pretraining, the vision-language model, e.g., CLIP [22], which is trained with 400 million image-text pairs, can well bridge the visual-textual gap [26], providing rich prior knowledge for the downstream tasks. For the MLR task, Sun et al. [26] propose the DualCoOp method, which is the first work to employ CLIP as the MLR base model. Through dual prompts, DualCoOp directly adopts the text encoder in CLIP as the multi-label classification head (Fig. 1 (b)), without abandoning the visual-textual prior in the pretrained CLIP. Despite its effectiveness, DualCoOp is still limited in remedying the deficiency of label supervision, which is desired for MLR with incomplete labels. Intuitively, it is convenient to reason unknown labels from annotated labels by leveraging the correspondence among labels, e.g., tables are likely to appear with chairs, and cars are usually accompanied by roads. Therefore, such a label-to-label correspondence can help recover more label supervision and thus benefit MLR with incomplete labels. Besides, although most vision-language models do not encourage contrastive learning among texts, they are still abundant in knowledge about the label-to-label correspondence because of the large-scale cross-modality training. However, such a valuable prior is rarely explored in the existing state-of-the-art method, i.e., DualCoOp [26]. In this paper, we aim to mitigate such deficiency of label supervision for MLR with incomplete labels by leveraging the abundant prior about the label-to-label correspondence in CLIP [22]. We present a structured prior prompter to conveniently derive a structured semantic prior from CLIP. Then we propose a novel Semantic Correspondence Prompt network (SCPNet) (Fig. 1 (c)), which can prompt the structured label-to-label correspondence with a cross-modality prompter. Our SCPNet is also equipped with a semantic association module to explore high-order relationships among labels with the guidance of the derived structured semantic prior. A prior-enhanced self-supervised learning method is further introduced to comprehensively investigate the valuable prior. As a result, our method can neatly calibrate its predicted semantic distribution while maintaining self-consistency. To verify the effectiveness of the proposed method for MLR with incomplete labels, we conduct extensive experiments and analyses on a series of widely used benchmark datasets, i.e., MS COCO [19], PASCAL VOC [11], NUS Wide [7], CUB [28] and OpenImages [17]. Experimental results show that our method can significantly outperform state-of-the-art methods on all datasets, with a maximal improvement of 6.8%/3.4% mAP for the single positive label setting and the partial label setting, respectively, well demonstrating its effectiveness and superiority. Overall, our contributions are fourfold.
• We advocate leveraging a structured semantic prior to deal with the deficiency of label supervision for MLR with incomplete labels. To this end, we extract such a prior via a structured prior prompter.
• We present a semantic correspondence prompt network (SCPNet) based on a cross-modality prompter and a semantic association module. The SCPNet can adequately explore the structured prior knowledge, thus boosting MLR with incomplete labels.
• We design a prior-enhanced self-supervised learning method to further investigate such a structured semantic prior, which can enjoy both distribution refinement and self-consistency.
• Experimental results show that our method can consistently achieve state-of-the-art performance on all benchmark datasets, revealing its significant effectiveness. Thorough analyses also demonstrate the superiority of our method. |
Ding_HGFormer_Hierarchical_Grouping_Transformer_for_Domain_Generalized_Semantic_Segmentation_CVPR_2023 | Abstract Current semantic segmentation models have achieved great success under the independent and identically distributed (i.i.d.) condition. However, in real-world applications, test data might come from a different domain than training data. Therefore, it is important to improve model robustness against domain differences. This work studies semantic segmentation under the domain generalization setting, where a model is trained only on the source domain and tested on the unseen target domain. Existing works show that Vision Transformers are more robust than CNNs and show that this is related to the visual grouping property of self-attention. In this work, we propose a novel hierarchical grouping transformer (HGFormer) to explicitly group pixels to form part-level masks and then whole-level masks. The masks at different scales aim to segment out both parts and a whole of classes. HGFormer combines mask classification results at both scales for class label prediction. We assemble multiple interesting cross-domain settings by using seven public semantic segmentation datasets. Experiments show that HGFormer yields more robust semantic segmentation results than per-pixel classification methods and flat-grouping transformers, and outperforms previous methods significantly. Code will be available at https://github.com/dingjiansw101/HGFormer . | 1. Introduction Research in semantic image segmentation has leaped forward in the past years due to the development of deep neural networks. However, most of these models assume that the training and testing data follow the same distribution. In the real world, we frequently encounter testing data that is out of distribution. The generalization ability of models under distribution shift is crucial for applications related to safety, such as self-driving. *Corresponding author. Figure 1. Semantic segmentation can be considered as partitioning an image into classification units (regions), then classifying the units. The units can range from pixels to large masks. Intuitively, mask classification is more robust than per-pixel classification, as masks allow aggregating features over large image regions of the same class to predict a 'global' label. Despite this promise, the process of grouping pixels into whole-level masks directly from pixels is very challenging under distribution shift (e.g., Gaussian noise). In order to tackle this problem, we present a hierarchical grouping paradigm to group pixels to part-level masks first and then to group part-level masks to whole-level masks to get reliable masks. Then we combine both part-level and whole-level mask classification for robust semantic segmentation, given that the masks at the two levels capture complementary information. In the domain generalization setting, models are trained only on source domains and tested on target domains, where the distributions of source domains and target domains are different. Unlike domain adaptation [25, 56], target data is not accessible/needed during training, making the task challenging but practically useful.
Recently, Vision Transformers have been shown to be significantly more robust than traditional CNNs in out-of-distribution generalization [21, 25, 42, 58, 60, 70]. Some works interpret self-attention as a kind of visual grouping [7, 38], and believe that it is related to robustness [70]. However, these works mainly focus on classification. Although FAN [70] and Segformer [60] have been evaluated on segmentation, they do not explicitly introduce visual grouping in their networks. Since grouping is naturally aligned with the task of semantic segmentation, we would like to ask the question: can we improve the robustness of semantic segmentation by introducing an explicit grouping mechanism into semantic segmentation networks? Most deep learning based segmentation models directly conduct per-pixel classification without the process of grouping. Some recent segmentation models introduced flat grouping [15, 67] into the segmentation decoder, where pixels are grouped into a set of binary masks directly and classification on masks is used to make label prediction. By using a one-to-one matching similar to DETR [5], the loss between predicted masks and ground truth masks is computed. Therefore the network is trained to directly predict whole-level masks, as shown in Fig. 1. Intuitively, if whole-level masks are accurate, mask classification will be more robust than per-pixel classification due to its information aggregation over regions of the same class. But we find that using the flat grouping to generate whole-level masks is susceptible to errors, especially under cross-domain settings. This is shown by the example in Fig. 1-bottom. Different from the flat grouping works [14, 67], we propose a hierarchical grouping in the segmentation decoder, where the pixels are first grouped into part-level masks, and then grouped into whole-level masks. Actually, the hierarchical grouping is inspired by the pioneering works of image segmentation [2, 13, 49] and is further supported by strong psychological evidence that humans parse scenes into part-whole hierarchies [24]. We find that grouping pixels to part-level masks and then to whole-level masks is more robust than grouping pixels directly to whole-level masks. Part-level masks and whole-level masks segment images at different scales, such as parts and a whole of classes. Therefore, part-level and whole-level masks are complementary, and combining mask classification results at those different scales improves the overall robustness. To instantiate the hierarchical grouping idea, we propose a hierarchical grouping transformer (HGFormer) in the decoder of a segmentation model. The diagram is shown in Fig. 2. We first send the feature maps to the part-level grouping module. In the part-level grouping module, the initialization of cluster centers is downsampled from the feature maps. Then we compute the pixel-center similarities and assign pixels to cluster centers according to the similarities. To get the part-level masks, we only compute the similarities between each pixel feature and its nearby center features.
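The part-level grouping step just described (cluster centers initialised by downsampling the feature map, pixels softly assigned to centers by similarity) can be sketched as below. This is an illustrative reading of the mechanism rather than the released HGFormer code: the stride, the temperature, and the use of all centers instead of only the 3×3 nearby centers are simplifying assumptions. The whole-level aggregation by cross-attention, described next, would then consume these part-level masks.

```python
import torch
import torch.nn.functional as F

def part_level_grouping(feat, stride=8, tau=0.1):
    """Soft-assign each pixel to cluster centers to form part-level masks.

    feat: (B, C, H, W) pixel features from the backbone.
    Returns assignment maps of shape (B, num_centers, H, W).
    """
    B, C, H, W = feat.shape
    # Initialise cluster centers by downsampling the feature map.
    centers = F.adaptive_avg_pool2d(feat, (H // stride, W // stride))   # (B, C, h, w)
    hc, wc = centers.shape[-2:]
    # Pixel-center similarities (dot products over all centers; a real
    # implementation would restrict each pixel to its nearby centers only).
    sim = torch.einsum('bchw,bckl->bklhw', feat, centers)               # (B, hc, wc, H, W)
    sim = sim.reshape(B, hc * wc, H, W)
    # Soft assignment of pixels to centers; each channel is one part-level mask.
    masks = torch.softmax(sim / tau, dim=1)
    return masks
```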
We then aggregate the information of the part-level masks and generate whole-level masks by using cross-attention, similar to how previous methods aggregate pixel information to generate whole-level masks [14, 67]. Finally, we classify masks at different levels, and average the semantic segmentation results of all the scales. We evaluate the method under multiple settings, which are assembled by using seven challenging semantic segmentation datasets. In each of the settings, we train the methods on one domain and test them on other domains. Extensive experiments show that our model is significantly better than previous per-pixel classification based and whole-level mask based segmentation models for out-of-distribution generalization. To summarize, our contributions are: 1) We present a hierarchical grouping paradigm for robust semantic segmentation; 2) based on the hierarchical grouping paradigm, we propose a hierarchical grouping transformer (HGFormer), where the pixels are first grouped into part-level masks, and then grouped into whole-level masks. Final semantic segmentation results are obtained by making classifications on all masks; 3) HGFormer outperforms previous semantic segmentation models on domain generalized semantic segmentation across various experimental settings. We also give detailed analyses of the robustness of grouping-based methods under distribution shift. |
Ilett_3D_Shape_Reconstruction_of_Semi-Transparent_Worms_CVPR_2023 | Abstract 3D shape reconstruction typically requires identifying object features or textures in multiple images of a subject. This approach is not viable when the subject is semi-transparent and moving in and out of focus. Here we overcome these challenges by rendering a candidate shape with adaptive blurring and transparency for comparison with the images. We use the microscopic nematode Caenorhabditis elegans as a case study as it freely explores a 3D complex fluid with constantly changing optical properties. We model the slender worm as a 3D curve using an intrinsic parametrisation that naturally admits biologically-informed constraints and regularisation. To account for the changing optics we develop a novel differentiable renderer to construct images from 2D projections and compare against raw images to generate a pixel-wise error to jointly update the curve, camera and renderer parameters using gradient descent. The method is robust to interference such as bubbles and dirt trapped in the fluid, stays consistent through complex sequences of postures, recovers reliable estimates from blurry images and provides a significant improvement on previous attempts to track C. elegans in 3D. Our results demonstrate the potential of direct approaches to shape estimation in complex physical environments in the absence of ground-truth data. *{T.Ilett, O.Yuval, T.Ranner, N.Cohen, D.C.Hogg}@leeds.ac.uk Funding: This work was supported by the University of Leeds and EPSRC. Author contributions: Conceptualisation, Methodology, Formal analysis, Investigation, Software, Visualisation: TPI. Data curation, Validation: TPI, OY. Writing: TPI (original), all (review and editing). Funding acquisition, Supervision: NC, DCH, TR. †Equal contribution. Acknowledgements: Additional thanks to Matan Braunstein (for help with Fig. 1), Robert I. Holbrook (data), Felix Salfelder (discussions and data), Lukas Deutz (discussions) and Jen Kruger (proof reading). Data availability: Supplementary movies are available here: https://doi.org/10.6084/m9.figshare.22310650 | 1. Introduction Many creatures such as fish, birds and insects move in all directions to search and navigate volumetric environments. Acquiring 3D data of their motion has informed models of locomotion, behaviour and neural and mechanical control [3, 22]. While technological advances have made the collection of large quantities of multi-viewpoint visual data more attainable, methods for extracting and modelling 3D information remain largely domain-dependent as few species share common geometric models or exist within the same spatial and temporal scales [4, 11, 14, 26, 37, 41, 50, 54, 65]. Furthermore, while humans and some domesticated animals [30, 60] may act naturally while wearing special markers, marker-less observations of many species make feature extraction more challenging and mean pose estimation generally lacks ground-truth data [48]. As a case study in marker-less 3D shape reconstruction, we consider C. elegans, a hair-thick, ∼1 mm long animal with a simple tapered cylinder shape, which can be constructed from a midline "skeleton". In the wild, C. elegans can be found in a wide range of complex 3D environments, e.g.
decomposing organic matter, with continually changing physical properties [15, 17, 46]. However, to date, experiments have focused nearly exclusively on locomotion on a plane, limiting insight to the constrained, planar behaviours. We obtained a large dataset (4 hours 53 minutes, ≃440,000 frames at 25 Hz) of experimental recordings of individual worms moving freely inside a glass cube filled with a gelatin solution. The cube is positioned between three nearly-orthogonal static cameras fitted with telecentric lenses. Initial pinhole camera model parameter estimates are provided [45] but are imprecise and require continuous adjustment across the course of a recording to account for small vibrations and optical changes to the gel. We aim to simultaneously reconstruct a 3D shape and find corrected camera parameters to match these recordings in a process akin to bundle adjustment [56]. 3D reconstruction typically involves the identification and triangulation of common features from multiple viewpoints, or the synthesis of full images including texture and shading information to match given scenes [16, 21, 47, 66]. Imaging animals with length ∼1 mm requires sufficient magnification, but simultaneously capturing long-term trajectories up to 25 minutes requires a large volume of view (10-20 worm lengths per axis). As the worm explores the cube it frequently appears out of focus in one or more of the cameras. Air bubbles and dirt trapped in the gel, along with old tracks, are difficult to differentiate from the transparent worm, particularly at the tapered ends. Self-occlusion invariably appears in at least one view, where hidden parts darken the foreground while the ordering of fore/back parts is not discernible. As the semi-transparent and self-occluding subject moves in the volume, photometric information in one view bears little relevance to the appearance in the others, making feature identification and photometric matching particularly challenging. We found that standard approaches may suffice for limited sub-clips, but lose parts of the object or fail catastrophically for much of the data, and the solution requires a degree of adaptation. We present an integrated "project-render-score" algorithm to obtain a midline curve for each image-triplet (Fig. 1). Discrete curve vertices are projected through a triplet of pinhole camera models, rendered to produce an image-triplet for direct comparison against the recorded images, and scored according to their intersection with worm-like pixels in all three views. The differentiable renderer stacks 2D super-Gaussian blobs at the projected locations of each vertex to approximate the transparency along the worm, accounting for the variable focus and providing soft edges that direct the geometric model towards the midline. The scoring allows the detection of incongruities and keeps the curve aligned to the worm in all views. Regularisation terms ensure smoothness along the body and in time. Curve, camera and rendering parameters are jointly optimised using gradient descent to convergence. Once the worm shape has been resolved, it is generally only lost during image degradation or significant self-occlusions that make the posture unresolvable by eye. In summary, our main contributions are:
• A robust pipeline for 3D posture reconstruction of a freely deforming semi-transparent object from noisy images.
• A novel viewpoint renderer to capture optical distortions and transparency.
• A feature-free bundle adjustment algorithm using direct image comparison and gradient descent. |
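The differentiable renderer described in this introduction stacks 2D super-Gaussian blobs at the projected curve vertices to approximate blur and transparency. Below is a minimal, illustrative PyTorch sketch of that idea, not the authors' renderer; the blob width sigma, the exponent p, and the additive compositing are assumptions chosen for simplicity.

```python
import torch

def render_supergaussian_blobs(points_2d, intensities, height, width,
                               sigma=3.0, p=2.0):
    """Render N projected vertices as 2D super-Gaussian blobs.

    points_2d:   (N, 2) projected vertex coordinates (x, y) in pixels.
    intensities: (N,) per-vertex opacity/brightness weights.
    Returns a (height, width) image; gradients flow back to all inputs.
    """
    ys = torch.arange(height, dtype=points_2d.dtype, device=points_2d.device)
    xs = torch.arange(width, dtype=points_2d.dtype, device=points_2d.device)
    grid_y, grid_x = torch.meshgrid(ys, xs, indexing='ij')        # (H, W)

    dx = grid_x[None] - points_2d[:, 0, None, None]               # (N, H, W)
    dy = grid_y[None] - points_2d[:, 1, None, None]
    r2 = dx ** 2 + dy ** 2
    # Super-Gaussian profile: flatter core and sharper fall-off than a Gaussian.
    blobs = torch.exp(-((r2 / (2.0 * sigma ** 2)) ** p))          # (N, H, W)
    image = (intensities[:, None, None] * blobs).sum(dim=0)       # additive stack
    return image.clamp(max=1.0)
```

Comparing such a rendering against each camera view pixel-wise gives the error that drives the joint update of curve, camera and renderer parameters by gradient descent.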
Bian_NoPe-NeRF_Optimising_Neural_Radiance_Field_With_No_Pose_Prior_CVPR_2023 | Abstract Training a Neural Radiance Field (NeRF) without pre-computed camera poses is challenging. Recent advances in this direction demonstrate the possibility of jointly optimising a NeRF and camera poses in forward-facing scenes. However, these methods still face difficulties during dramatic camera movement. We tackle this challenging problem by incorporating undistorted monocular depth priors. These priors are generated by correcting scale and shift parameters during training, with which we are then able to constrain the relative poses between consecutive frames. This constraint is achieved using our proposed novel loss functions. Experiments on real-world indoor and outdoor scenes show that our method can handle challenging camera trajectories and outperforms existing methods in terms of novel view rendering quality and pose estimation accuracy. Our project page is https://nope-nerf.active.vision . | 1. Introduction The photo-realistic reconstruction of a scene from a stream of RGB images requires both accurate 3D geometry reconstruction and view-dependent appearance modelling. Recently, Neural Radiance Fields (NeRF) [24] have demonstrated the ability to generate high-quality, photo-realistic images from novel viewpoints given a sparse set of images. An important preparation step for NeRF training is the estimation of camera parameters for the input images. A current go-to option is the popular Structure-from-Motion (SfM) library COLMAP [35]. Whilst easy to use, this pre-processing step could be an obstacle to NeRF research and real-world deployments in the long term due to its long processing time and its lack of differentiability. Recent works such as NeRFmm [46], BARF [18] and SC-NeRF [12] propose to simultaneously optimise camera poses and the neural implicit representation to address these issues. Nevertheless, these methods can only handle forward-facing scenes when no initial parameters are supplied, and fail under dramatic camera motions, e.g., a casual handheld captured video. This limitation has two key causes. First, all these methods estimate a camera pose for each input image individually without considering relative poses between images. Looking back to the literature of simultaneous localisation and mapping (SLAM) and visual odometry, pose estimation can significantly benefit from estimating relative poses between adjacent input frames. Second, the radiance field is known to suffer from shape-radiance ambiguity [55]. Estimating camera parameters jointly with NeRF adds another degree of ambiguity, resulting in slow convergence and unstable optimisation. To handle the limitation of large camera motion, we seek help from monocular depth estimation [22, 28, 29, 51]. Our motivation is threefold: First, monocular depth provides strong geometry cues that are beneficial for constraining shape-radiance ambiguity. Second, relative poses between adjacent depth maps can be easily injected into the training pipeline via the Chamfer Distance. Third, monocular depth is lightweight to run and does not require camera parameters as input, in contrast to multi-view stereo depth estimation. For simplicity, we use the term mono-depth from now on.
Utilising mono-depth effectively is not straightforward in the presence of scale and shift distortions. In other words, mono-depth maps are not multi-view consistent. Previous works [9, 17, 47] simply incorporate mono-depth into a depth-wise loss alongside NeRF training. Instead, we propose a novel and effective way to thoroughly integrate mono-depth into our system. First, we explicitly optimise scale and shift parameters for each mono-depth map during NeRF training by penalising the difference between rendered depth and mono-depth. Since NeRF by itself is trained based on multi-view consistency, this step transforms mono-depth maps into undistorted, multi-view consistent depth maps. We further leverage these multi-view consistent depth maps in two loss terms: a) a Chamfer Distance loss between the depth maps of two adjacent images, which injects relative pose into our system; and b) a depth-based surface rendering loss, which further improves relative pose estimation. In summary, we propose a method to jointly optimise camera poses and a NeRF from a sequence of images with large camera motion. Our system is enabled by three contributions. First, we propose a novel way to integrate mono-depth into unposed-NeRF training by explicitly modelling scale and shift distortions. Second, we supply relative poses to the camera-NeRF joint optimisation via an inter-frame loss using undistorted mono-depth maps. Third, we further regularise our relative pose estimation with a depth-based surface rendering loss. As a result, our method is able to handle large camera motion, and outperforms state-of-the-art methods by a significant margin in terms of novel view synthesis quality and camera trajectory accuracy. |
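A compact sketch of the two ideas above, per-frame scale/shift correction of mono-depth against rendered depth and an inter-frame loss between back-projected depth maps of adjacent frames, is given below. This is an illustration under simplifying assumptions (known intrinsics K, a one-sided nearest-neighbour distance standing in for the full Chamfer Distance), not the NoPe-NeRF implementation.

```python
import torch

def undistort_depth(mono_depth, scale, shift):
    """Per-frame scale/shift correction of a monocular depth map."""
    return scale * mono_depth + shift

def depth_alignment_loss(rendered_depth, mono_depth, scale, shift):
    """Penalise disagreement between rendered depth and corrected mono-depth."""
    return (undistort_depth(mono_depth, scale, shift) - rendered_depth).abs().mean()

def backproject(depth, K_inv, pose):
    """Lift a (H, W) depth map to a world-space point cloud via pose (4, 4)."""
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing='ij')
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1).reshape(-1, 3).to(depth.dtype)
    cam = (K_inv @ pix.T) * depth.reshape(1, -1)          # camera rays scaled by depth
    cam_h = torch.cat([cam, torch.ones(1, cam.shape[1], dtype=cam.dtype)], dim=0)
    return (pose @ cam_h)[:3].T                           # (H*W, 3) world-space points

def interframe_loss(pc_a, pc_b):
    """One-sided nearest-neighbour distance between adjacent point clouds."""
    d = torch.cdist(pc_a, pc_b)                           # pairwise distances
    return d.min(dim=1).values.mean()
```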
Chan_Histopathology_Whole_Slide_Image_Analysis_With_Heterogeneous_Graph_Representation_Learning_CVPR_2023 | Abstract Graph-based methods have been extensively applied to whole slide histopathology image (WSI) analysis due to the advantage of modeling the spatial relationships among different entities. However, most of the existing methods focus on modeling WSIs with homogeneous graphs (e.g., with a homogeneous node type). Despite their successes, these works are incapable of mining the complex structural relations between biological entities (e.g., the diverse interaction among different cell types) in the WSI. We propose a novel heterogeneous graph-based framework to leverage the inter-relationships among different types of nuclei for WSI analysis. Specifically, we formulate the WSI as a heterogeneous graph with a "nucleus-type" attribute for each node and a semantic similarity attribute for each edge. We then present a new heterogeneous-graph edge attribute transformer (HEAT) to take advantage of the edge and node heterogeneity during message aggregation. Further, we design a new pseudo-label-based semantic-consistent pooling mechanism to obtain graph-level features, which can mitigate the over-parameterization issue of conventional cluster-based pooling. Additionally, observing the limitations of existing association-based localization methods, we propose a causal-driven approach attributing the contribution of each node to improve the interpretability of our framework. Extensive experiments on three public TCGA benchmark datasets demonstrate that our framework outperforms the state-of-the-art methods by considerable margins on various tasks. Our codes are available at https://github.com/HKU-MedAI/WSI-HGNN. | 1. Introduction Histopathology slides provide rich information for diagnosis and treatment planning in many cancer diseases. The recent technological advancements in tissue digital scanners facilitate the development of whole slide histopathology image (WSI) analysis. However, traversing through the WSI with diverse magnifications is time-consuming and tedious for pathologists due to the large-scale nature of the WSI (e.g., its typical size is 60,000 × 60,000 pixels). Hence deep learning techniques play an important role as they introduce accurate and automated analysis of WSIs, which can significantly relieve the workload of pathologists. *The first two authors contributed equally to this work. Figure 1. Left: Input WSI. Middle: A WSI with selected patches and associated node types (black: no label; cyan: neoplastic; red: inflammatory; blue: connective; yellow: dead; green: non-neoplastic epithelial). Right: Constructed heterogeneous graph with different types of nodes and edge attributes (illustrative). Since it is difficult to fit the complete WSI into memory, most of the works adopt multiple instance learning (MIL) to divide the WSI into instances and then aggregate them for WSI analysis. However, these methods operate on bags of instances that do not emphasize the inter-relationships between these instances. Recently, the emergence of graph neural networks (GNNs) has made large progress in representing the spatial relationships between instances. As a result, there are many attempts to represent WSIs as graphs of instances. Figure 1 presents an example of a graph constructed from a WSI.
Unlike convolutional neural networks (CNNs) that aggregate features based on locality in the Euclidean space, GNNs focus on locality on the graph topology, which offers more flexibility in analyzing the deep connections between features in the image data beyond the spatial locality [1]. For example, GNNs are able to learn relational information and distinguish cells based on their apposition to tumor cells or normal stroma (i.e., cells which are tumor-infiltrating lymphocytes or from an adjacent inflammatory response), which are important for prognosis [5, 27]. However, existing paradigms for graph-based WSI analysis focus on representing the WSI with a homogeneous graph structure and then predicting the response via vanilla GNNs with cluster-based pooling (i.e., based on similarities of node embeddings). Despite their successes, these methods suffer from several drawbacks: (i) GNNs on homogeneous graphs focus on aggregating direct relational information from neighboring nodes, where the complex relational information of the graphs is often neglected. (ii) For different graphs, the clusters defined by similarities between node embeddings have inconsistent meanings. This introduces a large degree of freedom in parameters and leads to an over-parameterization issue [2]. Therefore, GNNs tend to easily overfit due to a lack of identifiability [14]. In view of these limitations, we propose a novel framework for WSI analysis, which leverages a heterogeneous graph to learn the inter-relationships among different types of nodes and edges. The heterogeneous graph introduces a "nucleus-type" attribute to each node, which can serve as an effective data structure for modeling the structural interactions among the nuclei in the WSI. To tackle the aggregation process in the heterogeneous graph, we propose a novel heterogeneous-graph edge attribute transformer (HEAT) architecture which can take advantage of the edge and node heterogeneity. Thus, the diverse structural relations among different biological entities in the WSI can be incorporated to guide the GNN towards more accurate prediction. Further, to obtain the graph-level representations for slide-level prediction, we propose a semantic-consistent pooling mechanism, pseudo-label (PL) pooling, which pools node features to the graph level based on clusters with a fixed definition (i.e., nucleus type). The proposed PL pooling can regularize the graph pooling process by distilling context knowledge (i.e., pathological knowledge) from a pretrained model to alleviate the over-parameterization issue [2]. Additionally, we propose a Granger causality [13] based localization method to identify potential regions of interest with clinical relevance, providing more insights to pathologists and promoting the clinical usability of our approach. We extensively evaluate our method on three public benchmark datasets, including the colon adenocarcinoma (COAD) and breast invasive carcinoma (BRCA) datasets from the TCGA project [35] and the Camelyon16 dataset [3], and compare to various recent state-of-the-art (SOTA) methods. Our method outperforms the competitors on cancer staging, cancer classification, cancer typing, and localization tasks. 2. Related Works Multiple Instance Learning on WSIs.
Existing WSI analysis approaches generally adopt MIL [5, 7, 12, 26, 30, 33, 41], which first divides the WSI into fixed-size patches and then compresses the information of these patches into low-dimensional vectors. Conventional methods aggregate bags of instances to learn WSI-level features for final predictions. Tellez et al. [30] compress the WSI-level image into embedding vectors and use a standard CNN to perform patch-level and WSI-level cancer classification. These CNN-based methods analyze local areas in the Euclidean space with fixed connectivity (i.e., fixed-size kernels), limiting the performance beyond the spatial locality. Graph-based methods [5, 15, 41] have recently been proposed, which model the interactions between instances via graphs. Their capability of modeling instances based on graph topology provides more flexibility to analyze complex structures of WSIs. Chen et al. [5] propose patch-GCN, a method of modeling the WSI with homogeneous graphs, and regress survival data with a graph convolutional neural network (GCN) [36]. Zheng et al. [41] propose a graph-based MIL method using graph transformer networks [40]. In spite of their power, most of these WSI methods use homogeneous graphs, which limits the information mined from WSIs. A recent method [15] models WSIs with heterogeneous graphs, where the heterogeneity of each patch is introduced by different resolution levels. However, it only considers the resolution-level heterogeneity of patches, with insufficient ability to model the complex contextual interaction between patches at the same resolution level. Graph Neural Networks. Although SOTA GNNs have shown great successes in many problem domains [16, 19, 20], they are mostly focused on homogeneous graphs [32, 36, 37, 40, 42]. These architectures extract the locality information on the graph topology and learn the graph representations by performing aggregation on neighboring nodes. However, the potential heterogeneity in nodes and edges is not incorporated by these homogeneous GNN algorithms, and therefore their capability in mining the structural information is limited. Several works attempt to address the heterogeneity in their architectural designs [16, 28, 34] and assume that the relation type is finite and discrete. However, when modeling images with graphs, the heterogeneity in relations is typically continuous (e.g., the similarity between nodes) or high-dimensional. Although there are several attempts [5, 10] to extend SOTA GNNs [32, 36] to incorporate edge attributes, their works are limited to homogeneous graphs. Graph Pooling. Graph pooling aims to aggregate node-level features to obtain graph-level features. Conventional methods [36] directly take the average of the node-level features to extract graph-level features, which tends to over-smooth the signals of the nodes and cannot generate representative graph-level features. Figure 2. The paradigm of our proposed heterogeneous graph-based WSI analysis framework, which includes heterogeneous graph construction, a heterogeneous-graph edge attribute transformer (HEAT) for structural information aggregation, pseudo-label-based (PL) graph pooling for slide-level prediction, and causal-driven localization. Recently, there has been extensive development of graph pooling algorithms based on clusters of the embeddings [6, 15, 25]. However, the clusters constructed based on similarity are inconsistent across graphs. This leads to a large degree of freedom in parameters, which easily causes overfitting.
A semantic-consistent pooling method is therefore needed. Explaining GNNs. Despite the success of graph neural networks, the poor interpretability of their parameters makes them notoriously recognized as "black boxes". With the advances in network attribution methods [29], extensive attempts have been made to open such "black boxes" [24, 39]. Generating network explanations is an important qualitative step in WSI analysis since it can highlight the abnormal regions for further investigation. Conventional explainers try to find the associations between the parameters in deep neural networks (or the nodes in GNNs) and the predictions. GNNExplainer [39] is the SOTA method explaining the contributions of node features to the GNN predictions. It trains feature masks on each node and edge feature to minimize the prediction loss of a trained GNN. PGExplainer [24] shares the same objective as GNNExplainer and trains a generative model to generate explanations. Recently, there has been emerging attention on generating causal explanations for GNNs [23, 29], and most of the methods focus on Granger causality as the explanation objective. Gem [23] trains explanation generators from the causal perspective. Causal explainers attempt to provide explanations of features that are causal rather than merely associated with the neural network prediction. 3. Preliminaries Heterogeneous Graph: A heterogeneous graph is defined by a graph G = (V, E, A, R), where V, E, A represent the set of entities (vertices or nodes), relations (edges), and entity types, respectively, and R represents the space of edge attributes. For v ∈ V, v is mapped to an entity type by a function τ(v) ∈ A. An edge e = (s, r, t) ∈ E links the source node s and the target node t, and r is mapped to an edge attribute by a function ϕ(e) = r ∈ R. Every node v has a d-dimensional node feature x ∈ X, where X is the embedding space of node features. Granger Causality [13, 23]: Let I be all the available information and I−X be the information excluding variable X. If we can make a better prediction of Y using I than using I−X, we conclude that X Granger-causes Y. WSI Classification: Given a WSI X and a heterogeneous graph G constructed from X, we wish to predict the label y with a GNN model M. We also aim to assign an importance score f(v) to each node v ∈ V in G as the causal contribution of each patch to the prediction, for localization. Figure 3. Examples of introduced meta-relations in a heterogeneous graph constructed from a WSI. 4. Methodology 4.1. Heterogeneous Graph Construction We introduce our methodology for modeling the WSI with a heterogeneous graph. Figure 2 presents the overall workflow of our proposed framework. We adopt the commonly used OTSU thresholding algorithm [5] and a sliding-window strategy to crop each WSI into non-overlapping patches. Uninformative patches containing only background are removed. These patches define the nodes of the constructed graph. To define the corresponding node type, we use HoverNet [12], pretrained on the PanNuke dataset [8], to classify the patches into predefined types. HoverNet detects nuclei
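The graph construction of Sec. 4.1 (patches as nodes, a nucleus-type attribute per node from a pretrained classifier, similarity-valued edge attributes between nearby patches) can be sketched as below. This is an illustrative outline only: `classify_patch` and `embed_patch` are hypothetical placeholders standing in for the pretrained HoverNet-based typing and a patch feature extractor, and the k-nearest-neighbour edge rule is an assumption.

```python
import numpy as np

def build_heterogeneous_graph(patches, coords, classify_patch, embed_patch, k=8):
    """Build a typed patch graph from a WSI.

    patches: list of image patches cropped from the tissue region.
    coords:  (N, 2) array of patch centre coordinates.
    classify_patch: callable returning a nucleus-type id per patch (placeholder).
    embed_patch:    callable returning a feature vector per patch (placeholder).
    """
    node_types = [classify_patch(p) for p in patches]          # "nucleus-type" node attribute
    feats = np.stack([embed_patch(p) for p in patches])        # node features x

    edges, edge_attrs = [], []
    for i in range(len(patches)):
        # connect each patch to its k spatially nearest neighbours
        d = np.linalg.norm(coords - coords[i], axis=1)
        for j in np.argsort(d)[1:k + 1]:
            cos = feats[i] @ feats[j] / (
                np.linalg.norm(feats[i]) * np.linalg.norm(feats[j]) + 1e-8)
            edges.append((i, int(j)))
            edge_attrs.append(cos)                              # semantic-similarity edge attribute

    return {"node_type": node_types, "x": feats,
            "edge_index": edges, "edge_attr": edge_attrs}
```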
Bao_DexArt_Benchmarking_Generalizable_Dexterous_Manipulation_With_Articulated_Objects_CVPR_2023 | Abstract To enable general-purpose robots, we will require the robot to operate daily articulated objects as humans do. Current robot manipulation has heavily relied on using a parallel gripper, which restricts the robot to a limited set of objects. On the other hand, operating with a multi-finger robot hand will allow better approximation to human behavior and enable the robot to operate on diverse articulated objects. To this end, we propose a new benchmark called DexArt, which involves Dexterous manipulation with Articulated objects in a physical simulator. In our benchmark, we define multiple complex manipulation tasks, and the robot hand will need to manipulate diverse articulated objects within each task. Our main focus is to evaluate the generalizability of the learned policy on unseen articulated objects. This is very challenging given the high degrees of freedom of both hands and objects. We use Reinforcement Learning with 3D representation learning to achieve generalization. Through extensive studies, we provide new insights into how 3D representation learning affects decision making in RL with 3D point cloud inputs. More details can be found at https://www.chenbao.tech/dexart/. *Equal contributions. Work done while interning at UC San Diego. | 1. Introduction Most tools and objects humans interact with are articulated objects. To allow household robots to facilitate our daily life, we will need to enable them to manipulate diverse articulated objects with multi-finger hands as humans do. However, learning dexterous manipulation remains a challenging task given the high degree-of-freedom (DoF) joints of the robot hands. While recent work has shown encouraging progress in using Reinforcement Learning (RL) [1, 8, 29, 69] for dexterous manipulation, most research focuses on manipulating a single rigid object. The manipulation of diverse articulated objects not only adds
For each task, instead of operating with a particular object, we provide a training set of diverse articulated objects and the goal is to generalize the policy to a different test set of articulated objects. To achieve such a generalization, we incorporate RL with generalizable visual representation learning: we adopt 3D point clouds as our observations and use a PointNet encoder [44] to extract vi-sual representations for decision making. The generalizabil-ity of the policy depends on the 3D structure understand-ing modeled by the PointNet encoder. We experiment and benchmark with different methods and settings, and provide four key observations as follows: (i) Training with more objects leads to better general-ization. For each task, we trained policies using varying numbers of objects for each task and tested them on the same set of unseen objects. We find training with more objects consistently achieves better success rates. Similar findings have been reported in studies on manipulation with parallel grippers (Generalist-Specialist Learning [24], Man-iSkill [41]). While this might not be surprising from the perception perspective, it does present more challenges for a single RL policy to work with different objects simultane-ously. It highlights the importance of learning generalizable visual representations for RL. (ii) Encoder with a larger capacity does not necessar-ily help. We experiment with different sizes of PointNet encoders, and we observe the simplest one with the least parameters achieves the best sample efficiency and success rate, whether the network is pre-trained or not. This is sur-prising from the vision perspective, but it is consistent withprevious literature which shows RL optimization becomes much more challenging with large encoders [41]. (iii) Object part reasoning is essential. With multi-finger hand interacting with different object parts, our intuition is that object part recognition and reasoning can be essential for manipulation. To validate our intuition, we pre-train the PointNet encoder with object part segmentation tasks. We show the object part pre-training can significantly improve sample efficiency and success rate compared to approaches without pre-training and with other pre-training methods. (iv) Geometric representation learning brings robust pol-icy. We evaluate the robustness of the policy under unseen camera poses. We find that the policy trained with partial point cloud is surprisingly resilient to variations in camera poses, which aligns with the previous studies that use com-plete point clouds in policies [32]. The accuracy remains consistent even with large viewpoint variation. This is par-ticularly useful for real robot applications as it is challeng-ing to align the camera between sim and real. With the proposed baselines and detailed analysis among them, we hope DexArt benchmark provides a platform to not only study generalizable dexterous manipulation skill itself, but also study how visual perception can be improved to aim for better decision making. We believe the unifi-cation of perception and action, and studying them under DexArt can create a lot of research opportunities. |
Ahuja_Neural_Rate_Estimator_and_Unsupervised_Learning_for_Efficient_Distributed_Image_CVPR_2023 | Abstract Thanks to advances in computer vision and AI, there has been a large growth in the demand for cloud-based visual analytics in which images captured by a low-powered edge device are transmitted to the cloud for analytics. Use of conventional codecs (JPEG, MPEG, HEVC, etc.) for compressing such data introduces artifacts that can seriously degrade the performance of the downstream analytic tasks. Split-DNN computing has emerged as a paradigm to address such usages, in which a DNN is partitioned into a client-side portion and a server-side portion. Low-complexity neural networks called 'bottleneck units' are introduced at the split point to transform the intermediate layer features into a lower-dimensional representation better suited for compression and transmission. Optimizing the pipeline for both compression and task performance requires high-quality estimates of the information-theoretic rate of the intermediate features. Most works on compression for image analytics use heuristic approaches to estimate the rate, leading to suboptimal performance. We propose a high-quality 'neural rate-estimator' to address this gap. We interpret the lower-dimensional bottleneck output as a latent representation of the intermediate feature and cast the rate-distortion optimization problem as one of training an equivalent variational auto-encoder with an appropriate loss function. We show that this leads to improved rate-distortion outcomes. We further show that replacing supervised loss terms (such as cross-entropy loss) by distillation-based losses in a teacher-student framework allows for unsupervised training of bottleneck units without the need for explicit training labels. This makes our method very attractive for real-world deployments where access to labeled training data is difficult or expensive. We demonstrate that our method outperforms several state-of-the-art methods by obtaining improved task accuracy at lower bitrates on image classification and semantic segmentation tasks. | 1. Introduction Visual analytics powered by computer vision and AI are being ubiquitously deployed in various domains including retail, industry 4.0, security, and smart cities [16]. These analytics are increasingly being powered by deep neural networks (DNNs) that are often too complex to be implemented on low-powered mobile or client devices. Instead, visual data captured by a mobile device is transmitted over the network to a server or the cloud. Data-compression techniques are applied in order to keep the data rate manageable. Standard data compression techniques for images and videos, such as JPEG, BPG, H.264, H.265, etc., are known to be suboptimal for visual analytics since these optimize rate-distortion performance for human perception rather than for semantics-based analytics. Consequently, task performance can degrade severely in the presence of even relatively mild compression artifacts. To address this, split-DNN computing (also called collaborative intelligence [8, 17]) has emerged as a recent paradigm in which a DNN is partitioned into two: a front-end comprising the input layer and a number of subsequent layers deployed on the mobile or client side, and a back-end comprising the remaining layers residing on the server or cloud. Specially designed neural networks, called bottleneck layers [12, 21, 24], are introduced at the partition point.
These layers transform the high-dimensional intermediate features at the split point into a lower-dimensional space, enabling greater compression. This has important benefits over the traditional approach. First, the model can be trained to learn features that are jointly optimized both for task performance and for compression, resulting in greater compression efficiency; second, there is no need to reconstruct the original signal, resulting in greater computational efficiency. Early approaches [5] to split computing explored the use of simple lossless and lossy compression techniques to compress the intermediate features. While lossless techniques resulted in only mild reductions in bandwidth, naively quantizing the intermediate features during inference to reduce bitrate leads to a drop in task performance. Inspired by the impressive results of ML-based image compression approaches [2, 3, 26], subsequent approaches [7, 23, 25] include the quantization operation in the end-to-end training of the model. These approaches yield significantly better rate-distortion performance (task accuracy vs. compression level) than that obtained by traditional image compression methods such as JPEG and the more recent HEIC. These approaches suffer, however, from a couple of major limitations in practical, real-world scenarios. First, the parameters of the trained DNN model are valid only for a particular split point and for a particular compression level. Changing either the split point or the compression level (as is needed for variable bit-rate communication) requires, therefore, a complete retraining of the entire model. Such multiple retrainings increase training complexity significantly. Inference also becomes highly inefficient since entire sets of DNN parameters (which can run into tens of millions) have to be reloaded each time the compression level needs to be changed. A second limitation is the need for large volumes of labeled training data. Such data may often not be available in real-world situations, and gathering and annotating data might be expensive and impractical. To address the first challenge, a recent approach [10] outlined a systematic procedure to both design and train the bottleneck layers introduced at various split points and for various compression levels. Crucially, the parameters of the original DNN were left untouched. However, the method used a sub-optimal formulation to estimate the rate of the transmitted data at the bottleneck; further, it did not tackle the unsupervised scenario. Despite these simplifications, the method did achieve state-of-the-art results. To handle unsupervised training, another method [20] explored the use of distillation-based approaches. However, that too involved training all or part of the network, in particular the front-end of the network (also called its head). Hence, it has the same limitations as other methods when deployed in dynamic, variable-bit-rate usages. Contributions: We present in this work (code at https://github.com/intellabs/spic) an approach to split-DNN computing for image analytics that addresses the challenges outlined above. We make use of bottleneck layers and, similar to [10], we train the weights of the bottleneck layer only. Figure 1. Variational model [3]: x is a vector to be compressed; y is its latent representation derived from an analysis network with parameters ϕ_g. θ_g are the parameters of a synthesis network that recovers x from y. z are hyper-latents introduced to capture dependencies in y; ϕ_h and θ_h are the corresponding analysis and synthesis networks relating y and z.
We propose an improved modeling of the rate-loss via a neural rate-estimator using methods of variational inference and show that this results in large reductions in bit-rate without sacrificing task accuracy. Further, we present a distillation-based approach to enable unsupervised training of the bottleneck layers. We observe that while lack of a supervisory signal does result in a drop in the rate-distortion performance, our method still outperforms most other published methods that used supervised training. (Code at https://github.com/intellabs/spic)
Figure 1. Variational model [3]: x is a vector to be compressed; y is its latent representation derived from an analysis network with parameters ϕ_g. θ_g are the parameters of a synthesis network that recovers x from y. z are hyper-latents introduced to capture dependencies in y; ϕ_h and θ_h are the corresponding analysis and synthesis networks relating y and z.
2. Background
In this section we present background on two topics central to our approach: (i) the connection between rate-distortion optimization for compression and training an equivalent VAE, and (ii) designing and training bottleneck layers for variable-rate, split-DNN computing.
2.1. Variational Image Compression
Several works have explored the connection between the rate-distortion objective of lossy compression and the loss function of variational autoencoders (VAE) [2,3,22,28]. We follow the expositions of [3,28] below. A convolutional encoder network g_a(x; ϕ_g) transforms the image vector x into a continuous latent representation y. This is then quantized to a discrete variable ŷ, which is then entropy-coded and transmitted. The decoder recovers ŷ and obtains a lossy image reconstruction via a deconvolutional neural network g_s(ŷ; θ_g). θ_g and ϕ_g are the weights of the corresponding DNNs. To cast this formally as a VAE, the problem needs to be relaxed because of the quantization involved, which is a non-differentiable operation. This can be achieved by either replacing quantization by additive noise [2], using a stochastic form of binarization [26], or using a smooth approximation of the gradient [25]. The relaxed optimization problem can then be represented as a variational autoencoder: a probabilistic generative model ("generating" a reconstructed image from the latent representation) of the image combined with an approximate inference model ("inferring" the latent representation from the source image), as shown in Fig. 1.
Figure 2. Pipeline in training and in inference mode. Note that the hyperprior network is present only during training. (a) Training mode: architecture of the hyperprior network used at the output of the bottleneck module for estimation of the rate-loss term L_r. (b) Inference mode: the hyperprior network is not used during inference; only the bottleneck module is present.
In order to capture the dependencies that were observed to exist between elements of ŷ, an additional set of random variables, z, called 'hyperlatents', was introduced [3]. The hyperprior p(y|z) was modeled as Gaussian, and its scale (σ) was the output of a second DNN h_s with weights θ_h. The inference model was accordingly extended through a DNN h_a with weights ϕ_h such that z = h_a(y; ϕ_h). The relaxed rate-distortion objective is then given by:
L = E[ −log2 p(z) − log2 p(y|z) + λ ∥x − g_s(y)∥² ] = R + λD   (1)
where the rate, R, is the sum of the first two terms, which are the rates of the hyper-latents and latents respectively, and the final term is the distortion, D.
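Equation (1) is straightforward to write down once the quantizer is relaxed with additive uniform noise. The snippet below is a minimal sketch of that relaxed objective in PyTorch, assuming sigma_y is produced by the hyperprior synthesis network h_s(z) and, for brevity, that the hyper-latent prior p(z) is also treated as a zero-mean Gaussian with a learned scale sigma_z; the original hyperprior model [3] uses a non-parametric factorized prior for z, so that part is a simplification.

```python
import torch
from torch.distributions import Normal

def bin_probability(v, scale):
    # Probability mass of the unit quantization bin [v-0.5, v+0.5] under a
    # zero-mean Gaussian with the given (positive) scale.
    d = Normal(torch.zeros_like(scale), scale)
    return (d.cdf(v + 0.5) - d.cdf(v - 0.5)).clamp_min(1e-9)

def relaxed_rate_distortion(x, x_hat, y, z, sigma_y, sigma_z, lam):
    # Quantization relaxed by additive uniform noise (used during training only).
    y_tilde = y + torch.empty_like(y).uniform_(-0.5, 0.5)
    z_tilde = z + torch.empty_like(z).uniform_(-0.5, 0.5)
    rate = (-torch.log2(bin_probability(y_tilde, sigma_y))).sum() \
         + (-torch.log2(bin_probability(z_tilde, sigma_z))).sum()   # R
    distortion = (x - x_hat).pow(2).sum()                           # ||x - g_s(y)||^2
    return rate + lam * distortion                                  # R + lambda * D
```

At test time the noise is replaced by hard rounding and the rate term is realized by an actual entropy coder rather than this differentiable surrogate.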
In section 3.1, we will apply these principles to the inter-mediate feature at the split point in order to obtain a neural rate-estimator for the output of the bottleneck encoder. 2.2. Design and Training of Bottleneck Units We follow the approach of Datta et al. [10] for the de-sign and training of bottleneck units. The procedure is briefly outlined next. Inspired by methods in neural ar-chitecture search (NAS), the procedure involves exploring a joint space of architectural hyper-parameters and train-ing hyper-parameters. The architectural space comprises parameters related to the design or topology of the bottle-neck encoder. In [10], this space comprised two hyper-parameters: the number of channels at the output of the bot-tleneck encoder and the stride of the convolutional kernelsused therein. More broadly, though, this could encompass various other architectural parameters of interest such as number of layers in the bottleneck encoder, topology of the layers (convolutional, linear, residual, etc.), etc. The train-ing hyper-parameter space comprises variables such as the weight (Lagrange multipliers) λin the training loss func-tion (see Eq. (2)), and the quantization step size, Q, used to discretize the latent space. A sample is generated from this joint space which fixes the topology of the bottleneck unit and the hyper-parameters to be used for its training. A training run is then performed to train the weights of this bottleneck unit (importantly, without modifying the weights of the original network). The accuracy of the pipeline with the trained bottleneck is measured on the task of interest along with the average bit-rate required to transmit the com-pressed features. This results in a candidate bottleneck layer that yields a certain accuracy and bit-rate. This process is then repeated multiple times – each time with a different sample from the joint hyper-parameter space – to generate sufficient number of such candidates. From this set, Pareto optimal set of points are determined. The points lying on the Pareto frontier correspond to the set of trained bottle-neck layers that yield the optimal accuracy vs compression performance. 3. Approach We consider a split DNN architecture, where the inter-mediate features from the final layer at the front-end are compressed and transmitted to the back-end. A specially designed bottleneck unit is introduced at the split point. The bottleneck encoder, which resides at the client, trans-forms the output of the front-end into a lower-dimensional 2024 Figure 3. Unsupervised training of bottlenecks using a teacher-student model. space more suited for compression. The outputs of this en-coder are discretized (quantized) by a uniform quantizer of step-size Q, entropy-coded, and transmitted to the server or cloud where the back-end resides. Here, following entropy decoding and inverse quantization, the bottleneck decoder – which is a mirror of the bottleneck encoder – restores the lower-dimensional features into the original higher di-mensional space and the rest of the inference is completed. Our objective is to design and train the bottleneck layers to jointly optimize for task-performance and compression. For this we follow the procedure outlined in section 2.2. We first define a joint space of architectural and training hyper-parameters, and an appropriate training loss function. These are described next. 
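To make the split-point pipeline above concrete, here is a minimal sketch of a bottleneck pair: a single-layer encoder that reduces the channel count to Cr and downsamples spatially by S, a uniform quantizer of step Q with a straight-through pass so gradients flow during training, and a mirrored decoder. The plain convolution and the straight-through trick are illustrative assumptions; the paper's actual bottleneck uses a depthwise-separable topology and an entropy coder that are not reproduced here.

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    # Encoder: H x W x C  ->  H/S x W/S x Cr ; decoder mirrors it back.
    def __init__(self, C, Cr, S, Q):
        super().__init__()
        self.Q = Q
        self.enc = nn.Conv2d(C, Cr, kernel_size=S, stride=S)
        self.dec = nn.ConvTranspose2d(Cr, C, kernel_size=S, stride=S)

    def quantize(self, y):
        y_hat = torch.round(y / self.Q) * self.Q
        # Straight-through estimator: forward uses y_hat, backward passes gradients to y.
        return y + (y_hat - y).detach()

    def forward(self, feat):
        y = self.enc(feat)          # client side: compress the split-point feature
        y_hat = self.quantize(y)    # entropy coding / transmission omitted here
        return self.dec(y_hat)      # server side: restore to the original feature shape
```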
Architectural hyper-parameters: The bottleneck en-coder transforms the front-end output from an H×W×C dimensional latent space (height ×width×channels) into Hr×Wr×Crdimensional space. The output height and width can be related to the input by a single scale factor of S, i.e. Hr=H/S ,Wr=W/S . Choosing S > 1and Cr< C leads to a lower-dimensional output. We model the bottleneck encoder as a single neural network layer to keep its computational complexity low (since the decoder is a mirror image of the encoder, the decoder is also sin-gle layer). We experimented with various topologies for the bottleneck layers and eventually selected a depthwise-separable topology as this has the fewest parameters and the lowest computational complexity [15]. Details are pre-sented in Section 5.3. Training loss and training hyper-parameters: The model is trained with a loss function of the form: L=Lr+λLt, (2) where, Ltis a task-loss to maximize task performance, Lr is a rate-loss term to minimize bit-rate of the encoded data, andλis a term that controls the relative weighting of the two terms. We note the similarity between equations (1) and (2) and observe that Eq. (2) also represents a form of rate-distortion objective with the distortion-term Dbeing replaced by an equivalent task-loss, Lt. Figure 4. Unsupervised classification curves for different distilla-tion layers ForLt, the usual loss functions such as cross-entropy loss for classification, and sum of per-pixel cross-entropy losses for segmentation are used. Various approaches have been explored in literature to choose an appro-priate Lr. Since, after quantization, the feature values are discrete-valued, the rate is the expected code length which is lower-bounded by the entropy of the probability distribution of the quantized alphabet [9]. Estimation of this quantity is challenging; hence, several published works adopt indirect approaches. These include using the ℓ2-norm or ℓ1-norm of the compressed feature [6]; or the ℓ1-norm of the DCT coefficients of the original image, either directly [19], or along with spatial prediction [1]. However, these are proxies for the true rate and hence suboptimal. Instead, we will present next an approach based on variational inference to train the bottleneck and learn a neural rate-estimator from the training data. Together, λandQform the training hyper-parameter space. The search procedure from section 2.2 is performed over the 4-dimensional space of (S, C r, λ, Q ). 3.1. Variational formulation of Lr As described earlier in section 2.1, use of variational methods to learn an entropy model from data have primarily been explored in the context of image compression for re-construction. In this section, we present an approach based on those principles to perform compression of latent repre-sentations for image analytics instead. Consider the varia-tional model shown in Fig. 1 where xis not the input image, but instead the tensor output of the intermediate layer just before the split point, and yis the latent representation at the output of the bottleneck encoder. Hence, the transform ga(x)in this case is simply the bottleneck encoder. yis then quantized to ˆywhich is then entropy-coded and transmitted. The decoder recovers ˆy, and obtains a lossy reconstruction 2025 Table 1. Parameters and their ranges for search space exploration. Here ‘S’ indicates supervised, and ‘US’ indicates unsupervised ParameterRange Classification (Resnet50)Segmentation (DeepLab v3) No. 
of Channels Cr{32,64,96,128}{2,4,8,16, 32,48,64} Stride S {2,4,6} Quant. parameter Q [0.5, 10.0] [0.5,6.0] L, where λ= 10L [-9,-6] (S) [-11, -7] (US)[-4,-1] (S) [-16,-6] (US) of the original tensor xvia the bottleneck decoder gs(ˆy). Similar to [3], we extend this model by adding hyperlatent, z, with a corresponding hyperprior encoder (inference) and hyperprior decoder (generative). From Eq. 1 the rate is measured by R=E[−log2p(z)−log2p(y|z)] (3) We use the above estimate of rate as the rate-loss term in Eq. (2). The architecture of the hyperprior encoder, as shown in Fig. 2, comprises two convolutional layers with ReLu acti-vations (details in supplementary material). The hyperprior decoder is the mirror of the encoder using transpose convo-lutions. These hyperprior models help improve the rate estima-tion as will be evident from results later. However, a sepa-rate set of hyperprior encoder and decoder has to be trained for each separate compression level. Including them in the overall pipeline would lead to the same limitations of other methods that were outlined in Section 1, viz. having to reload a large set of network weights everytime the bit-rate is to be changed. To circumvent this, we include the hy-perprior models only to provide an estimate for the rate-loss term during training; during inference, only the bottle-neck layers are included, not the hyperprior models. This increases training complexity somewhat compared to train-ing only the bottleneck layers (as was done in [10]), but no overhead is introduced during inference. Thus, we retain the benefits of being able to efficiently support variable bit-rate operation and adaptive splitting. 4. Unsupervised Training of Bottleneck Units Access to labeled training data is often challenging in real-world situations. We present, therefore, a method to train the bottleneck units in an unsupervised manner when we have access to only unlabeled training data. We adopt the student-teacher paradigm from literature on knowledge-distillation (KD) [27]. In the KD framework, knowledge-Table 2. Split points within the models. Resnet50 DeepLab v3 Split 1 layer4[2] Classifier[0].conv[1] Split 2 layer4[0] Resnet50.layer4[1] Split 3 layer3[4] Resnet50.layer3[5] transfer from the teacher to student can be achieved by us-ing either the logits or feature information from the teacher. In our scenario, the teacher network is simply the original task network without any split points or bottleneck units. Instead of using a supervisory task-loss, L |
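The search procedure of Sections 2.2 and 3 amounts to repeatedly sampling (S, Cr, λ, Q), training one bottleneck with the backbone frozen, recording the resulting (accuracy, bitrate) pair, and keeping the Pareto-optimal candidates. A rough sketch follows, where train_and_eval is a hypothetical callback that performs one such training run and returns task accuracy and average bitrate; the sampled ranges follow Table 1 for the classification setting.

```python
import random

def pareto_front(cands):
    # Keep candidates that are not dominated in (lower bitrate, higher accuracy).
    keep = []
    for p in cands:
        dominated = any((q["bpp"] <= p["bpp"] and q["acc"] >= p["acc"]) and
                        (q["bpp"] < p["bpp"] or q["acc"] > p["acc"]) for q in cands)
        if not dominated:
            keep.append(p)
    return sorted(keep, key=lambda c: c["bpp"])

def search_bottlenecks(train_and_eval, n_trials=50):
    cands = []
    for _ in range(n_trials):
        cfg = {"Cr": random.choice([32, 64, 96, 128]),
               "S": random.choice([2, 4, 6]),
               "Q": random.uniform(0.5, 10.0),
               "lam": 10 ** random.uniform(-9, -6)}
        acc, bpp = train_and_eval(cfg)   # trains only the bottleneck weights
        cands.append({"cfg": cfg, "acc": acc, "bpp": bpp})
    return pareto_front(cands)
```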
Barroso-Laguna_Two-View_Geometry_Scoring_Without_Correspondences_CVPR_2023 | Abstract Camera pose estimation for two-view geometry tradi-tionally relies on RANSAC. Normally, a multitude of image correspondences leads to a pool of proposed hypotheses, which are then scored to find a winning model. The inlier count is generally regarded as a reliable indicator of “con-sensus”. We examine this scoring heuristic, and find that it favors disappointing models under certain circumstances. As a remedy, we propose the Fundamental Scoring Net-work (FSNet), which infers a score for a pair of overlap-ping images and any proposed fundamental matrix. It does not rely on sparse correspondences, but rather embodies a two-view geometry model through an epipolar attention mechanism that predicts the pose error of the two images. FSNet can be incorporated into traditional RANSAC loops. We evaluate FSNet on fundamental and essential matrix es-timation on indoor and outdoor datasets, and establish that FSNet can successfully identify good poses for pairs of im-ages with few or unreliable correspondences. Besides, we show that naively combining FSNet with MAGSAC++ scor-ing approach achieves state of the art results. | 1. Introduction How to determine the relative camera pose between two images is one of the cornerstone challenges in computer vi-sion. Accurate camera poses underpin numerous pipelines such as Structure-from-Motion, odometry, SLAM, and vi-sual relocalization, among others [26, 37, 47, 48, 56]. Much of the time, an accurate fundamental matrix can be es-timated by existing means, but the failures are prevalent enough to hurt real-world tasks, and are hard to antici-pate [23]. Where are the mistakes coming from? Traditional approaches first detect then describe a set of interest points in each image, and establish correspon-dences between the two sets while possibly filtering them, e.g., checking for mutual nearest neighbors or applying Lowe’s ratio test [36]. Then, random subsets of correspon-dences are sampled and a 5-point or 7-point algorithm is Reference frame Destination frame F-Mat Hypothesis MAGSAC++ FSNet Figure 1. Example where SuperPoint-SuperGlue [19, 46] corre-spondences are highly populated by outliers, but there are still enough inliers to produce a valid fundamental matrix hypothe-sis. In such scenarios with unreliable correspondences, current top scoring methods fail (MAGSAC++ [5]), while our proposed FSNet model, a correspondence-free scoring approach, is able to pick out the best fundamental matrix. used to estimate many essential or fundamental matrix hy-potheses, respectively, ( i.e., two-view geometry models). A RANSAC [25] loop iterates over the generated hypotheses, and ranks them. Conventionally, the ranking is scored by counting inliers, i.e. the number of correspondences within a threshold of that two-view geometry hypothesis. Finally, the top-ranked hypothesis is further refined by using all in-lier correspondences. As the research in robust model estimation advances [2, 5, 13, 15, 41, 58], the different stages of the pipeline are be-ing revisited, e.g., local feature detection and description is learned with neural networks, outlier correspondences are filtered with learned models, hypotheses are sampled more efficiently, or the inlier threshold is optimized. 
Although the latest matching pipelines produce very accurate and robust correspondences [19, 46, 50], correspondence-based scor-ing methods are still sensitive to the ratio of inliers, num-ber of correspondences, or the accuracy of the keypoints This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 8979 [2, 5, 10, 23]. Incorrect two-view geometry estimation can lead to invalid merges in 3D reconstruction models [48], bad localization services [1], or more expensive steps when finding outliers in pose graphs [14]. A second family of approaches emerged in recent years, where a neural network is trained to directly regress two-view geometry from the input images [1, 40, 42, 68]. Thus, such approaches replace all the components of the RANSAC pipeline. This can be a viable approach when two views are extremely difficult, even when they do not overlap [11]. However, challenging scenarios, e.g., wide-baseline, or large illumination changes, can lead to incor-rect predictions [32]. Typically, poses directly regressed this way have fewer catastrophic relative pose predictions, but they have difficulty in estimating precise geometry [1]. On the other hand, correspondence-based hypotheses can be very precise, if estimated correspondences are of suffi-ciently high quality. Our approach uses correspondences to generate model hypotheses, but does not use correspon-dences to score them during the RANSAC loop. We propose a fundamental matrix scoring network that leverages epipolar geometry to compare features of the im-ages in a dense manner. We refer to our method as the Fun-damental Scoring Network, or FSNet for short. Inspired by the success of Vision Transformers [61], and detector-free matchers [31, 50], we define an architecture that incor-porates the epipolar geometry into an attention layer, and hence the quality of the fundamental matrix hypothesis con-ditions the coherence of the computed features. Figure 1 shows an example where correspondences are highly popu-lated by outliers. However, there are still enough inliers to generate a good fundamental matrix, and FSNet was able to select it from the hypothesis pool. Our contributions are 1) an analysis of the causes of scor-ing failures, as well as more insights into the traditional RANSAC approach of relative pose estimation; 2) FSNet, a network that predicts angular translation and rotation errors for a given image pair and a fundamental matrix hypothesis; 3) an image order-invariant design of FSNet that outputs the same values for (Image A, Image B, F) and (Image B, Im-age A, FT) inputs; 4) a solution that can be combined with state-of-the-art methods to cope with current failure cases. |
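Inside a RANSAC-style loop, FSNet replaces inlier counting with a learned score: every fundamental-matrix hypothesis is ranked by the pose error the network predicts for it. The sketch below is schematic; scorer is assumed to return predicted angular rotation and translation errors for (image A, image B, F), and aggregating the two errors with a max is an assumption rather than the paper's exact ranking rule.

```python
import torch

@torch.no_grad()
def pick_hypothesis(scorer, img_a, img_b, hypotheses):
    # hypotheses: iterable of 3x3 fundamental-matrix tensors, e.g. from 7-point sampling.
    best_F, best_err = None, float("inf")
    for F_mat in hypotheses:
        rot_err, trans_err = scorer(img_a, img_b, F_mat)     # predicted angular errors
        err = torch.maximum(rot_err, trans_err).item()       # combine errors (assumption)
        if err < best_err:
            best_err, best_F = err, F_mat
    return best_F, best_err
```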
Huang_VoP_Text-Video_Co-Operative_Prompt_Tuning_for_Cross-Modal_Retrieval_CVPR_2023 | Abstract Many recent studies leverage the pre-trained CLIP for text-video cross-modal retrieval by tuning the backbone with additional heavy modules, which not only brings huge computational burdens with much more parameters, but also leads to the knowledge forgetting from upstream models. In this work, we propose the VoP: Text-Video Co-operative Prompt Tuning for efficient tuning on the text-video retrieval task. The proposed VoP is an end-to-end framework with both video & text prompts introducing, which can be regarded as a powerful baseline with only 0.1% trainable parameters. Further, based on the spatio-temporal characteristics of videos, we develop three novel video prompt mechanisms to improve the performance with different scales of trainable parameters. The basic idea of the VoP enhancement is to model the frame position, frame context, and layer function with specific trainable prompts, respectively. Extensive experiments show that compared to full fine-tuning, the enhanced VoP achieves a 1.4% average R@1 gain across five text-video retrieval benchmarks with 6×less parameter overhead. The code will be available at https://github.com/bighuang624/VoP . | 1. Introduction Due to the remarkable progress in large-scale contrastive language-image pre-training [16, 21, 22, 31], a recent pop-ular direction for the crucial text-video cross-modal re-trieval [9,25,34,36] task is to transfer pre-trained image-text knowledge to the video domain [10,27,40] with fine-tuning. However, the dominant full fine-tuning strategy inevitably forgets the useful knowledge acquired in the large-scale pre-training phase and poses a risk of overfitting, as the entire model is updated with limited downstream data. Moreover, full fine-tuning requires to maintain an independent model *Work done during internship at Alibaba DAMO Academy. †Corresponding author. 101 100101102 Trainable parameters (%)4 3 2 1 0123R@1 gain against Full (%)(100%,+0%)(11.78%,+2.9%) (0.32%,+1.8%) (0.10%,+0.9%) (0.10%,-2.0%) (0.10%,-2.1%) (0.54%,-4.6%)(1.65%,-4.1%)(1.65%,-3.5%)(6.41%,-1.9%) Full Bias Proj Partial AdapterATTNAdapterFFN VoP VoPF VoPF+P VoPF+CFigure 1. Fine-tuning comparison of our proposed methods and full fine-tuning (Full) . For each method, we represent the R@1 (recall at rank 1) gain on the MSR-VTT-9k dataset together with the number of trainable parameters. And we show only a part of our proposed methods for clarity, labeled with ✩. More detailed results are reported in Sec. 4.2. weight for every dataset during deployment, which becomes infeasible due to the increasing model capacity. In this paper, we introduce prompt tuning [20, 24] to address the challenges that limit the transferability and gen-eralizability. Keeping the backbone frozen and only tuning a few extra parameters prepended to the input, prompt tuning has been widely applied as a flexible and light-weight fine-tuning protocol. Compared to uni-modal ap-plications [1, 23], text-video cross-modal retrieval requires more parameters to support the dual-branch structure, making it logical to benefit from the parameter-efficient tuning strategy. In addition, different from text descriptions that compose sequential information from words, video-understanding requires summarizing information in both the spatial and temporal dimensions. Therefore, we assume that designing non-trivial video prompts further contributes to prompting both branches for mutual promotion. 
According to the above discussion, we propose the VoP: Text-Video C o-operative Prompt Tuning to simultaneously introduce tunable prompts in both textual and visual en-coders. Also, different from existing related efforts [18] that This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 6565 only insert prompt vectors into the input textual sequences, we find that preparing prompts for every layer of both encoders can further close the gap to full fine-tuning. As observed in Fig. 1, V oP achieves competitive or superior performance than other efficient tuning protocols with only 0.1% parameter storage. To exploit essential video-specific information, we fur-ther design three novel video prompts from different per-spectives, which can seamlessly replace conventional visual prompts in V oP. Specifically, (1) position-specific video prompts model the information shared between frames at the same relative position. (2) Generated context-specific video prompts integrate injected contextual message from the frame sequence into the intra-frame modeling. (3) And function-specific video prompts adaptively assist to learn intra-or inter-frame affinities by sensing the transformation of layer functions. By exploring video-specific prompts, V oP offers a new way to transfer pre-trained foundation models to the downstream video domain. We compare our solutions with popular tuning strategies on MSR-VTT [37] (both 9k and 7k splits), DiDeMo [14], ActivityNet [13] and LSMDC [33]. Learning video-specific information while maintaining the pre-trained knowledge, our video prompts deliver an average R@1 improvement of up to 4.2% for V oP, and therefore exceed full fine-tuning by up to 1.4% with much fewer trainable parameters. In summary, the main contributions of our work are three-fold: • We propose the V oP as a strong baseline that effectively adapts CLIP to text-video retrieval with negligible train-able parameters. • To exploit video-specific information, we further develop three video prompts respectively conditioned on the frame position, frame context, and layer function. • Extensive experiments on five text-video retrieval bench-marks demonstrate that various combinations of our video prompts effectively enhance V oP, outperforming full fine-tuning with much less parameter overhead. |
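A minimal sketch of the "prompts at every layer" idea for a generic stack of frozen transformer blocks: fresh learnable prompt tokens are prepended at each layer and the previous layer's prompt outputs are discarded, so only the prompts receive gradients. The block interface ((B, N, D) in, (B, N, D) out), prompt length, and initialization scale are illustrative assumptions; VoP applies this scheme to both the CLIP text and video encoders and additionally designs the video-specific prompts described above, which are not shown here.

```python
import torch
import torch.nn as nn

class DeepPromptedEncoder(nn.Module):
    def __init__(self, blocks, dim, n_prompts=8):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)
        for p in self.blocks.parameters():
            p.requires_grad_(False)               # backbone stays frozen; only prompts train
        self.prompts = nn.Parameter(0.02 * torch.randn(len(blocks), n_prompts, dim))

    def forward(self, tokens):                    # tokens: (B, N, dim)
        B, n_p = tokens.size(0), self.prompts.size(1)
        for i, blk in enumerate(self.blocks):
            prompt = self.prompts[i].unsqueeze(0).expand(B, -1, -1)
            tokens = blk(torch.cat([prompt, tokens], dim=1))[:, n_p:]   # drop prompt outputs
        return tokens
```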
Fu_You_Do_Not_Need_Additional_Priors_or_Regularizers_in_Retinex-Based_CVPR_2023 | Abstract Images captured in low-light conditions often suffer from significant quality degradation. Recent works have built a large variety of deep Retinex-based networks to enhance low-light images. The Retinex-based methods require de-composing the image into reflectance and illumination com-ponents, which is a highly ill-posed problem and there is no available ground truth. Previous works addressed this problem by imposing some additional priors or regulariz-ers. However, finding an effective prior or regularizer that can be applied in various scenes is challenging, and the performance of the model suffers from too many additional constraints. We propose a contrastive learning method and a self-knowledge distillation method for Retinex decompo-sition that allow training our Retinex-based model with-out elaborate hand-crafted regularization functions. Rather than estimating reflectance and illuminance images and rep-resenting the final images as their element-wise products as in previous works, our regularizer-free Retinex decomposi-tion and synthesis network (RFR) extracts reflectance and illuminance features and synthesizes them end-to-end. In addition, we propose a loss function for contrastive learning and a progressive learning strategy for self-knowledge dis-tillation. Extensive experimental results demonstrate that our proposed methods can achieve superior performance compared with state-of-the-art approaches. | 1. Introduction High quality images are highly desirable for many com-puter vision and machine learning applications. In a low-light environment, images often suffer from visual degrada-tions such as poor visibility and low contrast. The darker the area in an image, the more the information is lost, and the harder it is to recover the image with a good quality. Many computer vision systems may malfunction or fail completely in low-light conditions. If dark regions can be removed or al-leviated after the low-light image enhancement, images can (a) Input (b) SCI (CVPR 22) [ 22] (c) URetinexNet (CVPR 22) [ 32] (d) Ours Figure 1. Visual comparison on a real-captured low-light image using state-of-the-art approaches. become more clear. Thus low-light image enhancement can potentially benefit many computer vision applications. Traditional methods for low-light image enhancement can be divided into two categories, one based on histogram equalization [ 1,2,7,12,25] and the other based on Retinex theory [ 4,6,9,10,13,28]. However, relying on the hand-crafted features, they are not robust against the degraded vis-ibility and unexpected noise. Recently, deep learning based approaches for low-light image enhancement have received extensive attention. In-stead of using prior handcrafted features, these methods, such as directly learning enhanced results in an end-to-end network and deep Retinex-based methods, can automatically learn the features with deep neural networks [ 5,8,16,17,19, 20,22,26,27,29,31–35,37–39] for low-light image enhance-ment. A specific survey can be found in [ 14]. Since Retinex theory [ 11] models the color perception of human vision on natural scenes, deep Retinex-based meth-ods have better enhancement performance and generaliza-tion in most cases than directly learning enhanced results in an end-to-end network. However, Retinex decomposition This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. 
Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 18125 is a highly ill-posed problem and there is no well-defined ground truth for the reflectance and illumination of real im-ages. Previous works [ 26,31–33,37–39] addressed this problem by introducing some additional priors or regular-izers, but these manually well-designed regularization func-tions are difficult to apply to all scenes. The joint optimiza-tion with too many constraints also leads to the absence of adaptivity and efficiency. Therefore previous Retinex-based methods often generate unnatural images. To tackle these challenging issues, we propose a con-trastive learning method and a self-knowledge distillation method for Retinex decomposition. With these methods, our proposed regularizer-free Retinex decomposition and syn-thesis network (RFR) can be trained without additional pri-ors or regularizers. Considering the effect of image degra-dations on the Retinex decomposition, instead of estimat-ing reflectance and illumination images and representing the final image as their element-wise product, our model extracts reflectance and illumination features and synthe-sizes them through a neural network. Rather than using multiple networks to perform decomposition, enhancement and denoising separately for low-light image enhancement, this method can be learned through an end-to-end network. Moreover, a novel loss function for contrastive learning is introduced, where some low-quality negative samples are taken into account. Then, a progressive learning strategy for self-knowledge distillation is presented, which allows the student to learn the knowledge of the teacher more effec-tively. We conduct extensive experiments to demonstrate that our proposed methods are beneficial and reasonable. Fig. 1shows the visual comparisons of images recovered from the low-light one using different approaches. In summary, the contributions of our work are as follows: ∙We propose a contrastive learning method and a self-knowledge distillation method that enable the network to extract high-quality reflectance and illumination without additional priors or regularizers. ∙We propose a regularizer-free Retinex decomposition and synthesis network (RFR) for low-light image en-hancement that extract reflectance and illumination features and synthesizes them end-to-end to suppress the effect of image degradations on the Retinex theory. ∙We propose a loss function named Weighted Normal-ized Temperature-Scaled Cross-Entropy Loss for con-trastive learning and a progressive learning strategy for self-knowledge distillation. ∙Comprehensive experiments on different datasets show the superiority of our proposed methods compared with state-of-the-art approaches.2. Methodology |
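The exact form of the proposed Weighted Normalized Temperature-Scaled Cross-Entropy Loss is not specified in this excerpt, so the following is only a generic sketch of a weighted NT-Xent-style objective: a standard InfoNCE term in which each negative's contribution is scaled by a per-sample weight. How those weights are derived (e.g., from the quality of the negative samples) is an assumption left to the reader.

```python
import torch
import torch.nn.functional as F

def weighted_nt_xent(anchor, positive, negatives, neg_weights=None, tau=0.07):
    # anchor, positive: (B, D); negatives: (B, K, D); neg_weights: (B, K) or None.
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    n = F.normalize(negatives, dim=-1)
    pos = torch.exp((a * p).sum(-1) / tau)                        # (B,)
    neg = torch.exp(torch.einsum("bd,bkd->bk", a, n) / tau)       # (B, K)
    if neg_weights is not None:
        neg = neg * neg_weights                                   # down-/up-weight negatives
    return (-torch.log(pos / (pos + neg.sum(-1)))).mean()
```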
Chu_BUOL_A_Bottom-Up_Framework_With_Occupancy-Aware_Lifting_for_Panoptic_3D_CVPR_2023 | Abstract Understanding and modeling the 3D scene from a single image is a practical problem. A recent advance proposes a panoptic 3D scene reconstruction task that performs both 3D reconstruction and 3D panoptic segmentation from a single image. Although having made substantial progress, recent works only focus on top-down approaches that fill 2D instances into 3D voxels according to estimated depth, which hinders their performance by two ambiguities. (1) instance-channel ambiguity : The variable ids of instances in each scene lead to ambiguity during filling voxel chan-nels with 2D information, confusing the following 3D re-finement. (2) voxel-reconstruction ambiguity : 2D-to-3D lifting with estimated single view depth only propagates 2D information onto the surface of 3D regions, leading to ambi-guity during the reconstruction of regions behind the frontal view surface. In this paper, we propose BUOL , aBottom-Up framework with Occupancy-aware Lifting to address the two issues for panoptic 3D scene reconstruction from a sin-gle image. For instance-channel ambiguity , a bottom-up framework lifts 2D information to 3D voxels based on de-terministic semantic assignments rather than arbitrary in-stance id assignments. The 3D voxels are then refined and grouped into 3D instances according to the predicted 2D instance centers. For voxel-reconstruction ambiguity , the estimated multi-plane occupancy is leveraged together with depth to fill the whole regions of things and stuff. Our method shows a tremendous performance advantage over state-of-the-art methods on synthetic dataset 3D-Front and real-world dataset Matterport3D. Code and models will be released. | 1. Introduction Joint learning of 3D reconstruction and perception is a challenging and practical problem for various applica-tions. Existing works focus on combining 3D reconstruc-*Intern at Shanghai AI Laboratory.†Corresponding author. 𝑠6 𝑠7 𝑖3𝑖2𝑖1𝑠1𝑠2𝑠6 𝑠7𝑠1𝑖1𝑖2𝑖3 𝑠1𝑠2𝑠1 i2i3𝑖2 𝑖4 𝑖5𝑖1 𝑖3 𝑖6i2i4i7 𝑖1𝑖3𝑖6𝑖2 𝑖3 𝑖4 𝑖5𝑖1 𝑖6𝑖1 𝑖2 𝑖3 i4i7i2 i2i4i7 𝑖3𝑖6𝑖1 𝑖1𝑖2𝑠6 𝑠7𝑖3𝑠4 𝑠5𝑠2𝑠1 𝑠3 𝑠6𝑠4 𝑠5𝑠2𝑠1 𝑠3 𝑠6 GroupingInstances Semantics Instance CentersRandomized Assignment (b) Our BUOL(a) General top -down approaches Deterministic AssignmentFigure 1. Comparison of the feature lifting from 2D to 3D. (a) General Top-down approaches: Feature lifting by depth with the two randomized instance assignments in the top-down framework. The predicted 2D instance masks {i1, i2, i3}are lifted to only the surface of 3D instances at variable channels, such as {i1, i3, i6} or{i3, i6, i1}, which results in instance-channel ambiguity and voxel-reconstruction ambiguity. (b) Our BUOL: Occupancy-aware lifting with the deterministic semantic assignment in the bottom-up framework. The predicted 2D semantic category maps {s1, s2, s6, s7}are lifted to the whole regions of things ( s1, s2) and stuff ( s6, s7), and the voxels are finally grouped into 3D in-stances {i1, i2, i3}by corresponding 2D instance centers. tion with semantic segmentation [26, 27] or instance seg-mentation [11, 23, 28]. Recently, a pioneer work [6] unifies the tasks of 3D reconstruction, 3D semantic segmentation, and 3D instance segmentation into panoptic 3D scene re-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 
4937 construction from a single RGB image, which assigns a cat-egory label (i.e. a thing category with easily distinguishable edges, such as tables, or a stuff category with indistinguish-able edges, such as wall) [22] and an instance id (if the voxel belongs to a thing category) to each voxel in the 3D volume of the camera frustum. Dahnert et al. [6] achieve this goal in a top-down pipeline that lifts 2D instance masks to channels of 3D voxels and predicts the panoptic 3D scene reconstruction in the follow-ing 3D refinement stage. Their method first estimates 2D instance masks and the depth map. The 2D instance masks are then lifted to fill voxel channels on the front-view sur-face of 3D objects using the depth map. Finally, a 3D model is adopted to refine the lifted 3D surface masks and attain panoptic 3D scene reconstruction results of all voxels. After revisiting the top-down panoptic 3D scene recon-struction framework, we find two crucial limitations which hinder its performance, as shown in Figure 1(a). First, instance-channel ambiguity : the number of instances varies in different scenes. Thus lifting 2D instance masks to fill voxel channels can not be achieved by a determinis-tic instance-channel mapping function. Dahnert et al. [6] propose to utilize a randomized assignment that randomly assigns instance ids to the different channels of voxel fea-tures. For example, two possible random assignments are shown in Figure. 1(a), where solid and dashed arrow lines with the same color indicate a 2D mask is assigned to differ-ent voxel feature channels. This operator leads to instance-channel ambiguity, where an instance id may be assigned to an arbitrary channel, confusing the 3D refinement model. In addition, we experimentally discuss the impact of different instance assignments ( e.g., random or sorted by category) on performance in Section 4. Second, voxel reconstruc-tion ambiguity : 2D-to-3D lifting with depth from a single view can only propagate 2D information onto the frontal surface in the camera frustum, causing ambiguity during the reconstruction of regions behind the frontal surface. As shown by dashed black lines in the right of Figure 1(a), the 2D information is only propagated to the frontal surface of initialized 3D instance masks, which is challenging for 3D refinement model to reconstruct the object regions behind the frontal surface accurately. In this paper, we propose BUOL , aBottom-Up frame-work with Occupancy-aware Lifting to address the above two ambiguities for panoptic 3D scene reconstruction from a single image. For instance-channel ambiguity, our bottom-up framework lifts 2D semantics to 3D semantic voxels, as shown in Figure. 1(b). Compared to the top-down methods shown in Figure. 1(a), instance-channel ambigu-ity is tackled by a simple deterministic assignment map-ping from semantic category ids to voxel channels. The voxels are then grouped into 3D instances according to the predicted 2D instance centers. For voxel-reconstructionambiguity, as shown in Figure. 1(b), the estimated multi-plane occupancy is leveraged together with depth by our occupancy-aware lifting mechanism to fill regions inside the things and stuff besides front-view surfaces for accurate 3D refinement. Specifically, our framework comprises a 2D priors stage, a 2D-to-3D lifting stage, and a 3D refinement stage. In the 2D priors stage, the 2D model predicts 2D semantic map, 2D instance centers, depth map, and multi-plane oc-cupancy. 
The multi-plane occupancy presents whether the plane at different depths is occupied by 3D things or stuff. In the 2D-to-3D lifting stage, leveraging estimated multi-plane occupancy and depth map, we lift 2D semantics into deterministic channels of 3D voxel features inside the things and stuff besides the front-view surfaces. In the 3D refine-ment stage, we predict dense 3D occupancy in each voxel for reconstruction. Meanwhile, the 3D semantic segmen-tation is predicted for both the thing and stuff categories. The 3D offsets towards the 2D instance centers are also estimated to identify voxels belonging to 3D objects. The ground truth annotations of 3D panoptic reconstruction, i.e., 3D instance/semantic segmentation masks and dense 3D oc-cupancy, can be readily converted to 2D instance center, 2D semantic segmentation, depth map, multi-plane occupancy, and 3D offsets for our 2D and 3D supervised learning. Dur-ing inference, we assign instance ids to 3D voxels occu-pied by thing objects based on 2D instance centers and 3D offsets, attaining final panoptic 3D scene reconstruction re-sults. Extensive experiments show that the proposed bottom-up framework with occupancy-aware lifting outperforms prior competitive approaches. On the pre-processed 3D-Front [10] and Matterport3D [2], our method achieves +11.81% and +7.46% PRQ (panoptic reconstruction qual-ity) over the state-of-the-art method [6], respectively. |
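A schematic of the two lifting/grouping steps just described, under simplifying assumptions: the per-pixel semantic map is copied to every occupied depth plane of the camera-frustum voxel grid (rather than only the frontal surface), and thing voxels are grouped by shifting their projected 2D locations with the predicted offsets and taking the nearest predicted instance center. Tensor layouts and the nearest-center assignment are illustrative, not the exact BUOL implementation.

```python
import torch

def occupancy_aware_lift(sem2d, occ_planes):
    # sem2d:      (C, H, W) per-pixel semantics in deterministic (category) channels
    # occ_planes: (D, H, W) multi-plane occupancy along the depth axis
    # Returns (C, D, H, W): voxels filled inside things/stuff, not just on surfaces.
    return sem2d.unsqueeze(1) * occ_planes.unsqueeze(0)

def group_thing_voxels(proj_xy, offsets, centers_2d):
    # proj_xy: (N, 2) projected 2D position of each thing voxel
    # offsets: (N, 2) predicted offsets toward the matching 2D instance center
    # centers_2d: (M, 2) predicted 2D instance centers
    shifted = proj_xy + offsets
    return torch.cdist(shifted, centers_2d).argmin(dim=1)   # instance id per voxel
```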
Bai_AUNet_Learning_Relations_Between_Action_Units_for_Face_Forgery_Detection_CVPR_2023 | Abstract Face forgery detection becomes increasingly crucial due to the serious security issues caused by face manipula-tion techniques. Recent studies in deepfake detection have yielded promising results when the training and testing face forgeries are from the same domain. However, the problem remains challenging when one tries to generalize the de-tector to forgeries created by unseen methods during train-ing. Observing that face manipulation may alter the re-lation between different facial action units (AU), we pro-pose the Action-Units Relation Learning framework to im-prove the generality of forgery detection. In specific, it con-sists of the Action Units Relation Transformer (ART) and the Tampered AU Prediction (TAP). The ART constructs the relation between different AUs with AU-agnostic Branch and AU-specific Branch, which complement each other and work together to exploit forgery clues. In the Tampered AU Prediction, we tamper AU-related regions at the image level and develop challenging pseudo samples at the feature level. The model is then trained to predict the tampered AU regions with the generated location-specific supervi-sion. Experimental results demonstrate that our method can achieve state-of-the-art performance in both the in-dataset and cross-dataset evaluations. | 1. Introduction The success of the generative model, e.g., Generative Adversarial Networks (GAN) [21], rapidly improves the quality of face forgery, which provokes researchers to pur-sue antithetical counter-detection methods to deal with po-tential social security issues. Though recent works have demonstrated their effectiveness in identifying forgery im-ages from known forging methods that are used in train-ing [4,10,29,39,43], the generalization on unknown forgery *Equal contribution †Corresponding Author Figure 1. The average correlation intensity (y-axis) between an AU (x-axis) and other AUs under different data volume (top-20% and bottom-80% samples from FF++ [42], respectively). More details and results can be found in the supplementary. It can be observed that the average correlation intensity between an AU and other AUs is weakening after manipulation. methods is not guaranteed [9, 17, 27, 52]. Some recent works [6, 31, 45, 49, 50, 55] have noticed this imminent problem and attempted to capture more in-trinsic forgery clues to improve the generalization of identi-fication methods. In particular, these works can be roughly categorized into two branches, 1) data modification which applies carefully selected augmentations [50] or manually generates forgery images [31, 45, 55] with only real im-ages to enlarge training data diversity meanwhile avoiding overfitting to specific defects, 2) auxiliary task integrat-ingwhich defines an affinitive loss to help the model learn underlying differences between real and fake faces [6, 55]. Despite their success, it is noticed that the relations between face units that are general in biology research for under-standing human facial characteristics are less explored, in-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 24709 hibiting the further improvement of model generalization. 
In this work, we aim to construct a face forgery detection framework that unifies the data modification and auxiliary task integrating schemes with relation clues of facial action units. Our insight is motivated by the theory in the Facial Action Coding System [18], which represents human faces through a set of facial muscle movements called facial Ac-tion Units (AU). More specifically, in facial morphology, a muscle controls different AUs. When showing a certain emotion, a group of AUs would be activated simultaneously, indicating that there are underlying relations between AUs. Therefore, a direct intuition is that relations among AUs may be different in real and manipulated faces, since de facto methods modify the entire face using graphical meth-ods, or generate the individual facial regions by GAN. This hypothesis is evidenced by our experiments in Fig. 1, which shows that an AU in real face presents stronger average cor-relation intensity to other AUs (the stars), whereas that in forgery face are weaker (the triangles). To explore these clues, we focus on relations between local regions associated with AUs and propose the Action Units Relation Learning framework to improve the robust-ness and generalization of the forgery detector. The pro-posed framework is comprised of AURelation Transformer (ART ) and Tampered AUPrediction ( TAP ). ART, explic-itly learning the relations between AUs, works as a dual network. In specific, it consists of an AU-specific Branch for learning relations among AU-aligned regions and an AU-agnostic Branch for learning relations among image-patch regions. The AU-specific Branch extracts embed-dings aligned with individual AU and builds their relations by attention mechanism. The AU-agnostic Branch is a stan-dard Vision-Transformer block [16] that is designed to con-struct relations between different image patches. These two branches complement each other for building a detailed and global view of the input face image. From another perspec-tive, TAP formulates an auxiliary task to enhance the ability of the model to sense local forgery defects. In particular, it constructs a Partial Face Mask by randomly removing AU-related regions from the facial area. The mask is uti-lized to modify data in the remaining regions at both image and feature levels to generate challenging fake counterparts. The model is then trained to predict the manipulated regions with the help of Local Tampering Supervision. By doing so, the networks are more sensitive to AU-related regions that have been manipulated, which is beneficial for identifying forgery images. Notably, the proposed Action Units Re-lation Learning framework absorbs the advantages of both data modification and auxiliary task integrating schemes. We evaluate our framework following cross-dataset pro-tocol and cross-manipulation protocol. In the cross-dataset evaluation, our approach performs favorably against other state-of-the-art detectors and achieves the AUC scores of 92.77%, 99.22%, 73.82%, 86.16%, 81.45% on CDF [36], DFD [1], DFDC [13], DFDCP [14], and FFIW [57] datasets, respectively. In the cross-manipulation evalua-tion, our approach achieves the AUC of 99.98%, 99.60%, 99.89%, and 98.38% on DF [2], F2F [48], FS [3] and NT [47], respectively. Experimental results demonstrate the effectiveness and generalization of our framework. 
Our contributions can be summarized as follows: • We propose the Action Units Relation Trans-former (ART) to effectively build correlations among different AU-related regions and eventually improve the performance of the forgery detection. • We propose the Tampered AU Prediction (TAP) as an auxiliary task to strengthen the model capability of sensing local forgery regions. • Experimental results on in-dataset and cross-dataset evaluation protocols demonstrate the effectiveness and generalization of our framework. |
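To make the AU-relation idea concrete, here is a minimal self-attention block over per-AU region embeddings (one token per AU), so every AU can attend to all others. The layer sizes, pre-norm layout, and FFN width are illustrative assumptions; the actual ART pairs this AU-specific branch with the AU-agnostic ViT branch, which is not shown.

```python
import torch
import torch.nn as nn

class AURelationBlock(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))

    def forward(self, au_tokens):                 # au_tokens: (B, num_AUs, dim)
        h = self.norm1(au_tokens)
        au_tokens = au_tokens + self.attn(h, h, h, need_weights=False)[0]
        return au_tokens + self.ffn(self.norm2(au_tokens))
```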
Demirel_Meta-Tuning_Loss_Functions_and_Data_Augmentation_for_Few-Shot_Object_Detection_CVPR_2023 | Abstract Few-shot object detection, the problem of modelling novel object detection categories with few training instances, is an emerging topic in the area of few-shot learning and ob-ject detection. Contemporary techniques can be divided into two groups: fine-tuning based and meta-learning based approaches. While meta-learning approaches aim to learn dedicated meta-models for mapping samples to novel class models, fine-tuning approaches tackle few-shot detection in a simpler manner, by adapting the detection model to novel classes through gradient based optimization. Despite their simplicity, fine-tuning based approaches typically yield competitive detection results. Based on this observation, we focus on the role of loss functions and augmentations as the force driving the fine-tuning process, and propose to tune their dynamics through meta-learning principles. The pro-posed training scheme, therefore, allows learning inductive biases that can boost few-shot detection, while keeping the advantages of fine-tuning based approaches. In addition, the proposed approach yields interpretable loss functions, as opposed to highly parametric and complex few-shot meta-models. The experimental results highlight the merits of the proposed scheme, with significant improvements over the strong fine-tuning based few-shot detection baselines on benchmark Pascal VOC and MS-COCO datasets, in terms of both standard and generalized few-shot performance metrics. | 1. Introduction Object detection is one of the computer vision problems that has greatly benefited from the advances in supervised deep learning approaches. However, similar to the case in many other problems, state-of-the-art in object detection re-lies on the availability of large-scale fully-annotated datasets, which is particularly problematic due to the difficulty of collecting accurate bounding box annotations [18, 46]. This practical burden has lead to a great interest in the approaches Augmentations FSOD model Loss function Loss Query set mAP REINFORCE over the augmentation & loss functions Gradient descent Few-shot support setRecall Precision Recall Proxy task Figure 1. The overall architecture of the meta-tuning approach. that can potentially reduce the annotation cost, such as weakly-supervised learning [29, 57], learning from point annotations [7], and mixed supervised learning [45]. A more recently emerging paradigm in this direction is few-shot ob-ject detection (FSOD). In the FSOD problem, the goal is to build detection models for the novel classes with few labeled training images by transferring knowledge from the base classes with a large set of training images. In the closely related Generalized-FSOD (G-FSOD) problem, the goal is to build few-shot detection models that perform well on both base and novel classes. FSOD methods can be categorized into meta-learning and fine-tuning approaches. Although meta-learning based methods are predominantly used in the literature in FSOD research [8,22,31,36,52,75,76,79,81,83], several fine-tuning based works have recently reported competitive results [6, 15, 32, 53, 61, 65, 72, 84]. The main premise of meta-learning approaches is to design and train dedicated meta-models that map given few train samples to novel class detection models, e.g. [73] or learn easy-to-adapt models [30] in a MAML [16] fashion. 
In contrast, however, fine-tuning based methods tackle the problem as a typical transfer learning problem and apply the general purpose supervised training techniques, i.e. regularized loss minimization via gradient-based optimization, to adapt a pre-trained model to few-shot classes. It is also worth noting that the recent results on fine-tuning based FSOD are aligned with related observations on few-shot classification [9, 12, 63] and segmentation [4]. While some of the FSOD meta-learning approaches are at-tractive for being able to learn dedicated parametric training This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 7339 mechanisms, they also come with two important shortcom-ings: (i) the risk of overfitting to the base classes used for training the meta-model due to model complexity, and (ii) the difficulty of interpreting what is actually learned; both of which can be crucially important for real-world, in-the-wild utilization of a meta-learned model. From this point of view, the simplicity and generality of a fine-tuning based FSOD ap-proach can be seen as major advantages. In fact, one can find a large machine learning literature on the components (opti-mization techniques, loss functions, data augmentation, and architectures) of an FT approach, as opposed to the unique and typically unknown nature of a meta-learned inference model, especially when the model aims to replace standard training procedures for modeling the novel few-shot classes. While MAML [16] like meta-learning for quick adaptation is closer in nature to fine-tuning based approaches, the van-ishing gradient problems and the overall complexity of the meta-learning task practically limits the approach to target only one or few model update steps, whereas an FT approach has no such computational difficulty. Perhaps the biggest advantage of a fine-tuning based FSOD approach, however, can also be its biggest disad-vantage: its generality may lack the inductive biases needed for effective learning with few novel class samples while preserving the knowledge of base classes. To this end, such approaches focus on the design of fine-tuning details, e.g. whether to freeze the representation parameters [65], use contrastive fine-tuning losses [61], increase the novel class variances [84], introduce the using additional detection heads and branches [15, 72]. However, optimizing such details specifically for few-shot classes in a hand-crafted manner is clearly difficult, and likely to be sub-optimal. To address this problem, we focus on applying meta-learning principles to tune the loss functions and augmen-tations to be used in the fine-tuning stage for FSOD, which we call meta-tuning (Figure 1). More specifically, much like the meta-learning of a meta-model, we define an episodic training procedure that aims to progressively discover the optimal loss function and augmentation details for FSOD purposes in a data-driven manner. Using reinforcement learn-ing (RL) techniques, we aim to tune the loss function and augmentation details such that they maximize the expected detection quality of an FSOD model obtained by fine-tuning to a set of novel classes. 
By defining meta-tuning over well-designed loss terms and an augmentation list, we restrict the search process to effective function families, reducing the computational costs compared to AutoML methods that aim to discover loss terms from scratch for fully-supervised learn-ing [20, 42]. The resulting meta-tuned loss functions and augmentations, therefore, inject the learned FSOD-specific inductive biases into a fine-tuning based approach. To explore the potential of the meta-tuning scheme for FSOD, we focus on the details of classification loss func-tions, based on the observations that FSOD prediction mis-takes tend to be in classification rather than localization details [61]. In particular, we first focus on the softmax temperature parameter, for which we define two versions: (i) a simple constant temperature, and (ii) time (fine-tuning iteration index) varying dynamic temperature, parameterized as an exponentiated polynomial. In all cases, the parameters learned via meta-tuning yield an interpretable loss function that has a negligible risk of over-fitting to the base classes, in contrast to a complex meta-model. We also model augmen-tation magnitudes during meta-tuning for improving the data loading pipeline for few-shot learning purposes. Addition-ally, we incorporate a score scaling coefficient for learning to balance base versus novel class scores. We provide an experimental analysis on the Pascal VOC [13] and MS-COCO [40] benchmarks for FSOD, using the state-of-the-art fine-tuning based baselines MPSR [72] and DeFRCN [53]. Our experimental results show that the proposed meta-tuning approach provides significant perfor-mance gains in both FSOD and Generalized FSOD settings, suggesting that meta-tuning loss functions and data augmen-tation can be a promising direction in FSOD research. |
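A rough sketch of one meta-tuning step with REINFORCE, assuming the tunable knobs (e.g., the temperature-polynomial coefficients, augmentation magnitudes, score scaling) are collected into a vector sampled from a diagonal Gaussian policy. Here fine_tune_and_eval is a hypothetical callback that fine-tunes the detector on a sampled few-shot episode with those knobs and returns the proxy-task mAP as the reward; the Gaussian policy and mean-reward baseline are common REINFORCE choices, not necessarily the paper's exact setup.

```python
import torch

def reinforce_step(policy_mean, policy_logstd, optimizer, fine_tune_and_eval, n_samples=4):
    # policy_mean / policy_logstd are nn.Parameters of the search policy;
    # the optimizer updates only these parameters, never the detector weights.
    dist = torch.distributions.Normal(policy_mean, policy_logstd.exp())
    thetas = dist.sample((n_samples,))                        # (n_samples, num_knobs)
    rewards = torch.tensor([fine_tune_and_eval(t) for t in thetas])
    advantage = rewards - rewards.mean()                      # mean baseline for variance reduction
    log_prob = dist.log_prob(thetas).sum(dim=-1)              # (n_samples,)
    loss = -(advantage * log_prob).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return rewards.max().item()
```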
Chen_CLIP2Scene_Towards_Label-Efficient_3D_Scene_Understanding_by_CLIP_CVPR_2023 | Abstract Contrastive Language-Image Pre-training (CLIP) achieves promising results in 2D zero-shot and few-shot learning. Despite the impressive performance in 2D, apply-ing CLIP to help the learning in 3D scene understanding has yet to be explored. In this paper, we make the first attempt to investigate how CLIP knowledge benefits 3D scene understanding. We propose CLIP2Scene, a simple yet effective framework that transfers CLIP knowledge from 2D image-text pre-trained models to a 3D point cloud network. We show that the pre-trained 3D network yields impressive performance on various downstream tasks, i.e., annotation-free and fine-tuning with labelled data for semantic segmentation. Specifically, built upon CLIP , we design a Semantic-driven Cross-modal Contrastive Learning framework that pre-trains a 3D network via semantic and spatial-temporal consistency regularization. For the former, we first leverage CLIP’s text semantics to select the positive and negative point samples and then employ the contrastive loss to train the 3D network. In terms of the latter, we force the consistency between the temporally coherent point cloud features and their corresponding image features. We conduct experiments on SemanticKITTI, nuScenes, and ScanNet. For the first time, our pre-trained network achieves annotation-free 3D semantic segmentation with 20.8% and 25.08% mIoU on nuScenes and ScanNet, respectively. When fine-tuned with 1% or 100% labelled data, our method significantly outperforms other self-supervised methods, with improvements of 8% and 1% mIoU, respectively. Furthermore, we demonstrate the generalizability for handling cross-domain datasets. Code is publicly available1. | 1. Introduction 3D scene understanding is fundamental in autonomous driving, robot navigation, etc [26,28]. Current deep learning-Symbol†denotes the corresponding authors. 1https://github.com/runnanchen/CLIP2Scene . Semantic and Spatial -Temporal Consistency RegularizationImage EncoderAnnotation -free 1% annotation CLIP2Scene Text EncoderCLIPHow CLIP benefits 3D scene understanding? 100% annotationSemantic -driven Cross -modal Contrastive Learning car, bus pedestrian carFigure 1. We explore how CLIP knowledge benefits 3D scene understanding. To this end, we propose CLIP2Scene, a Semantic-driven Cross-modal Contrastive Learning framework that leverages CLIP knowledge to pre-train a 3D point cloud segmentation net-work via semantic and spatial-temporal consistency regularization. CLIP2Scene yields impressive performance on annotation-free 3D semantic segmentation and significantly outperforms other self-supervised methods when fine-tuning on annotated data. based methods have shown inspirational performance on 3D point cloud data [15, 32, 33, 38, 47, 56, 62]. However, some drawbacks hinder their real-world applications. The first one comes from their heavy reliance on the large collection of annotated point clouds, especially when high-quality 3D annotations are expensive to acquire [39,40,44,51]. Besides, they typically fail to recognize novel objects that are never seen in the training data [11,45]. As a result, it may need ex-tra annotation efforts to train the model on recognizing these novel objects, which is both tedious and time-consuming. Contrastive Vision-Language Pre-training (CLIP) [48] provides a new perspective that mitigates the above issues in 2D vision. 
It was trained on large-scale free-available image-text pairs from websites and built vision-language This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 7020 correlation to achieve promising open-vocabulary recogni-tion. MaskCLIP [61] further explores semantic segmenta-tion based on CLIP. With minimal modifications to the CLIP pre-trained network, MaskCLIP can be directly used for the semantic segmentation of novel objects without additional training efforts. PointCLIP [59] reveals that the zero-shot classification ability of CLIP can be generalized from the 2D image to the 3D point cloud. It perspectively projects a point cloud frame into different views of 2D depth maps that bridge the modal gap between the image and the point cloud. The above studies indicate the potential of CLIP on enhanc-ing the 2D segmentation and 3D classification performance. However, whether and how CLIP knowledge benefits 3D scene understanding is still under-explored. In this paper, we explore how to leverage CLIP’s 2D image-text pre-learned knowledge for 3D scene understand-ing. Previous cross-modal knowledge distillation meth-ods [44, 51] suffer from the optimization-conflict issue, i.e., some of the positive pairs are regarded as negative samples for contrastive learning, leading to unsatisfactory represen-tation learning and hammering the performance of down-stream tasks. Besides, they also ignore the temporal coher-ence of the multi-sweep point cloud, failing to utilize the rich inter-sweep correspondence. To handle the mentioned problems, we propose a novel Semantic-driven Cross-modal Contrastive Learning framework that fully leverages CLIP’s semantic and visual information to regularize a 3D network. Specifically, we propose Semantic Consistency Regulariza-tion and Spatial-Temporal Consistency Regularization. In semantic consistency regularization, we utilize CLIP’s text semantics to select the positive and negative point samples for less-conflict contrastive learning. For spatial-temporal consistency regularization, we take CLIP’s image pixel fea-ture to impose a soft consistency constraint on the temporally coherent point features. Such an operation also alleviates the effects of imperfect image-to-point calibration. We conduct several downstream tasks on the indoor and outdoor datasets to verify how the pre-trained network bene-fits the 3D scene understanding. The first one is annotation-free semantic segmentation. Following MaskCLIP, we place class names into multiple hand-crafted templates as prompts and average the text embeddings generated by CLIP to con-duct the annotation-free segmentation. For the first time, our method achieves 20.8% and 25.08% mIoU annotation-free 3D semantic segmentation on the nuScenes [24] and ScanNet [20] datasets without training on any labelled data. Secondly, we compare with other self-supervised methods in label-efficient learning. When fine-tuning the 3D network with 1% or 100% labelled data on the nuScenes dataset, our method significantly outperforms state-of-the-art self-supervised methods, with improvements of 8% and 1% mIoU, respectively. Besides, to verify the generalization capability, we pre-train the network on the nuScenes datasetand evaluate it on SemanticKITTI [3]. Our method still significantly outperforms state-of-the-art methods. 
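As a rough illustration of the semantic consistency idea described above (using CLIP's text semantics to pick positive and negative point samples for the 3D network), the sketch below assigns each point a pseudo class from its paired CLIP pixel feature and contrasts the point feature against the class text embeddings. It is a simplified stand-in, not the paper's exact loss; prototype construction, conflict handling and the spatial-temporal term are omitted, and all tensor names are assumptions.

```python
import torch
import torch.nn.functional as F

def semantic_consistency_loss(point_feats, pixel_feats, text_embeds, temperature=0.07):
    """Illustrative semantic-driven objective.
    point_feats: (N, D) features from the 3D network, projected to the CLIP dim
    pixel_feats: (N, D) frozen CLIP pixel features paired with the N points
    text_embeds: (C, D) frozen CLIP text embeddings of the C class prompts
    """
    point_feats = F.normalize(point_feats, dim=-1)
    pixel_feats = F.normalize(pixel_feats, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)

    # Pseudo labels from CLIP: most similar class prompt for each paired pixel.
    pseudo = (pixel_feats @ text_embeds.t()).argmax(dim=-1)        # (N,)

    # Contrast point features against class (text) anchors: same pseudo class
    # acts as the positive, all other classes as negatives.
    logits = point_feats @ text_embeds.t() / temperature           # (N, C)
    return F.cross_entropy(logits, pseudo)
```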
The key contributions of our work are summarized as follows. •The first work that distils CLIP knowledge to a 3D network for 3D scene understanding. •We propose a novel Semantic-driven Cross-modal Con-trastive Learning framework that pre-trains a 3D net-work via spatial-temporal and semantic consistency regularization. •We propose a novel Semantic-guided Spatial-Temporal Consistency Regularization that forces the consistency between the temporally coherent point cloud features and their corresponding image features. •For the first time, our method achieves promising re-sults on annotation-free 3D scene segmentation. When fine-tuning with labelled data, our method significantly outperforms state-of-the-art self-supervised methods. |
Ji_Seeing_What_You_Miss_Vision-Language_Pre-Training_With_Semantic_Completion_Learning_CVPR_2023 | Abstract Cross-modal alignment is essential for vision-language pre-training (VLP) models to learn the correct corresponding information across different modalities. For this purpose, inspired by the success of masked language modeling (MLM) tasks in the NLP pre-training area, numerous masked modeling tasks have been proposed for VLP to further promote cross-modal interactions. The core idea of previous masked modeling tasks is to focus on reconstructing the masked tokens based on visible context for learning local-to-local alignment. However, most of them pay little attention to the global semantic features generated for the masked data, resulting in a limited cross-modal alignment ability of global representations. Therefore, in this paper, we propose a novel Semantic Completion Learning (SCL) task, complementary to existing masked modeling tasks, to facilitate global-to-local alignment. Specifically, the SCL task complements the missing semantics of masked data by capturing the corresponding information from the other modality, promoting learning more representative global features which have a great impact on the performance of downstream tasks. Moreover, we present a flexible vision encoder, which enables our model to perform image-text and video-text multimodal tasks simultaneously. Experimental results show that our proposed method obtains state-of-the-art performance on various vision-language benchmarks, such as visual question answering, image-text retrieval, and video-text retrieval. | 1. Introduction Our real world contains a wide variety of information, such as texts, images, sounds, etc. For a powerful general artificial intelligence system, it is necessary to capture the semantic association from different modality sources. Figure 1. (a) The comparisons between previous masked modeling tasks and our proposed Semantic Completion Learning (SCL), which is composed of "MVSC" and "MLSC". (b) The cross-modal attention map visualization of the text global representation ([CLS]) on the input image for our model pre-trained with or without SCL. Towards this goal, multimodal representation learning is a crucial technique to bridge the heterogeneity gap between different modalities [6, 36]. In this area, vision-language pre-training models [14, 21, 30, 35, 43] have shown an impressive semantic alignment ability, which brings substantial advances on various downstream tasks, for instance, visual question answering, image-text retrieval, etc. Recently, numerous self-supervised vision-language pre-training models [3, 10, 14, 17, 31, 32, 43] have been proposed.
These methods model the interactions between vision and This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 6789 language features mainly by using various masked model-ing tasks, such as masked language modeling (MLM) and masked vision modeling (MVM). As shown in Fig. 1(a), the basic idea of MLM and MVM is self-reconstructing the masked tokens via leveraging informative visible tokens to realize local-to-local alignment. Specifically, MLM adopted by BERT [24] is to predict the original vocabulary IDs of the masked words. Inspired by the success of MLM in pre-training, there is a flourishing trend to extend it to visual pre-training tasks. Generally, by masking some visual patches, MVM tasks predict their original pixels [13,17], correspond-ing discrete tokens [4, 10, 46] generated by the VQ-V AE variants, or Histograms of Oriented Gradients (HOG) fea-tures [11], etc. These masked modeling tasks only focus on reconstruct-ing the local masked tokens, and pay little attention to recov-ering the missing global semantic information caused by data corruption. The token-level reconstruction may lead to in-adequate learning of global representations for cross-modal information. As illustrated in Fig. 1(b), in the situation of token-level reconstructions, the global representation is dis-ordered in its attention on the other modality. It implies that the global-to-local alignment ability of the pre-training model is limited, leading to a degraded global representation. However, the global semantic features have a great impact on the performance of the pre-training model as they are usually used to deal with downstream tasks. Therefore, it is crucial to ensure the global semantic features to learn more accurate global-to-local alignment. Intuitively, considering that the paired vision and text data are two views of the same semantic information, the missing semantics of masked data can be completed by capturing information from the other modality. From this point of view, we propose a novel pre-training task called Semantic Completion Learning (SCL). Specifically, SCL is composed of dual parts: masked vision semantic completion (MVSC) and masked language semantic completion (MLSC). As shown in Fig. 1(a), MVSC (MLSC) exploits information of complete text (vision) data to recover the global semantic representations of masked vision (text) data. In this way, the model can generate representative global features with accu-rate global-to-local alignment. For example, as illustrated in Fig. 1(b), compared with the model pre-trained without SCL, the attention maps with SCL pre-training are more discriminative and reasonable. For the architecture of the vision-language pre-training model, we adopt a general framework that consists of two uni-modal encoders and a fusion encoder. Moreover, we present a flexible vision encoder to enable our model to per-form image-text and video-text multimodal tasks simultane-ously. Specifically, for video inputs, the vision encoder only adds a few additional learning parameters, and the [CLS] feature of each frame is treated as a bridge associating spatialmodeling within the frame and temporal modeling among frames. Inspired by curriculum learning [3], we train the model with image-text and video-text datasets successively to transfer visual knowledge from images to videos. 
In a nutshell, our contributions are three-fold. (1) To en-hance the global-to-local alignment of global representations, we propose a new pre-training task called Semantic Com-pletion Learning (SCL), which recovers missing semantic information from unmasked data, promoting learning more representative global features. (2) We design an adaptive vision encoder, which can transfer multimodal pre-training knowledge between images and videos readily. (3) We con-duct multiple vision-language downstream tasks to demon-strate the generalization of semantic completion learning, and the vision encoder, including visual question answer-ing, visual reasoning, image-text retrieval, and video-text retrieval. Our model SCL achieves state-of-the-art perfor-mance based on a similar pre-training data scale. Our code is available at https://github.com/IIGROUP/SCL. |
He_Towards_Scalable_Neural_Representation_for_Diverse_Videos_CVPR_2023 | Abstract Implicit neural representations (INR) have gained in-creasing attention in representing 3D scenes and images, and have been recently applied to encode videos ( e.g., NeRV [1], E-NeRV [2]). While achieving promising results, existing INR-based methods are limited to encoding a handful of short videos ( e.g., seven 5-second videos in the UVG dataset) with redundant visual content, leading to a model design that fits individual video frames independently and is not efficiently scalable to a large number of diverse videos. This paper focuses on developing neural representations for a more practical setup – encoding long and/or a large number of videos with diverse visual content. We first show that instead of dividing videos into small subsets and encoding them with separate models, encoding long and diverse videos jointly with a unified model achieves better compression re-sults. Based on this observation, we propose D-NeRV, a novel neural representation framework designed to encode diverse videos by (i) decoupling clip-specific visual content from motion information, (ii) introducing temporal reason-ing into the implicit neural network, and (iii) employing the task-oriented flow as intermediate output to reduce spatial redundancies. Our new model largely surpasses NeRV and traditional video compression techniques on UCF101 and UVG datasets on the video compression task. Moreover, when used as an efficient data-loader, D-NeRV achieves 3%-10% higher accuracy than NeRV on action recognition tasks on the UCF101 dataset under the same compression ratios. | 1. Introduction Implicit neural representations (INR) have achieved great success in parameterizing various signals, such as 3D scenes [3 –5], images [6,7], audio [6], and videos [1,2,8 –10]. The key idea is to represent signals as a function approx-imated by a neural network, mapping a reference coordi-nate to its corresponding signal value. Recently, INR has received increasing attention in image and video compres-NeR V D-NeR VNeR V Figure 1. Comparison of D-NeRV and NeRV when representing diverse videos. NeRV optimizes representation to every video independently while D-NeRV encodes all videos by a shared model. sion tasks [1, 2, 8, 11 –15]. Compared with learning-based video compression techniques [16 –18], INR-based methods (e.g., NeRV [1]) are more favorable due to simpler training pipelines and much faster video decoding speed. While impressive progress has been made, existing INR-based methods are limited to encoding a single short video at a time. This prohibits the potential applications in most real-world scenarios, where we need to represent and compress a large number of diverse videos. A straightforward strategy for encoding diverse videos is to divide them into multiple subsets and model each of them by a separate neural network, as shown in Figure 1 (top). However, since this strategy is unable to leverage long-term redundancies across videos, it achieves inferior results compared to fitting all diverse videos with a single shared model. As shown in Figure 2, under the same compression ratio (bits per pixel), the performance of NeRV is consistently better when fitting a larger number of This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 6132 videos. 
This suggests that representing multiple videos by a single large model is generally more beneficial. However, as observed empirically, the current design of NeRV offers diminishing returns when scaling to large and diverse videos. We argue that the current coupled design of content and motion information modeling exaggerates the difficulty of memorizing diverse videos. To address this, we propose D-NeRV, a novel implicit neural representation that is specifically designed to efficiently encode long or a large number of diverse videos1. A representative overview of differences between D-NeRV and NeRV is shown in Figure 1. When representing diverse videos, NeRV encodes each video into a separate model or simply concatenates multiple videos into a long video and encodes it, while our D-NeRV can represent different videos in a single model by conditioning on key-frames from each video clip. Compared to NeRV , we have the following improvements. First, we observe that the visual content of each video of-ten represents appearance, both background and foreground, which vary significantly among different videos, while the motion information often represents the semantic structure (e.g., similar motion for the same action class) and can be shared across different videos. Therefore, we decouple each video clip into two parts: clip-specific visual content and motion information, which are modeled separately in our method. Second, motivated by the vital importance of tem-poral modeling in video-related tasks, instead of outputting each frame independently, we introduce temporal reasoning into the INR-based network by explicitly modeling global temporal dependencies across different frames. Finally, con-sidering the significant spatial redundancies in videos, rather than predicting the raw pixel values directly, we propose to predict the task-oriented flow [19 –22] as an intermedi-ate output, and use it in conjunction with the key-frames to get the final refined output. It alleviates the complexity of memorizing the same pixel value across different frames. With these improvements, our D-NeRV significantly out-performs NeRV , especially when increasing the number of videos as shown in Figure 2. To summarize, our main contri-butions are as follows: •We propose D-NeRV, a novel implicit neural represen-tation model, to represent a large and diverse set of videos as a single neural network. •We conduct extensive experiments on video recon-struction and video compression tasks. Our D-NeRV consistently outperforms state-of-the-art INR-based methods (E-NeRV [2]), traditional video compres-sion approaches (H.264 [23], HEVC [24]), and the recent learning-based video compression methods (DCVC [18]). 1“Long videos" and “a large number of videos" are viewed as inter-changeable concepts in this paper because a long video can be obtained by concatenating a collection of diverse videos. 100 600 2400 9600 Video Count2325272931PSNRD-NeRV NeRVFigure 2. Comparison of D-NeRV and NeRV with fixed compres-sion ratio on UCF101. The size of circles indicates model sizes. •We further show the advantage of D-NeRV on the ac-tion recognition task by its higher accuracy and faster decoding speed, and reveal its intriguing properties on the video inpainting task. |
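For readers unfamiliar with NeRV-style implicit video representations, the following is a toy sketch of the underlying idea this paragraph builds on: a network that maps a (normalized) frame index to an RGB frame. It is not D-NeRV; key-frame conditioning, temporal reasoning and flow prediction are omitted, and all sizes and names are illustrative.

```python
import torch
import torch.nn as nn

class TinyVideoINR(nn.Module):
    """Minimal NeRV-style implicit video representation: the normalized frame
    index t in [0, 1] is Fourier-encoded and decoded into an RGB frame."""
    def __init__(self, num_freqs=8, h=32, w=32):
        super().__init__()
        self.register_buffer("freqs", 2.0 ** torch.arange(num_freqs))
        self.h, self.w = h, w
        self.mlp = nn.Sequential(
            nn.Linear(2 * num_freqs, 256), nn.GELU(),
            nn.Linear(256, 16 * (h // 4) * (w // 4)), nn.GELU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(16, 64, 3, padding=1), nn.GELU(),
            nn.Upsample(scale_factor=4, mode="nearest"),   # h/4 x w/4 -> h x w
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, t):                                  # t: (B,) in [0, 1]
        ang = t[:, None] * self.freqs[None, :] * torch.pi
        enc = torch.cat([torch.sin(ang), torch.cos(ang)], dim=-1)
        x = self.mlp(enc).view(-1, 16, self.h // 4, self.w // 4)
        return torch.sigmoid(self.decoder(x))              # (B, 3, h, w)
```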
He_Dynamic_Focus-Aware_Positional_Queries_for_Semantic_Segmentation_CVPR_2023 | Abstract The DETR-like segmentors have underpinned the most recent breakthroughs in semantic segmentation, which end-to-end train a set of queries representing the class proto-types or target segments. Recently, masked attention [8] is proposed to restrict each query to only attend to the fore-ground regions predicted by the preceding decoder block for easier optimization. Although promising, it relies on the learnable parameterized positional queries which tend to encode the dataset statistics, leading to inaccurate local-ization for distinct individual queries. In this paper, we pro-pose a simple yet effective query design for semantic seg-mentation termed Dynamic Focus-aware Positional Queries (DFPQ), which dynamically generates positional queries conditioned on the cross-attention scores from the preced-ing decoder block and the positional encodings for the cor-responding image features, simultaneously. Therefore, our DFPQ preserves rich localization information for the tar-get segments and provides accurate and fine-grained posi-tional priors. In addition, we propose to efficiently deal with high-resolution cross-attention by only aggregating the con-textual tokens based on the low-resolution cross-attention scores to perform local relation aggregation. Extensive ex-periments on ADE20K and Cityscapes show that with the two modifications on Mask2former, our framework achieves SOTA performance and outperforms Mask2former by clear margins of 1.1%, 1.9%, and 1.1% single-scale mIoU with ResNet-50, Swin-T, and Swin-B backbones on the ADE20K validation set, respectively. Source code is available at https://github.com/ziplab/FASeg . | 1. Introduction Semantic segmentation aims at assigning each pixel in an image with a semantic class label. As the end-to-end De-tection Transfomer (DETR) [3,42,49,58] is revolutionizing the paradigm of the object detection task, recent segmen-tors [2,8,9,54] follow DETR to learn a set of queries repre-†Corresponding author. E-mail: bohan .zhuang @gmail .comsenting the class prototypes or target segments and achieve state-of-the-art performance on semantic segmentation. In DETR-like frameworks, providing the queries with meaningful positional priors and encourage each query to concentrate on specific regions is essential to learn repre-sentative queries [28, 43, 49, 58]. In this spirit, masked at-tention [8] is proposed, which restricts each query to only attend to a foreground region predicted by the previous de-coder block with binary masks. Although promising, the positional priors in masked attention may be inaccurate and deteriorate performance for two reasons. First, each query comprises a content query that contains semantic informa-tion and a positional query that provides positional infor-mation for the likely locations of the target segments. How-ever, masked attention still relies on positional queries that are randomly initialized learnable parameters [3, 40] (Fig-ure 1 (a)), which tend to encode the average statistics across the dataset and cannot reflect the segments with large lo-cation variances. Second, since each query only attends to the predicted foreground regions, inaccurate predictions lead to error accumulation across the decoder blocks, espe-cially during an early training stage. 
To this end, recent detectors propose to dynamically encode the anchor points into the positional queries to guide queries concentrating around the anchor positions [28, 30, 43] (Figure 1 (b)). The anchor-based query design mitigates the mentioned issues as the positional queries are dynamically generated for each target object, thus providing more accurate positional priors. In addition, it avoids restricting the queries to only attend to the foreground regions with binary masks to mitigate the error accumulation issue. However, the anchor-based queries cannot describe the fine-grained positional priors for semantic segmentation, which has details, edges, and boundaries [5, 6]. Motivated by the observations that attention scores imply the salient regions for token pruning [24, 26], self-supervised learning [4], and semantic segmentation [34, 56], in this paper, we propose a simple yet effective query design for semantic segmentation, dubbed Dynamic Focus-aware Positional Queries (DFPQ), which dynamically generates the positional queries conditioned on the cross-attention scores of the preceding decoder block and the positional encodings for the corresponding image features, simultaneously (Figure 1 (c)). Figure 1. (a) The original randomly initialized positional queries [3] as learnable network parameters, where the positional queries are shared among the Transformer decoder blocks and tend to encode dataset statistics modelling the likely positions for the semantic regions, which leads to inaccurate localization. (b) The anchor-based positional queries [43] are conditional on the bounding box coordinates to give each query positional priors around the anchor. However, the anchor points cannot describe semantic regions, thus still sub-optimal for semantic segmentation. (c) Our dynamic focus-aware queries for semantic segmentation are dynamically generated from the cross-attention scores of the preceding decoder block to provide accurate and fine-grained positional priors, facilitating locating and refining the target segments progressively. In this way, our DFPQ preserves the localization information of the target segments, thereby providing accurate and fine-grained positional priors and facilitating locating and refining the target segments. When implementing the positional encodings with more powerful ones like [10], our DFPQ is further empowered with higher capacity to encode the neighbourhood information for the target segments. Compared to the anchor-based positional queries [28, 43], our DFPQ can cover fine-grained locations for the segmentation details, edges, and boundaries which include rich segmentation cues. In addition, we propose an efficient method named High-Resolution Cross-Attention (HRCA) to mine details for segmenting small regions from the high-resolution feature maps (1/4 × 1/4 of the original image size).
Considering performing cross-attention on high-resolution feature maps requires a formidable amount of memory footprints and computational complexity, e.g., 11G extra floating-point operations with an input resolution of 512×512, we pro-pose to encode token affinity only on the informative areas of high-resolution feature maps that are indicated important in the low-resolution counterparts. In this way, fine-grained details are learned efficiently with affordable memory and computations. Our main contributions can be summarized as follows: • We make the pioneering attempt to present a simple yet effective query formulation for semantic segmentation, which provides accurate and fine-grained positional priors to localize the target segments, and mitigates the error accumulation problem while being lightweight with little extra computation. • We propose an efficient high-resolution cross-attention layer to enrich the segmentation details, which dis-cards the semantically unimportant regions for any tar-get segments in the high-resolution feature maps with affordable memory footprint and computational cost. • Extensive experiments on ADE20K and Cityscapes datasets demonstrate that simply incorporating our DFPQ and HRCA into Mask2former [8] achieves sig-nificant performance gain and outperforms the SOTA methods. For instance, our FASeg outperforms SOTA methods by 1.1%, 1.3%, and 0.9% single-scale mIoU on the ADE20K [57] validation set with ResNet-50, Swin-T, and Swin-B backbones, respectively. |
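A minimal sketch of the DFPQ generation step described in this introduction: positional queries are formed by aggregating the positional encodings of the image features, weighted by the cross-attention scores of the preceding decoder block. The projection and the exact way the two inputs are combined are assumptions for illustration, not the paper's verified design.

```python
import torch
import torch.nn as nn

class FocusAwarePositionalQueries(nn.Module):
    """Illustrative DFPQ-style generator."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, attn_scores, pos_enc):
        # attn_scores: (B, Q, HW) softmax-normalized cross-attention of block l-1
        # pos_enc:     (B, HW, C) positional encodings of the image features
        pos_queries = torch.bmm(attn_scores, pos_enc)   # (B, Q, C)
        return self.proj(pos_queries)

# Usage inside a decoder block (content queries come from the previous block):
# pos_q = dfpq(prev_attn, image_pos_enc)
# queries = content_q + pos_q
```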
Hsiung_Towards_Compositional_Adversarial_Robustness_Generalizing_Adversarial_Training_to_Composite_Semantic_CVPR_2023 | Abstract Model robustness against adversarial examples of single perturbation type such as the ℓp-norm has been widely stud-ied, yet its generalization to more realistic scenarios involv-ing multiple semantic perturbations and their composition remains largely unexplored. In this paper, we first propose a novel method for generating composite adversarial exam-ples. Our method can find the optimal attack composition by utilizing component-wise projected gradient descent and automatic attack-order scheduling. We then propose general-ized adversarial training (GAT ) to extend model robustness fromℓp-ball to composite semantic perturbations, such as the combination of Hue, Saturation, Brightness, Contrast, and Rotation. Results obtained using ImageNet and CIFAR-10 datasets indicate that GAT can be robust not only to all the tested types of a single attack, but also to any combination of such attacks. GAT also outperforms baseline ℓ∞-norm bounded adversarial training approaches by a significant margin. | 1. Introduction Deep neural networks have shown remarkable success in a wide variety of machine learning (ML) applications, rang-ing from biometric authentication (e.g., facial image recog-nition), medical diagnosis (e.g., CT lung cancer detection) to autonomous driving systems (traffic sign classification), etc. However, while these models can achieve outstand-ing performance on benign data points, recent research has shown that state-of-the-art models can be easily fooled by malicious data points crafted intentionally with adversarial perturbations [37]. To date, the most effective defense mechanism is to incor-porate adversarial examples during model training, known as adversarial training (AT) [21, 48]. Nonetheless, current adversarial training approaches primarily only consider a single perturbation type (or threat model) quantified in a spe-cific distance metric (e.g., ℓp-ball). In this regard, the lackof exploration of the compositional adversarial robustness against a combination of several threat models could lead to impractical conclusions and undesirable bias in robustness evaluation. For example, a model that is robust to pertur-bations within ℓp-ball does not imply it can simultaneously be robust to other realistic semantic perturbations (e.g., hue, saturation, rotation, brightness, and contrast). To tackle this issue, in this paper, we propose generalized adversarial training (GAT) , which can harden against a wide range of threat models, from single ℓ∞-norm or se-mantic perturbation to a combination of them. Notably, extending standard adversarial training to composite adver-sarial perturbations is a challenging and non-trivial task, as each perturbation type is sequentially applied, and thus the attack order will affect the effectiveness of the composite adversarial example. To bridge this gap, we propose an effi-cient attack order scheduling algorithm to learn the optimal ordering of various perturbation types, which will then be incorporated into the GAT framework. Different from existing works, this paper aims to address the following fundamental questions: (a) How to generalize adversarial training from a single threat model to multiple? (b) How to optimize the perturbation order from a set of semantic and ℓp-norm perturbations? (c) Can GAT outper-form other adversarial training baselines against composite perturbations? 
Our main contributions in this paper provide affirmative answers to the questions: 1.We propose composite adversarial attack (CAA), a novel and unified approach to generate adversarial examples across from multiple perturbation types with attack-order-scheduling, including semantic perturbations ( Hue, Satu-ration, Contrast, Brightness and Contrast ) and ℓp-norm space. To the best of our knowledge, this paper is the first work that leverages a scheduling algorithm for finding the optimal attack order in composite adversarial attacks. |
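To illustrate the component-wise projected gradient scheme that composite adversarial attacks build on, the sketch below optimizes a scheduled sequence of semantic perturbation parameters by gradient ascent and projects each back to its bound. Only brightness and contrast components are shown; hue, saturation, rotation, the l_inf component, and the automatic order scheduling itself are omitted, and all function names and bounds are illustrative assumptions rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F

def brightness(x, b):   # b in [-eps_b, eps_b]
    return torch.clamp(x + b, 0.0, 1.0)

def contrast(x, c):     # c in [1 - eps_c, 1 + eps_c]
    mean = x.mean(dim=(1, 2, 3), keepdim=True)
    return torch.clamp((x - mean) * c + mean, 0.0, 1.0)

def composite_attack(model, x, y, order, bounds, steps=5, lr=0.05):
    """Component-wise projected gradient ascent over a scheduled attack order."""
    ops = {"brightness": brightness, "contrast": contrast}
    init = {"brightness": 0.0, "contrast": 1.0}
    params = {k: torch.full((x.size(0), 1, 1, 1), init[k], device=x.device)
              for k in order}
    for _ in range(steps):
        for k in order:                       # attack order given by the scheduler
            p = params[k].clone().requires_grad_(True)
            x_adv = x
            for j in order:                   # re-apply the whole composition
                x_adv = ops[j](x_adv, p if j == k else params[j])
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, p)[0]
            lo, hi = bounds[k]                # project back to the component bound
            params[k] = torch.clamp(params[k] + lr * grad.sign(), lo, hi)
    x_adv = x
    for j in order:
        x_adv = ops[j](x_adv, params[j])
    return x_adv.detach()
```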
Jiang_Understanding_and_Constructing_Latent_Modality_Structures_in_Multi-Modal_Representation_Learning_CVPR_2023 | Abstract Contrastive loss has been increasingly used in learning representations from multiple modalities. In the limit, the nature of the contrastive loss encourages modalities to exactly match each other in the latent space. Yet it remains an open question how the modality alignment affects the downstream task performance. In this paper, based on an information-theoretic argument, we first prove that exact modality alignment is sub-optimal in general for downstream prediction tasks. Hence we advocate that the key of better performance lies in meaningful latent modality structures instead of perfect modality alignment. To this end, we propose three general approaches to construct latent modality structures. Specifically, we design 1) a deep feature separation loss for intra-modality regularization; 2) a Brownian-bridge loss for inter-modality regularization; and 3) a geometric consistency loss for both intra- and inter-modality regularization. Extensive experiments are conducted on two popular multi-modal representation learning frameworks: the CLIP-based two-tower model and the ALBEF-based fusion model. We test our model on a variety of tasks including zero/few-shot image classification, image-text retrieval, visual question answering, visual reasoning, and visual entailment. Our method achieves consistent improvements over existing methods, demonstrating the effectiveness and generalizability of our proposed approach on latent modality structure regularization. | 1. Introduction Vision-language representation learning aims to learn generic representations from images and texts that could benefit multimodal downstream applications. As the two modalities are essentially from different data sources and distributions, how to effectively fuse the two modalities has become an important question. Figure 1. Constructing latent modality structures to improve multi-modal representation learning. Some work aims to unify the representations of two modalities in one encoder, where the image and text are usually tokenized into sequences [60, 61, 65, 66]. Another line of research represents the image and text modality separately with modality-specific encoders and utilizes contrastive learning to align the modalities, achieving state-of-the-art performance on multiple downstream applications [13, 26, 31, 32, 41, 49, 53, 54, 70]. Despite the successful empirical practice of contrastive loss in multi-modal representation learning, it remains an open question whether bridging and aligning the two modalities always brings benefits to downstream tasks. One concept closely related to this question is the modality gap [35, 49, 68, 72], where it is defined as the distance between the feature distributions of the two modalities. Modality alignment can be considered as reducing the modality gap. At a first glance, one would conjecture that contrastive loss would reduce the modality gap by pulling positive (paired) image and text data together for better representation. However, a recent study [35] shows evidence that contrastive learning does not always reduce the modality gap.
Furthermore, we also show in our empirical anal-ysis that a reduced modality gap does not always guaran-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 7661 tee better performance in downstream applications. Moti-vated by these empirical observations, in this paper we first theoretically study the modality gap problem, by showing that when the modality gap is zero, i.e., exact alignment be-tween the two modalities, the learned representations nec-essarily have to pay a price for the downstream prediction task, which we term as the information gap between the two modalities (Theorem 3.1). Intuitively, this is because that representations with zero modality gap can only preserve predictive information present in both of the modalities at the cost of losing the modality-specific information. Our theory then suggests that instead of exact modality matching, whether learned representations are meaningful is an important factor in multi-modal representation learn-ing. In particular, we propose to improve on top of con-trastive learning with regularizations to construct better la-tent structures. We consider intra-modality, inter-modality, and intra-inter-modality regularizations. These regulariza-tions are generalizable and can be applied to various vision-language models with modality-specific encoders. Specif-ically, for intra-modality regularization, motivated by our theoretic result, we propose deep feature separation to en-courage the model to preserve both the modality-shared and modality-specific information in different components. For inter-modality regularization, we aim to bridge two modalities with their augmentations. Consequently, we pro-posed a Brownian bridge loss between the triplet of (text, augmented image, image) to regularize the inter-modality structures. For intra-inter-modality regularization, we in-troduce the geometric consistency loss that promotes geo-metric symmetry in the latent space. In summary, the main contributions of this paper are: •We conduct empirical and theoretical analysis on un-derstanding the impact of the modality alignment on downstream tasks. We show that a reduced modal-ity gap does not always guarantee better performance, and can instead hurt the performance when the infor-mation gap between the two modalities is large (The-orem 3.1). Combined with the existing theory of con-trastive learning, our theory suggests preserving both modality-shared and modality-specific information. •Inspired by our theory, we propose three instrumental regularizations on top of the contrastive loss, i.e., the intra-modality, inter-modality, and intra-inter-modality regularizations to improve latent modality structures. •We conduct extensive and comprehensive experiments on various vision-language models to show that the proposed methods consistently improve over the base-lines for different model families ( e.g., CLIP and AL-BEF) and for different downstream applications ( e.g., cross-modality retrieval, VQA, VR and etc).2. Related work Most recent works on vision-language representation learning can be categorized based on how information from different modalities is used for joint learning. The first category applies unified models [ 60,61,65,66] to process both images and texts, where the inputs are usually tok-enized into sequences [ 2,48]. 
Unified models feature sim-pler and more universal designs, but typically underperform methods with modality-specific encoders (the second cat-egory). These methods use separate encoders for images and texts ( e.g. CLIP [ 41,49,53], ALIGN [ 26]), and rely on contrastive loss [ 6,21,45] to align multiple modalities. These methods have been shown to achieve state-of-the-art (SOTA) performance on image-text retrieval; but the support is lacking for multi-modality tasks requiring inter-modality interaction, e.g. VQA. To conquer this problem, most recent approaches use a hybrid fashion where the mod-els have separate encoders for images and texts along with a late-fusion multi-modal encoder [ 13,31,32,54,70]. Specifi-cally, image-text matching (ITM) loss and masked language modeling (MLM) loss are usually applied for training the fusion encoder. The methods in the later category utilize separate en-coders for different modalities. However, this can lead to the phenomenon that image embeddings and text embed-dings reside in different regions of the joint latent space. Such a phenomenon, termed modality gap , is observed in many multi-modal models [ 49,68,72]. A recent study [ 35] shows that the modality gap presents from the initializa-tion and can be preserved during contrastive training. This naturally brings in another variety in multi-modality mod-els – the latent modality gap and modality structures. Cy-CLIP [ 18] advocates for the benefit of consistency in latent modality structures. Other works [ 20,58,69] investigate the modality-specific and modality-shared information. Yet to the best of our knowledge, no other prior work has studied the modality gap from a theoretical view. In this work, we show that directly reducing the modality gap does not help in performance gain from both empirical experiments and theoretical analysis. Consequently, we propose to study the impact of latent modality structures, and propose three ap-proaches to obtain more meaningful latent modality struc-tures that can improve downstream applications. 3. Understanding the Impact of Modality Gap on Downstream Performance Despite being used extensively as a heuristic in prac-tice [ 35,68,70,72], it remains an open question whether modality alignment in the feature space through contrastive learning is optimal for downstream performance [ 35]. In this section, we first formally formulate the modality gap problem, present our empirical evidence on the relationship 7662 between the modality gap and the performance of down-stream tasks, and then probe into its theoretical underpin-ning by providing an information-theoretical analysis. Notation Throughout the paper, we will use XTandXV to denote the random variables corresponding to the input texts and images, respectively. We shall use Yto denote the target variable in the downstream task of interest. For ex-ample, in the context of online shopping, XTandXVcould be the textual and visual descriptions of a product, and in this case Yis the expected sale of this product. When deal-ing with data with multi-modalities, we often use modality-specific encoder gTandgVto obtain features in the same latent space, i.e., ZT=gT(XT)andZV=gV(XV)are the extracted features from textual and visual inputs. In this work, we focus on the setting where inputs from different modalities are paired with each other, meaning that a sam-ple consists of the tuple (xT,xV,y)from the underlying joint distribution p. 
The goal of reducing the modality gap in the latent space is then to shrink the statistical distance (e.g., KL-divergence, etc.) between $Z_T$ and $Z_V$. For two random variables $X_T$ and $X_V$, we define $I(X_T; X_V)$ to be the Shannon mutual information between $X_T$ and $X_V$. Similarly, we use $H(Y \mid X_T, X_V)$ to denote the conditional entropy of $Y$ given the two modalities as input. Following common practice, for classification tasks, $\ell_{CE}(\hat{y}, y)$ is the cross-entropy loss between the prediction $\hat{y}$ and the ground-truth label $y$. One useful fact about the conditional entropy $H(Y \mid X_T, X_V)$ and the cross-entropy loss is the following variational form [14, 73]: $H(Y \mid X_T, X_V) = \inf_f \mathbb{E}_p[\ell_{CE}(f(X_T, X_V), Y)]$, where the infimum is over all the prediction functions that take both $X_T$ and $X_V$ as input to predict the target $Y$ and the expectation is taken over the joint distribution $p$ of $(X_T, X_V, Y)$. 3.1. Empirical Analysis on Modality Gap Given paired multi-modal data, one natural idea explored in the literature [35, 70, 72] is to use contrastive pretraining by treating paired multimodal data as the positive pairs and others as negative pairs. The goal is to align the positive pairs so that they are closer to each other in the feature space while at the same time ensuring the negative pairs to be farther away. More specifically, let $(x_T, x_V, y)$ and $(x'_T, x'_V, y')$ be two tuples sampled from the joint distribution. Then, in order to align the two modalities, $(x_T, x_V)$ and $(x'_T, x'_V)$ are used as positive pairs while $(x_T, x'_V)$ and $(x'_T, x_V)$ are constructed as negative pairs. Based on the contrastive loss principle [63, Theorem 1], a better model should come with smaller modality gaps (better alignment). However, despite being extensively used as a pretraining strategy in practice, it is unclear how the modality alignment affects the downstream tasks of interest. To approach this important question, we first conduct experiments to explore the effect of reducing modality gap on the task of image/text retrieval. Figure 2. Visualization of the modality gap between text and image features. There is no clear-cut relationship between the gap of these two modalities and the downstream retrieval performance. We plot the alignment between paired image/text data in the feature space and also compute the average distance between them as the gap measure in Fig. 2. We perform pre-training on the COCO [36] dataset and evaluate the zero-shot retrieval performance on the Flickr30K [71] test set. We optimize an additional alignment loss during training, $\mathcal{L}_{Align} = 1/\langle Z_T, Z_V \rangle^2$, to reduce the gap between modalities. We control the gap by adjusting the scale of $\mathcal{L}_{Align}$ with $\{1, 0.5, 0\}$. From Fig. 2, we can see that the retrieval performance barely changes when changing the gap between two modalities. Note that as we normalized the data in the feature space, the gap difference in the figure is significant. 3.2. |
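The gap measure and alignment loss used in the experiment above can be written in a few lines. The sketch below assumes L2-normalized paired features; the epsilon term is added only for numerical stability and is not part of the paper's formulation.

```python
import torch
import torch.nn.functional as F

def modality_gap(z_t, z_v):
    """Average distance between paired (normalized) text and image features,
    used here as a simple gap measure."""
    z_t = F.normalize(z_t, dim=-1)
    z_v = F.normalize(z_v, dim=-1)
    return (z_t - z_v).norm(dim=-1).mean()

def alignment_loss(z_t, z_v, scale=1.0, eps=1e-6):
    """L_Align = 1 / <Z_T, Z_V>^2 on normalized paired features, scaled by
    {1, 0.5, 0} in the experiment to control the gap."""
    z_t = F.normalize(z_t, dim=-1)
    z_v = F.normalize(z_v, dim=-1)
    cos = (z_t * z_v).sum(dim=-1)
    return scale * (1.0 / (cos.pow(2) + eps)).mean()
```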
Ji_Ultra-High_Resolution_Segmentation_With_Ultra-Rich_Context_A_Novel_Benchmark_CVPR_2023 | Abstract With the increasing interest and rapid development of methods for Ultra-High Resolution (UHR) segmentation, a large-scale benchmark covering a wide range of scenes with full fine-grained dense annotations is urgently needed to facilitate the field. To this end, the URUR dataset is introduced, in the meaning of Ultra-High Resolution dataset with Ultra-Rich Context. As the name suggests, URUR contains amounts of images with high enough res-olution (3,008 images of size 5,120 ×5,120), a wide range of complex scenes (from 63 cities), rich-enough context (1 million instances with 8 categories) and fine-grained annotations (about 80 billion manually annotated pixels), which is far superior to all the existing UHR datasets including DeepGlobe, Inria Aerial, UDD, etc.. More-over, we also propose WSDNet, a more efficient and ef-fective framework for UHR segmentation especially with ultra-rich context. Specifically, multi-level Discrete Wavelet Transform (DWT) is naturally integrated to release com-putation burden while preserve more spatial details, along with a Wavelet Smooth Loss (WSL) to reconstruct orig-inal structured context and texture with a smooth con-strain. Experiments on several UHR datasets demonstrate its state-of-the-art performance. The dataset is available at https://github.com/jankyee/URUR. | 1. Introduction Benefited from the advancement of photography and sensor technologies, the accessibility and analysis of ultra-high resolution (UHR) images has opened new horizons for the computer vision community, playing an increasingly important role in a wide range of applications, including but *Corresponding Authors.not limited to disaster control, environmental monitoring, land resource protection and urban planning. The focus of this paper is on semantic segmentation for UHR images. The most commonly-used datasets in existing UHR seg-mentation methods include DeepGlobe [4], Inria Aerial [8] and Citysacpes [3]. According the definition of UHR me-dias [9,10], an image with at least 2048 ×1080 (2.2M) pixels are regarded as 2K high resolution media. An image with at least 3,840 ×1,080 (4.1M) pixels reaches the bare mini-mum bar of 4K resolution, and 4K ultra-high definition me-dia usually refers to a minimum resolution of 3,840 ×2,160 (8.3M). However, except for Inria Aeral which reaches to 5,000 ×5,000 pixels, the average resolution of all other two datasets are below 2,500 ×2,500 (6.2M), thus actually they are not strictly UHR medias. Besides, DeepGlobe also adopts coarse annotations that result in numbers of noises. Although the utra-high resolution, Inria Aerial con-tains only 180 images in limited scenes, and only anno-tates one category of building, which is not sufficient to fully verify the performance of UHR segmentation methods and limits the development of the community. Therefore, a novel large-scale benchmark dataset covering a wide range of scenes with full fine-grained dense annotations is ur-gently needed to facilitate the field. To this end, the URUR dataset is proposed in the paper, in this meaning of Ultra-High Resolution dataset with Ultra-Rich Context. Firstly for the resolution, URUR contains 3,008 UHR images of size 5,120 ×5,120 (up to 26M), coming from a wide range of complex scenes in 63 cities. 
For annotations, there are 80 billion manually annotated pixels, including 2 million fine-grained instances with 8 categories, which is of ultra-high context and far superior to all the existing UHR datasets. Visualization samples and detailed statistics are revealed in Figure 1 and Section 3. Figure 1. The comparison between natural datasets (Pascal VOC [1], COCO [2], Cityscapes [3]) and representative UHR datasets (DeepGlobe [4], ISIC [5], UDD6 [6], UAVid [7], Inria Aerial [8] and URUR). As shown, UHR images (from b to g) cover a larger field of view and contain more regions with very large contrast in both scale and shape than natural images (a). Existing UHR datasets either adopt coarse annotations (b, d, e) or only annotate one category (c, f). The proposed URUR dataset (h) utilizes fine-grained dense annotations for the whole 8 categories. In order to balance the memory occupation and accuracy when the image resolution grows to ultra-high, earlier works for UHR segmentation utilize a two-branch global-local collaborative network to preserve both global and local information, taking the globally down-sampled image and locally cropped patches as inputs respectively. The representative works include GLNet [10] and FCtL [11]. However, this type of framework requires multiple predictions on the patches, thus the overall inference speed is very slow. To further achieve a better balance among accuracy, memory and inference speed, ISDNet [12] is proposed to integrate shallow and deep networks for efficient segmentation. The shallow branch has fewer layers and faster inference speed, and its input does not need any downsampling or cropping. For the deep branch, the input image is directly down-sampled to ensure high inference speed. Then a heavy relation-aware feature (RAF) module is utilized to exploit the relationship between the shallow and deep features. In this paper, we propose WSDNet, the evolution of ISDNet, to formulate a more efficient and effective framework for UHR segmentation. Specifically, multi-level Discrete Wavelet Transform (DWT) and Inverse Discrete Wavelet Transform (IWT) are naturally integrated to release computation burden while preserving more spatial details in the deep branch, thus RAF can be removed for higher inference speed. The Wavelet Smooth Loss (WSL) is also designed to reconstruct original structured context and texture distribution with the smooth constraint in the frequency domain. Overall, the contributions of this paper are summarized as follows: • We introduce the URUR dataset, a novel large-scale dataset covering a wide range of scenes with full fine-grained dense annotations, which is superior to all the existing UHR datasets to our knowledge. • WSDNet is proposed to preserve more spatial details with multi-level DWT-IWT, and a Wavelet Smooth Loss is presented to reconstruct original structured context and texture distribution with the smooth constraint in the frequency domain. • Statistics and experiments demonstrate the superiority of URUR and WSDNet. WSDNet achieves state-of-the-art balance among accuracy, memory and inference speed on several UHR datasets.
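As a concrete reference for the DWT downsampling that WSDNet relies on, here is a single-level 2D Haar transform: it halves the spatial resolution while keeping the high-frequency detail in separate subbands instead of discarding it. Sign conventions differ across wavelet libraries; this is a generic sketch, not WSDNet's implementation.

```python
import torch

def haar_dwt(x):
    """Single-level 2D Haar DWT. x: (B, C, H, W) with even H, W.
    Returns the low-frequency band LL and the detail bands LH, HL, HH,
    each at half resolution."""
    a = x[:, :, 0::2, 0::2]   # top-left of each 2x2 block
    b = x[:, :, 0::2, 1::2]   # top-right
    c = x[:, :, 1::2, 0::2]   # bottom-left
    d = x[:, :, 1::2, 1::2]   # bottom-right
    ll = (a + b + c + d) / 2
    lh = (-a - b + c + d) / 2
    hl = (-a + b - c + d) / 2
    hh = (a - b - c + d) / 2
    return ll, lh, hl, hh

# Multi-level DWT: apply haar_dwt repeatedly on the LL band.
# x = torch.rand(1, 3, 512, 512)
# ll, lh, hl, hh = haar_dwt(x)          # each (1, 3, 256, 256)
# ll2, lh2, hl2, hh2 = haar_dwt(ll)     # each (1, 3, 128, 128)
```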
Jain_DART_Diversify-Aggregate-Repeat_Training_Improves_Generalization_of_Neural_Networks_CVPR_2023 | Abstract Generalization of Neural Networks is crucial for deploy-ing them safely in the real world. Common training strate-gies to improve generalization involve the use of data aug-mentations, ensembling and model averaging. In this work, we first establish a surprisingly simple but strong bench-mark for generalization which utilizes diverse augmenta-tions within a training minibatch, and show that this can learn a more balanced distribution of features. Further, we propose Diversify-Aggregate-Repeat Training (DART) strategy that first trains diverse models using different aug-mentations (or domains) to explore the loss basin, and fur-ther Aggregates their weights to combine their expertise and obtain improved generalization. We find that Repeating the step of Aggregation throughout training improves the over-all optimization trajectory and also ensures that the indi-vidual models have sufficiently low loss barrier to obtain improved generalization on combining them. We theoret-ically justify the proposed approach and show that it in-deed generalizes better. In addition to improvements in In-Domain generalization, we demonstrate SOTA performance on the Domain Generalization benchmarks in the popular DomainBed framework as well. Our method is generic and can easily be integrated with several base training algo-rithms to achieve performance gains. Our code is available here: https://github.com/val-iisc/DART . | 1. Introduction Deep Neural Networks have outperformed classical methods in several fields and applications owing to their re-markable generalization. Classical Machine Learning the-ory assumes that test data is sampled from the same distri-bution as train data. This is referred to as the problem of In-Domain (ID) generalization [15, 18, 29, 32, 48], where *Equal Contribution.∓Equal contribution second authors. Correspon-dence to Samyak Jain <samyakjain.cse18@itbhu.ac.in >, Sravanti Adde-palli<sravantia@iisc.ac.in >.⋄Indian Institute of Technology, Varanasi §Indian Institute of Technology, Dhanbad.‡Work done during internship at Vision and AI Lab, Indian Institute of Science, Bangalore.the goal of the model is to generalize to samples within same domain as the train dataset. This is often considered to be one of the most important requirements and criteria to evaluate models. However, in several cases, the test dis-tribution may be different from the train distribution. For example, surveillance systems are expected to work well at all times of the day, under different lighting conditions and when there are occlusions, although it may not be possible to train models using data from all these distributions. It is thus crucial to train models that are robust to distribution shifts, i.e., with better Out-of-Domain (OOD) Generaliza-tion [25]. In this work, we consider the problems of In-Domain generalization and Out-of-Domain Generalization of Deep Networks. For the latter, we consider the popu-lar setting of Domain Generalization [4, 23, 41], where the training data is composed of several source domains and the goal is to generalize to an unseen target domain. The problem of generalization is closely related to the Simplicity Bias of Neural Networks, due to which models have a tendency to rely on simpler features that are often spurious correlations to the labels, when compared to the harder robust features [55]. 
For example, models tend to rely on weak features such as background, rather than more robust features such as shape, causing a drop in object classification accuracy when background changes [22, 72]. A common strategy to alleviate this is to use data augmentations [8-10, 27, 42, 53, 75, 77] or data from several domains during training [23], which can result in invariance to several spurious correlations, improving the generalization of models. Shen et al. [57] show that data augmentations enable the model to give higher importance to harder-to-learn robust features by delaying the learning of spurious features. We extend their observation by showing that training on a combination of several augmentation strategies (which we refer to as Mixed augmentation) can result in the learning of a balanced distribution of diverse features. Using this, we obtain a strong benchmark for ID generalization as shown in Table-1. However, as shown in prior works [1], the impact of augmentations in training is limited by the capacity of the network in being able to generalize well to the diverse augmented data distribution.
Table 1. Motivation: Performance (%) on CIFAR100, ResNet-18 with ERM training for 200 epochs. Mixed-Training (MT) outperforms individual augmentations, and ensembles perform best. Rows are train augmentations, columns are test augmentations.
Train Augmentation        No Aug.   Cutout   Cutmix   AutoAugment
Pad+Crop+HFlip (PC)       78.51     67.04    56.52    58.33
Cutout (CO)               77.99     74.58    56.12    58.47
Cutmix (CM)               80.54     74.05    77.35    61.23
AutoAugment (AA)          79.18     71.26    60.97    73.91
Mixed-Training (MT)       81.43     77.31    73.20    74.73
Ensemble (CM+CO+AA)       83.61     79.19    73.19    73.90
Therefore, increasing the diversity of training data demands the use of larger model capacities to achieve optimal performance. This demand for higher model capacity can be mitigated by training specialists on each kind of augmentation and ensembling their outputs [11, 38, 59, 79], which results in improved performance as shown in Table-1. Another generic strategy that is known to improve generalization is model-weight averaging [31, 70, 71], which results in a flatter minima. In this work, we aim to combine the benefits of the three strategies discussed above - diversification, specialization and model weight averaging, while also overcoming their individual shortcomings. We propose a Diversify-Aggregate-Repeat Training strategy dubbed DART (Fig. 1), that first trains M diverse models after a few epochs of common training, and then Aggregates their weights to obtain a single generalized solution. The aggregated model is then used to reinitialize the M models which are further trained post aggregation. This process is Repeated over training to obtain improved generalization. The Diversify step allows models to explore the loss basin and specialize on a fixed set of features. The Aggregate (or Model Interpolation) step robustly combines these models, increasing the diversity of represented features while also suppressing spurious correlations. Repeating the Diversify-Aggregate steps over training ensures that the M diverse models remain in the same basin, thereby permitting a fruitful combination of their weights.
We justify our approach theoretically and empirically, and show that intermediate model aggregation also increases the learning time for spurious features, improving generalization. We present our key contributions below:
• We present a strong baseline termed Mixed-Training (MT) that uses a combination of diverse augmentations for different images in a training minibatch.
• We propose a novel algorithm DART, that learns specialized diverse models and aggregates their weights iteratively to improve generalization.
• We justify our method theoretically, and empirically on several In-Domain (CIFAR-10, CIFAR-100, ImageNet) and Domain Generalization (OfficeHome, PACS, VLCS, TerraIncognita, DomainNet) datasets.
Figure 1. Schematic Diagram of the proposed method DART (panels: Step-1: ERM training; Step-2: Diversify, train M models; Step-3: Aggregate M models to one; Step-4: Repeat). |
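As an illustration of the Diversify-Aggregate-Repeat loop described above, the following minimal PyTorch-style sketch trains M branches on different augmentations and periodically averages their weights. The helper names (make_model, train_one_epoch, augmentations) and the aggregation interval are placeholders for illustration, not the authors' released code.

import copy
import torch

def average_state_dicts(state_dicts):
    # Uniformly average the parameters of M models (the Aggregate step).
    avg = copy.deepcopy(state_dicts[0])
    for key in avg.keys():
        if avg[key].is_floating_point():
            avg[key] = torch.stack([sd[key].float() for sd in state_dicts], dim=0).mean(dim=0)
    return avg

def dart_training(make_model, augmentations, train_one_epoch,
                  common_epochs=10, total_epochs=200, aggregate_every=40):
    # Step 1: a few epochs of common (ERM) training on a single model.
    model = make_model()
    for _ in range(common_epochs):
        train_one_epoch(model, augmentation=None)

    # Step 2: Diversify -- branch into M models, one per augmentation.
    models = [copy.deepcopy(model) for _ in augmentations]
    for epoch in range(common_epochs, total_epochs):
        for m, aug in zip(models, augmentations):
            train_one_epoch(m, augmentation=aug)

        # Step 3: Aggregate -- average the branch weights and re-initialize every branch.
        if (epoch + 1) % aggregate_every == 0:
            avg = average_state_dicts([m.state_dict() for m in models])
            for m in models:
                m.load_state_dict(avg)

    # Step 4 (Repeat) happens inside the loop above; any branch now holds the
    # single aggregated, generalized solution.
    return models[0]

Uniform weight averaging is used here for simplicity; the property the method relies on is that the periodically re-synchronized branches remain in the same loss basin, so their weights can be combined meaningfully.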
Dessi_Cross-Domain_Image_Captioning_With_Discriminative_Finetuning_CVPR_2023 | Abstract Neural captioners are typically trained to mimic human-generated references without optimizing for any specific communication goal, leading to problems such as the gen-eration of vague captions. In this paper, we show that fine-tuning an out-of-the-box neural captioner with a self-supervised discriminative communication objective helps to recover a plain, visually descriptive language that is more informative about image contents. Given a target image, the system must learn to produce a description that enables an out-of-the-box text-conditioned image retriever to iden-tify such image among a set of candidates. We experiment with the popular ClipCap captioner, also replicating the main results with BLIP . In terms of similarity to ground-truth human descriptions, the captions emerging from dis-criminative finetuning lag slightly behind those generated by the non-finetuned model, when the latter is trained and tested on the same caption dataset. However, when the model is used without further tuning to generate captions for out-of-domain datasets, our discriminatively-finetuned captioner generates descriptions that resemble human ref-erences more than those produced by the same captioner wihtout finetuning. We further show that, on the Concep-tual Captions dataset, discriminatively finetuned captions are more helpful than either vanilla ClipCap captions or ground-truth captions for human annotators tasked with an image discrimination task.1 | 1. Introduction The last decade has seen impressive progress on the task of automatically generating image descriptions with deep neural networks [ 5,39,44,46]. Most of the proposed meth-1Our code is available at https : / / github . com / facebookresearch / EGG / tree / main / egg / zoo / discriminative_captioner . Figure 1. Setup of our discriminative finetuning method when ap-plied to the ClipCap captioner [ 27]. All CLIP encoders are frozen, while the language generation modules (mapper and GPT-2) are updated based on reward values. ods try to optimize the similarity of system-produced cap-tions with ground-truth human references, either through a standard cross-entropy cost function [ 3], or by maximizing natural-language-generation (NLG) metrics such as CIDEr [31,36] through a reward-based objective. While imitating human captions is a reasonable goal, it does not take into account that, in concrete applications, an image description is produced for a purpose [ 15,21]. There are a multitude of context-dependent purposes a description might be produced for, but a fundamental one is to correctly characterize an object so that a hearer could discriminate it from other contextual elements [ 18]. This ability to discriminate between referents is a core purpose of communication, playing a fundamental role in its evo-lution and acquisition (e.g., [ 6,38]). We study here what happens when we take an out-of-the-box image captioner that was trained to imitate human captions, and finetune its language components with a discriminative objective using This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 6935 reinforcement learning. In particular, we let the captioner play a discrimination game with an out-of-the-box caption-based image retriever. 
The captioner generates a caption given a target image, and the retriever (whose weights are not updated) uses the caption to select the target among a set of candidates, as shown in Fig. 1. This finetuning technique does not require annotated data (only a set of images), and it’s agnostic to the underlying captioner and retriever com-ponents. We report two strong and novel results. First, we show that captions finetuned in this way lead to better 0-shot cross-domain caption generation.2Second, not only are the finetuned captions good for neural text-based image retrieval (both in-and across-domain), but they can also be more useful to human annotators, helping discrimi-nate the target from distractors more than human-generated ground-truth captions do . We conclude the paper with an analysis of the finetuned captions, comparing them to human-generated and non-finetuned ones from the Concep-tual Captions dataset. We find that discriminative finetuning undoes the more abstract language that the underlying sys-tem had learned from the ground-truth captions, leading to a more plainly descriptive style that we expect to be more useful in practical applications. |
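A minimal sketch of the reward signal in this discrimination game, assuming a frozen CLIP-style retriever that embeds the generated caption and the candidate images; the function names and the REINFORCE-style update are illustrative assumptions rather than the exact ClipCap/EGG training code.

import torch
import torch.nn.functional as F

def discrimination_reward(caption_emb, image_embs, target_idx):
    # caption_emb: (D,) embedding of the generated caption from a frozen text encoder.
    # image_embs: (K, D) embeddings of the target plus distractor images.
    # The frozen retriever scores each candidate by cosine similarity.
    sims = F.cosine_similarity(caption_emb.unsqueeze(0), image_embs, dim=-1)
    probs = sims.softmax(dim=0)
    # Reward: probability the retriever assigns to the correct image.
    return probs[target_idx].detach()

def reinforce_step(log_probs, reward, baseline=0.0):
    # log_probs: (T,) log-probabilities of the sampled caption tokens; only the
    # captioner's language components receive gradients, the retriever stays frozen.
    advantage = reward - baseline
    loss = -(advantage * log_probs.sum())
    return loss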
Du_Efficient_Mask_Correction_for_Click-Based_Interactive_Image_Segmentation_CVPR_2023 | Abstract The goal of click-based interactive image segmentation is to extract target masks with the input of positive/negative clicks. Every time a new click is placed, existing methods run the whole segmentation network to obtain a corrected mask, which is inefficient since several clicks may be needed to reach satisfactory accuracy. To this end, we propose an efficient method to correct the mask with a lightweight mask correction network. The whole network remains a low computational cost from the second click, even if we have a large backbone. However, a simple correction net-work with limited capacity is not likely to achieve com-parable performance with a classic segmentation network. Thus, we propose a click-guided self-attention module and a click-guided correlation module to effectively exploits the click information to boost performance. First, several tem-plates are selected based on the semantic similarity with click features. Then the self-attention module propagates the template information to other pixels, while the correla-tion module directly uses the templates to obtain target out-lines. With the efficient architecture and two click-guided modules, our method shows preferable performance and ef-ficiency compared to existing methods. The code will be released at https://github.com/feiaxyt/EMC-Click . | 1. Introduction Interactive image segmentation aims to select the ob-ject of interest with minimal iterative interactions, which can benefit various computer vision tasks, such as semantic segmentation [30], instance segmentation [17], and medi-cal image analysis [28]. As the success of these tasks often requires large-scale mask-level annotations and it is time-consuming to annotate the image manually, interactive seg-mentation is an attractive way to simplify the annotation process and alleviate the annotation cost. Different interaction strategies have been studied to sim-plify the interactive process, including bounding boxes [23, 37, 48], clicks [33, 40, 47], scribbles [1, 2], boundary Figure 1. Comparison between the architectures of our method and classic methods. Classic methods run the segmentation network in every iteration, which is inefficient if a large network is adopted. We update the mask via a lightweight mask correction network, enabling an efficient interaction. points [32], and extreme points [34]. However, due to the complexity of the object boundary and appearance, the per-formance of methods based on bounding boxes may drop if the bounding boxes are not tightly drawn [48]. Besides, it requires more effort to identify the object boundary and place bounding boxes, boundary points, or extreme points on the image. In contrast, the click-based method only re-quires the users to progressively place positive and nega-tive clicks on the foreground and background areas, respec-tively. It has recently attracted more attention due to its simplicity and well-established training and evaluation pro-tocols [40,47]. Hence, we focus on the click-based method. In click-based interactive segmentation methods, the model returns a corrected prediction mask after each click from the users. Efficiency is of great importance to in-teractive segmentation methods since a typical segmenta-tion process requires several interactive iterations. In re-cent years, deep learning-based interactive segmentation methods have achieved considerably better performance compared to traditional methods. 
However, some recent works [20, 22, 39] achieve state-of-the-art performance by employing inference-time optimization to refine the masks, which significantly increases the computational cost. To This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 22773 eliminate online optimization, RITM [40] investigates dif-ferent designs for interactive segmentation and achieves state-of-the-art results with a classic segmentation network. To improve efficiency, FocalClick [8] proposes target crop and focus crop strategies to segment two selected local re-gions efficiently. Most click-based methods [7,8,27,31,40, 47] iteratively run the whole segmentation network to up-date the masks with the input of user clicks, which is time-consuming especially when we have a strong segmentation network. Majumder et al. [32] proposes to refine the mask with a lightweight refinement network. However, the net-work requires the user to place boundary points, which is not user-friendly. Therefore, more efficient click-based in-teractive segmentation methods are still required to reach a better trade-off between performance and efficiency. With the above consideration, in this work, we develop an efficient click-based interactive segmentation model based on an Efficient Mask Correction network (EMC-Click). The model consists of a base segmentation net-work and a lightweight mask correction network. The base network takes the first click as input and outputs target-aware features and a coarse prediction. The mask correction network iteratively updates the prediction whenever a new click is placed. The difference between our network and the classic network is shown in Fig. 1. This decoupled de-sign ensures that the computational cost of each iteration is relatively low from the second click. However, using a sim-ply designed mask correction network to update the predic-tion can hardly achieve comparable performance with clas-sic methods that use a strong segmentation network. Thus, we propose two feature augmentation modules, including a click-guided self-attention module and a click-guided cor-relation module, to effectively exploit the click information to augment the features in the mask correction network. We first extract the click features and enrich them by select-ing several template features based on the semantic similar-ity between the clicks and other pixels. The self-attention module propagates the information of the template features to other pixels, and the correlation model directly learns target contours. Both modules effectively and efficiently augment the features and improve the segmentation perfor-mance. Experiments on five benchmarks demonstrate that our method achieves competitive performance and higher efficiency compared with state-of-the-art click-based inter-active segmentation methods. Our contribution can be summarized as follows: • We propose EMC-Click, an efficient click-based inter-active method, to correct the masks via a lightweight mask correction network interactively. Our method significantly reduces computational cost from the sec-ond click especially when we have a large backbone. • We propose two feature augmentation modules to im-prove the segmentation accuracy by effectively ex-ploiting the click information. 
• We build EMC-Click on different base segmentation networks and evaluate our models on several bench-marks. The results show that our method achieves preferable performance and efficiency compared to state-of-the-art methods. |
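The decoupled design described above can be summarized with the following sketch, in which a heavy base network runs only on the first click and a lightweight correction network handles every later click; base_net and correction_net are placeholder modules standing in for the actual EMC-Click components.

import torch

class InteractiveSegmenter:
    def __init__(self, base_net, correction_net):
        self.base_net = base_net
        self.correction_net = correction_net
        self.features = None
        self.mask = None

    @torch.no_grad()
    def add_click(self, image, click_map):
        if self.features is None:
            # First click: run the (large) base segmentation network once to obtain
            # target-aware features and a coarse prediction.
            self.features, self.mask = self.base_net(image, click_map)
        else:
            # Later clicks: only the lightweight correction network runs, conditioned
            # on the cached features, the previous mask, and the new click.
            self.mask = self.correction_net(self.features, self.mask, click_map)
        return self.mask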
Chen_DBARF_Deep_Bundle-Adjusting_Generalizable_Neural_Radiance_Fields_CVPR_2023 | Abstract Recent works such as BARF and GARF can bundle ad-just camera poses with neural radiance fields (NeRF) which is based on coordinate-MLPs. Despite the impressive re-sults, these methods cannot be applied to Generalizable NeRFs (GeNeRFs) which require image feature extractions that are often based on more complicated 3D CNN or trans-former architectures. In this work, we first analyze the dif-ficulties of jointly optimizing camera poses with GeNeRFs, and then further propose our DBARF to tackle these issues. Our DBARF which bundle adjusts camera poses by tak-ing a cost feature map as an implicit cost function can be jointly trained with GeNeRFs in a self-supervised manner. Unlike BARF and its follow-up works, which can only be applied to per-scene optimized NeRFs and need accurate initial camera poses with the exception of forward-facing scenes, our method can generalize across scenes and does not require any good initialization. Experiments show the effectiveness and generalization ability of our DBARF when evaluated on real-world datasets. Our code is available at https://aibluefisher.github.io/dbarf . | 1. Introduction The recent introduction of NeRF (Neural Radiance Fields) [28] bridges the gap between computer vision and computer graphics with the focus on the Novel view synthe-sis (NVS) task. NeRF demonstrates impressive capability of encoding the implicit scene representation and rendering high-quality images at novel views with only a small set of coordinate-based MLPs. Although NeRF and its variants simplify the dense 3D reconstruction part of the traditional photogrammetry pipeline that includes: the reconstruction of dense point clouds from posed images followed by the recovery and texture mapping of the surfaces into just a simple neural network inference, they still require known accurate camera poses as inputs. Nonetheless, the acquisition of camera poses is expen-sive in the real world. Most NeRF-related methods ob-tain the camera poses by Structure-from-Motion (SfM) [4, 23, 34]. In SfM, camera poses are optimized under the Initial Poses (BEV, iter=0)Optimized Poses (BEV, iter=10000)Optimized Poses (SV, iter=20000)BARF =>GeNeRF DBARFFigure 1. Results of optimizing camera poses with BARF and DBARF . From left to right are the initial camera poses, bird’s eye view (BEV) of optimized camera poses after 1e4iterations, and side view (SV) of optimized camera pose after 2e4iterations. Red and blue denote ground truths and estimated camera poses (The inconsistent ground truth poses in different iterations are due to the randomness of selecting the training batches). Top: The cam-era poses diverge quickly when BARF [20] is applied to GeNeRF, even with the camera poses initialized by perturbing the ground truth with very small noise. Bottom : Results obtained by our DBARF, the camera poses are randomly initialized. keypoint-metric reprojection error in a process referred to as bundle adjustment [43]. A notorious problem of SfM is that it sometimes fails, e.g. in textureless or self-similar scenes, and can also take days or even weeks to complete for large-scale scenes. Consequently, one main forthcoming issue with NeRF is that its rendering quality highly relies on accu-rate camera poses. Recently, several works try to solve the pose inaccuracy jointly with NeRF. One of the representa-tive works is BARF [20]. 
NeRF maps the pixel coordinates into high-dimensional space as Fourier features [39] before inputting into the MLPs to enable networks to learn the high-frequency part of images. However, Fourier features can be a double-edged sword when the camera poses are jointly optimized with NeRF, where gradients from high-frequency components dominate the low-frequency parts during training. To mitigate this problem, BARF draws inspiration from the non-smoothness optimization in high-dimensional functions: optimizer can get stuck at a local optimum, but the training can be easier when the objective This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 24 function is made smoother. Consequently, BARF adopts a coarse-to-fine strategy which first masks out the high-frequency components, and then gradually reactivates them after the low-frequency components become stable. The camera poses are adjusted by the photometric loss during training instead of the keypoint-metric cost in SfM. Despite its promising results, BARF and its follow-up works [5, 26] still require the pre-computed camera poses from SfM. One other issue with vanilla NeRF is that it needs time-consuming per-scene training. Making NeRF generaliz-able across scenes [3, 18, 47, 53] has recently gained in-creasing attention. However, similar to vanilla NeRF, GeN-eRFs (generalizable NeRFs) also depend on accurate cam-era poses. There is no existing work that tried to optimize the camera poses jointly with GeNeRFs. This intrigues us to investigate the replacement of NeRF with GeNeRFs in BARF. We find that the joint optimization is non-trivial in our task settings, and the camera poses can diverge quickly even when initialized with the ground truths ( cf. top row of Fig. 1). In this paper, we identified two potential reasons which cause the failure of bundle adjusting GeNeRFs. The first reason is the aggregated feature outliers, which are caused by occlusions. The other reason is due to the high non-convexity of the cost function produced by ResNet fea-tures [40], which produces incoherent displacements like the issue caused by positional encodings [39] in BARF. We further proposed our method DBARF, which jointly opti-mizes GeNeRF and relative camera poses by a deep neural network. Our implicit training objective can be equivalently deemed as a smooth function of the coarse-to-fine training objective in BARF. Specifically, we construct a residual fea-ture map by warping 3D points onto the feature maps of the nearby views. We then take the residual feature map as an implicit cost function, which we refer to as cost map in the following sections. By taking the cost map as input, we utilize a deep pose optimizer to learn to correct the rela-tive camera poses from the target view to nearby views. We further jointly train the pose optimizer and a GeNeRF with images as supervision, which does not rely on ground truth camera poses. In contrast to previous methods which only focus on per-scene camera pose optimization, our network is generalizable across scenes. In summary, the contributions of this work are: • We conduct an experiment on bundle adjusting GeN-eRFs by gradient descent and analyze the difficulty of jointly optimizing camera poses with GeNeRFs. • We present DBARF to deep bundle adjusting camera poses with GeNeRFs. 
The approach is trained end-to-end without requiring known absolute camera poses. • We conduct experiments to show the generalizationability of our DBARF, which can outperform BARF and GARF even without per-scene fine-tuning. |
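A minimal sketch of the residual feature (cost) map construction described above: target pixels are back-projected with a current depth estimate, transformed by the current relative-pose estimate, re-projected into a nearby view, and compared against that view's features. The shapes, the pinhole projection, and the L1 residual are illustrative assumptions rather than the exact DBARF implementation.

import torch
import torch.nn.functional as F

def feature_cost_map(feat_tgt, feat_src, depth_tgt, K, T_src_tgt):
    # feat_tgt, feat_src: (C, H, W) feature maps of the target and a nearby source view.
    # depth_tgt: (H, W) current depth estimate for the target view.
    # K: (3, 3) intrinsics; T_src_tgt: (4, 4) current relative pose estimate.
    C, H, W = feat_tgt.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=0).float().reshape(3, -1)  # (3, H*W)

    # Back-project target pixels to 3D, move them into the source frame, re-project.
    pts = torch.linalg.inv(K) @ pix * depth_tgt.reshape(1, -1)
    pts_h = torch.cat([pts, torch.ones(1, H * W)], dim=0)
    pts_src = (T_src_tgt @ pts_h)[:3]
    proj = K @ pts_src
    uv = proj[:2] / proj[2:].clamp(min=1e-6)

    # Sample source-view features at the projected locations (normalized to [-1, 1]).
    grid = torch.stack([uv[0] / (W - 1) * 2 - 1, uv[1] / (H - 1) * 2 - 1], dim=-1)
    grid = grid.reshape(1, H, W, 2)
    feat_warp = F.grid_sample(feat_src.unsqueeze(0), grid, align_corners=True)[0]

    # Residual feature map: the implicit cost consumed by the deep pose optimizer.
    return (feat_tgt - feat_warp).abs()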
Erbach_EvShutter_Transforming_Events_for_Unconstrained_Rolling_Shutter_Correction_CVPR_2023 | Abstract Widely used Rolling Shutter (RS) CMOS sensors capture high resolution images at the expense of introducing dis-tortions and artifacts in the presence of motion. In such situations, RS distortion correction algorithms are critical. Recent methods rely on a constant velocity assumption and require multiple frames to predict the dense displacement field. In this work, we introduce a new method, called Eventful Shutter (EvShutter)1, that corrects RS artifacts us-ing a single RGB image and event information with high temporal resolution. The method firstly removes blur us-ing a novel flow-based deblurring module and then com-pensates RS using a double encoder hourglass network. In contrast to previous methods, it does not rely on a constant velocity assumption and uses a simple architecture thanks to an event transformation dedicated to RS, called Filter and Flip (FnF), that transforms input events to encode only 1The evaluation code and the dataset can be found here https:// github.com/juliuserbach/EvShutterthe changes between GS and RS images. To evaluate the proposed method and facilitate future research, we collect the first dataset with real events and high-quality RS im-ages with optional blur, called RS-ERGB. We generate the RS images from GS images using a newly proposed simula-tor based on adaptive interpolation. The simulator permits the use of inexpensive cameras with long exposure to cap-ture high-quality GS images. We show that on this realistic dataset the proposed method outperforms the state-of-the-art image-and event-based methods by 9.16 dB and 0.75 dB respectively in terms of PSNR and an improvement of 23 % and 21 % in LPIPS. | 1. Introduction Most consumer cameras like cell phones or action cam-eras use a rolling shutter (RS) sensor which instead of capturing the whole frame in a single shot as in a global shutter (GS) camera, it acquires each row sequentially as shown in Fig. 2. Specifically, it obtains each row yat time This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 13904 Figure 2. Rolling Shutter Explained. While recording with a RS camera (left), image rows are exposed sequentially, while in the case of a GS camera (right) they are exposed all at the same time. ty=y·tH/H, where His the height of an image and tHis the mid-exposure time of the last row. RS cameras are often prefered due to their high spatial resolution and high frame rates at low cost. Global shutter sensors have a more com-plicated circuitry, and therefore a smaller resolution than rolling shutter sensors of the same physical size. A larger sensor size in turn increases the cost of the lens. However, in the presence of fast camera or object movement, RS sen-sors can produce skew distortion in images (see Fig. 1) and jello artifacts in videos. To correct such artifacts, RS correction methods are widely used. Conventional methods attempt to recover the corresponding GS image from the RS output by first esti-mating the camera motion or dense displacement field dur-ing the image acquisition and sequentially applying motion compensation to align all the pixels to a fixed timestamp. 
However, these methods make strong assumptions on the motion type and are prone to fail in the presence of com-plex motion patterns like local or non-linear motion, which no longer satisfy the assumptions. In this work, we address the fundamental problem of modeling scene changes during image acquisition by us-ing the high temporal information provided by an auxiliary event camera. Instead of measuring synchronous frames of absolute brightness, event cameras only measure asyn-chronous brightness changes at each pixel with a high tem-poral resolution in the order of microseconds [3]. Un-like image-based methods, event cameras allow to estimate the motion without relying on a constant speed assumption and synthesizing images by directly adding the brightness changes registered by the events to an image. This en-ables exciting computational photography applications such as event-guided image deblurring [12, 18,24], video inter-polation [8, 22,23] and high dynamic range imaging [6, 16]. In this paper, we propose a RS compensation method, called Eventful Shutter (EvShutter), that only requires a sin-gle RS image and the corresponding events. The method firstly removes blur using a novel flow-based deblurring module and then compensates RS using an hourglass net-work with two encoders relying on complementary geo-metric and synthesis-based interpolation approaches. The proposed method is the first method that does not rely onconstant velocity assumption and uses a simple architec-ture thanks to the newly introduced event transformation, that we call Filter and Flip (FnF). FnF transforms the input events to encode just changes between GS and RS images. To train and evaluate the proposed method, we collect the first dataset with real events and high quality RS images (optionally with blur), called RS-ERGB. The RS images are generated from GS images using our new simulator based on adaptive interpolation. This pipeline allows the use of an inexpensive high speed camera and the use of long expo-sures while capturing the GS images. Contributions of this work are as follows 1.Eventful Shutter (EvShutter) : the first event-assisted RS distortion compensation and deblurring method that avoids constant speed motion assumptions. |
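The row-wise exposure model quoted above (t_y = y * t_H / H) can be written down directly; a small sketch, with an illustrative sensor height and readout time:

import numpy as np

def rolling_shutter_row_times(H, t_H):
    # Mid-exposure time of each image row under the rolling-shutter model above:
    # row y is captured at t_y = y * t_H / H, where t_H is the mid-exposure time
    # of the last row and H is the image height.
    y = np.arange(H)
    return y * t_H / H

# Example: a 480-row sensor whose last row is exposed 10 ms after the first.
times = rolling_shutter_row_times(480, 0.010)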
He_Analyzing_and_Diagnosing_Pose_Estimation_With_Attributions_CVPR_2023 | Abstract We present Pose Integrated Gradient (PoseIG), the first in-terpretability technique designed for pose estimation. We ex-tend the concept of integrated gradients for pose estimation to generate pixel-level attribution maps. To enable compari-son across different pose frameworks, we unify different pose outputs into a common output space, along with a likelihood approximation function for gradient back-propagation. To complement the qualitative insight from the attribution maps, we propose three indices for quantitative analysis. With these tools, we systematically compare different pose estimation frameworks to understand the impacts of network design, backbone and auxiliary tasks. Our analysis reveals an interesting shortcut of the knuckles (MCP joints) for hand pose estimation and an under-explored inversion error for keypoints in body pose estimation. Project page and code: https://qy-h00.github.io/poseig/. | 1. Introduction Human pose estimation of both the body and the hand is a critical vision task for augmented and virtual reality applications. State-of-the-art methods [12, 15, 19, 20, 33, 36] perform impressively on benchmarks but are difficult to compare beyond differences in average end-point-error (EPE). Averaged results on large-scale benchmarks depend on the underlying data distribution and tend to obscure the behaviour of pose estimation systems [10]. As such, we are motivated to find alternative ways to interpret and compare pose estimates across different methods. To that end, we present the first method for estimating pixel-level attribution maps designed specifically for pose estimation. Integrated Gradients (IG) [35] is a commonly used attri-bution technique. IG and its derived variants [21, 35, 40] can produce pixel-level attribution maps for various image and natural language classification tasks. IG computes gradients to measure the relationship between changes to an input and changes to the target likelihood. However, IG is not directly applicable to pose estimation. Unlike in classification, where *Equal contribution Likelihood Joint Selection ForwardBackwardVisualizationPredictionGroundtruth FI:3.6,LI:22.5,DI:11.5IndicesCommon Output Figure 1. Pose Integrated Gradients (PoseIG) generates spatial attribution maps for pose estimation. Based on the attribution maps, we propose numerical indices to quantitatively characterize the attributions throughout the scene. models always directly output a class likelihood, pose es-timation models vary in their output, ranging from spatial likelihoods to regressed coordinates. Therefore, we must in-troduce a likelihood approximation function between the pre-dicted outputs and their targets to approximate the target like-lihood. Based on these likelihoods, we can back-propagate the gradients and generate attribution maps. Moreover, to enable meaningful comparison across frameworks, we pro-pose unifying the different outputs into a common output space and use the same likelihood approximation function S(·)for back-propagation. Fig. 1 shows our interpretability pipeline for generating pixel-level attribution maps that can be compared across different pose frameworks. Existing works [35] and [21] focus on qualitative attri-butions and produce visualizations for single inputs. We are interested in these visualizations for pose estimation; however, we additionally target quantitative analysis of the attributions. 
As such, we have designed attribution-based indices to help analyze and diagnose pose estimation frame-works. Based on PoseIG’s attribution maps, we introduce three indices to numerically characterize the attributions. The Foreground Index (FI) measures the extent to which the foreground is considered in the attributions. The Locality Index (LI) measures the amount of attribution around an im-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 4821 age coordinate, and the Diffusion Index (DI) measures how concentrated or dispersed the attributions are in the scene. Armed with PoseIG’s attribution maps and the associated indices, we study existing body and hand pose estimation frameworks and provide insights on their design and archi-tectures. Finally, we diagnose existing models and find two overlooked issues in pose estimation. First, we reveal an artifically high performance of MCP1or knuckle joints in the hand, likely from shortcut learning as a result of data preprocessing. Secondly, we observe an under-explored phenomenon of keypoint inversion [27], where keypoints are mistakenly predicted at the location of other keypoints. Accordingly, we introduce simple mitigating solutions and recommend these be incorporated into future protocols to improve hand and body pose estimation. Our main contributions can be summarized as follows: •We introduce PoseIG, the first interpretability technique designed for pose estimation. PoseIG provides pixel-level attributions and can be applied to compare differ-ent pose estimation works based on a unified output space and a likelihood approximation function. •We propose three numerical indices to quantitatively characterize the attributions in the scene. •Using PoseIG’s attributions and indices, we analyze and compare different body and hand pose estimation frameworks and provide insight on their design. •We diagnose a shortcut problem in hand pose estimation and keypoint inversion errors in human pose estimation and propose simple solutions to alleviate these issues. We hope it will serve as a useful tool to the community for analyzing, diagnosing, and improving pose estimation frameworks. |
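A minimal sketch of integrated gradients adapted to pose estimation as described above: gradients of a likelihood approximation S(.) are accumulated along a straight path from a baseline image to the input, yielding a pixel-level attribution map. The likelihood_fn, the zero baseline, and the step count are user-supplied assumptions, not the exact PoseIG code.

import torch

def pose_integrated_gradients(model, image, target, likelihood_fn, steps=32, baseline=None):
    # image: (1, 3, H, W); target: whatever likelihood_fn expects, e.g. a joint's
    # ground-truth coordinate or heatmap. likelihood_fn maps the unified model
    # output to a scalar likelihood that can be back-propagated.
    if baseline is None:
        baseline = torch.zeros_like(image)
    total_grad = torch.zeros_like(image)
    for s in range(1, steps + 1):
        alpha = s / steps
        x = (baseline + alpha * (image - baseline)).detach().requires_grad_(True)
        score = likelihood_fn(model(x), target)
        grad, = torch.autograd.grad(score, x)
        total_grad += grad
    # Average path gradient, scaled by the input difference (Riemann approximation).
    attribution = (image - baseline) * total_grad / steps
    return attribution.abs().sum(dim=1)  # (1, H, W) pixel-level attribution map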
Ikehata_Scalable_Detailed_and_Mask-Free_Universal_Photometric_Stereo_CVPR_2023 | Abstract In this paper, we introduce SDM-UniPS, a groundbreak-ing Scalable, Detailed, Mask-free, and Universal Photomet-ric Stereo network. Our approach can recover astonishingly intricate surface normal maps, rivaling the quality of 3D scanners, even when images are captured under unknown, spatially-varying lighting conditions in uncontrolled envi-ronments. We have extended previous universal photometric stereo networks to extract spatial-light features, utilizing all available information in high-resolution input images and accounting for non-local interactions among surface points. Moreover, we present a new synthetic training dataset that encompasses a diverse range of shapes, materials, and illu-mination scenarios found in real-world scenes. Through ex-tensive evaluation, we demonstrate that our method not only surpasses calibrated, lighting-specific techniques on pub-lic benchmarks, but also excels with a significantly smaller number of input images even without object masks. | 1. Introduction Photometric stereo [52] aims to deduce the surface nor-mal map of a scene by analyzing images captured from a fixed perspective under diverse lighting conditions. Untilvery recently, all photometric stereo methods assumed their specific lighting conditions, which led to limitations in their applicability. For instance, methods that assumed direc-tional lighting conditions ( e.g., [20,24,25]) were unsuitable under natural illumination, and vice versa ( e.g., [15, 38]). To overcome this limitation, the “universal” photomet-ric stereo method (UniPS) [22] has been introduced, de-signed to operate under unknown and arbitrary lighting con-ditions. In contrast to prior uncalibrated photometric stereo methods [7,9,27], which assumed specific physically-based lighting models, this method encodes a non-physical fea-ture at each pixel for representing spatially-varying illumi-nation, which is served as a substitute for physical light-ing parameters within the calibrated photometric stereo net-work [21]. This method has taken the first step towards dealing with unknown, spatially-varying illumination that none of the existing methods could handle. However, the surface normal map recovered by UniPS, while not en-tirely inaccurate, appears blurry and lacks fine detail (see the top-right corner of Fig. 1). Upon investigation, we pin-pointed three fundamental factors contributing to the subpar reconstruction performance. Firstly, extracting illumination Supported by JSPS KAKENHI Grant Number 22K17919. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 13198 features ( i.e., global lighting contexts) from downsampled images caused a loss of information at higher input resolu-tions and produced blurry artifacts. Secondly, UniPS em-ploys a pixel-wise calibrated photometric stereo network to predict surface normals using illumination features, which leads to imprecise overall shape recovery. Although pixel-wise methods [20,21,25] offer advantages in capturing finer details compared to image-wise methods [8, 29, 49], they suffer from an inability to incorporate global information. 
Lastly, the third issue lies in the limited variety of shape, material, and illumination conditions present in the training data, which hampers its capacity to adapt to a diverse range of real-world situations. This limitation primarily stems from the fact that current datasets ( i.e., PS-Wild [22]) do not include renderings under light sources with high-frequency components focused on specific incident angles, such as point or directional sources. Consequently, the method ex-hibits considerable performance degradation when exposed to directional lighting setups like DiLiGenT [46], as will be demonstrated later in this paper. In this paper, we present a groundbreaking photometric stereo network, the Scalable, Detailed, and Mask-Free Uni-versal Photometric Stereo Network (SDM-UniPS), which recovers normal maps with remarkable accuracy from im-ages captured under extremely uncontrolled lighting condi-tions. As shown in Fig. 1, SDM-UniPS is scalable , enabling the generation of normal maps from images with substan-tially higher resolution ( e.g., 2048x2048) than the training data ( e.g., 512x512); it is detailed , providing more accu-rate normal maps on DiLiGenT [46] with a limited number of input images than most existing orthographic photomet-ric stereo techniques, including calibrated methods, and in some cases, surpassing 3D scanners in detail; and it is mask-free, allowing for application even when masks are absent, unlike many conventional methods. Our technical novelties include: 1. The development of a scale-invariant spatial-light fea-ture encoder that efficiently extracts illumination fea-tures while utilizing all input data and maintaining scalability with respect to input image size. Our en-coder, based on the "split-and-merge" strategy, accom-modates varying input image sizes during training and testing without sacrificing performance. |
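Purely as a generic illustration of a "split-and-merge" style encoder whose memory cost does not grow with input resolution, the following sketch encodes fixed-size tiles and merges them with a downsampled global pass; this is an assumption-laden stand-in, not the actual SDM-UniPS scale-invariant spatial-light feature encoder, and `encoder` is assumed to be fully convolutional and size-preserving.

import torch
import torch.nn.functional as F

def split_and_merge_features(images, encoder, tile=512):
    # images: (L, 3, H, W) observations of the same scene under L lighting conditions.
    L, _, H, W = images.shape
    out = None
    for y in range(0, H, tile):
        for x in range(0, W, tile):
            # Encode one fixed-size tile at a time so memory stays bounded.
            feat = encoder(images[:, :, y:y + tile, x:x + tile])
            if out is None:
                out = torch.zeros(L, feat.shape[1], H, W)
            out[:, :, y:y + tile, x:x + tile] = feat
    # Merge in a downsampled global pass to restore cross-tile (non-local) context.
    global_feat = encoder(F.interpolate(images, size=(tile, tile),
                                        mode="bilinear", align_corners=False))
    out = out + F.interpolate(global_feat, size=(H, W),
                              mode="bilinear", align_corners=False)
    return out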
Bai_Masked_Autoencoders_Enable_Efficient_Knowledge_Distillers_CVPR_2023 | Abstract This paper studies the potential of distilling knowledge from pre-trained models, especially Masked Autoencoders. Our approach is simple: in addition to optimizing the pixel reconstruction loss on masked inputs, we minimize the dis-tance between the intermediate feature map of the teacher model and that of the student model. This design leads to a computationally efficient knowledge distillation framework, given 1) only a small visible subset of patches is used, and 2) the (cumbersome) teacher model only needs to be par-tially executed, i.e., forward propagate inputs through the first few layers, for obtaining intermediate feature maps. Compared to directly distilling fine-tuned models, distill-ing pre-trained models substantially improves downstream performance. For example, by distilling the knowledge from an MAE pre-trained ViT-L into a ViT-B, our method achieves 84.0% ImageNet top-1 accuracy, outperforming the baseline of directly distilling a fine-tuned ViT-L by 1.2%. More intriguingly, our method can robustly distill knowl-edge from teacher models even with extremely high mask-ing ratios: e.g., with 95% masking ratio where merely TEN patches are visible during distillation, our ViT-B competi-tively attains a top-1 ImageNet accuracy of 83.6%; surpris-ingly, it can still secure 82.4% top-1 ImageNet accuracy by aggressively training with just FOUR visible patches (98% masking ratio). The code and models are publicly available athttps://github.com/UCSC-VLAA/DMAE . | 1. Introduction Following the success in the natural language processing [10, 27], the Transformer architecture is showing tremen-dous potentials in computer vision [2, 11, 25, 26, 33], es-pecially when they are pre-trained with a huge amount of unlabelled data [3] with self-supervised learning techniques [1, 14]. Masked image modeling, which trains models to predict the masked signals (either as raw pixels or as se-mantic tokens) of the input image, stands as one of the most powerful ways for feature pre-training. With the most re-cent representative work in this direction, masked autoen-coder (MAE) [13], we are now able to efficiently and effec-tively pre-train high-capacity Vision Transformers (ViTs) with strong feature representations, leading to state-of-the-art solutions for a wide range of downstream visual tasks. In this paper, we are interested in applying knowledge distillation [17], which is one of the most popular model compression techniques, to transfer the knowledge from these strong but cumbersome ViTs into smaller ones. In contrast to prior knowledge distillation works [17, 21, 34], the teacher considered here is a pre-trained model whose predictions do not necessarily reveal the fine-grained re-lationship between categories; therefore, typical solutions like aligning the soft/hard logits between the teacher model and the student model may no longer remain effective. Moreover, after distilling the pre-trained teacher model, these student models need an extra round of fine-tuning to adapt to downstream tasks. These factors turn distilling pre-trained models seemingly a less favorable design choice in terms of both performance and computational cost. Nonetheless, surprisingly, we find by building upon MAE, the whole distillation framework can efficiently yield high-performance student models. There are two key de-signs. 
Firstly, we follow MAE to let the encoder exclusively operate on a small visible subset of patches and to employ a lightweight decoder for pixel reconstruction. Whereas rather than using the “luxury” setups in MAE, we show ag-gressively simplifying pre-training from 1600 epochs to 100 epochs andpushing masking ratio from 75% to 95% suffice to distill strong student models. Secondly, instead of align-ing logits, we alternatively seek to match the intermediate feature representation; this enables the cumbersome teacher model to only forward propagate inputs through the first few layers, therefore, reducing computations. We note applying L1 norm for distance measure is essential recipe for ensur-ing a successful intermediate feature alignment. We name this distilling MAE framework as DMAE. Compared to the traditional knowledge distillation frame-work where the teacher is a fine-tuned model, DMAE is more efficient and can train much stronger student models This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 24256 EncoderDecoderFeatureAlignmentReconstructedImageTeacherStudentInput MaskedInput Figure 1. Illustration of the distillation process in DMAE. There are two key designs. Firstly, following MAE, we hereby only take visible patches as inputs and aims to reconstruct the masked ones. Secondly, knowledge distillation is achieved by aligning the intermediate features between the teacher model and the student model. Note the gray blocks denote the dropped high-level layers of the teacher model during distillation. at different capacities. For example, by setting ViT-B as the student model, while the baseline of distilling a fine-tuned ViT-L achieves 82.8% top-1 ImageNet accuracy, DMAE substantially boosts the performance to 84.0% (+1.2%) top-1 ImageNet accuracy, at a even lower training cost ( i.e., 195 GPU hours vs. 208 GPU hours, see Table 9). More intrigu-ingly, we found that DMAE allows for robust training with extremely highly masked images—even with TEN visible patches ( i.e., 95% masking ratio), ViT-B can competitively attain a top-1 ImageNet accuracy of 83.6%; this masking ra-tio can further be aggressively pushed to 98% (FOUR vis-ible patches) where DMAE still help ViT-B secure 82.4% top-1 ImageNet accuracy. We hope this work can benefit future research on efficiently unleashing the power of pre-train models. |
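A minimal sketch of the two-part objective described above: masked pixel reconstruction plus L1 alignment between an intermediate teacher feature and a projected student feature. The interfaces return_feature and forward_partial, and the choice of alignment layer, are illustrative assumptions, not the released DMAE code.

import torch
import torch.nn.functional as F

def dmae_loss(student, teacher, proj, images, mask, pixel_targets, align_layer=6):
    # student: MAE-style encoder/decoder operating only on the visible patches.
    # teacher: frozen, larger pre-trained ViT; only its first `align_layer` blocks
    # need to be executed to obtain the feature used for distillation.
    # proj: small head mapping the student feature dim to the teacher feature dim.
    pred_pixels, student_feat = student(images, mask, return_feature=True)

    with torch.no_grad():
        teacher_feat = teacher.forward_partial(images, mask, depth=align_layer)

    # 1) MAE reconstruction loss on the masked patches.
    rec_loss = F.mse_loss(pred_pixels, pixel_targets)
    # 2) Intermediate feature alignment with an L1 distance.
    align_loss = F.l1_loss(proj(student_feat), teacher_feat)
    return rec_loss + align_loss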
Chai_Persistent_Nature_A_Generative_Model_of_Unbounded_3D_Worlds_CVPR_2023 | Abstract Despite increasingly realistic image quality, recent 3D image generative models often operate on 3D volumes of fixed extent with limited camera motions. We investigate the task of unconditionally synthesizing unbounded nature scenes, enabling arbitrarily large camera motion while main-taining a persistent 3D world model. Our scene represen-tation consists of an extendable, planar scene layout grid, which can be rendered from arbitrary camera poses via a 3D decoder and volume rendering, and a panoramic sky-dome. Based on this representation, we learn a generative world model solely from single-view internet photos. Our method enables simulating long flights through 3D land-scapes, while maintaining global scene consistency—for instance, returning to the starting point yields the same view of the scene. Our approach enables scene extrap-olation beyond the fixed bounds of current 3D genera-tive models, while also supporting a persistent, camera-independent world representation that stands in contrast to auto-regressive 3D prediction models. Our project page: https://chail.github.io/persistent-nature/ .1. Introduction Generative image and video models have achieved re-markable levels of realism, but are still far from pre-senting a convincing, explorable world. Moving a vir-tual camera through these models—either in their latent space [3, 23, 29, 72] or via explicit conditioning [35]—is not like walking about in the real world. Movement is either very limited (for example, in object-centric models [5]), or else camera motion is unlimited but quickly reveals the lack of a persistent world model. Auto-regressive 3D synthesis meth-ods exemplify this lack of persistence [42, 45]; parts of the scene may change unexpectedly as the camera moves, and you may find that the scene is entirely different when return-ing to previous positions. The lack of spatial and temporal consistency can give the output of these models a strange, dream-like quality. In contrast, machines that can generate unbounded, persistent 3D worlds could be used to develop agents that plan within a world model [21], or to build vir-tual reality experiences that feel closer to the natural world, rather than appearing as ephemeral hallucinations [42]. We therefore aim to develop a unconditional generative model capable of generating unbounded 3D scenes with a This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 20863 persistent underlying world representation. We want synthe-sized content to move in a way that is consistent with camera motion, yet we should also be able to move arbitrarily far and still generate the same scene upon returning to a previous camera location, regardless of the camera trajectory. To achieve this goal, we model a 3D world as a terrain plus a skydome . The terrain is represented by a scene layout grid—an extendable 2D array of feature vectors that acts as a map of the landscape. We ‘lift’ these features into 3D and decode them with an MLP into a radiance field for volume rendering. The rendered terrain images are super-resolved and composited with renderings from the skydome model to synthesize final images. 
We train using a layout grid of limited size, but can extend the scene layout grid by any de-sired amount during inference, enabling unbounded camera trajectories. Since our underlying representation is persistent over space and time, we can fly around 3D landscapes in a consistent manner. Our method does not require multiview data; each part of our system is trained from an unposed collection of single-view images using GAN objectives. Our work builds upon two prior threads of research that tackle generating immersive worlds: 1) generative models of 3D data, and 2) generative models of infinite videos. Along the first direction are generators of meshes, volumes, radi-ance fields, etc (e.g., [5, 54, 59]). These models represent a consistent 3D world by construction, and excel at render-ing isolated objects and bounded indoor scenes. Our work, in contrast, tackles the challenging problem of generating large-scale unbounded nature scenes. Along the second di-rection are methods like InfiniteNature [42, 45], which can indeed simulate visual worlds of infinite extent. These meth-ods enable unbounded scene synthesis by predicting new viewpoints auto-regressively from a starting view. However, they do not ensure a persistent world representation; content may change when revisited. Our method aims to combine the best of both worlds, generating boundless scenes (unlike prior 3D generators) while still representing a persistent 3D world (unlike prior video generative models). In summary: We present an unconditional 3D generative model for unbounded nature scenes with a persistent world repre-sentation, consisting of a terrain map and skydome. We augment our generative pipeline to support camera extrapolation beyond the training camera distribution by extending the terrain features. Our model is learned entirely from single-view land-scape photos with unknown camera poses. 2. Related Work Image and view extrapolation. Pioneering work by Kaneva et al. [32] proposed the task of infinite image extrap-olation by using a large image database to perform classical2D image retrieval, stitching, and rendering. More recently, various learning-based 2D image inpainting [24, 41, 46, 67, 79, 94, 95, 97] and outpainting [2, 9, 43, 80, 89, 91] methods have been developed. These methods fill in missing image regions or expand the field of view by synthesizing realistic image content that is coherent with the partial input image. Beyond 2D, prior work has explored single-view 3D view extrapolation , often by applying 2D image synthesis tech-niques within a 3D representation [28, 30, 40, 65, 66, 74, 90]. However, these methods can only extrapolate content within a very limited range of viewpoints. Video generation. Video generation aims to synthesize re-alistic videos from different types of input. Unconditional video generation produces long videos often from noise input [3,17,19,47,53,77,83], while conditional video gener-ation generates sequences by conditioning on one or a few images [13, 15, 27, 36, 38, 84, 85, 85, 86, 88, 92, 96], or a text prompt [26, 75]. However, applying these ideas in 3D re-quires supervision from multi-view training data, and cannot achieve persistent 3D scene content at runtime, since there is no explicit 3D representation. Some recent work preserves global scene consistency via extra 3D geometry inputs such as point clouds [49] or voxel grids [22]. 
In contrast, our method synthesizes both the geometry and appearance of an entire world from scratch using a global feature representa-tion to achieve consistent generated content. Generative view synthesis. Novel view synthesis aims to produce new views of a scene from single [7,31,37,57,66,73, 74,81,82,90,93] or multiple image observations [1,10,16,39, 48,50 –52,64,68,71,87,98] by constructing a local or global 3D scene representation. However, most prior methods can only interpolate or extrapolate a limited distance from the input views, and do not possess a generative ability. On the other hand, a number of generative view synthesis methods have been recently proposed utilizing neural vol-umetric representations [5, 14, 20, 54 –56, 62, 69, 78]. These methods can learn to generate 3D representations from 2D supervision, and have demonstrated impressive results on generating novel objects [59], faces [5, 12, 20, 58], or indoor environments [14, 63]. However, none of these methods can generate unbounded outdoor scenes due to lack of multi-view data for supervision, and due to the larger and more complex scene geometry and appearance that is difficult to model with prior representations. In contrast, our approach can generate globally consistent, large-scale nature scenes by training solely from unstructured 2D photo collections. Our work is particularly inspired by recent perpetual view generation methods, including InfiniteNature [45] and InfiniteNature-Zero [42], which can generate unbounded fly-through videos of natural scenes, and are trained on nature videos or photo collections. However, these methods gener-ate video sequences in an auto-regressive manner, and there-fore cannot achieve globally consistent 3D scene content. 20864 GupEbgGbg Glandxyzyflandfcolorσlayout decoding ray distanceσvolume renderingflandz IHRmHRdHR ILR n dLRfim mLRinitial terrain (32x32)refined terrain (256x256)Figure 2. Overview of scene layout decoding. The layout generator Glandsamples a random latent code to produce a 2D scene layout grid flandrepresenting the shape and appearance of a terrain map, and which can be spatially extended using a grid of latent codes (see x3.2). To render an image from a given camera, sampled points along camera rays passing over the feature plane are decoded via an MLP into a color featurefcolorand density, which are then volume rendered. This produces a low-resolution image, mask, depth, image features, and a projected noise pattern, which are provided to a refinement network Gupto produce final image, mask, and depth outputs. Our approach instead adopts a global scene representation that can be trained to generate consistent-by-construction and realistic novel views spanning large-scale scenes. Con-current works for scene synthesis InfiniCity [44] and Scene-Dreamer [8] leverage birds-eye-view representations, while SceneScape [18] builds a mesh representation from text. 3. Method Our scene representation for unbounded landscapes con-sists of two components, a scene layout grid and a skydome . The scene layout grid models the landscape terrain, and is a 2D grid of features defined on a “ground plane.” These 2D features are intended to describe both the height and appear-ance content of the terrain, representing the full 3D scene — in fact, we decode these features to a 3D radiance field, which can then be rendered to an image ( x3.1). To enable camera motion beyond the training volume, we spatially ex-tend the 2D feature grid to arbitrary sizes ( x3.2). 
Because it is computationally expensive to generate and volume render highly detailed 3D content at the scale we aim for, we use an image-space refinement network that adds additional texture detail to rendered images (§3.3). The second scene component is a skydome (§3.4), which is a spherical (panoramic) image intended to model very remote content, such as the sun and sky, as well as distant mountains. The skydome is generated to harmonize with the terrain content described by the scene layout grid. All the stages of our approach are trained with GAN losses (§3.5). In what follows, we use the 3D coordinate convention that the ground plane is the xz-plane, and the y-axis represents height above or below this plane. Generally, the camera used to view the scene will be positioned some height above the ground.

3.1. Scene layout generation and rendering
To represent a distribution over landscapes, we take a generative approach following the layout representation of GSN [14]. First, a 2D scene layout grid is synthesized from a sampled random noise code z passed to a StyleGAN2 [34] generator G_land. This creates a 2D feature grid f_land, which we bilinearly interpolate to obtain a 2D function over spatial coordinates x and z:

f_{land}(x, z) = \mathrm{Interpolate}(G_{land}(z), (x, z))    (1)

To define a full 3D scene, we need a way to compute the content at any 3D location (x, y, z). We define a multi-layer perceptron M that takes a scene grid feature, as well as the height y of the point at which we want to evaluate the scene content. The outputs of M are the 2D-to-3D lifted feature f_color and the density σ at point (x, y, z):

f_{color}, \sigma = M(f_{land}(x, z), y)    (2)

In this way, the 2D scene layout grid determines a radiance field over all 3D points within the bounds of the grid [14, 70, 93]. That is, feature vectors in the grid encode not just appearance information, but also the height (or possibly multiple heights) of the terrain at their ground location. To render an image from a desired camera pose, we cast rays r from the camera origin through 3D space, sample points (x, y, z) along them, and compute f_color and σ at each point. We then use volume rendering to composite f_color along each ray into projected 2D image features f_im, a disparity image d_LR, and a sky segmentation mask m_LR. We form an initial RGB image of the terrain, I_LR, via a learned linear projection P of these image features. This process is depicted in the left half of Fig. 2, and is defined as:

f_{im}(r) = \sum_{i=1}^{N} w_i f_{color,i}, \quad d_{LR}(r) = \sum_{i=1}^{N} w_i d_i, \quad m_{LR}(r) = \sum_{i=1}^{N} w_i, \quad I_{LR} = P f_{im}    (3)

where i ∈ {1..N} refers to the index of each sampled point along ray r in order of increasing distance from the camera, d_i is the inverse-depth (disparity) of point i, and weights w_i are determined from the volume rendering equations used in NeRF [51] (see supplemental).
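A minimal sketch of Eqs. (1)-(3): the layout grid is bilinearly sampled at each point's (x, z), an MLP decodes the height-conditioned feature and density, and standard volume-rendering weights composite features, disparity, and the sky mask per ray. Tensor layouts, the MLP interface, and the coordinate normalization are illustrative assumptions, not the released implementation.

import torch
import torch.nn.functional as F

def sample_layout(f_land, x, z, extent=1.0):
    # f_land: (C, Hg, Wg) layout feature grid covering [-extent, extent]^2 in (x, z).
    # x, z: (R, N) coordinates of N samples on each of R rays; Eq. (1) via bilinear sampling.
    grid = torch.stack([x / extent, z / extent], dim=-1).unsqueeze(0)    # (1, R, N, 2) in [-1, 1]
    feat = F.grid_sample(f_land.unsqueeze(0), grid, align_corners=True)  # (1, C, R, N)
    return feat[0].permute(1, 2, 0)                                      # (R, N, C)

def render_rays(f_land, mlp, x, y, z, disp, deltas):
    # mlp maps (layout feature, height y) -> (f_color, sigma), as in Eq. (2);
    # disp and deltas are per-sample inverse depths and step sizes, each (R, N).
    feat = sample_layout(f_land, x, z)
    f_color, sigma = mlp(torch.cat([feat, y.unsqueeze(-1)], dim=-1))

    # NeRF-style weights w_i = alpha_i * prod_{j<i}(1 - alpha_j).
    alpha = 1.0 - torch.exp(-sigma * deltas)                             # (R, N)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1), dim=-1)[:, :-1]
    w = alpha * trans

    f_im = (w.unsqueeze(-1) * f_color).sum(dim=1)  # Eq. (3): composited image features
    d_lr = (w * disp).sum(dim=1)                   # composited disparity
    m_lr = w.sum(dim=1)                            # accumulated opacity / sky mask
    return f_im, d_lr, m_lr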
We intend the mask m_LR to distinguish sky regions (which will be empty and filled later using the skydome) from non-sky regions, and achieve this by training using segmented real images in which color and disparity for sky pixels are replaced with zero. Since to achieve zero disparity all weights along a ray must be zero (which also results in a zero-valued color feature), this approach encourages the generator to omit sky content. However, while we find that the model indeed learns to generate transparent sky regions, land geometry can also become partially transparent. To counter this, we penalize visible decreases in opacity α along viewing rays using finite differences of opacity:

L_{transparent}(r) = \sum_{i=2}^{N} w_i \frac{\max(\alpha_{i-1} - \alpha_i, 0)}{\alpha_i}    (4)

3.2. Layout Extension
While G_land creates a fixed-size feature grid, our objective is to generate geometry of arbitrary size, enabling long-distance camera motion at inference time. Hence, we devise a way to extend the feature grid in the x and z dimensions. We illustrate this process in Fig. 3, where we first sample noise codes z in a grid arrangement, where each z generates a 2D layout feature grid of size H×W. To obtain a smooth transition between these independently sampled layout features, we generalize the image interpolation approach from SOAT (StyleGAN of all Trades) [11] to two dimensions. We operate on 2×2 sub-grids and blend intermediate features from each layer of the generator as follows:

f_{k,l+1} = G_l(f_l, z_k), \quad k \in \{00, 01, 10, 11\}
f_{l+1} = \sum_{k \in \{00, 01, 10, 11\}} \beta_k(x, z) f_{k,l+1}    (5)

For each of the four corner anchors k, we construct the modulated feature f_{k,l+1} by applying G_l (the l-th layer of G_land) in a fully convolutional manner over the entire sub-grid. We then interpolate between the four feature grids using bilinear interpolation weights β_k(x, z). By stitching these 2×2 sub-grids in an overlapping manner, we can obtain a scene layout feature grid of arbitrary size to use as f_land. Additional details are provided in the supplemental.

Figure 3. Layout extension procedure. To extend the layout at inference time, we sample noise codes z in a grid arrangement. To smoothly transition between adjacent feature grids, we use the SOAT (StyleGAN of All Trades) procedure [11] in 2D. Operating on a 2×2 sub-grid, we apply each generator layer four times in a fully convolutional manner over the entire sub-grid, each time conditioned on a different corner latent code z, before multiplying by bilinear blending weights. This process is repeated for each layer of the generator and each sub-grid. Each 2×2 sub-grid produces a 2H×2W feature grid, and sub-grids are blended together in an overlapping fashion to obtain an extended feature grid f_land of arbitrary spatial size.

3.3. Image refinement
Due to the computational cost of volume rendering, training the layout generator at higher resolutions becomes impractical. We therefore use a refinement network G_up to upsample the initial generated image I_LR to a higher-resolution result I_HR, while adding textural details (Fig. 2-right). We use a StyleGAN2 backbone for G_up, replacing the earlier feature layers with feature output f_im and the RGB residual layers with a concatenation of I_LR, d_LR, and m_LR. To encourage the refined terrain image I_HR to be consistent with the sky mask, the network also predicts a refined disparity map and sky mask for compositing with the skydome (see §3.4):

I_{HR}, d_{HR}, m_{HR} = G_{up}(f_{im}, I_{LR}, d_{LR}, m_{LR})    (6)

We compute a reconstruction loss between the initial and refined disparity and mask outputs, and penalize G_up for producing gray sky pixels in I_HR outside the predicted mask m_HR. Please see the supplemental for more details. For fine texture details, StyleGAN2 also uses layer-wise spatial noise in intermediate generator layers (in addition to the global latent z). Using a fixed 2D noise pattern results in texture ‘sticking’ as we move the camera [33], but resampling it every frame reduces spatial coherence and removing it entirely results in convolutional gridding artifacts. To avoid these issues and improve spatial consistency, we replace the 2D image-space noise with projected 3D world-space noise, where the noise input to G_up is the projection of samples from a grid of noise, n. This noise pattern is drawn from a standard Gaussian distribution defined on the ground plane at the same resolution of the layout features, which is then lifted into 3D and volume rendered along each ray r:

n(r) = \sum_{i=1}^{N} w_i n(x, z)    (7)
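Returning to the layout-extension step, a minimal sketch of the bilinear blending in Eq. (5): the same generator layer is applied once per corner latent of a 2×2 sub-grid, and the four modulated outputs are mixed with bilinear weights β_k. The layer call signature is an assumption for illustration.

import torch

def bilinear_corner_weights(H, W):
    # Weights beta_k(x, z) for the four corner latents of a 2x2 sub-grid,
    # ordered as k in {00, 01, 10, 11}; they sum to one at every location.
    ty = torch.linspace(0, 1, H).view(H, 1).expand(H, W)
    tx = torch.linspace(0, 1, W).view(1, W).expand(H, W)
    return torch.stack([(1 - ty) * (1 - tx), (1 - ty) * tx,
                        ty * (1 - tx),       ty * tx], dim=0)      # (4, H, W)

def blend_layer(layer, feat, corner_latents):
    # Eq. (5): run the same layer once per corner latent over the whole sub-grid,
    # then take the bilinearly weighted sum of the four modulated outputs.
    outs = [layer(feat, z_k) for z_k in corner_latents]             # 4 x (C, H, W)
    beta = bilinear_corner_weights(outs[0].shape[-2], outs[0].shape[-1])
    return sum(b.unsqueeze(0) * o for b, o in zip(beta, outs))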
To avoid these issues and improve spatial consistency, we replace the 2D image-space noise with projected 3D world-space noise, where the noise input to $G_{up}$ is the projection of samples from a grid of noise, $n$. This noise pattern is drawn from a standard Gaussian distribution defined on the ground plane at the same resolution of the layout features, which is then lifted into 3D and volume rendered along each ray $r$: $n(r) = \sum_{i=1}^{N} w_i\, n(x_i, z_i)$ (7). 3.4. Skydome We model remote content (sky and distant mountains) separately with a skydome generator $G_{sky}$ (Fig. 4). This generator follows the StyleGAN3 architecture [33], with a mapping network and synthesis network conditioned on cylindrical coordinates [4]. Figure 4. Skydome generator. Conditioned on the terrain image, the skydome generator $G_{sky}$ synthesizes distant content (e.g., sky pixels and remote mountains) that is consistent with the generated terrain using encoder $E_{clip}$. $G_{sky}$ is conditioned on cylindrical coordinates which can be unwrapped to produce a panoramic skydome image. We adapt it by conditioning on the terrain output: we encode terrain images $I_{HR}$ using the pretrained CLIP image encoder $E_{clip}$ [60], and concatenate this to the style-code output of the mapping network as input into $G_{sky}$: $I_{sky} = G_{sky}(\mathrm{concat}(E_{clip}(I_{HR}), \mathrm{mapping}(z)))$ (8). Conditioning on the foreground terrain image encourages the skydome generator to generate a sky that is consistent with the terrain content. This model trains on single-view landscape images but can produce a full panorama at inference time by passing in coordinates that correspond to a 360° cylinder. The skydome is rendered to an individual camera viewpoint using camera ray directions, giving the skydome image $I_{dome}$, which is then composited with the terrain image using the sky mask: $I_{full} = I_{HR} \odot m_{HR} + I_{dome} \odot (1 - m_{HR})$ (9). 3.5. Training We train the layout generator (rendering at 32×32), refinement network (upsampling to 256×256), and skydome generator separately. To train the refinement network, we operate on outputs of the layout generator, freezing the weights of that model. For the skydome generator, we train using real landscape images, and apply it only to the outputs of the refinement network at inference time. We follow the StyleGAN2 objective [34], with additional losses for each training stage, architecture, and hyperparameters provided in the supplemental. Dataset and camera poses. We train on LHQ [76], a dataset of 90K unposed, single-view images of natural landscapes. A number of LHQ images contain geometry that is not amenable to "flying", such as a landscape pictured through a window, or a closeup of trees. Therefore, we perform a filtering process on LHQ prior to training (see supplemental). We also obtain auxiliary outputs – disparity and sky segmentation – using the pretrained DPT [61] model. Disparity and sky segmentation are used to construct the real image distribution in the GAN training phases. After filtering, we use 56,982 images for training, and augment with horizontal flipping. During training we also need to sample camera poses. Prior 3D generators [5, 6, 14, 20, 58, 69] either use ground-truth poses from a simulator, or assume an object-centric camera distribution in which the camera looks at a fixed origin from some radius.
Because our dataset lacks ground truth poses, we first sample a bank of training poses uniformly across the layout feature grid with random small height offsets, and rotate such that the near half of the camera view frustum falls entirely within the layout grid. Since the aerial layout should not be specific to any given camera pose, we generate $f_{land}$ without any camera pose information, and then adopt the sampling scheme from GSN [14], which samples a camera pose from the initial training pose bank proportional to the inverse terrain density at each camera position, to avoid placing the camera within occluding geometry. 4. Experiments Given its persistent scene representation and the extensibility of its layout grid, our model enables arbitrary motion through a synthesized landscape, including long camera trajectories. We show sample outputs from our model under a variety of camera movements (§4.1); present qualitative and quantitative comparisons with alternate scene representations, including auto-regressive prediction models and unconditional generators defined for bounded or object-centric scenes (§4.2); and investigate variations of our model to evaluate design decisions (§4.3). 4.1. Persistent, unbounded scene synthesis Figure 5 shows example landscapes generated by our model with various camera motions. As the camera moves (by rotating and/or translating) the generated imagery changes in a way that is consistent with the underlying geometry, e.g. hills move across the image or become clos |
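To make the rendering path of §3.1 above concrete, here is a minimal, illustrative sketch of the layout-grid radiance field (Eqs. 1-2) and the compositing of Eq. (3). It is not the authors' code: the grid-extent normalization, the MLP, the fixed sample spacing, and the NeRF-style weight computation are simplifying assumptions (the paper defers the exact weights to its supplemental).

```python
import torch
import torch.nn.functional as F

def sample_layout(f_land, x, z, extent):
    # f_land: (1, C, H, W) ground-plane feature grid; x, z: (R, N) world coordinates
    # assumed to lie in [-extent, extent]. Bilinear interpolation as in Eq. (1).
    grid = torch.stack([x, z], dim=-1).unsqueeze(0) / extent      # (1, R, N, 2) in [-1, 1]
    feats = F.grid_sample(f_land, grid, align_corners=True)       # (1, C, R, N)
    return feats.squeeze(0).permute(1, 2, 0)                      # (R, N, C)

def render_rays(points, f_land, mlp, extent, delta=1e-2):
    # points: (R, N, 3) samples along R rays, ordered near to far.
    x, y, z = points.unbind(-1)
    g = sample_layout(f_land, x, z, extent)
    out = mlp(torch.cat([g, y.unsqueeze(-1)], dim=-1))            # Eq. (2): lift with height y
    f_color, sigma = out[..., :-1], F.softplus(out[..., -1])
    # NeRF-style compositing weights w_i with a fixed sample spacing `delta`.
    alpha = 1.0 - torch.exp(-sigma * delta)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=1), dim=1)[:, :-1]
    w = alpha * trans                                             # (R, N)
    f_im = (w.unsqueeze(-1) * f_color).sum(dim=1)                 # Eq. (3): composited features
    m_lr = w.sum(dim=1)                                           # sky mask; d_LR composites w_i * d_i analogously
    return f_im, m_lr, w
```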
Jain_OneFormer_One_Transformer_To_Rule_Universal_Image_Segmentation_CVPR_2023 | Abstract Universal Image Segmentation is not a new concept. Past attempts to unify image segmentation include scene parsing, panoptic segmentation, and, more recently, new panoptic architectures. However, such panoptic architectures do not truly unify image segmentation because they need to be trained individually on the semantic, instance, or panoptic segmentation to achieve the best performance. Ideally, a truly universal framework should be trained only once and achieve SOTA performance across all three image segmentation tasks. To that end, we propose OneFormer, a universal image segmentation framework that unifies segmentation with a multi-task train-once design. We first propose a task-conditioned joint training strategy that enables training on ground truths of each domain (semantic, instance, and panoptic segmentation) within a single multi-task training process. Secondly, we introduce a task token to condition our model on the task at hand, making our model task-dynamic to support multi-task training and inference. Thirdly, we propose using a query-text contrastive loss during training to establish better inter-task and inter-class distinctions. Notably, our single OneFormer model outperforms specialized Mask2Former models across all three segmentation tasks on ADE20k, Cityscapes, and COCO, despite the latter being trained on each task individually. We believe OneFormer is a significant step towards making image segmentation more universal and accessible. | 1. Introduction Image Segmentation is the task of grouping pixels into multiple segments. Such grouping can be semantic-based (e.g., road, sky, building), or instance-based (objects with well-defined boundaries). Earlier segmentation approaches [6,19,32] tackled these two segmentation tasks individually, with specialized architectures and therefore separate research effort into each. In a recent effort to unify semantic and instance segmentation, Kirillov et al. [23] proposed panoptic segmentation, with pixels grouped into an amorphous segment for amorphous background regions (labeled "stuff") and distinct segments for objects with well-defined shape (labeled "thing"). This effort, however, led to new specialized panoptic architectures [9] instead of unifying the previous tasks (see Fig. 1a). More recently, the research trend shifted towards unifying image segmentation with new panoptic architectures, such as K-Net [47], MaskFormer [11], and Mask2Former [10]. Such panoptic/universal architectures can be trained on all three tasks and obtain high performance without changing architecture. They do need to, however, be trained individually on each task to achieve the best performance (see Fig. 1b). The individual training policy requires extra training time and produces different sets of model weights for each task. In that regard, they can only be considered a semi-universal approach. For example, Mask2Former [10] is trained for 160K iterations on ADE20K [13] for each of the semantic, instance, and panoptic segmentation tasks to obtain the best performance for each task, yielding a total of 480k iterations in training, and three models to store and host for inference.
In an e ffort to truly unify image segmentation, we pro-pose a multi-task universal image segmentation framework (OneFormer ), which outperforms existing state-of-the-arts on all three image segmentation tasks (see Fig. 1c), by only training once on one panoptic dataset. Through this work, we aim to answer the following questions: (i)Why are existing panoptic architectures [10,11] not suc-cessful with a single training process or model to tackle all three tasks? We hypothesize that existing methods need to train individually on each segmentation task due to the absence of task guidance in their architectures, making it challenging to learn the inter-task domain di fferences when trained jointly or with a single model. To tackle this chal-lenge, we introduce a task input token in the form of text: “the task is{task}”, to condition the model on the task in focus, making our architecture task-guided for training, and task-dynamic for inference, all with a single model. We uniformly sample {task}from{panoptic, instance, semantic}and the corresponding ground truth during our joint training process to ensure our model is unbiased in terms of tasks. Motivated by the ability of panoptic [23] data to capture both semantic and instance information, we derive the semantic and instance labels from the cor-responding panoptic annotations during training. Conse-quently, we only need panoptic data during training. More-over, our joint training time, model parameters, and FLOPs are comparable to the existing methods, decreasing train-ing time and storage requirements up to 3 ×, making image segmentation less resource intensive and more accessible. (ii)How can the multi-task model better learn inter-task and inter-class di fferences during the single joint training pro-cess? Following the recent success of transformer frame-works [2,10,17,18,21,30,46] in computer vision, we formu-late our framework as a transformer-based approach, which can be guided through the use of query tokens. To add task-specific context to our model, we initialize our queries as repetitions of the task token (obtained from the task input) and compute a query-text contrastive loss [33, 43] with the text derived from the corresponding ground-truth label for the sampled task as shown in Fig. 2. We hypothesize that a contrastive loss on the queries helps guide the model to be task-sensitive and reduce category mispredictions. We evaluate OneFormer on three major segmentation datasets: ADE20K [13], Cityscapes [12], and COCO [27], each with all three segmentation tasks. OneFormer sets the new state of the arts for all three tasks with a single jointly trained model. To summarize, our main contributions are: •We propose OneFormer, the first transformer-based multi-task universal image segmentation framework that needs to be trained only once with a single univer-sal architecture, a single model, and on a single dataset to outperform existing frameworks across the seman-tic, instance, and panoptic segmentation tasks, despite the latter need to be trained separately on each task. •OneFormer uses a task-conditioned joint training strat-egy, uniformly sampling di fferent ground truth do-mains (semantic, instance, or panoptic) by deriving all GT labels from panoptic annotations to train its multi-task model. Thus, OneFormer actually achieves the orignial unification goal of panoptic segmenta-tion [23]. •We validate OneFormer through extensive experi-ments on three major benchmarks: ADE20K [13], Cityscapes [12], and COCO [27]. 
OneFormer sets a new state-of-the-art performance on all three segmentation tasks compared with methods using the standard Swin-L [30] backbone and improves even more with new ConvNeXt [31] and DiNAT [17] backbones. |
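A minimal sketch of the task-conditioned joint training strategy described above: sample a task uniformly, build the text input "the task is {task}", and derive the task's ground truth from the panoptic annotation alone. The function names and the annotation format (segment dicts with 'category_id', 'is_thing', and a mask) are illustrative assumptions; OneFormer's actual data pipeline and tokenization differ in detail.

```python
import random

TASKS = ("panoptic", "instance", "semantic")

def derive_ground_truth(panoptic_segments, task):
    """Derive the sampled task's training target from the panoptic annotation alone."""
    if task == "panoptic":
        return panoptic_segments                       # keep all segments as-is
    if task == "instance":
        # keep only countable "thing" segments with well-defined boundaries
        return [s for s in panoptic_segments if s["is_thing"]]
    # semantic: merge all segments of the same class into one region per class
    merged = {}
    for s in panoptic_segments:
        entry = merged.setdefault(s["category_id"],
                                  {"category_id": s["category_id"], "masks": []})
        entry["masks"].append(s["mask"])
    return list(merged.values())

def sample_training_example(image, panoptic_segments):
    task = random.choice(TASKS)                        # uniform task sampling keeps training unbiased
    task_text = f"the task is {task}"                  # text fed to the task-token branch
    return image, task_text, derive_ground_truth(panoptic_segments, task)
```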
Hamaguchi_Hierarchical_Neural_Memory_Network_for_Low_Latency_Event_Processing_CVPR_2023 | Abstract This paper proposes a low latency neural network architecture for event-based dense prediction tasks. Conventional architectures encode entire scene contents at a fixed rate regardless of their temporal characteristics. Instead, the proposed network encodes contents at a proper temporal scale depending on their movement speed. We achieve this by constructing a temporal hierarchy using stacked latent memories that operate at different rates. Given low latency event streams, the multi-level memories gradually extract dynamic to static scene contents by propagating information from the fast to the slow memory modules. The architecture not only reduces the redundancy of conventional architectures but also exploits long-term dependencies. Furthermore, an attention-based event representation efficiently encodes sparse event streams into the memory cells. We conduct extensive evaluations on three event-based dense prediction tasks, where the proposed approach outperforms the existing methods on accuracy and latency, while demonstrating effective event and image fusion capabilities. The code is available at https://hamarh.github.io/hmnet/. | 1. Introduction Latency matters for many vision applications such as autonomous vehicles or UAVs, directly affecting their safety and reliability measures. Latency is also crucial for better user experience in time-sensitive vision applications such as augmented reality, where the latency of standard RGB cameras is insufficient. For example, a vehicle traveling 80 km/h will move 74 cm within a frame of a standard 30 fps camera. Event cameras have extremely low latency. Unlike standard vision cameras that capture the intensity of all pixels at a fixed rate, event cameras asynchronously record intensity changes of individual pixels. This unique principle leads to extremely low latency (microseconds) and high temporal resolution, along with many other advantages such as high dynamic range and low power consumption. With such attractive characteristics, event cameras are becoming popular input devices for many dense prediction tasks such as semantic segmentation [1, 39], object detection [18, 30, 34], depth estimation [10, 26], and optical flow [11, 46]. Figure 1. Schematic comparison between conventional and our methods. Our method adaptively processes dynamic and static scene contents using latent memories with variable rates. Our approach simultaneously processes fast-moving objects and static scene contents at low latency (see the right figure). Despite the emergence of low latency event cameras, few works have focused on low latency recognition models dedicated to event-based dense prediction tasks. Instead, most previous works apply standard CNN architectures equipped with recurrent modules as a backbone [10, 15, 30, 39], resulting in the same latency levels as frame-based models. Another line of research applies Spiking Neural Networks (SNNs) to event data. However, SNNs suffer from low accuracy due to the lack of established training principles [40, 41] or high latency due to long simulation time steps [44]. We need backbone architectures that best exploit the low latency and high temporal resolution of event data.
To this end, we propose a Hierarchical Neural Memory Network (HMNet) for low latency event processing. The key idea is to encode the scene contents at a proper temporal scale depending on their speed. For this purpose, the proposed network builds a multi-level latent memory with different operating rates (Fig. 1). The low-level memories run fast to quickly encode local and dynamic information, whereas high-level memories perform global reasoning on static information at a lower frequency. This design significantly reduces the computational load in contrast to conventional methods that run the entire forward path every time. The paper also proposes an Event Sparse Cross Attention (ESCA) that directly injects sparse event streams into dense memory cells with minimal information loss. We conduct extensive evaluations on three event-based vision tasks (object detection, semantic segmentation, and monocular depth estimation) as well as an event-image fusion task. Experimental results show that HMNet outperforms existing methods while reducing latency by 40%-50%. |
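The multi-rate memory hierarchy can be pictured with a short sketch: low levels update every step, higher levels update less often, and information propagates from the fast to the slow memories. The update rates, feature shapes, and GRU cells used here are illustrative assumptions rather than HMNet's actual architecture.

```python
import torch
import torch.nn as nn

class HierarchicalMemory(nn.Module):
    """Stacked latent memories operating at different rates (illustrative sketch)."""
    def __init__(self, dim=128, rates=(1, 3, 9)):
        super().__init__()
        self.rates = rates                                   # level k updates every rates[k] steps
        self.cells = nn.ModuleList(nn.GRUCell(dim, dim) for _ in rates)
        self.state = None

    def forward(self, event_feat, step):
        # event_feat: (B, dim) encoding of the newest event slice.
        if self.state is None:
            self.state = [torch.zeros_like(event_feat) for _ in self.rates]
        x = event_feat
        for k, (cell, rate) in enumerate(zip(self.cells, self.rates)):
            if step % rate == 0:                             # slow, high-level memories skip most steps
                self.state[k] = cell(x, self.state[k])
            x = self.state[k]                                # propagate information fast -> slow
        return self.state                                    # per-level latent memories for the task heads
```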
Barath_Finding_Geometric_Models_by_Clustering_in_the_Consensus_Space_CVPR_2023 | Abstract We propose a new algorithm for finding an unknown number of geometric models, e.g., homographies. The problem is formalized as finding dominant model instances progressively without forming crisp point-to-model assign-ments. Dominant instances are found via a RANSAC-like sampling and a consolidation process driven by a model quality function considering previously proposed instances. New ones are found by clustering in the consensus space. This new formulation leads to a simple iterative algorithm with state-of-the-art accuracy while running in real-time on a number of vision problems – at least two orders of magnitude faster than the competitors on two-view motion estimation. Also, we propose a deterministic sampler re-flecting the fact that real-world data tend to form spatially coherent structures. The sampler returns connected com-ponents in a progressively densified neighborhood-graph. We present a number of applications where the use of mul-tiple geometric models improves accuracy. These include pose estimation from multiple generalized homographies; trajectory estimation of fast-moving objects; and we also propose a way of using multiple homographies in global SfM algorithms. Source code: https://github.com/ danini/clustering-in-consensus-space . | 1. Introduction Robust multi-instance model fitting is the problem of in-terpreting a set of data points as a mixture of noisy ob-servations stemming from multiple instances of geometric models. Examples for such a problem are the estimation of plane-to-plane correspondences ( i.e., homography matri-ces) in two images, and the retrieval of rigid motions in a dynamic scene captured by a moving camera. In the state-of-the-art algorithms, finding an unknown number of model instances is achieved by clustering the data points into dis-joint sets, each representing a particular model instance. Robustness is achieved by considering an outlier model. Figure 1. Multi-homography fitting with the proposed method in 0.04secs (left), and with Prog-X [4] in 1.48secs (right). Prog-X is one of the fastest SOTA algorithm. Outliers are not drawn. Multi-instance model fitting has been studied since the early sixties. The Hough-transform [22, 23] is perhaps the first popular method for finding multiple instances of a sin-gle class [18, 37, 44, 67]. The RANSAC [16] algorithm was as well extended to deal with finding multiple instances. Se-quential RANSAC [25,60] detects instances in a sequential manner by repeatedly running RANSAC to recover a sin-gle instance and, then, removing its inliers from the point set. The greedy approach that makes RANSAC a powerful tool for recovering a single instance becomes its drawback when estimating multiple ones. Points are assigned not to the best but to the first instance, typically the one with the largest support, for which they cannot be deemed outliers. MultiRANSAC [71] forms compound hypotheses about n instances. In each iteration, MultiRANSAC draws samples of size ntimes m, where mis the number of points required for estimating a model instance, e.g.,m= 4 for homogra-phies. Besides requiring the number nof the instances to be known a priori, the increased sample size affects the prob-lem complexity and, thus, the processing time severely. Modern approaches for multi-model fitting [1, 3, 24, 32– 34, 40, 62, 66] follow a two-step procedure. 
First, they gen-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 5414 Figure 2. Left: A case when assigning points to a single line (by color) prevents finding all 9visible instances. Dashed black lines are not recovered. When fitting planes to 4out of the 7points, only a single plane can be found. Middle, Right : Examples where the point-to-model assignment fails at the intersection of planes. erate many instances by repeatedly selecting minimal point sets and fitting model instances. Second, a subset of the hypotheses is selected interpreting the input data points the most. This selection is done in various ways. For instance, a popular group of methods [1,3,24,40] optimizes point-to-model assignments by energy minimization using graph la-beling techniques [8]. The energy originates from point-to-model residuals, label costs [14], and geometric priors [40] such as the spatial coherence of the data points. Another group of methods uses preference analysis based on the dis-tribution of the residuals of data points [32–34, 68]. Also, there are techniques [62, 63, 69] approaching the problem as hyper-graph partitioning where the instances are repre-sented by vertices, and the points by hyper-edges. Prog-X [4] and CONSAC [26] discussed that the first, instance generation, step of the mentioned methods leads to a number of issues, e.g., the instances are generated blindly, having no information about the data at hand. This ap-proach severely restricts the out-of-the-box applicability of such techniques since the user either has to consider the worst-case scenario and, thus, generate an unnecessarily high number of instances; or requires some rule of thumb, e.g., to generate twice the point number hypotheses that pro-vides no guarantees of finding the sought instances. Prog-X approaches the problem via interleaving the model proposal and optimization steps. CONSAC further improves it by us-ing a deep-learning-based guided sampling approach. A common point of allstate-of-the-art algorithms is for-malizing the multi-model fitting problem as finding dis-joint sets of data points each representing a model instance. There are two main practical issues with this assumption. First, in some cases, a point belongs to multiple instances and this assumption renders the problem unsolvable , see the left image of Fig. 2. Also, the point-to-model assignment is often unclear even if it is done by a human, especially, for points around the intersection of instances, see the right two plots of Fig. 2 for examples. The second issue stems from the recovery of disjoint point sets that usually requires a rather complex procedure, e.g. labeling via energy mini-mization, that affects the run-time severely. The main contribution of this paper is a fundamentallynew problem formulation that does not require forming crisp point-to-model assignments, i.e., a point can be as-signed to multiple instances. This is different from the for-mulations used in the state-of-the-art algorithms for general multi-model fitting [1, 3, 4, 24, 26, 34, 40, 63]. This prop-erty allows the proposed method to be a simple iterative algorithm and, yet, to obtain results superior to the state-of-the-art both in terms of accuracy and run-time, being real-time on a number of problems, see Fig. 
1, including ones where multi-model fitting algorithms generally are notreal-time, e.g., two-view motion detection. Also, this assump-tion relaxes the greedy nature of sequential algorithms as the ordering in which the instances are proposed becomes unimportant. As the second contribution, we discuss ways of exploiting multiple instances in popular applications – Structure-from-Motion, pose estimation for generalized and pin-hole cameras, and trajectory estimation of fast-moving objects. By considering multiple models, the accuracy is increased in almost all cases on several publicly available real-world datasets. As the third contribution, we propose a new sampler designed specifically for multi-instance model fitting. The sampler considers that real-world data tend to form spatially coherent structures. It returns the connected components in a gradually densified neighborhood-graph. While several samplers exist that exploit spatial properties of the data, e.g. [6, 38], the proposed one is deterministic . |
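The proposed sampler can be sketched as follows: build a neighborhood graph over the data points, densify it progressively (here, by growing a radius), and propose each sufficiently large connected component as a candidate minimal sample. The radius schedule and the SciPy-based implementation are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def progressive_components(points, radii, min_size=4):
    """Yield connected components of a progressively densified neighborhood graph."""
    tree = cKDTree(points)
    n = len(points)
    for r in radii:                                    # increasing radius densifies the graph
        pairs = np.array(list(tree.query_pairs(r)), dtype=int).reshape(-1, 2)
        adj = csr_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n))
        _, labels = connected_components(adj, directed=False)
        for c in np.unique(labels):
            idx = np.flatnonzero(labels == c)
            if len(idx) >= min_size:                   # enough points to fit one model instance
                yield r, idx                           # e.g., min_size=4 for a homography

# usage (hypothetical): for r, idx in progressive_components(xy, np.linspace(0.02, 0.2, 10)): ...
```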
Feng_Mutual_Information-Based_Temporal_Difference_Learning_for_Human_Pose_Estimation_in_CVPR_2023 | Abstract Temporal modeling is crucial for multi-frame human pose estimation. Most existing methods directly employ optical flow or deformable convolution to predict full-spectrum motion fields, which might incur numerous irrele-vant cues, such as a nearby person or background. Without further efforts to excavate meaningful motion priors, their results are suboptimal, especially in complicated spatio-temporal interactions. On the other hand, the temporal difference has the ability to encode representative motion information which can potentially be valuable for pose es-timation but has not been fully exploited. In this paper, we present a novel multi-frame human pose estimation frame-work, which employs temporal differences across frames to model dynamic contexts and engages mutual information objectively to facilitate useful motion information disen-tanglement. To be specific, we design a multi-stage Tem-poral Difference Encoder that performs incremental cas-caded learning conditioned on multi-stage feature differ-ence sequences to derive informative motion representa-tion. We further propose a Representation Disentanglement module from the mutual information perspective, which can grasp discriminative task-relevant motion signals by explic-itly defining useful and noisy constituents of the raw motion features and minimizing their mutual information. These place us to rank No.1 in the Crowd Pose Estimation in Com-plex Events Challenge on benchmark dataset HiEve, and achieve state-of-the-art performance on three benchmarks PoseTrack2017, PoseTrack2018, and PoseTrack21. | 1. Introduction Human pose estimation has long been a nontrivial and fundamental problem in the computer vision community. *Corresponding Author Figure 1. Directly leveraging optical flow can be distracted by irrelevant clues such as background and blur (a), and sometimes fails in scenarios with fast motion and mutual occlusion (b). Our proposed framework proceeds with temporal difference encoding and useful information disentanglement to capture more tailored temporal dynamics (c), yielding more robust pose estimations (d). The goal is to localize anatomical keypoints ( e.g., nose, an-kle, etc.) of human bodies from images or videos. Nowa-days, as more and more videos are recorded endlessly, video-based human pose estimation has been extremely de-sired in enormous applications including live streaming, augmented reality, surveillance, and movement tracking [14, 15, 24, 35, 44]. An extensive body of literature focuses on estimating human poses in static images , ranging from earlier meth-ods employing pictorial structure models [43, 51, 55, 69] to recent attempts leveraging deep convolutional neural net-works [35, 49, 56, 58] or Vision Transformers [32, 61, 65]. Despite the impressive performance in still images, the ex-tension of such methods to video-based human pose estima-tion still remains challenging due to the additional temporal dimension in videos [34, 57]. By nature, the video presents distinctive and valuable dynamic contexts (i.e., the tempo-ral evolution in the visual content) [71]. Therefore, being able to effectively utilize the temporal dynamics (motion information) is fundamentally important for accurate pose This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. 
Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 17131 estimation in videos [35]. One line of work [35, 37, 54] attempts to derive a uni-fied spatial-temporal representation through implicit motion compensation. [54] presents a 3DHRNet which utilizes 3D convolutions to extract spatiotemporal features of a video tracklet to estimate pose sequences. [35] adopts deformable convolutions to align multi-frame features and aggregates aligned feature maps to predict human poses. On the other hand, [40,45,67] explicitly model motion contexts with op-tical flow. [40, 45] propose to compute dense optical flow between every two frames and leverage the flow features for refining pose heatmaps temporally across multiple frames. Upon studying the previous methods [34, 35, 40, 45], we empirically observe that the pose estimation performance is boosted with the implicit or explicit imposition of mo-tion priors. However, the movement of any visual evidence is usually attended to in these paradigms, resulting in clut-tered motion features that include numerous irrelevant in-formation ( e.g., nearby person, background), as illustrated in Fig. 1. Directly exploiting such vanilla motion features delivers inferior results, especially in complex scenarios of mutual occlusion and fast motion. More specifically, not all pixel movements are equally important in video-based hu-man pose estimation [66]. For example, background vari-ations and pixel changes caused by image quality degrada-tion ( e.g., blur and occlusion) are usually useless and dis-tracting, whereas the salient pixel movements driven by human body motions play a more important role in un-derstanding motion patterns [21]. Therefore, discovering meaningful motion dynamics is crucial to fully recovering human poses across a video. On the other hand, investigat-ing temporal differences across video frames allows one to discover representative motion cues [25, 52, 59]. Although it has already shown success in various video-related tasks (action recognition [52], video super-resolution [24]), its application on video-based human pose estimation remains under-explored. In this paper, we present a novel framework, named Temporal Difference Learning based on Mutual Information (TDMI) for human pose estimation. Our TDMI consists of two key components: (i)A multi-stage Temporal Difference Encoder (TDE) is designed to model motion contexts conditioned on multi-stage feature differences among video frames. Specifically, we first compute the feature difference sequences across multiple stages by leveraging a temporal difference operator. Then, we perform incremental cascaded learning via intra-and inter-stage feature integration to derive the motion representation. (ii)We further introduce a Representation Disentanglement module (RDM) from the mutual infor-mation perspective, which distills the task-relevant motion features to enhance the frame representation for pose estimation. In particular, we first disentangle the usefuland noisy constituents of the vanilla motion representation by activating corresponding feature channels. Then, we theoretically analyze the statistical dependencies between the useful and the noisy motion features and arrive at an information-theoretic loss. Minimizing this mutual information objective encourages the useful motion com-ponents to be more discriminative and task-relevant. 
Our approach achieves significant and consistent performance improvements over current state-of-the-art methods on four benchmark datasets. Extensive ablation studies are conducted to validate the efficacy of each component in the proposed method. The main contributions of this work can be summarized as follows: (1) We propose a novel framework that leverages temporal differences to model dynamic contexts for video-based human pose estimation. (2) We present a disentangled representation learning strategy to grasp discriminative task-relevant motion signals via an information-theoretic objective. (3) We demonstrate that our approach achieves new state-of-the-art results on four benchmark datasets, PoseTrack2017, PoseTrack2018, PoseTrack21, and HiEve. |
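A minimal sketch of the temporal-difference idea described above: compute per-stage feature differences between neighboring frames and encode them into stage-wise motion features. The plain per-stage convolution stands in for the paper's incremental cascaded intra-/inter-stage integration, the mutual-information disentanglement is not reproduced, and the channel sizes are assumed.

```python
import torch
import torch.nn as nn

class TemporalDifferenceEncoder(nn.Module):
    """Encode motion cues from multi-stage feature differences (illustrative)."""
    def __init__(self, channels=(64, 128, 256)):
        super().__init__()
        self.fuse = nn.ModuleList(nn.Conv2d(c, c, 3, padding=1) for c in channels)

    def forward(self, feats_t, feats_prev):
        # feats_t / feats_prev: lists of per-stage feature maps for frame t and frame t-1.
        motion = []
        for conv, f_t, f_p in zip(self.fuse, feats_t, feats_prev):
            diff = f_t - f_p                 # temporal difference at this stage
            motion.append(conv(diff))        # stage-wise motion encoding
        return motion                        # to be aggregated across stages downstream
```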
Beyer_FlexiViT_One_Model_for_All_Patch_Sizes_CVPR_2023 | Abstract Vision Transformers convert images to sequences by slicing them into patches. The size of these patches controls a speed/accuracy tradeoff, with smaller patches leading to higher accuracy at greater computational cost, but changing the patch size typically requires retraining the model. In this paper, we demonstrate that simply randomizing the patch size at training time leads to a single set of weights that performs well across a wide range of patch sizes, making it possible to tailor the model to different compute budgets at deployment time. We extensively evaluate the resulting model, which we call FlexiViT, on a wide range of tasks, including classification, image-text retrieval, open-world detection, panoptic segmentation, and semantic segmentation, concluding that it usually matches, and sometimes outperforms, standard ViT models trained at a single patch size in an otherwise identical setup. Hence, FlexiViT training is a simple drop-in improvement for ViT that makes it easy to add compute-adaptive capabilities to most models relying on a ViT backbone architecture. Code and pretrained models are available at github.com/google-research/big_vision. *All authors made significant technical contributions. Lucas started and led the project. 1Google Research, Brain Team. 2Google Research. 3Work done at Google Brain, while being a PhD student at NYU. Figure 2. FlexiViT results on ImageNet-1k (ImageNet-1k val accuracy [%] vs. inference speed [ms/img] on TPUv3, for FlexiViT-S/B/L, DeiT III, ResNet50, and EfficientNetV2). We train three FlexiViTs based on DeiT III on ImageNet-1k and show their speed-accuracy tradeoff when evaluated at various patch sizes. Each curve corresponds to a single model with a single set of weights run with different patch sizes. We also evaluate DeiT III at various patch sizes to show ViT's natural inflexibility. EfficientNet-v2 numbers from [52] and ResNet50 numbers from [5]; the latter distills from an ImageNet-21k pretrained teacher. FlexiViT was trained for 1000 epochs, but runs for 600, 300, and 90 epochs, shown as shaded curves, indicate that long training mostly benefits the short-sequence setting and is not strictly necessary. | 1. Introduction Vision Transformers (ViTs) cut images into non-overlapping patches and perform all computations on tokens created from these patches. This "patchification" procedure represents a significant shift away from the previously dominant convolutional neural network (CNN) approach [32], where an image is processed with small local and typically overlapping filters. Patchification has unlocked new capabilities, such as (random) dropping of image patch tokens [10, 20, 44, 53, 61], adding specialized tokens for new tasks [54, 56] or mixing image tokens with tokens from other modalities [1, 38, 64]. Despite the importance of patchification for ViT models, the role of the patch size has received little attention. While the original ViT paper [15] works with three patch sizes (32×32, 16×16, and 14×14 pixels), many follow-up works fix the patch size at 16×16 pixels [54,55,65].
In this work, we show that the patch size provides a simple and effective lever to change the compute and predictive performance of a model, without changing the model parametrization. For example, a ViT-B/8 model achieves 85.6% top-1 accuracy on ImageNet1k with 156 GFLOPs and 85 M parameters, while a ViT-B/32 model achieves only 79.1% accuracy with 8.6 GFLOPs and 87 M parameters. Despite the major difference in performance and compute, these models have essentially the same parametrization. However, standard ViT models perform well only at the patch size that they have been trained at. Tuning the patch size therefore requires complete re-training of the model. To overcome this limitation, we propose FlexiViT, a flexible ViT which matches or outperforms standard fixed-patch ViTs across a wide range of patch sizes with no added cost. To train FlexiViT, we randomize the patch size during training, and resize the positional and patch embedding parameters adaptively for each patch size, as shown in Figure 1. These simple modifications are already sufficient for strong performance, but we also propose an optimized resizing operation and a training procedure based on knowledge distillation which achieves even better results. We demonstrate the efficiency of FlexiViT models in many downstream tasks, such as image classification, transfer learning, panoptic and semantic segmentation, image-text retrieval and open-world recognition, and provide a general recipe for flexifying existing ViT-based training setups. Furthermore, we show that the flexibility of the backbone, i.e. strong performance across patch sizes, is often preserved even after fine-tuning with a fixed patch size. We leverage this observation to perform resource-efficient transfer learning: we finetune the model cheaply with a large patch size, but then deploy it with a small patch size for strong downstream performance. We further show that flexible patch size can be used to accelerate pre-training. To explain the effectiveness of FlexiViT, we analyze the model's representations. We find that the representations are often similar across different patch sizes, especially in the deeper layers. Finally, we show that FlexiViT outperforms alternative architectural ways of controlling the performance-compute trade-off in ViT models. |
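The core training trick can be sketched in a few lines: pick a patch size at random each step and resize the patch-embedding filters and position embeddings to match. Plain bilinear resizing is used here instead of the paper's optimized resizing operation, and the `vit.patch_embed_weight`, `vit.pos_embed`, and `vit.encoder` attributes are assumed names, so treat this as an approximation of the idea rather than the reference implementation.

```python
import random
import torch
import torch.nn.functional as F

def resize_patch_embed(weight, p):
    # weight: (D, 3, p0, p0) patch-embedding filters trained at a base patch size p0.
    # Plain bilinear resize; the paper proposes an optimized (pseudo-inverse style) resize.
    return F.interpolate(weight, size=(p, p), mode="bilinear", align_corners=False)

def resize_pos_embed(pos, g):
    # pos: (1, g0*g0, D) learned position embeddings on a g0 x g0 token grid.
    g0, d = int(pos.shape[1] ** 0.5), pos.shape[2]
    pos = pos.reshape(1, g0, g0, d).permute(0, 3, 1, 2)
    pos = F.interpolate(pos, size=(g, g), mode="bilinear", align_corners=False)
    return pos.permute(0, 2, 3, 1).reshape(1, g * g, d)

def training_step(vit, images, patch_sizes=(8, 12, 16, 24, 30, 48)):
    # assumes square inputs whose side is divisible by every patch size (e.g., 240).
    p = random.choice(patch_sizes)                      # randomize the patch size per step
    w = resize_patch_embed(vit.patch_embed_weight, p)
    tokens = F.conv2d(images, w, stride=p)              # patchify with the resized filters
    b, d, g, _ = tokens.shape
    tokens = tokens.flatten(2).transpose(1, 2)          # (B, g*g, D)
    tokens = tokens + resize_pos_embed(vit.pos_embed, g)
    return vit.encoder(tokens)                          # transformer blocks run unchanged
```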
Cai_RIAV-MVS_Recurrent-Indexing_an_Asymmetric_Volume_for_Multi-View_Stereo_CVPR_2023 | Abstract This paper presents a learning-based method for multi-view depth estimation from posed images. Our core idea is a “learning-to-optimize” paradigm that iteratively indexes a plane-sweeping cost volume and regresses the depth map via a convolutional Gated Recurrent Unit (GRU). Since the cost volume plays a paramount role in encoding the multi-view geometry, we aim to improve its construction both at pixel-and frame-levels. At the pixel level, we propose to break the symmetry of the Siamese network (which is typi-cally used in MVS to extract image features) by introducing a transformer block to the reference image (but not to the source images). Such an asymmetric volume allows the net-work to extract global features from the reference image to predict its depth map. Given potential inaccuracies in the poses between reference and source images, we propose to incorporate a residual pose network to correct the relative poses. This essentially rectifies the cost volume at the frame level. We conduct extensive experiments on real-world MVS datasets and show that our method achieves state-of-the-art performance in terms of both within-dataset evaluation and cross-dataset generalization. | 1. Introduction Multi-view stereo (MVS) aims to recover dense 3D ge-ometry from multiple images captured from different view-points with calibrated cameras [28]. It is a fundamen-tal problem in computer vision and has wide applications ranging from autonomous driving [12, 55], remote sens-ing [3], augmented reality [50], to robotics [22]. Follow-ing the seminal MVSNet [59], many learning-based meth-ods [17, 39, 40, 52, 53, 58, 60] have been proposed, achiev-ing great improvements against their traditional counter-parts [5, 14, 19, 44], in terms of accuracy or efficiency. Most of the learning-based MVS methods [17,39,40,52, 58,60] rely on traditional plane-sweeping [14,19] approach to generate a cost volume by comparing the CNN features of reference image and source images at several depth hy-potheses, and then apply 2D or 3D convolutional encoder-decoders to aggregate and regularize the cost volume. The2D CNN methods [17] use multi-level features as skip con-nections to help decode the cost volume for depth regres-sion. Even though the skip connections improve the depth maps, they weaken the role of cost volume and the geome-try knowledge embedded therein to some extent. Hence, 2D CNN methods suffer from degraded generalization when testing on unseen domains. The 3D CNN methods [31] usesoft-argmin to regress the depth map as the expectation from the cost volume distribution, and hence cannot predict the best candidate but instead an averaged one when dealing with a flat or multi-modal distribution caused by textureless, repeated, or occluded regions, etc. To mitigate these prob-lems, we propose RIA V-MVS, a new paradigm to predict the depth via learning to recurrently index an asymmetric cost volume, obtaining improved accuracy and generaliza-tion. As depicted in Fig. 1, our RIA V-MVS features several nontrivial novel designs. First, we learn to index the cost volume by approach-ing the correct depth planes per pixel via an index field (a grid of indices to identify the depth hypotheses), as shown in Fig. 1-(e). The proposed recurrent estimate of the index field enables the learning to be anchored at the cost volume domain. 
Specifically, it recurrently predicts the residual index field in a descent direction of the matching cost to retrieve cost values for the next iteration. The newly updated index field is used to directly index (i.e., sample via linear interpolation) depth hypotheses to render a depth map, which is iteratively optimized to approach the ground truth depth, making the system end-to-end trainable. Second, to facilitate the optimization, we propose to improve the cost volume at the pixel and frame levels, respectively. At the pixel level, a transformer block is asymmetrically applied to the reference view (but not to the source views). By capturing long-range global context via a transformer and pixel-wise local features via CNNs, we build an asymmetric cost volume to store more accurate matching similarity cues. At the frame level, we propose a residual pose net to rectify the camera poses that are usually obtained via Visual SLAM [9, 16, 30] and inevitably contain noise. The rectified poses are used to more accurately backward warp the reference features to match the counterparts in source views. Figure 1. Our pipeline versus RAFT [49] and IterMVS [52]. Our recurrent processing of a plane-sweep cost volume by the iteratively refined index field serves as a new design for multi-view depth estimation. Our RIAV-MVS is depicted versus two related works, RAFT [49] and IterMVS [52], in Fig. 1. First, our method is developed using RAFT's GRU-based iterative optimization. However, while RAFT operates on an all-pair correlation volume (no multi-view geometry constraints) for optical flow (Fig. 1-(a) and (c)), our method is proposed for multi-view depth estimation by constructing a plane-sweep cost volume (Fig. 1-(b)). Second, IterMVS [52] iteratively predicts the depth and reconstructs a new plane-sweep cost volume using updated depth planes centered at the predicted depth (Fig. 1-(d)). Instead, as shown in Fig. 1-(e), our proposed index field serves as a new design that bridges the cost volume optimization (i.e., by learning better image features via back-propagation) and the depth map estimation (i.e., by sampling sweeping planes). It makes forward and backward learning differentiable. We conduct extensive experiments on indoor-scene datasets, including ScanNet [15], DTU [27], 7-Scenes [20], and RGB-D Scenes V2 [32]. We also perform well-designed ablation studies to verify the effectiveness and the generalization of our approach. |
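The index-field idea reduces to one differentiable lookup: a per-pixel continuous index selects between neighboring depth hypotheses of the plane-sweep volume via linear interpolation. The sketch below shows only that conversion; the variable names are ours and the GRU that iteratively refines the index field is omitted.

```python
import torch

def index_to_depth(index_field, depth_hyps):
    """
    index_field: (B, H, W) continuous indices in [0, D-1] predicted by the recurrent update.
    depth_hyps:  (D,) plane-sweep depth hypotheses.
    Returns a per-pixel depth map by linearly interpolating neighboring hypotheses.
    """
    d = depth_hyps.shape[0]
    idx = index_field.clamp(0, d - 1)
    lo = idx.floor().long().clamp(max=d - 2)       # lower hypothesis index
    frac = idx - lo.float()                        # fractional offset in [0, 1)
    depth = (1.0 - frac) * depth_hyps[lo] + frac * depth_hyps[lo + 1]
    return depth                                   # (B, H, W), differentiable w.r.t. index_field

# usage (hypothetical hypotheses): depth = index_to_depth(index_field, torch.linspace(0.25, 5.0, 64))
```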
Huang_Style_Projected_Clustering_for_Domain_Generalized_Semantic_Segmentation_CVPR_2023 | Abstract Existing semantic segmentation methods improve generalization capability by regularizing various images to a canonical feature space. While this process contributes to generalization, it inevitably weakens the representation. In contrast to existing methods, we instead utilize the difference between images to build a better representation space, where the distinct style features are extracted and stored as the bases of representation. Then, the generalization to unseen image styles is achieved by projecting features to this known space. Specifically, we realize the style projection as a weighted combination of stored bases, where the similarity distances are adopted as the weighting factors. Based on the same concept, we extend this process to the decision part of the model and promote the generalization of semantic prediction. By measuring the similarity distances to semantic bases (i.e., prototypes), we replace the common deterministic prediction with semantic clustering. Comprehensive experiments demonstrate the advantage of the proposed method over the state of the art, with up to 3.6% mIoU improvement on average on unseen scenarios. Code and models are available at https://gitee.com/mindspore/models/tree/master/research/cv/SPC-Net. | 1. Introduction Domain generalization methods aim to promote the performance of a model (trained on source datasets) when applying it to unseen scenarios (target domains) [9, 19, 29, 36, 62, 74, 75]. Recently, domain generalization for semantic segmentation (DGSS) has attracted increasingly more attention due to the rise of safety-critical applications, such as autonomous driving [3, 12, 22, 45]. *This work was done during W. Huang's internship at Noah's Ark Lab. †Corresponding author. Figure 1. Illustration of instance normalization/whitening (IN/IW) [5, 20, 40] and our proposed style projected clustering method ((a) IN, (b) IW, (c) IN&IW, (d) Ours). IN and IW regularize image features from different domains to a canonical space (a-c). Our method builds style and semantic representation spaces based on the data from known domains (d). Existing DGSS methods improve the pixel-wise generalization performance by learning domain-agnostic representations [5, 16, 20, 25, 40, 42, 66, 72]. Research in this line shares a similar goal in general, that is, to capture the domain-invariant characteristics of object contents and eliminate the domain-specific ones (i.e., image styles). As two representatives, Instance Normalization (IN) [56] and Instance Whitening (IW) [17] regularize image features from different domains to a canonical space, as illustrated in Fig. 1(a) and 1(b). Specifically, IN achieves center-level feature alignment via channel-wise feature normalization [33,40], and IW realizes a uniform feature distribution by removing linear correlation between channels [5,41]. Moreover, the combination of these two methods is proposed in [42] for a better generalization, as shown in Fig. 1(c). Nevertheless, feature regularization inevitably weakens the representation capability, as a part of the feature information is eliminated.
Theoretically, it works under the strong assumption that the eliminated information is strictly domain-specific. Yet in practice, the perfect disentanglement between image style and content is difficult to achieve. It means that a part of the content features will also be eliminated in the process of feature regularization, which thus degrades the segmentation performance. Instead of seeking common ground by feature regularization, we aim to address DGSS in a different way. In this paper, we propose style projection as an alternative, which utilizes the features from different domains as bases to build a better representation space, as shown in Fig. 1(d). The motivation of style projection comes from a basic concept of generalization, that is, to represent unseen data based on the known ones. Specifically, following the common practice, we adopt the statistics (i.e., mean and variance) of features in the channel dimension to represent image styles. The image styles from source domains are iteratively extracted and stored as the bases of representation. Then, we project the style of given unseen images into this representation space to promote generalization. This projection process is implemented as a weighted combination of stored style bases, where the similarity distances between styles are adopted as the weighting factors, i.e., λ1 and λ2 shown in Fig. 1(d). Based on the projected style features, we further devise the decision part of the model, which is elaborated for semantic segmentation.
To reduce the burden of pixel-wise annotations on target domains, do-main adaptation (DA) technologies are proposed to narrow the domain gap between source and target domains via im-age translation [14, 24, 37], feature alignment [55, 60, 61], self-training [2, 39, 77] and meta-learning [13, 34] strate-gies. However, these DA methods require the access of data on target domains. Domain generalization (DG) aims to address a more practical problem where the target do-main cannot be accessed. Numerous DG works have been proposed for image classification via style augmentation [19, 59, 68, 75], domain alignment [29, 31], feature disen-tanglement [27, 44] and meta-learning [9, 26, 28]. Domain generalization for semantic segmentation. Sim-ilar to image classification, DG for semantic segmentation (DGSS) methods are proposed to learn domain-agnostic representations, including style augmentation [16, 25, 43, 72], feature normalization/whitening [5, 40, 42, 66] and meta-learning [20]. To avoid overfitting on source domains, DRPC [72] and FSDR [16] adopt style augmentations in the image space to extend the number of source samples, while WildNet [25] realizes it in the feature space with the aid of ImageNet [8]. Alternatively, normalization and whitening are investigated to achieve distribution alignment between different domains. IBN-Net [40] and RobustNet [5] adopt instance normalization and whitening, respectively, to elim-inate the specific style information of each domain. Further-more, SAN-SAW [42] proposes semantic-aware instance normalization and whitening to enhance the distinguishabil-ity between classes. In addition, PintheMem [20] combines the memory-guided network with the meta-learning strat-egy and obtains competitive performances. Different from these DGSS methods, our method embraces the differences from multiple known domains and takes advantage of their diversity to build a better representation space, realizing the representation of unseen images by the known data. Prototype learning. Inspired by the cognitive psychology that human use the knowledge learned in the past to judge the class of unknown things [51,69], prototype-based classi-fication methods have attracted increasing attention, where the class of unknown images is determined by its nearest neighbors in the feature space [7, 10]. Owing to its excel-lent interpretability and generalizability, prototype learning shows good potential in many fields, such as few-shot learn-ing [1, 52], zero-shot learning [67, 71], unsupervised learn-ing [30,65]. Recently, prototype learning is also introduced in the dense prediction task, including supervised [76], few-shot [54,63] and domain adaptive [53,73] semantic segmen-tation. To facilitate the learning of prototypes, metric learn-ing [23,50,64] is often adopted to pull samples belonging to the same class together and push those of different classes away from each other in the embedding ( i.e., feature) space. 
3062 Shallow feature 𝐹𝐹𝑚𝑚𝑠𝑠Projected feature 𝐹𝐹𝑚𝑚𝑟𝑟Deep feature 𝐹𝐹𝑚𝑚𝑑𝑑 Style projection Semantic clustering 𝐹𝐹𝑚𝑚𝑠𝑠 𝐹𝐹𝑚𝑚𝑟𝑟𝐹𝐹𝑚𝑚𝑛𝑛 (𝜇𝜇𝑚𝑚,𝜎𝜎𝑚𝑚)Style extraction Normalization (Eq.2)(Eq.1) Projection (Eq.6) (𝜇𝜇𝑚𝑚′,𝜎𝜎𝑚𝑚′)⋯ Style bases (𝑝𝑝𝑚𝑚𝜇𝜇,𝑝𝑝𝑚𝑚𝜎𝜎)(𝜆𝜆1,𝜆𝜆2,…,𝜆𝜆𝑚𝑚) Weighed sum (Eq.5)SimilarityImagesConvolution ConvolutionPredictions (Eq.3,4)Momentum update (Eq.7) 𝐹𝐹𝑚𝑚𝑑𝑑⋯Semantic bases (𝑝𝑝𝑚𝑚𝑐𝑐) Embedding clustering ℒ𝑣𝑣𝑣𝑣𝑟𝑟(Eq.11)Momentum update (Eq.13)ℒ𝐶𝐶𝐶𝐶 (Eq.10) MaskSimilarity (Eq.9) GTPred : Different classes : Pixel embeddings 𝑒𝑒 𝑚𝑚𝑐𝑐Mean : Mean embeddings ̅𝑒𝑒𝑚𝑚𝑐𝑐: Push : Pull : FeatureSemantic clusteringStyle projection : Only for trainingℒ𝑑𝑑𝑑𝑑𝑠𝑠(Eq.12)DiscriminationFigure 2. The framework of style projected clustering, which consists of two components, i.e., style projection and semantic clustering. We iteratively extract the style and semantic information of seen domains as style bases (pµ m, pσ m)and semantic bases pc m. In style projection, we first calculate the similarity between the unseen style (µm, σm)from the shallow feature Fs mand style bases (pµ m, pσ m)as weighted factors λm. Then, the weighted combination of style bases (µ′ m, σ′ m)is projected on Fn mto obtain the projected feature Fr m. In semantic clustering, we calculate the similarity between pixel embeddings in the deep feature Fd mand semantic bases pc m. Then, the class of each pixel is determined by the nearest semantic base. During the training phase, the cross-entropy loss LCE, variance loss Lvarand discrimination loss Ldisare adopted to supervise the learning of style and semantic bases. Similar to these methods, we adopt the form of prototypes (i.e. bases) to represent semantics. Yet these semantic bases are learned in a different way to facilitate domain general-ization, by using a new variant of contrastive loss. 3. Style Projected Clustering The overall architecture of our proposed method is de-picted in Fig. 2, which consists of two components, i.e., style projection and semantic clustering. In style projec-tion, we project the unseen style into the style representa-tion space built on style bases, according to the similarity between the unseen style and style bases. In semantic clus-tering, we estimate the similarity between pixel embeddings and semantic bases ( i.e., prototypes) to determine the class of pixels in unseen images by the nearest semantic base. 3.1. Problem Formulation In the domain generalized semantic segmentation prob-lem, we are given Msource domains S={S1,S2, ...,SM} that are from multiple datasets with different data distribu-tions. The m-th source domain Smcan be represented as Sm={(xm, ym)}, where xm∈RH×W×3is an image from the m-th source domain, ym∈RH×W×Cis the cor-responding pixel-wise label, Cis the number of semantic classes, HandWare the height and width of the image xm, respectively. In this work, our goal is to train a seman-tic segmentation model ϕto obtain the best generalization performance on multiple target domains Twhich cannot be accessed during the training phase.3.2. Style Projection The style d |
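Following the style-projection description above (Fig. 2), the operation can be sketched as: measure how close the unseen image's channel-wise statistics are to each stored style base, form a similarity-weighted combination of the bases, and re-stylize the normalized feature with it. The negative-distance softmax weighting and the temperature are illustrative choices, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def style_projection(feat, base_mu, base_sigma, tau=0.1, eps=1e-5):
    """
    feat:       (B, C, H, W) shallow feature of an unseen image.
    base_mu:    (M, C) stored channel-mean style bases from known source domains.
    base_sigma: (M, C) stored channel-std style bases.
    """
    mu = feat.mean(dim=(2, 3))                                # (B, C) unseen style: channel mean
    sigma = feat.std(dim=(2, 3)) + eps                        # (B, C) unseen style: channel std
    style = torch.cat([mu, sigma], dim=1)                     # (B, 2C)
    bases = torch.cat([base_mu, base_sigma], dim=1)           # (M, 2C)
    lam = F.softmax(-torch.cdist(style, bases) / tau, dim=1)  # similarity-based weights lambda_m
    mu_p, sigma_p = lam @ base_mu, lam @ base_sigma           # weighted combination of style bases
    normed = (feat - mu[:, :, None, None]) / sigma[:, :, None, None]
    return normed * sigma_p[:, :, None, None] + mu_p[:, :, None, None]   # projected feature
```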
Dogaru_Sphere-Guided_Training_of_Neural_Implicit_Surfaces_CVPR_2023 | Abstract
In recent years, neural distance functions trained via volumetric ray marching have been widely adopted for multi-view 3D reconstruction. These methods, however, apply the ray marching procedure for the entire scene volume, leading to reduced sampling efficiency and, as a result, lower reconstruction quality in the areas of high-frequency details. In this work, we address this problem via joint training of the implicit function and our new coarse sphere-based surface reconstruction. We use the coarse representation to efficiently exclude the empty volume of the scene from the volumetric ray marching procedure without additional forward passes of the neural surface network, which leads to an increased fidelity of the reconstructions compared to the base systems. We evaluate our approach by incorporating it into the training procedures of several implicit surface modeling methods and observe uniform improvements across both synthetic and real-world datasets. Our codebase can be accessed via the project page†. †https://andreeadogaru.github.io/SphereGuided

1. Introduction
The task of multi-view 3D reconstruction remains the focus of modern computer vision and graphics research. It has major practical significance in AR/VR metaverses, synthetic media, medical imaging, and the special effects industry. This task is classically addressed via multi-view stereo (MVS) reconstruction systems [3, 4, 6, 9, 10, 26, 38], which estimate the underlying scene geometry in the form of a point cloud using photometric consistency between the different views. However, in recent years they have been largely phased out by methods that represent the scene as neural implicit fields [5, 14, 16-19, 22, 29, 31-33, 35-37]. These approaches have multiple advantages compared to classical MVS. For example, they can easily accommodate non-Lambertian and texture-less surfaces [8], are good at interpolating unseen parts of the geometry by leveraging regularization [12], and at the same time can achieve an impressive quality of renders [2].

This work focuses on improving the subset of such methods specialized in opaque surface reconstruction [5, 22, 31, 35]. Most of these approaches employ neural signed distance fields (SDFs) [23] trained using volumetric ray marching [5, 31, 35]. The training step of this procedure contains two stochastic elements: sampling a ray corresponding to a training pixel and sampling a set of points along the ray to approximate the color integral. The sampling efficiency at these steps largely determines the resulting quality of the reconstructions. While in the abovementioned methods the training rays are selected uniformly within the scene volume, their point sampling procedure typically employs a multi-stage importance [31] or uncertainty [5, 35] sampling to improve the accuracy of the reconstructions. At the same time, it was shown [32] that neural signed distance fields benefit from surface-based sampling of rays for surface rendering methods, such as IDR [36], which the modern multi-view reconstruction systems do not incorporate.
Additionally, some of the novel-view synthe-sis works [16, 18, 37] successfully combined a simple two-stage coarse-to-fine sampling with explicitly defined surface guides to achieve a better rendering quality, as opposed to using the sophisticated multi-stage sampling procedures of the surface reconstruction methods. To guide the ray march-ing, they use explicit coarse surface approximations in the form of a set of volumetric primitives [18] or sparse oc-trees [16, 37]. However, these methods require a complete scene reconstruction to fit such an approximation [18, 37] or employ a heuristic optimization procedure [16] which we show performs poorly for the surface reconstruction task. Inspired by these approaches, we improve the existing surface reconstruction methods’ ray sampling and marching procedures using explicitly defined coarse representations. We propose training a coarse reconstruction as a sphere cloud which guides both sampling steps during volume rendering. We also propose a new optimization approach for coarse reconstruction based on gradient descent, which allows us to train it alongside the implicit surface field. Additionally, we introduce a point resampling scheme, which prevents the spheres from getting stuck in the local minima, and a repul-sion mechanism that ensures high degrees of exploration of the reconstructed surface. Finally, we provide empirical evi-dence of the proposed method’s applicability to different ap-proaches for implicit surface modeling. Specifically, we pair our method with several modern systems [5, 22, 31, 35] for surface reconstruction and observe uniform improvements in the resulting quality across multiple 3D reconstruction benchmarks. 2. Related works Implicit volumetric representations. Neural implicit representations have gained much attention in recent years for the problem of multi-view 3D surface reconstruction. Their widespread adoption started after several works have introduced training approaches based on the differentiable rendering of implicit functions. They initially relied on the surface rendering [21, 28, 36] procedure, where the pixel’s color is approximated using the radiance of a single pointin the volume. However, they were recently phased out by the training procedures based on the volume rendering with multiple samples via ray marching. Introduced in a semi-nal work on novel view synthesis, Neural Radiance Fields (NeRFs) [19], the volumetric ray marching has been later adapted [22,31,35] to the problem of surface modeling since it significantly improved the reconstruction quality. The ray marching procedure estimates the color along the ray using the volume rendering integral, approximated as a sum of the weighted radiances at multiple points throughout the volume. The aforementioned works employ methods based on importance [31], uncertainty [35] or surface intersection-based [22] sampling to obtain this set of points, increasing the approximation accuracy compared to more simple strate-gies, such as uniform sampling. We propose a new hybrid surface representation that im-proves ray marching by limiting the sampling space to a volume coarsely bounding the scene. This is used in con-junction with the ray marching mechanism of the base neural reconstruction method which further optimizes the selection of samples around the reconstructed surface. We also use this hybrid surface representation to guide the sampling of the training rays, improving the quality of reconstructions given the same training time. 
Hybrid representations. To improve the training efficiency and rendering frame rate, multiple hybrid representations [1,7,17,20,24,25,37] have been proposed, which jointly optimize the implicit and explicit representations. These methods employ point clouds [1, 24, 25], hash tables [20], sparse voxel grids [7, 17, 37], and volumetric primitives [14] to improve both the training and rendering procedures in terms of either their speed or the resulting quality. Below we discuss the methods that are most closely related to our approach. Iso-Points [32] introduced joint optimization of signed distance functions with a point-based surface representation. In our approach, we use a sphere-based representation, which allows us to sample both the rays and the points along these rays to lie near the surface, thus modifying both the ray-sampling and the ray-marching procedures. Closely related to our work are the Neural Sparse Voxel Fields [16] and Neural 3D Reconstruction in the Wild [30] systems. They both employ sparse voxel grids to guide the ray marching. However, the method in [16] uses a greedy optimization strategy to train these representations, which, as we show, results in an inferior reconstruction quality compared to our gradient-based training. Compared to [30], our method does not employ the initialization using a sparse point cloud, and it trains the guiding reconstruction from scratch.

3. Method
Our approach addresses a multi-view 3D reconstruction problem. The goal is to estimate the surface of a scene, denoted as S, given a collection of images with the associated camera parameters. In our case, this surface is extracted as a level set of the learned implicit representations: either a signed distance function (SDF) or an occupancy field. In this section, we begin by describing the volume rendering approach utilized by most state-of-the-art methods. Then, we show how a learnable sphere cloud S could be used to improve the volume rendering-based training process, and finally describe the optimization pipeline for the sphere cloud itself.

Figure 2. Our method works by filtering the samples along the ray that lie outside of the surface region, approximated by a trainable sphere cloud. Such filtering improves the sample efficiency in the optimization process and allows the implicit function to converge to a better optimum.

3.1. Volume rendering
We assume the underlying implicit model f to represent the geometry of the surface, and that there is a transformation that maps it to a surface density function σ: R^3 → R^+, defined at each point x in the volume. In order to render the surface defined by σ via volumetric rendering, we first need to consider a ray p(t) = o + tv, t ≥ 0, emanated from the camera origin o ∈ R^3 in the direction v ∈ S^2, and the corresponding color C(o, v) of the pixel on the image plane of that camera. We also need to define a radiance field c: R^3 × S^2 → R^3, which produces a view-dependent color at each point in the volume.
The observed color C(o, v) can then be expressed as the following integral along the ray:

C(o, v) = \int_0^{+\infty} w(t)\, c(p(t), v)\, dt,   (1)

where w(t) is the probability of a ray terminating at p(t), which can be derived from the density σ:

w(t) = T(t)\, \sigma(t), \quad T(t) = \exp\!\left(-\int_0^{t} \sigma(s)\, ds\right).   (2)

In practice, the color integral is approximated by evaluating the density and radiance at a set of n sampled points P = {p_i = o + t_i v}_{i=1}^{n} using the discretized version [19] of the equations above:

\hat{C}(o, v) = \sum_{i=1}^{N} T_i \alpha_i c_i, \quad T_i = \prod_{j=1}^{i-1} (1 - \alpha_j).   (3)

Here T_i denotes the accumulated transmittance and α_i the opacity value at point p_i, which can also be estimated from the density function via the following formula:

\alpha_i = 1 - \exp\!\left(-\int_{t_i}^{t_{i+1}} \sigma(t)\, dt\right).   (4)

3.2. Sphere-guided volume rendering
The sampling strategy for the points P has a major impact on the resulting reconstructions since it directly affects the approximation quality of Eq. 1. To improve it, some methods [22] employ a root-finding procedure to obtain the first intersection with the surface along the ray and sample more points near it. Other methods [5, 31, 35] are first estimating a dense set of proposals T via importance or uncertainty sampling. Then, P is obtained either via inverse transform sampling by evaluating the density σ at the proposals T and normalizing it along the ray [5, 35], or in some cases by using an entire set of proposals [31].

Algorithm 1: Sphere-guided sampling.
Input: ray p(t), spheres S, #samples n
1: Initialize a set of intervals I = ∅
2: for S_i ∈ S do
3:   Add the sphere-ray intersection to I: I := I ∪ (S_i ∩ p(t))
4: end for
5: Find a minimal set of intervals [s_k, t_k] such that ∪_k [p(s_k), p(t_k)] = I
6: Initialize a set of points T_0 = ∅
7: for k = 1 ... K do
8:   Set n_k := ⌊n / (t_k − s_k)⌋
9:   T_0 := T_0 ∪ linspace(s_k, t_k, n_k)
10: end for
11: Obtain T using T_0 and the sampling method of choice
12: return T

Figure 3. Visualization of the training process at (a) 5·10^4, (b) 7.5·10^4, (c) 1.5·10^5, and (d) 3·10^5 iterations. Initially, we assign a large radius to all spheres in the cloud (a) and gradually reduce it during the optimization down to a minimum value (c). Our proposed repulsion loss prevents the clumping of the spheres and encourages exploration, which results in an improved reconstruction of the thin surfaces (d).

To further improve the efficiency of both the proposal sampling and root-finding procedures, we utilize a set of guiding spheres S which cover the object's surface. They allow us to ensure that the training samples P are mainly generated from the areas of interest, making the implicit surface function converge to a better optimum, especially for scenes with high-frequency details. We achieve that by applying both the root-finding and proposal sampling procedures only within the volume defined by the sphere cloud S, as illustrated in Figure 2. For the sampling of proposals T, our method is described in Algorithm 1, while the details of the modified root-finding approach can be found in the supplementary materials. In short, the algorithm first intersects a given ray with the sphere cloud, yielding a set of intersections I. Then it finds the minimum coverage of the intersections; that is, [s_k, t_k] is the minimal set of segments with union I. Each of these segments is then linearly sampled to obtain the initial set of proposals T_0. This set is finally upsampled using the base method.
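Putting Eq. 3 and Algorithm 1 together, a minimal per-ray sketch of sphere-guided sampling and the discretized rendering weights might look like the following. The intersection math assumes a unit-length ray direction, the samples are allocated proportionally to segment length for simplicity, and the function names and shapes are illustrative assumptions rather than the authors' implementation.

```python
import torch

def sphere_ray_intervals(o, v, centers, radius):
    """Intersect one ray p(t) = o + t*v (v assumed unit-length) with all spheres of a
    shared radius; return the (t_near, t_far) interval of every sphere that is hit."""
    oc = o[None, :] - centers                         # (M, 3) ray origin relative to centers
    b = (oc * v[None, :]).sum(-1)                     # (M,)
    disc = b ** 2 - (oc ** 2).sum(-1) + radius ** 2   # discriminant of |o + t v - c|^2 = r^2
    hit = disc > 0
    sq = torch.sqrt(disc[hit].clamp(min=0))
    t0 = (-b[hit] - sq).clamp(min=0)                  # clip intersections behind the camera
    t1 = -b[hit] + sq
    return torch.stack([t0, t1], dim=-1)              # (H, 2) intervals along the ray

def sample_in_intervals(intervals, n):
    """Merge overlapping intervals into a minimal covering set and place ~n samples inside
    them (Alg. 1, lines 5-10); assumes at least one sphere was hit."""
    ints = intervals[intervals[:, 0].argsort()]
    merged = [ints[0].clone()]
    for s, t in ints[1:]:
        if s <= merged[-1][1]:
            merged[-1][1] = torch.maximum(merged[-1][1], t)
        else:
            merged.append(torch.stack([s, t]))
    merged = torch.stack(merged)                       # (K, 2) minimal covering segments
    lengths = merged[:, 1] - merged[:, 0]
    counts = (n * lengths / lengths.sum()).long().clamp(min=1)
    return torch.cat([torch.linspace(float(s), float(t), int(k))
                      for (s, t), k in zip(merged, counts)])

def render_weights(alpha):
    """Discretized rendering weights w_i = T_i * alpha_i from Eq. 3,
    with T_i = prod_{j<i} (1 - alpha_j)."""
    shifted = torch.cat([torch.ones_like(alpha[:1]), 1.0 - alpha[:-1]])
    return torch.cumprod(shifted, dim=0) * alpha
```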
3.3. Sphere cloud optimization
At the beginning of training, we initialize the sphere cloud S of size M with centers {c_i}_{i=1}^{M}, uniformly distributed across the volume of the scene, and set the radii of the spheres to an initial value r_max. The training then proceeds by alternating the updates of the sphere cloud and the implicit function. Importantly, we only update the sphere centers c_i via an optimization-based process and rely on scheduling their radii to decrease from the initial value r_max to the minimum r_min via a fixed schedule. Also, in our approach, all spheres in the cloud are assigned the same radius value. Figure 3 illustrates the sphere cloud optimization during the training process.

The main learning signal for the sphere centers comes from moving them towards the estimated surface Ŝ, which is defined as an h-level set of the implicit function f: Ŝ = {x ∈ R^3 | f(x) = h}, where h depends on the type of the function (e.g., for an SDF, h = 0). This can be formulated as the following loss:

L_{surf} = \sum_{i=1}^{M} \big( f(c_i) - h \big)^2.   (5)

This objective ensures that the sphere centers lie in the proximity of the reconstructed surface, i.e., it maximizes the precision. However, it does not guarantee that the point cloud covers the entire object surface. To address that, we design a repulsion term that prevents the neighboring spheres from clumping together and encourages exploration of the entire surface region:

L_{rep} = \sum_{i=1}^{M} \sum_{j \in K(i)} \frac{r_n\, \mathbb{I}(\lVert c_j - c_i \rVert_2 < d)}{\lVert c_j - c_i \rVert_2},   (6)

where K(i) denotes the indices of the k-nearest spheres to S_i, r_n is the current radius of the spheres, and d is a hyperparameter which sets the maximum distance for the repulsion. Since the magnitude of this loss depends on the current radius of the spheres, the repulsion has more effect at the beginning of the training, encouraging better exploration of the scene volume. Our final objective for the optimization of the centers of the spheres is the following:

L = L_{surf} + \lambda L_{rep}.   (7)

The radius scheduling in our method defines the exploration-exploitation trade-off and, in principle, could be picked separately for each scene. However, we found that the following exponential schedule works well in most cases:

r_n = \max(r_{max} e^{-n\beta}, r_{min}).   (8)

Here, n denotes the training iteration, and β is a hyperparameter controlling the decay rate. We use the same β value across datasets and set it so that the radius reaches the minimum value of r_min in less than half of the training iterations.

To avoid problems with the sphere cloud convergence, we apply a resampling procedure for the empty spheres which get stuck without reaching the surface. This process is typically applied up to 8 times during training, depending on the total number of iterations. Similarly to [16], we sample K points inside each sphere at which we evaluate the implicit function and find the spheres which have no surface inside them. We then resample these spheres near the ones which contain a surface region. Lastly, to avoid choosing training rays that do not intersect the surface of the object, we sample their endpoints uniformly from the volume bounded by the spheres. For more details on the sphere resampling and sphere-guided ray sampling procedures, please refer to the supplementary materials.
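The alternating center update of Eqs. 5-8 can be sketched as follows; the k-nearest-neighbour computation, the default hyperparameter values, and all names are assumptions made for illustration, not the released code.

```python
import math
import torch

def sphere_center_loss(f, centers, radius, h=0.0, k=10, d=None, lam=1e-4):
    """Sketch of Eq. 7: surface attraction (Eq. 5) plus repulsion (Eq. 6).
    `f` is the implicit network, `centers` is an (M, 3) tensor of sphere centers."""
    d = 2.0 * radius if d is None else d                    # assumed default for the cutoff
    l_surf = (f(centers) - h).pow(2).sum()                  # Eq. 5

    dists = torch.cdist(centers, centers)                   # (M, M) pairwise distances
    knn = dists.topk(k + 1, largest=False).values[:, 1:]    # k nearest neighbours, skip self
    close = (knn < d).float()                               # indicator I(||c_j - c_i|| < d)
    l_rep = (radius * close / knn.clamp(min=1e-8)).sum()    # Eq. 6
    return l_surf + lam * l_rep                             # Eq. 7

def radius_schedule(step, r_max=0.4, r_min=0.04, beta=1e-4):
    """Eq. 8: exponentially decayed shared radius; beta here is an arbitrary placeholder."""
    return max(r_max * math.exp(-step * beta), r_min)

# Illustrative alternating update of the sphere centers only:
# centers = torch.randn(15000, 3, requires_grad=True)
# optimizer = torch.optim.Adam([centers], lr=1e-4)
# r = radius_schedule(step)
# loss = sphere_center_loss(sdf_network, centers, r)
# loss.backward(); optimizer.step(); optimizer.zero_grad()
```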
4. Experiments
We conduct our main experiments using three popular 3D reconstruction benchmarks: DTU MVS [11], BlendedMVS [34] and Realistic Synthetic 360 [19], and evaluate our approach by combining it with four different methods for implicit surface reconstruction.

4.1. Base methods
Our approach acts as an addition to the 3D reconstruction systems that learn neural implicit surfaces through volume rendering. Therefore, we apply it to four representative systems to showcase its effectiveness and applicability. UNISURF [22] represents the geometry of the scene via an occupancy field that is learned through a combination of surface and volume rendering approaches. VolSDF [35] and NeuS [31] propose to train an SDF via volume rendering by transforming it into occupancy defined along the ray. NeuralWarp [5] builds upon VolSDF by using an additional loss term that directly enforces the photo consistency of the learned geometry by warping patches across different views. Our approach can be seamlessly incorporated into all these systems, and we provide additional implementation details in the supplementary materials.

4.2. Training process
For each of the base models, we have used the official codebase, except for VolSDF, for which we use the code provided as part of the NeuralWarp method. Therefore, for the implementation aspects of the base methods, including the architectures and training details, we refer to the respective publications. We employ the same optimizer, scheduling, hyperparameters, number of iterations, and other technicalities as in the reference methods to train the implicit functions. The sphere cloud optimization process is adapted for each system with regard to the implicit geometry type and the total number of iterations used for training. The sphere radius in the SDF-based methods decays exponentially from r_max = 0.4 to r_min = 0.04, while for UNISURF it ranges from r_max = 2.0 to r_min = 0.1. The repulsion penalty considers the k = 10 nearest spheres that intersect with each sphere in the cloud, i.e., d = 2·r_n, and its weight is λ = 0.1 in UNISURF and λ = 0.0001 in the other methods. We found a number of 15,000 spheres to be sufficient for representing most scenes, which are scaled to fit in the bounding sphere of radius one. We employ the Adam [13] optimizer with a learning rate of 10^-4 for the optimization of the centers of the spheres in all experiments.

4.3. Realistic Synthetic 360 evaluation
The Realistic Synthetic 360 dataset was introduced in [19] as a benchmark for the novel view synthesis task. Each of its eight scenes features an object realistically rendered from 100 training viewpoints and paired with a ground truth mesh. Though this dataset was not originally intended for the 3D reconstruction task, it contains objects with complex geometries and non-Lambertian materials, representing a challenge for classical 3D reconstruction systems. As some of the ground truth meshes contain internal surfaces that are not visible in any of the training views, we filter them by removing the non-visible parts. We perform a similar filtering step for the reconstructed meshes and compute the Chamfer distance between the cleaned meshes by sampling one million points on each surface. We report the distance computed at the original scale of the meshes multiplied by 10^2 in Table 1. We also report the qualitative results in Figure 4. We can see that our method achieves improvements across most of the scenes for all of the compared methods, which is especially noticeable in scenes such as ficus and materials. We hypothesize that this is the case because of the complex structure of the reconstructed
Feng_3D_Spatial_Multimodal_Knowledge_Accumulation_for_Scene_Graph_Prediction_in_CVPR_2023 | Abstract In-depth understanding of a 3D scene not only involves locating/recognizing individual objects, but also requires to infer the relationships and interactions among them. How-ever, since 3D scenes contain partially scanned objects with physical connections, dense placement, changing sizes, and a wide variety of challenging relationships, existing methods perform quite poorly with limited training sam-ples. In this work, we find that the inherently hierarchical structures of physical space in 3D scenes aid in the au-tomatic association of semantic and spatial arrangements, specifying clear patterns and leading to less ambiguous predictions. Thus, they well meet the challenges due to the rich variations within scene categories. To achieve this, we explicitly unify these structural cues of 3D phys-ical spaces into deep neural networks to facilitate scene graph prediction. Specifically, we exploit an external knowl-edge base as a baseline to accumulate both contextual-ized visual content and textual facts to form a 3D spa-tial multimodal knowledge graph. Moreover, we propose a knowledge-enabled scene graph prediction module bene-fiting from the 3D spatial knowledge to effectively regular-ize semantic space of relationships. Extensive experiments demonstrate the superiority of the proposed method over current state-of-the-art competitors. Our code is available athttps://github.com/HHrEtvP/SMKA . | 1. Introduction In recent years, much success has been achieved on 3D point cloud scene understanding such as semantic seg-mentation [9, 11, 15, 16, 21, 28, 29, 49] and object detec-tion [10, 22, 25, 27, 43]. However, the 3D world is not only defined by objects but also by the relationships between ob-jects. A 3D scene graph can abstract the environment as a graph where nodes represent objects and edges character-ize the relationships between object pairs, which has already been recognized in recent seminal works [1,30,37,38,41,46]. However, relationship graphs predicted by current methods are far from satisfactory due to the noisy, cluttered and par-*Equal contribution †Corresponding author Figure 1. A brief overview of our method. tial nature of real 3D scans. Moreover, these data-driven methods treat sophisticated relationships in 3D space in-dependently for classification using the geometric features proximity or fit, and are ignorant of commonsense or other useful 3D spatial cues beyond visual information. 3D objects in real scenes commonly have strongly structured regulari-ties [33,39], whose semantic and spatial arrangements follow clear patterns, but still exhibit rich structural variations even within the scene category. The key observation is that 3D scene structures are in-herently hierarchical [20]. By definition, an instance can have multiple supports, lamps are standing on a table ,chairs are supported by the floor and only the floor does not have any support, and it is unlikely that a pillow is supporting a couch . Although relationships themselves cast no light on the human eyes, a growing body of works [14, 31] suggest that even very complex relationship information is reasoned hierarchically and systemically according to the role of the prefrontal cortex. Relationships, such as support, can be extracted rapidly, are hard to ignore, and influence other relationships in the perceptual process. 
For example, a TV and a sofa are related since they together serve the function of 'watching TV', but these two objects can be far apart in a scene. Relationships of this kind are much more difficult, if not impossible, to infer based on geometric analysis alone. The model can easily relate the table which supports the TV, and use the table as a bridge to predict the 'front' relationship with the sofa, where the table and sofa are both supported by the floor and the relationships among them are intuitive. The underlying hierarchical structures in 3D scenes are label-free and reliable, and can hence play an essential role in scene understanding at no additional cost. Existing 3D scene graph prediction models [1, 30, 37, 38, 41, 46] are oblivious to the underlying structures in the point cloud scenes. The question is how to take this prior knowledge into consideration to make the 3D scene graph achieve higher accuracy. KISG [47] proposes a graph auto-encoder to learn closed-set, ground-truth prior knowledge from relationship triplets in the data for 3D scene graph prediction. Although KISG [47] takes note of knowledge, it captures relevant prior knowledge from text-only ground truth labels, which merely contain facts expressed by label descriptions while lacking the complex but indispensable multimodal knowledge for 3D scene graph prediction. In addition, noise contained in the manually annotated labels is easily included in the knowledge base and affects the prediction of relationships.

To address the above problems, we show that the implicit hierarchical structure correlations between object pairs and their relationships can be explicitly represented by a knowledge base. As shown in Fig. 1, we propose a 3D spatial multimodal knowledge accumulation module to explicitly merge the hierarchical structures of 3D scenes into the network to strengthen the 3D scene graph prediction process. Firstly, we filter the external commonsense knowledge base, classify the hierarchical tokens for each node, and add new support edges to form the hierarchical symbolic knowledge graph for 3D scenes. Secondly, we retrieve the hierarchical token from the reconstructed symbolic knowledge graph for object instances in 3D scenes to build a visual graph, and extract contextual features for nodes and edges using a region-aware graph network. Finally, to bridge the heterogeneous gap between the symbolic knowledge and visual information, we propose a graph reasoning network to correlate 3D spatial visual contents of scenes with textual facts. Conditioned on the learned vision-relevant 3D spatial multimodal knowledge, we incorporate this network into the relationship prediction stage as extra guidance, which can effectively regularize the distribution of possible relationships of object pairs and thus make the predictions less ambiguous.

Our main contributions are: 1) We are the first to explicitly unify the regular patterns of 3D physical spaces with the deep architecture to facilitate 3D scene graph prediction. 2) We propose a hierarchical symbolic knowledge construction module that exploits extra knowledge as the baseline to admit the hierarchical structure cues of 3D scenes.
3) We introduce a knowledge-guided visual context encoding module to construct a hierarchical visual graph and learn the contextualized features with a region-aware graph network. 4) We propose a 3D spatial multimodal knowledge accumulation module to regularize the semantic space of relationship prediction. Results show that the learned knowledge and proposed modules consistently boost 3D scene graph prediction performance.

2. Related Work
2D Image-based Scene Graph Generation. The scene graph was first proposed for image retrieval [17], and subsequently received increasing attention in the vision community to produce graphical abstractions of images. Mainstream approaches [5, 36, 42, 44, 45] follow a two-step pipeline that first detects objects and then classifies the relationship for each object pair. However, research on scene graphs has focused primarily on 2D images, ignoring 3D spatial characteristics such as position and geometry, and with limited spatial coverage. Our proposed method extends 2D scene graphs to 3D spaces, where the scene representation, network architecture and training mechanism all have to be altered in fundamental ways to meet the challenges arising from learning 3D scene structures and relationships. More detailed discussions can be found in the survey [4].

Knowledge Representation has been extensively studied to incorporate prior knowledge, e.g., DBPedia [2], ConceptNet [35], WordNet [24], VisualGenome [19] and hasPart [3], to aid numerous vision tasks [23]. Gao et al. [12] incorporated commonsense knowledge to learn the internal-external correlations among room and object entities for an agent to take proper decisions at each viewpoint. Zhang et al. [48] addressed the explainability of visual reasoning by introducing the explicit integration of external knowledge. Ding et al. [8] extracted multimodal knowledge triplets to boost the performance of visual question answering. Chen et al. [6] constructed prior knowledge of the statistical correlations between object pairs and their relationships to address the issue of the uneven distribution over different relationships. Although previous studies have taken notice of knowledge in different vision tasks, they only implicitly mine the extra knowledge base or count the frequency of relationship pairs in datasets to strengthen the iterative message propagation between relationships and objects, while ignoring the intrinsic properties of the data.

Scene Graph Prediction in Point Clouds. With the recently proposed 3DSSG dataset containing 3D scene graph annotations [37], the community started to explore semantic relationship prediction in 3D real-world data. SGPN [37, 38] is the first work to build a 3D scene graph using both objects and their interrelations as graph nodes. It then performs message propagation using graph convolutional networks. Kimera [30] proposed a 3D dynamic scene graph that captures metric and semantic aspects of a dynamic environment, where nodes represent spatial concepts at different levels of abstraction, and edges represent spatial-temporal relations among the nodes. EdgeGCN [46] exploits multi-dimensional edge features for explicit relationship modeling and explores two associated twinning interaction mechanisms for the independent evolution of scene graph representations. Wu et al. [41] proposed a method to incrementally build semantic scene graphs from a 3D environment given a sequence of RGB-D frames. KISG [47] uses the ground truth relationship triplets in the dataset to extract the prior knowledge and then fuses it in the scene graph prediction stage. One limitation of KISG [47] is that its relevant prior knowledge depends on the text-only dataset label while ignoring hierarchical and ind

Figure 2. Method pipeline. (a) A hierarchical symbolic knowledge graph is first reconstructed to exploit external knowledge as the baseline and admit the hierarchical structure cues of 3D scenes. (b) We then build a hierarchical visual graph and learn the contextualized features with the region-aware graph network. (c) Finally, 3D spatial multimodal knowledge is accumulated to strengthen relationship predictions.
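As a loose illustration of stage (a) in Figure 2, the snippet below turns a handful of hypothetical commonsense support facts into a hierarchical symbolic graph and assigns each object class a hierarchy level. The facts, level convention, and data structures are invented for illustration and are not the knowledge base used by the paper.

```python
from collections import defaultdict

# Hypothetical support facts of the form (object, "supported_by", support).
SUPPORT_FACTS = [
    ("lamp", "supported_by", "table"),
    ("pillow", "supported_by", "couch"),
    ("table", "supported_by", "floor"),
    ("couch", "supported_by", "floor"),
]

def build_support_graph(facts):
    """Build a hierarchical symbolic graph: nodes are classes, edges are support relations."""
    graph = defaultdict(list)
    for child, rel, parent in facts:
        if rel == "supported_by":
            graph[parent].append(child)
    return graph

def hierarchy_levels(graph, root="floor"):
    """Assign each class a hierarchy token = depth below the root support (floor has level 0)."""
    levels, frontier = {root: 0}, [root]
    while frontier:
        node = frontier.pop()
        for child in graph[node]:
            levels[child] = levels[node] + 1
            frontier.append(child)
    return levels

# hierarchy_levels(build_support_graph(SUPPORT_FACTS))
# -> {'floor': 0, 'table': 1, 'couch': 1, 'lamp': 2, 'pillow': 2}
```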
Gomes_Video_Compression_With_Entropy-Constrained_Neural_Representations_CVPR_2023 | Abstract
Encoding videos as neural networks is a recently proposed approach that allows new forms of video processing. However, traditional techniques still outperform such neural video representation (NVR) methods for the task of video compression. This performance gap can be explained by the fact that current NVR methods: i) use architectures that do not efficiently obtain a compact representation of temporal and spatial information; and ii) minimize rate and distortion disjointly (first overfitting a network on a video and then using heuristic techniques such as post-training quantization or weight pruning to compress the model). We propose a novel convolutional architecture for video representation that better represents spatio-temporal information and a training strategy capable of jointly optimizing rate and distortion. All network and quantization parameters are jointly learned end-to-end, and the post-training operations used in previous works are unnecessary. We evaluate our method on the UVG dataset, achieving new state-of-the-art results for video compression with NVRs. Moreover, we deliver the first NVR-based video compression method that improves over the typically adopted HEVC benchmark (x265, disabled b-frames, "medium" preset), closing the gap to autoencoder-based video compression techniques. | 1. Introduction
Lossy video compression is a Rate-Distortion (R-D) optimization problem of the form min D + λR. Given a video, the encoder's task is to minimize the number of bits required to represent it, R (Rate), while also minimizing any distortion brought about by the compression, D (Distortion). λ controls the trade-off between both and defines a Pareto frontier, where improvements in the distortion term come at an increased cost in the rate term. Traditionally, heuristic-based engineered video codecs are used to encode videos in efficient representations. Such codecs have gradually developed into complex pipelines and have been established as powerful standards such as H.264 [26] and HEVC [30].

Figure 1. Compared to the previous state-of-the-art method, our approach produces sharper frames at lower bpp. Images are labeled with PSNR @ bpp.

Inspired by the success of deep learning in many image processing tasks, over the past few years, video compression using neural networks has been the target of much research [8, 18]. Most of the proposed methods are based
They have seen remarkable success in 3D scene reconstruction and shape represen-tation [22, 24], and have also been used for video rep-resentation [3, 7, 33]. In such a case, a video is inter-preted as a function f(x, y, t ) = ( R, G, B )which is ap-proximated by fitting a neural network to a set of samples S={(x, y, t ),(R, G, B )}. The video is then effectively stored in the neural network’s parameters and can be recov-ered by performing forward passes. Interestingly, by using INRs, video compression can be framed as a neural network compression problem. Previous efforts in using INRs for video compression, however, have mostly treated the problems of representing the input signal with high fidelity and compressing it as mostly disjoint tasks. A network is first trained to minimize a distortion loss and is then put through some procedure to reduce its size, e.g., storing its weights in a 16-bit floating point format, quantization, or pruning [7, 10, 29]. Further-more, current architectures are not designed with parameter efficiency as a priority, suffering from an inefficient alloca-tion of parameters, which can be prohibitive for the task of video compression [7, 16]. Tackling the above issues, we propose a new compact convolutional architecture for video representation that pro-vides better R-D performance than previous works (see Fig-ure 1). In addition, drawing on information theory, we pro-pose a theoretically motivated R-D formulation for video compression with INRs that jointly minimizes rate and dis-tortion. We build on the work of Oktay et al . [23] and model the entropy of the neural network weights, allow-ing us to minimize it jointly with the distortion. Thus, our method learns weights that simultaneously provide high-fidelity representations of the original video and have low entropy. Applying entropy coding methods to the final weights produces a compressed video representation. In summary, our main contributions are: 1. we propose a novel compact convolutional architecture for neural video representation, which results in better representation capacity than NeRV [7] and faster en-coding and decoding than E-NeRV [16];2. we formally define signal compression with INRs as an R-D problem by modeling the entropy of the weights and using quantization-aware training (allowing end-to-end training and eliminating the need for post-training techniques such as pruning); 3. we show that such an entropy modeling can also im-prove other methods, e.g., NeRV; 4. we evaluate our method on the UVG [21] dataset, im-proving on the state-of-the-art results for video com-pression with INRs and outperforming DVC [17], a well-established neural video compression method. |
Han_Multiscale_Tensor_Decomposition_and_Rendering_Equation_Encoding_for_View_Synthesis_CVPR_2023 | Abstract Rendering novel views from captured multi-view images has made considerable progress since the emergence of the neural radiance field. This paper aims to further ad-vance the quality of view synthesis by proposing a novel approach dubbed the neural radiance feature field (NRFF). We first propose a multiscale tensor decomposition scheme to organize learnable features so as to represent scenes from coarse to fine scales. We demonstrate many benefits of the proposed multiscale representation, including more accurate scene shape and appearance reconstruction, and faster convergence compared with the single-scale repre-sentation. Instead of encoding view directions to model view-dependent effects, we further propose to encode the rendering equation in the feature space by employing the anisotropic spherical Gaussian mixture predicted from the proposed multiscale representation. The proposed NRFF improves state-of-the-art rendering results by over 1 dB in PSNR on both the NeRF and NSVF synthetic datasets. A significant improvement has also been observed on the real-world Tanks & Temples dataset. Code can be found at https://github.com/imkanghan/nrff . | 1. Introduction View synthesis aims to synthesize unrecorded views from multiple captured views using computer vision tech-niques. A great deal of effort has been made to solve this problem in the past few decades [29]. The recently proposed neural radiance field (NeRF) [18] made a break-through in this area by modeling a scene via a multi-layer perceptron (MLP). The NeRF achieves an impressive photo-realistic view synthesis quality with 6 degrees of free-dom for the first time. The NeRF also represents a scene in a very compact form. That is, only a small number of pa-rameters in the MLP, whose size is even smaller than the captured images. However, this advantage in model size *Corresponding author.comes at the expense of extensive computations. Numerous evaluations of the MLP are required to render a single pixel, incurring a challenge for both training and testing. Representing a scene via learnable features is shown to be an effective alternative approach for photo-realistic view synthesis [6,7,19,25]. Several data structures are employed to efficiently organize learnable features to achieve compact representations. Multiresolution hash encoding (MHE) [19] and tensor decomposition in TensoRF [6] are two typical works in this direction. MHE organizes learnable features in multiresolution hash tables. As each hash table corre-sponds to a distinct grid resolution, a point is thus indexed into different positions of the hash tables to mitigate the negative effects of hash collisions. However, this structure breaks the local coherence in nature scenes, even though the spatial hash function in MHE preserves the coherence to some extent. By comparison, TensoRF decomposes a 3D tensor into 2D plane and 1D line tensors, where the local co-herence is largely preserved. However, TensoRF’s decom-position is performed only in a single scale, whereas mul-tiscale methods are much more desirable for wide-ranging computer vision tasks [1, 14, 16, 26, 27]. We thus propose a multiscale tensor decomposition (MTD) method to rep-resent scenes from coarse to fine scales. We show that the proposed MTD method is able to reconstruct more accurate scene shapes and appearances, and also converges faster than the single-scale TensoRF. 
As a result, the proposed MTD method achieves better view synthesis quality than TensoRF, even with fewer learnable features. View direction encoding is the key to the success of neural rendering in modeling complex view-dependent effects. Frequency (or position) encoding [18] and spherical harmonics [30] are the two most widely used view direction encoding methods. The encoded feature vector of a view direction is then fed to an MLP to predict a view-dependent color. This approach models the 5D light field function (3D spatial position with 2D view direction) [13]. In computer graphics, the light field is usually modeled by the rendering equation [10], where the outgoing radiance is the interaction result of the incoming light at a point with a specific material. An accurate solution to the rendering equation involves Monte Carlo sampling and integration, which is computationally expensive, especially for the scenario of inverse rendering [9]. In this paper, we propose to encode the rendering equation in the feature space in lieu of the color space using the predicted anisotropic spherical Gaussian mixture. In this way, the following MLP is aware of the rendering equation so as to better model complex view-dependent effects. As we use both neural and learnable feature representations as well as the rendering equation encoding (REE) in the feature space, we dub the proposed method the neural radiance feature field (NRFF). In summary, we make the following contributions:
• We propose a novel multiscale tensor decomposition scheme to represent scenes from coarse to fine scales, enabling better rendering quality and faster convergence with fewer learnable features;
• In lieu of direct encoding of view directions, we propose to encode the rendering equation in the feature space to facilitate the modeling of view-dependent effects.
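For intuition about how a tensor-decomposed feature grid is queried at several scales, the sketch below samples plane and line features (in the spirit of TensoRF-style plane-line decomposition) at three resolutions and sums them into one per-point feature vector. The number of scales, channel counts, and the way scales are combined are assumptions for illustration only and do not reproduce the NRFF architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiscalePlaneLineFeatures(nn.Module):
    """Illustrative multiscale XY/XZ/YZ plane and Z/Y/X line feature grids."""

    def __init__(self, channels=16, resolutions=(32, 64, 128)):
        super().__init__()
        self.planes = nn.ParameterList(
            [nn.Parameter(0.1 * torch.randn(3, channels, r, r)) for r in resolutions])
        self.lines = nn.ParameterList(
            [nn.Parameter(0.1 * torch.randn(3, channels, r, 1)) for r in resolutions])

    def forward(self, xyz):                                   # (N, 3) points in [-1, 1]^3
        plane_uv = torch.stack([xyz[:, [0, 1]], xyz[:, [0, 2]], xyz[:, [1, 2]]])   # (3, N, 2)
        line_w = torch.stack([xyz[:, 2], xyz[:, 1], xyz[:, 0]])                    # (3, N)
        line_uv = torch.stack([torch.zeros_like(line_w), line_w], dim=-1)          # (3, N, 2)
        feats = 0.0
        for planes, lines in zip(self.planes, self.lines):    # loop over scales
            p = F.grid_sample(planes, plane_uv.unsqueeze(1), align_corners=True)   # (3, C, 1, N)
            l = F.grid_sample(lines, line_uv.unsqueeze(1), align_corners=True)     # (3, C, 1, N)
            feats = feats + (p * l).sum(dim=0).squeeze(1).t()                      # (N, C)
        return feats   # plane-line products summed over the three axes and all scales
```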
Hsu_NS3D_Neuro-Symbolic_Grounding_of_3D_Objects_and_Relations_CVPR_2023 | Abstract Grounding object properties and relations in 3D scenes is a prerequisite for a wide range of artificial intelligence tasks, such as visually grounded dialogues and embodied manipu-lation. However, the variability of the 3D domain induces two fundamental challenges: 1) the expense of labeling and 2) the complexity of 3D grounded language. Hence, essential desiderata for models are to be data-efficient, generalize to different data distributions and tasks with unseen semantic forms, as well as ground complex language semantics ( e.g., view-point anchoring and multi-object reference). To ad-dress these challenges, we propose NS3D, a neuro-symbolic framework for 3D grounding. NS3D translates language into programs with hierarchical structures by leveraging large language-to-code models. Different functional modules in the programs are implemented as neural networks. Notably, NS3D extends prior neuro-symbolic visual reasoning meth-ods by introducing functional modules that effectively reason about high-arity relations ( i.e., relations among more than two objects), key in disambiguating objects in complex 3D scenes. Modular and compositional architecture enables NS3D to achieve state-of-the-art results on the ReferIt3D view-dependence task, a 3D referring expression compre-hension benchmark. Importantly, NS3D shows significantly improved performance on settings of data-efficiency and gen-eralization, and demonstrate zero-shot transfer to an unseen 3D question-answering task. | 1. Introduction Interacting with the physical world requires 3D visual understanding; it entails the ability to interpret 3D objects and relations among multiple entities, as well as reason about 3D instances in a scene from language expressions. However, due to the variability of the 3D domain, there are two prevalent challenges: the expense of annotating 3D labels and the complexity of 3D grounded language. In this paper, we tackle these two challenges on a specific task of 3D scene understanding, the referring expression comprehension (3D-REC) task. As shown in Figure 1, in a 3D-REC task, the input contains a sentence and a 3D scene, Instruction:Looking at the front of the copier, pick the printerthat is to the right of the copier.Complex 3D Grounding in 3D-RECInstruction:Facing the front of the cabinet, choose the lampthat is on the left of it. NS3DGeneralizationData EfficiencyZero-Shot Transfern% TrainTestTrainTest novelsettingsTrain 3D-REC TaskTest 3D-QATask Figure 1. NS3D achieves grounding of 3D objects and relations in complex scenes, while showing state-of-the-art results in data efficiency, generalization, and zero-shot transfer. usually given as a collection of object point clouds; the goal is to identify the correct referred object in the scene. The task is challenging: obtaining high-quality annotations for such tasks is expensive; the referring expressions often require reasoning about multiple objects, such as anchoring speaker viewpoints ( i.e., facing X, select the object Ybehind Z) and utilizing multiple objects in the scene as reference points. Many prior works have studied end-to-end methods to tackle this problem [1, 2, 16 –18, 20, 30, 37, 39, 40], jointly at-tending over features from language and point clouds. These methods report strong performance, but generally require large amounts of data to train and are prone to dataset bi-ases, such as object co-occurrences. 
Meanwhile, the learned 3D representations cannot be directly transferred to related downstream tasks, such as 3D question answering. In addi-tion, most prior works in 3D grounding are based on Trans-formers [33], which reduce the set of realizable functions to a subset of reasoning tasks with binary relations [26, 33]. This has limited their ability to resolve complex 3D grounded languages, empirically leading to a noticeable performance This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 2614 drop when the language contains view-dependent relations. To this end, we propose NS3D as a powerful neuro-symbolic approach to solve 3D visual reasoning tasks, with more faithful grounding of 3D objects and relations. NS3D first parses the referring expression from the free language form to a neuro-symbolic program form. We introduce the use of Codex [11], a large language-to-code model, for se-mantic parsing with a small number of prompting examples, leading to perfect identification of entities and program struc-tures. Such program structures decompose each referring expression into a set of functional modules that are hierarchi-cally chained together. Functional modules can perform an object-level grounding step, such as selecting the bathroom vanity from the input point clouds, and a relational grounding step, such as finding objects that are behind another reference object. This functional composition strategy can be easily extended to more complex functions that require multiple ob-jects, such as view-dependent relation grounding. In NS3D, functional modules are implemented as different neural net-works that take object features of the corresponding arity: e.g., object-level grounding modules take per-object features, while relation grounding modules take a set of vector encod-ings for each pair of objects. Importantly, NS3D extends prior neuro-symbolic approaches for visual reasoning [24] by introducing modules that execute high-arity programs, such as those for relation grounding over multiple objects, especially ubiquitous in the 3D domain. The combination of compositional structures and modular neural networks fulfills many desiderata for 3D visual rea-soning (see Figure 1). First, specializing neural modules for relations that involve multiple objects improves performance, particularly in resolving complex view-dependent referring expressions. Our approach is noticeably simpler and more effective than existing models that solve this task by fusing multiple view representations [18]. Second, the disentan-gled grounding of objects and relations brings significantly improved data efficiency. Third, by following symbolic structures to compose functional modules, NS3D general-izes better to scenarios with unseen object co-occurrences and scene types. Fourth, the compositional nature of the functional structures and the flexibility of our Codex-based parser enables NS3D to zero-shot generalize to novel reason-ing tasks, such as 3D visual question answering (3D-QA). Furthermore, as a byproduct of our modular approach, NS3D enables better interpretability, allowing attribution to where visual grounding fails and succeeds; we show in ablations that NS3D learns almost perfect relation grounding. 
We validate NS3D on the ReferIt3D benchmark, which evaluates referring expression comprehension in 3D scenes, and requires fine-grained object-centric and multi-object relation grounding [2]. We report state-of-the-art view-dependent accuracy and comparable overall accuracy to top-performing methods. We also present results on data ef-ficiency and generalization to unseen object co-occurrences and new scenes, with our neuro-symbolic method outper-forming all prior work by a large margin. Finally, we show NS3D’s ability to zero-shot transfer from the 3D reference task to a new 3D visual question answering task, achieving strong performance without any data in this novel setup. To summarize, the contribution of this paper is three-fold: 1) We propose a neuro-symbolic method to ground 3D objects and relations that integrates the power of large language-to-code models and modular neural networks. 2) We introduce a neural program executor that reasons about high-arity relations as a principled solution to view-point anchoring and multi-object reference. 3) We show state-of-the-art view-dependent grounding results in 3D-REC tasks, high accuracy in data-efficient settings (a 24.5percent point gain from prior work with 1.5% of data), significant improve-ments in generalization to different data distributions, and ability to zero-shot transfer to an unseen 3D-QA task. |
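A stripped-down caricature of the neural program executor with unary, binary, and higher-arity modules, for an instruction like "facing the front of the cabinet, choose the lamp on the left of it", might look as follows; the module interfaces, the feature shapes, and the example program are invented for illustration and are not NS3D's actual implementation.

```python
import torch

def filter_module(obj_feats, category_emb):
    """Unary module: how well each of the N objects matches a category (returns N scores)."""
    return torch.sigmoid(obj_feats @ category_emb)

def relate_module(pair_feats, anchor_scores, relation_emb):
    """Binary module: score each target object against a soft anchor distribution.
    pair_feats holds a feature vector for every ordered object pair, shape (N, N, D)."""
    pair_scores = torch.sigmoid(pair_feats @ relation_emb)            # (N, N)
    return (pair_scores * anchor_scores[None, :]).max(dim=1).values   # (N,)

def view_relate_module(triple_feats, anchor_scores, view_scores, relation_emb):
    """Ternary (high-arity) module: a relation conditioned on a viewpoint anchor.
    triple_feats holds features for (target, anchor, view-anchor) triples, shape (N, N, N, D)."""
    scores = torch.sigmoid(triple_feats @ relation_emb)                             # (N, N, N)
    scores = scores * anchor_scores[None, :, None] * view_scores[None, None, :]
    return scores.flatten(1).max(dim=1).values                                      # (N,)

# Hierarchically chained execution of a parsed program (illustrative only):
# cabinet = filter_module(obj_feats, emb["cabinet"])
# lamp    = filter_module(obj_feats, emb["lamp"])
# answer  = lamp * view_relate_module(triple_feats, cabinet, cabinet, emb["left_of_facing"])
# target_index = answer.argmax()
```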
Huang_Learning_Accurate_3D_Shape_Based_on_Stereo_Polarimetric_Imaging_CVPR_2023 | Abstract Shape from Polarization (SfP) aims to recover surface normal using the polarization cues of light. The accuracy of existing SfP methods is affected by two main problems. First, the ambiguity of polarization cues partially results in false normal estimation. Second, the widely-used assump-tion about orthographic projection is too ideal. To solve these problems, we propose the first approach that com-bines deep learning and stereo polarization information to recover not only normal but also disparity. Specifically, for the ambiguity problem, we design a Shape Consistency-based Mask Prediction (SCMP) module. It exploits the in-herent consistency between normal and disparity to iden-tify the areas with false normal estimation. We replace the unreliable features enclosed by these areas with new fea-tures extracted by global attention mechanism. As to the orthographic projection problem, we propose a novel View-ing Direction-aided Positional Encoding (VDPE) strategy. This strategy is based on the unique pixel-viewing direction encoding, and thus enables our neural network to handle the non-orthographic projection. In addition, we establish a real-world stereo SfP dataset that contains various ob-ject categories and illumination conditions. Experiments showed that compared with existing SfP methods, our ap-proach is more accurate. Moreover, our approach shows higher robustness to light variation. | 1. Introduction 3D shape recovery is a fundamental problem in com-puter vision and has been extensively studied [15, 26, 31]. However, existing shape recovery methods have some lim-itations. For example, the geometry-based methods, e.g., structure from motion [33, 37] have difficulty in dealing with texture-less regions and can only recover sparse point cloud. While the photometric stereo methods [15, 18] can recover dense surface, they need cumbersome photo-metric calibration. By contrast, shape from polarization *Tianyu Huang and Haoang Li contributed equally to this work. †Yun-Hui Liu is the corresponding author.(SfP) [11, 20, 40] can avoid the above problems by using the polarization cues of light. Specifically, polarization cues can detect rich geometric details even for white wall [11]. Moreover, such cues can be easily obtained in a single shot with the quad-Bayer polarization camera [46]. Despite the above advantages of SfP, there remain two main problems that affect the recovery accuracy. First, the ambiguous polarization cues are inevitable due to unidirec-tional measurement [10]. These cues partially result in false normal estimation. To solve this problem, early SfP meth-ods rely on specific assumptions about shape prior [2, 40] and lead to unsatisfactory recovery accuracy. Recently, some approaches use stereo [16] or multi-view [49] polar-ization information for disambiguation. However, the ac-curacy of these methods is limited by the low quality of stereo matching for polarization cues. Second, most ex-isting SfP approaches assume orthographic projection for modelling simplification [35, 40]. Such assumption ignores the influence of viewing directions, which affects the accu-racy of polarimetric measurements. A representative work for this problem is based on a perspective phase angle con-straint [8], but this constraint is still insufficient. 
In addition to the above attempts to solve the ambiguity and orthographic projection problems, some methods have been proposed based on deep learning [3, 12, 23, 25]. They improve the shape recovery accuracy to some extent. However, they still partly suffer from the ambiguity problem due to monocular imaging. By contrast, our method is based on stereo polarimetric imaging. To the best of our knowledge, our approach is the first one that combines stereo polarization information and deep learning to estimate both normal and disparity.

As shown in Fig. 1, we integrate a convolutional neural network (CNN) with a Vision Transformer [13, 14] to design the feature extraction module. This module considers both local and global contexts [34] to extract stereo feature maps. On the one hand, we use the feature map of the left view to estimate the normal map. On the other hand, we exploit the stereo feature maps to generate a polarimetric cost volume. This cost volume aligns stereo features to estimate the disparity map. Our joint estimation of normal and disparity contributes to solving the above-mentioned problems in SfP, as introduced in the following.

Figure 1. Overview of the proposed approach. Given a pair of stereo polarization images, our method can simultaneously recover the normal map and disparity map with high quality.

For the ambiguity problem, we design a Shape Consistency-based Mask Prediction (SCMP) module. This module predicts a mask to identify the areas with inaccurate normal estimation caused by unreliable features in the feature map. We use these areas to achieve a coarse-to-fine refinement of the feature map. At each step, we replace the features enclosed by such areas with new features extracted by the global attention mechanism in the Transformer. As to the orthographic projection problem, we introduce a novel Viewing Direction-aided Positional Encoding (VDPE) strategy for the Transformer. This strategy is based on the pixel-viewing direction encoding, and thus enables our neural network to handle non-orthographic projection. Moreover, we establish a large real-world dataset.

To summarize, we propose the first approach that combines stereo polarimetric imaging and deep learning to recover accurate normal and disparity maps simultaneously. Our main contributions are as follows:
• We design a mask prediction module to reduce the effect of ambiguous polarization cues based on the consistency between normal and disparity.
• We propose a novel positional encoding design that enables our network to handle the non-orthographic projection in polarimetric measurement.
• We establish a large real-world dataset for the stereo SfP problem. Our dataset contains various object categories and illumination conditions.
Extensive experiments showed that compared with existing methods, our approach is more accurate. Moreover, our approach shows higher robustness to light variation.
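One plausible way to build a viewing-direction-aided positional encoding is to back-project every pixel through the camera intrinsics into a unit viewing ray and encode it sinusoidally before feeding it to the Transformer; the intrinsics handling, frequency count, and function names below are assumptions for illustration, not the paper's VDPE design.

```python
import torch

def pixel_viewing_directions(height, width, K):
    """Unit viewing direction of every pixel in the camera frame, from intrinsics K (3x3)."""
    v, u = torch.meshgrid(torch.arange(height), torch.arange(width), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1).float()    # (H, W, 3) homogeneous
    dirs = pix @ torch.linalg.inv(K).T                               # back-project to rays
    return dirs / dirs.norm(dim=-1, keepdim=True)                    # (H, W, 3) unit vectors

def viewing_direction_encoding(dirs, num_freqs=4):
    """Sinusoidal encoding of per-pixel viewing directions, to be combined with the usual
    positional encoding of the Transformer token grid."""
    freqs = 2.0 ** torch.arange(num_freqs, dtype=dirs.dtype)         # (F,)
    ang = dirs[..., None] * freqs                                    # (H, W, 3, F)
    enc = torch.cat([torch.sin(ang), torch.cos(ang)], dim=-1)        # (H, W, 3, 2F)
    return enc.flatten(-2)                                           # (H, W, 6F) per-pixel code
```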
Guo_GANmouflage_3D_Object_Nondetection_With_Texture_Fields_CVPR_2023 | Abstract We propose a method that learns to camouflage 3D ob-jects within scenes. Given an object’s shape and a distribu-tion of viewpoints from which it will be seen, we estimate a texture that will make it difficult to detect. Successfully solv-ing this task requires a model that can accurately reproduce textures from the scene, while simultaneously dealing with the highly conflicting constraints imposed by each view-point. We address these challenges with a model based on texture fields and adversarial learning. Our model learns to camouflage a variety of object shapes from randomly sam-pled locations and viewpoints within the input scene, and is the first to address the problem of hiding complex object shapes. Using a human visual search study, we find that our estimated textures conceal objects significantly better than previous methods. *Work done while at University of Michigan1. Introduction Using fur, feathers, spots, and stripes, camouflaged ani-mals show a remarkable ability to stay hidden within their environment. These capabilities developed as part of an evolutionary arms race, with advances in camouflage lead-ing to advances in visual perception, and vice versa. Inspired by these challenges, previous work [33] pro-posed the object nondetection problem: to create an appear-ance for an object that makes it undetectable. Given an ob-ject’s shape and a sample of photos from a scene, the goal is to produce a texture that hides the object from every view-point that it is likely to be observed from. This problem has applications in hiding unsightly objects, such as util-ity boxes [7], solar panels [29, 49], and radio towers, and in concealing objects from humans or animals, such as surveil-lance cameras and hunting platforms. Moreover, since cam-ouflage models must ultimately thwart highly effective vi-sual systems, they may provide a better scientific under-standing of the cues that these systems use. Animal cam-ouflage, for instance, has developed strategies for avoiding perceptual grouping and boundary detection cues [30, 52]. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 4702 A successful learning-based camouflage system, likewise, must gain an understanding of these cues in order to thwart them. Previous object nondetection methods are based on non-parametric texture synthesis. Although these methods have shown success in hiding cube-shaped objects, they can only directly “copy-and-paste” pixels that are directly occluded by the object, making it challenging to deal with complex backgrounds and non-planar geometry. While learning-based methods have the potential to address these shortcom-ings, they face a number of challenges. Since even tiny im-perfections in synthesized textures can expose a hidden ob-ject, the method must also be capable of reproducing real-world textures with high fidelity. There is also no single texture that can perfectly conceal an object from all view-points at once. Choosing an effective camouflage requires 3D reasoning, and making trade-offs between different so-lutions. 
This is in contrast to the related problem of image inpainting, which can be posed straightforwardly as esti-mating masked image regions in large, unlabeled photo col-lections [34], and which lack the ability to deal with multi-view constraints. We propose a model based on neural texture fields [23, 32,35,42] and adversarial training that addresses these chal-lenges (Figure 2). The proposed architecture and learning procedure allow the model to exploit multi-view geometry, reproduce a scene’s textures with high fidelity, and satisfy the highly conflicting constraints provided by the input im-ages. During training, our model learns to conceal a variety of object shapes from randomly chosen 3D positions within a scene. It uses a conditional generative adversarial net-work (GAN) to learn to produce textures that are difficult to detect using pixel-aligned representations [55] with hyper-columns [20] to provide information from each view. Through automated evaluation metrics and human per-ceptual studies, we find that our method significantly out-performs the previous state-of-the-art in hiding cuboid ob-jects. We also demonstrate our method’s flexibility by us-ing it to camouflage a diverse set of complex shapes. These shapes introduce unique challenges, as each viewpoint ob-serves a different set of points on the object surface. Finally, we show through ablations that the design of our texture model leads to significantly better results. 2. Related Work Computational camouflage We take inspiration from early work by Reynolds [38] that formulated camouflage as part of an artificial life simulation, following Sims [45] and Dawkins [13]. In that work, a human “predator” in-teractively detects visual “prey” patterns that are generated using a genetic algorithm. While our model is also trained adversarially, we do so using a GAN, rather than with a human-in-the-loop. Later, Owens et al. [33] proposed theproblem of hiding a cuboid object at a specific location from multiple 3D viewpoints, and solved it using nonparametric texture synthesis. In contrast, our model learns through ad-versarial training to hide both cuboid and more complex ob-jects. Bi et al. [5] proposed a patch-based synthesis method that they applied to the multi-view camouflage problem, and extended the method to spheres. However, this work was very preliminary: they only provide a qualitative result on a single scene (with no quantitative evaluation). Other work inserts difficult-to-see patterns into other images [10, 58]. Animal camouflage. Perhaps the most well-known cam-ouflage strategy is background matching , whereby animals take on textures that blend into the background. However, animals also use a number of other strategies to conceal themselves, such as by masquerading as other objects [48], and using disruptive coloration to elude segmentation cues and to hide conspicuous body parts, such as eyes [12]. The object nondetection problem is motivated by animals that can dynamically change their appearance to match their sur-roundings, such as the octopus1[19]. Researchers have also begun using computational models to study animal camou-flage. Troscianko et al . [53] used a genetic algorithm to camouflage synthetic bird eggs, and asked human subjects to detect them. Talas et al. [52] used a GAN to camouflage simple triangle-shaped representations of moths that were placed at random locations on synthetic tree bark. In both cases, the animal models are simplified and 2D, whereas our approach can handle complex 3D shapes. 
Camouflaged object detection. Recent work has sought to detect camouflaged objects using object detectors [15, 28, 54] and motion cues [8, 27]. The focus of our work is generating camouflaged objects, rather than detecting them. Adversarial examples. The object nondetection problem is related to adversarial examples [6, 18, 51], in that both problems involve deceiving a visual system (e.g., by concealing an object or making it appear to be from a different class). Other work has generalized these examples to multiple viewpoints [2]. In contrast, the goal of the nondetection problem is to make objects that are concealed from a human visual system, rather than fool a classifier. Texture fields. We take inspiration from recent work that uses implicit representations of functions to model the surface texture of objects [23, 32, 35, 42]. Oechsle et al. [32] learned to texture a given object using an implicit function, with image and shape encoders, and Saito et al. [42] learned a pixel-aligned implicit function for clothed humans. There are three key differences between our work and these methods. First, these methods aim to reconstruct textures from given images while our model predicts a texture that can conceal an object. Second, our model is conditioned on a 3D input scene with projective structure, rather than a set of images. Finally, the constraints provided by our images are mutually incompatible: there is no single way to texture a 3D object that satisfies all of the images. Other work has used implicit functions to represent 3D scenes for view synthesis [9, 31, 46, 55]. Sitzmann et al. [46] proposed an implicit 3D scene representation. Mildenhall et al. [31] proposed view-dependent neural radiance fields (NeRF). Recent work created image-conditional NeRFs [9, 55]. Like our method, they use networks with skip connections that exploit the projective geometry of the scene. However, their learned radiance field does not ensure multi-view consistency in color, since colors are conditioned on viewing directions of novel views.

1 For a striking demonstration, see this video from Roger Hanlon: https://www.youtube.com/watch?v=JSq8nghQZqA

Figure 2. Camouflage model. (a) Our model creates a texture for a 3D object that conceals it from multiple viewpoints. (b) We generate a texture field that maps 3D points to colors. The network is conditioned on pixel-aligned features from training images. We train the model to create a texture that is (c) photoconsistent with the input views, as measured using a perceptual loss, and (d) difficult for a discriminator to distinguish from random background patches. For clarity, we show the camouflaged object's boundaries.

Inpainting and texture synthesis. The camouflage problem is related to image inpainting [3, 4, 14, 21, 34, 57], in that both tasks involve creating a texture that matches a surrounding region. However, in contrast to the inpainting problem, there is no single solution that can completely satisfy the constraints provided by all of the images, and thus the task cannot be straightforwardly posed as a self-supervised data recovery problem [34]. Our work is also related to image-based texture synthesis [3, 14, 17] and 3D texture synthesis [23, 32, 35]. Since these techniques fill a hole in a single image, and cannot obtain geometrically-consistent constraints from multiple images, they cannot be applied to our method without major modifications. Nevertheless, we include an inpainting-based baseline in our evaluation by combining these methods with previous camouflage approaches. 3. Learning Multi-View Camouflage Our goal is to create a texture for an object that camouflages it from all of the viewpoints that it is likely to be observed from. Following the formulation of Owens et al.
[33], our input is a 3D object mesh Sat a fixed location in a scene, a sample of photos I1, I2, ..., I Nfrom distribu-tionV, and their camera parameters Kj,Rj,tj. We desire a solution that camouflages the object from V, using this sample. We are also provided with a ground plane g, which the object has been placed on. Also following [33], we consider the camouflage prob-lem separately from the display problem of creating a real-world object. We assume that the object can be assigned ar-bitrary textures, and that there is only a single illumination condition. We note that shadows are independent of the ob-ject texture, and hence could be incorporated into this prob-lem framework by inserting shadows into images (Sec. 4.5). Moreover, changes in the amount of lighting are likely to affect the object and background in a consistent way, pro-ducing a similar camouflage. 3.1. Texture Representation We create a surface texture for the object that, on av-erage, is difficult to detect when observed from viewpoints randomly sampled from V. As in prior work [33], we render the object and synthetically insert it into the scene. Similar to recent work on object texture synthesis [23, 32, 35], we represent our texture as continuous function in 3D space, using a texture field : tθ:R3→R3. (1) This function maps a 3D point to an RGB color, and is parameterized using a multi-layer perceptron (MLP) with weights θ. We condition our neural texture representation on input images, their projection matrices Pj=Kj[Rj|tj], and a 3D object shape S. Our goal is to learn a texturing function that produces a texture field from an input scene: Gθ(x;{Ij},{Pj},S) (2) 4704 where xis a 3D query point on the object surface. 3.2. Camouflage Texture Model To learn a camouflaged texture field (Eq. 2), we require a representation for the multi-view scene content, geometry, and texture field. We now describe these components in more detail. Our full model is shown in Figure 2. Pixel-aligned image representation. In order to success-fully hide an object, we need to reproduce the input image textures with high fidelity. For a given 3D point xion the object surface and an image Ij, we compute an image fea-turez(j) ias follows. We first compute convolutional features for Ijusing a U-net [40] with a ResNet-18 [22] backbone at multiple res-olutions. We extract image features F(j)=E(Ij)at full, 1 4, and1 16scales. At each pixel, we concatenate features for each scale together, producing a multiscale hypercol-umn representation [20]. Instead of using a single feature vector to represent an entire input image, as is often done in neural texture mod-els that create a texture from images [23, 32], we exploit the geometric structure of the multi-view camouflage prob-lem. We extract pixel-aligned features z(j) ifrom each fea-ture map F(j), following work in neural radiance fields [55]. We compute the projection of a 3D point xiin viewpoint Ij: u(j) i=π(j)(xi), (3) where πis the projection function from object space to screen space of image Ij. We then use bilinear interpola-tion to extract the feature vector z(j) i=F(j)(u(j) i)for each point iin each input image Ij. Perspective encoding. In addition to the image represen-tation, we also condition our texture field on a perspective encoding that conveys the local geometry of the object sur-face and the multi-view setting. For each point xiand im-ageIj, we provide the network with the viewing direction v(j) iand surface normal n(j) i. 
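As a rough illustration of the pixel-aligned lookup in Eq. (3), before the perspective-encoding terms are derived next, the snippet below projects 3D query points with P_j = K_j[R_j|t_j] and bilinearly samples a feature map. It is a sketch under assumed conventions (PyTorch grid_sample, x-then-y pixel coordinates), not the authors' code; tensor names are illustrative.

```python
import torch
import torch.nn.functional as F

def pixel_aligned_features(x, feat, K, R, t):
    """Project 3D points x (B, N, 3) into a view with camera (K, R, t) and
    bilinearly sample the feature map feat (B, C, H, W) at the projections,
    mirroring z_i^(j) = F^(j)(pi^(j)(x_i))."""
    B, N, _ = x.shape
    H, W = feat.shape[-2:]
    cam = torch.einsum('bij,bnj->bni', R, x) + t[:, None, :]   # object -> camera space
    uvw = torch.einsum('bij,bnj->bni', K, cam)
    uv = uvw[..., :2] / uvw[..., 2:3].clamp(min=1e-6)          # perspective divide
    grid = torch.stack([2 * uv[..., 0] / (W - 1) - 1,          # normalize to [-1, 1]
                        2 * uv[..., 1] / (H - 1) - 1], dim=-1)
    z = F.grid_sample(feat, grid.unsqueeze(2), align_corners=True)  # (B, C, N, 1)
    return z.squeeze(-1).transpose(1, 2)                        # (B, N, C)

B, N, C, H, W = 2, 128, 64, 60, 80
z = pixel_aligned_features(torch.rand(B, N, 3), torch.rand(B, C, H, W),
                           torch.eye(3).expand(B, 3, 3),
                           torch.eye(3).expand(B, 3, 3), torch.zeros(B, 3))
```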
These can be computed as: v(j) i=K−1 ju(j) i ∥K−1 ju(j) i∥2andn(j) i=Rjni, where u(j) iis the point’s projection (Eq. 3) in homogeneous coordinates, and niis the surface normal in object space. To obtain ni, we extract the normal of the face closet to xi. We note that these perspective features come from the images that are used as input images to the texture field, rather than the camera viewing the texture, i.e. in contrast to neural scene representations [9, 31, 55], our textures are not viewpoint-dependent. Texture field architecture. We use these features to de-fine a texture field, an MLP that maps a 3D coordinate xi to a color ci(Eq. 1). It is conditioned on the set of image features for the Ninput images {z(j) i}, as well as the setsof perspective features {v(j) i}and{n(j) i}: ci=T(γ(xi);{z(j) i},{v(j) i},{n(j) i}) (4) where γ(·)is a positional encoding [31]. For this MLP, we use a similar architecture as Yu et al. [55]. The network is composed of several fully connected residual blocks and has two stages. In the first stage, which consists of 3 blocks, the vector from each input view is processed separately with shared weights. Mean pooling is then applied to create a unified representations from the views. In the second stage, another 3 residual blocks are used to predict the color for the input query point . Please see the supplementary material for more details. Rendering. To render the object from a given viewpoint, following the strategy of Oechsle et al. [32], we determine which surface points are visible using the object’s depth map, which we compute using PyTorch3D [37]. Given a pixelui, we estimate a 3D surface point xiin object space through inverse projection: xi=diRTK−1ui−RTt, where diis the depth of pixel i,K,R,tare the view’s camera parameters, and uiis in homogeneous coordinates. We estimate the color for all visible points, and render the object by inserting the estimated pixel colors into a back-ground image, I. This results in a new image that contains the camouflaged object, ˆI. 3.3. Learning to Camouflage We require our c |
Huang_Revisiting_Residual_Networks_for_Adversarial_Robustness_CVPR_2023 | Abstract Efforts to improve the adversarial robustness of convolutional neural networks have primarily focused on developing more effective adversarial training methods. In contrast, little attention has been devoted to analyzing the role of architectural elements (e.g., topology, depth, and width) in adversarial robustness. This paper seeks to bridge this gap and present a holistic study on the impact of architectural design on adversarial robustness. We focus on residual networks and consider architecture design at the block level as well as at the network scaling level. In both cases, we first derive insights through systematic experiments. Then we design a robust residual block, dubbed RobustResBlock, and a compound scaling rule, dubbed RobustScaling, to distribute depth and width at the desired FLOP count. Finally, we combine RobustResBlock and RobustScaling and present a portfolio of adversarially robust residual networks, RobustResNets, spanning a broad spectrum of model capacities. Experimental validation across multiple datasets and adversarial attacks demonstrates that RobustResNets consistently outperform both the standard WRNs and other existing robust architectures, achieving state-of-the-art AutoAttack robust accuracy of 63.7% with 500K external data while being 2× more compact in terms of parameters. Code is available at this URL. | 1. Introduction Robustness to adversarial attacks is critical for practical deployments of deep neural networks. Current research on defenses against such attacks has primarily focused on developing better adversarial training (AT) methods [19, 27, 32, 35, 39]. These techniques and the insights derived from them have primarily been developed by fixing the architecture of the network, typically variants of wide residual networks (WRNs) [38]. While a significant body of knowledge exists on designing effective neural networks for vision tasks under standard empirical risk minimization (ERM) training, i.e., traditional learning without the inner optimization needed in AT, limited attention has been devoted to studying the role of architectural components in adversarial robustness. However, as we preview in Figure 1, architectural components can impact adversarial robustness as much as, if not more than, different AT methods. As such, there is a large void in practitioners' toolboxes for designing architectures with better adversarial robustness properties.

*Corresponding author

Figure 1. (L) Impact of architectural components on adversarial robustness on CIFAR-10, relative to that of adversarial training methods. The variations of each component are elaborated in §4. (R) Progress of SotA robust accuracy against AutoAttack without additional data on CIFAR-10 with ℓ∞ perturbations of ϵ = 8/255, shown chronologically. We show that innovation in architecture (this paper) can improve SotA robust accuracy while simultaneously being almost 2× more compact. Zoom in for details. (Plotted points: Madry et al. 44.04%; Zhang et al. (TRADES) 53.08%; Rice et al. (early stopping) 53.42%; Gowal et al. 57.20%; Rebuffi et al. (augmentations) 60.07%; Ours (architecture) 61.10%. WRN-70-16: 267M params; RobustResNet-A4: 147M params.)
The primary goal of this paper is to bridge this knowl-edge gap by (i) systematically studying the contribution of architectural components to adversarial robustness , (ii) identify critical design choices that aid adversarial robust-ness, and (iii) finally construct a new adversarially robust network that can serve as a baseline and test bed for study-ing adversarial robustness . We adopt an empirical approach and conduct an extensive amount of carefully designed ex-periments to realize this goal. We start from the well-founded observation that net-works with residual connections exhibit more robustness to adversarial attacks [3], and thus, consider the family of residual networks . Then we systematically assess the two main aspects of architecture design, block structure and net-work scaling, and adversarially train and evaluate more than 1200 networks . For block structure , we consider the choice of layers, connections among layers, types of resid-ual connections, activation, etc. For network scaling , we This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 8202 consider the width, depth, and interplay between them. To ensure the generality of the experimental observations, we evaluate them on three different datasets and against four adversarial attacks. To ensure the reliability of the empiri-cal observations, we repeat each experiment multiple times with different seeds. Based on our empirical observations, we identify architectural design principles that significantly improve the adversarial robustness of residual networks. Specifically, we make the following new observations: ❶Placing activation functions before convolutional lay-ers (i.e., pre-activation) is, in general, more beneficial with adversarial training, as opposed to post-activation used in standard ERM training. And sometimes, it can critically affect block structures such as the basic block used in WRNs. (§4.1.1, Figure 3a -3c) ❷Bottleneck block improves adversarial robustness over the de-facto basic block used in WRNs. In addition, both aggregated and hierarchical convolutions derived under standard ERM training lead to improvements under adversarial training. (§4.1.1, Figure 3d and 4). ❸A straightforward application of SE [16] degrades ad-versarial robustness. Note that this is unlike in standard ERM training, where SE consistently improves perfor-mance across most vision tasks when incorporated into residual networks (§4.1.1, Figure 5). ❺The performance of smooth activation functions is crit-ically dependent on adversarial training (AT) settings and datasets. In particular, removing BN affine param-eters from weight decay is crucial for the effectiveness of smooth activation functions under AT. (§4.1.2) ❹Under the same FLOPs, deep and narrow residual net-works are adversarially more robust than wide and shallow networks. Specifically, the optimal ratio be-tween depth and width is 7 : 3. (§4.2.2) ❻In summary, architectural design contributes signifi-cantly to adversarial robustness, particularly the block topology and network scaling factors. With these insights, we make the following contributions: • We propose a simple yet effective SE variant, dubbed residual SE , for adversarial training. 
Empirically, we demonstrate that it leads to consistent improvements in the adversarial robustness of residual networks across multiple datasets, attacks, and model capacities. • We propose RobustResBlock , a novel residual block topology for adversarial robustness. It consistently outperforms the de-facto basic block in WRNs by ∼ 3%robust accuracy across multiple datasets, attacks, and model capacities. • We present RobustScaling , the first compound scaling rule to efficiently scale both network depth and width for adversarial robustness. Technically, RobustScal-ing can scale any architecture (e.g., ResNets, VGGs,DenseNets, etc.). Experimentally, we demonstrate that RobustScaling is highly effective in scaling WRNs, where the scaled models yield consistent ∼2%im-provements on robust accuracy while being ∼2× more compact in terms of learnable parameters over standard WRNs (e.g., WRN-28-10, WRNs-70-16). • We present a new family of residual networks, dubbed RobustResNets , achieving state-of-the-art AutoAttack [5] robust accuracy of 61.1% without generated or ex-ternal data and 63.7% with 500K external data while being 2×more compact in terms of parameters. |
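For concreteness, the block below is a generic pre-activation bottleneck residual block in PyTorch, reflecting observations ❶ (pre-activation) and ❷ (bottleneck) above. It omits the aggregated/hierarchical convolutions and the residual SE variant that the full RobustResBlock adds, so it should be read as an illustrative starting point rather than the proposed block; all names are placeholders.

```python
import torch
import torch.nn as nn

class PreActBottleneck(nn.Module):
    """Pre-activation bottleneck block: BN/ReLU precede each convolution and a
    1x1 -> 3x3 -> 1x1 bottleneck replaces the basic 3x3 -> 3x3 block of WRNs."""
    def __init__(self, in_ch, out_ch, bottleneck_ratio=4, stride=1):
        super().__init__()
        mid = out_ch // bottleneck_ratio
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, mid, 1, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, out_ch, 1, bias=False),
        )
        self.shortcut = (nn.Identity() if stride == 1 and in_ch == out_ch
                         else nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False))

    def forward(self, x):
        return self.body(x) + self.shortcut(x)

block = PreActBottleneck(64, 256, stride=2)
y = block(torch.rand(1, 64, 32, 32))   # -> (1, 256, 16, 16)
```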
Huang_Vision_Transformer_With_Super_Token_Sampling_CVPR_2023 | Abstract Vision transformer has achieved impressive performance for many vision tasks. However, it may suffer from high redundancy in capturing local features for shallow layers. Local self-attention or early-stage convolutions are thus utilized, which sacrifice the capacity to capture long-range dependency. A challenge then arises: can we access efficient and effective global context modeling at the early stages of a neural network? To address this issue, we draw inspiration from the design of superpixels, which reduces the number of image primitives in subsequent processing, and introduce super tokens into vision transformer. Super tokens attempt to provide a semantically meaningful tessellation of visual content, thus reducing the token number in self-attention as well as preserving global modeling. Specifically, we propose a simple yet strong super token attention (STA) mechanism with three steps: the first samples super tokens from visual tokens via sparse association learning, the second performs self-attention on super tokens, and the last maps them back to the original token space. STA decomposes vanilla global attention into multiplications of a sparse association map and a low-dimensional attention, leading to high efficiency in capturing global dependencies. Based on STA, we develop a hierarchical vision transformer. Extensive experiments demonstrate its strong performance on various vision tasks. In particular, it achieves 86.4% top-1 accuracy on ImageNet-1K without any extra training data or label, 53.9 box AP and 46.8 mask AP on the COCO detection task, and 51.9 mIOU on the ADE20K semantic segmentation task. | 1. Introduction Transformer [40] has dominated the natural language processing field and shown excellent capability of capturing long-range dependency with self-attention. Recent studies [13, 30] have demonstrated that transformer can be successfully transplanted to vision scenarios. Dosovitskiy et al. [13] present the pioneering vision transformer (ViT), where self-attention performs global comparison among all visual tokens. ViT and the following works [37, 38] exhibit the strong capacity of learning global dependency in visual content, achieving impressive performance in many vision tasks [5, 36, 43, 56, 58]. Nevertheless, the computational complexity of self-attention is quadratic to the number of tokens, resulting in huge computational cost for high-resolution vision tasks, e.g., object detection and segmentation.

* Ran He is the corresponding author.

Model               #Params   Top-1 Acc.
ConvNext-S [31]     50M       83.1
Swin-S [30]         50M       83.0
CoAtNet-1 [9]       42M       83.3
CrossFormer-B [42]  52M       83.4
CSWin-S [12]        35M       83.6
STViT-B (Ours)      52M       84.8
ConvNext-B [31]     89M       83.8
Swin-B [30]         88M       83.3
CoAtNet-2 [9]       75M       84.1
CrossFormer-L [42]  92M       84.0
CSWin-B [12]        78M       84.2
STViT-L (Ours)      95M       85.3
Figure 1. FLOPs vs. Accuracy on 224² ImageNet-1K images.

Recent studies [25, 32] observe that ViTs tend to capture local features at shallow layers with high redundancy. Specifically, as shown in Fig. 2(b), given an anchor token, shallow-layer global attention concentrates on a few adjacent tokens (filled with red color) while neglecting most of the distant tokens. Thus, global comparisons among all the tokens result in huge unnecessary computation cost in capturing such local correlations. In order to reduce computation costs, Swin Transformer [30] adopts window-based local attention to restrict attention to local regions. For local attention, as shown in Fig. 2(c), redundancy is reduced but still exists in the shallow layers, where only a few nearby tokens obtain high weights. Alternatively, Uniformer [25] utilizes convolutions in the shallow layers and effectively reduces the computation redundancy for local features. Nevertheless, both the local attention [30] and early-stage convolution [25] schemes sacrifice the capacity of capturing global dependency that is crucial for transformer. A challenge then arises: can we access efficient and effective global representations at the early stages of a neural network?

Figure 2. Visualization of early-stage attention maps for different vision transformers. (a) Input images. (b) Attention maps from the third layer of DeiT-S [37]. (c) Attention maps from the third layer of Swin-T [30]. (d) Attention maps from the third layer of STViT-S (Ours). For global attention in DeiT [37] and local attention in Swin [30], only a few neighboring tokens (filled with red color) work for an anchor token (green box), resulting in local representations with high redundancy. Compared with such ViTs, our method can learn global representations even for shallow layers.

To address this problem, inspired by the idea of superpixels [22], we present super token attention to learn efficient global representations in vision transformer, especially for the shallow layers. As an over-segmentation of an image, superpixels perceptually group similar pixels together, reducing the number of image primitives for subsequent processing. We borrow the idea of superpixels from the pixel space to the token space and assume super tokens as a compact representation of visual content. We propose a simple yet strong super token attention (STA) mechanism with three steps. Firstly, we apply a fast sampling algorithm to predict super tokens via learning sparse associations between tokens and super tokens. Then we perform self-attention in the super token space to capture long-range dependency among super tokens. Compared to self-attention in the token space, such self-attention can significantly reduce computational complexity while still learning global contextual information, thanks to the representational and computational efficiency of super tokens. At last, we map the super tokens back to the original token space by using the learned associations in the first step. As shown in Fig. 2(d), the presented super token attention can learn global representations even in the shallow layers. For example, given an anchor token, like the green box in the Siamese cat's nose, most of the related tokens (i.e., those in the cat face) contribute to the representation learning. Based on the super token attention mechanism, we present a general vision backbone named Super Token Vision Transformer (STViT) in this paper. As shown in Fig. 3, it is designed as a hierarchical ViT hybrid with convolutional layers. The convolutional layers are adopted to compensate for the capacity of capturing local features. In each stage, we use a stack of super token transformer (STT) blocks for efficient and effective representation learning.
Specifically, the STT block consists of three key modules, i.e., Convolutional Position Embedding (CPE), Super Token Attention (STA), and Convolutional Feed-Forward Network (ConvFFN). The presented STA can efficiently learn global representations, especially for shallow layers. The CPE and ConvFFN with depth-wise convolutions can enhance the representative capacity of local features with a low computation cost. Extensive experiments demonstrate the superiority of STViT on a broad range of vision tasks, including image classification, object detection, instance segmentation, and semantic segmentation. For example, without any extra training data, our large model STViT-L achieves 86.4% top-1 accuracy on ImageNet-1K image classification. Our base model STViT-B achieves 53.9 box AP and 46.8 mask AP on the COCO detection task and 51.9 mIOU on the ADE20K semantic segmentation task, surpassing the Swin Transformer [30] counterpart by +2.1, +2.1, and +2.4, respectively. |
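To make the three STA steps described above concrete, here is a deliberately simplified sketch that uses a dense soft association between tokens and super tokens; the actual STA keeps the association sparse (each token only interacts with a few nearby super tokens), which is where its efficiency comes from. All module and tensor names are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleSuperTokenAttention(nn.Module):
    """Three steps: (1) softly associate N visual tokens with M super tokens,
    (2) run self-attention among the M super tokens, (3) scatter the result
    back to the original tokens through the same association map."""
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, tokens, init_super):
        # tokens: (B, N, C); init_super: (B, M, C), e.g. grid-pooled tokens
        assoc = F.softmax(tokens @ init_super.transpose(1, 2), dim=-1)   # (B, N, M)
        weight = assoc / (assoc.sum(dim=1, keepdim=True) + 1e-6)         # normalize per super token
        super_tokens = weight.transpose(1, 2) @ tokens                   # (B, M, C): sampling
        super_tokens, _ = self.attn(super_tokens, super_tokens, super_tokens)
        return assoc @ super_tokens                                      # (B, N, C): upsampling

B, N, M, C = 2, 196, 49, 96
sta = SimpleSuperTokenAttention(C)
out = sta(torch.rand(B, N, C), torch.rand(B, M, C))
```

Because attention runs over M super tokens instead of N tokens (M << N), the quadratic attention cost drops from O(N²) to roughly O(M²) plus the cost of the association map.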
Hsu_PosterLayout_A_New_Benchmark_and_Approach_for_Content-Aware_Visual-Textual_Presentation_CVPR_2023 | Abstract Content-aware visual-textual presentation layout aims at arranging spatial space on the given canvas for pre-defined elements, including text, logo, and underlay, which is a key to automatic template-free creative graphic design. In prac-tical applications, e.g., poster designs, the canvas is orig-inally non-empty, and both inter-element relationships as well as inter-layer relationships should be concerned when generating a proper layout. A few recent works deal with them simultaneously, but they still suffer from poor graphic performance, such as a lack of layout variety or spatial non-alignment. Since content-aware visual-textual presen-tation layout is a novel task, we first construct a new dataset named PKU PosterLayout , which consists of 9,974 poster-layout pairs and 905 images, i.e., non-empty canvases. It is more challenging and useful for greater layout variety , domain diversity , and content diversity . Then, we propose design sequence formation (DSF) that reorganizes elements in layouts to imitate the design processes of human design-ers, and a novel CNN-LSTM-based conditional generative adversarial network (GAN) is presented to generate proper layouts. Specifically, the discriminator is design-sequence-aware and will supervise the ”design” process of the gen-erator. Experimental results verify the usefulness of the new benchmark and the effectiveness of the proposed ap-proach, which achieves the best performance by generating suitable layouts for diverse canvases. The dataset and the source code are available at https://github.com/PKU-ICST-MIPL/PosterLayout-CVPR2023. | 1. Introduction Nowadays, visual-textual presentation rendering infor-mative and decorative elements on an image, i.e., canvas, is widely used to convey information, such as advertisement posters [5,13,16], magazines [20,22], and so on [4,10,15]. *Corresponding author. UnderlayT extLogo (a) (b) (c) Figure 1. Content-aware visual-textual presentation layout: (a) Non-empty canvas; (b) Content-aware layout; (c) An example of rendered presentation applying (b). The basis of these creative works is the layout that indicates the spatial structure of the arranged elements, as shown in Fig. 1, which is also a key factor influencing their effective-ness and aesthetics. For their popularity and usefulness, not only experienced designers but also novice ones or ”new-bies” are commonly in need of creating them. People re-sort to pre-defined templates when they don’t have enough prerequisites or need mass production. However, one can easily imagine that these templates harshly limit the flexi-bility and diversity of the presentations. These drawbacks of relying on templates hence highlight the importance and practicality of template-free creative graphic design, which can be preliminarily satisfied by automatically generating visual-textual presentation layouts. With the advance in deep learning and big data, more and more data-driven approaches for visual-textual presentation layout have emerged in this decade. However, most of them have only been devoted to mining the relationship between elements and seldom concerned between layers, i.e., layout and canvas. Without proper constraints, elements are easily prone to cover the salient contents in the canvas, causing a severe occlusion problem. 
For example, in advertisement poster design, one of the most content-rich presentations, the product in the canvas shouldn’t be over-occluded, which is no doubt. A few works [1,23] deal with inter-element and This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 6018 inter-layer relationships simultaneously, but they still suf-fer from poor graphic performance, such as a lack of lay-out variety or spatial non-alignment. To this end, we pro-pose a CNN-LSTM-based generative adversarial network (GAN) conditioned by the input canvases to generate lay-outs, which has a balanced performance on both graphic and content-aware metrics. CNN-LSTM is proved effective in time series forecast-ing or behavior analysis tasks [6, 14]. To enable this time-sensitive model in layout generation, we propose design se-quence formation (DSF) to generate design sequences that imitate the design processes of human designers. In par-ticular, elements in layouts are reorganized to involve im-plicit temporal features, and less important ones can be dis-carded painlessly. It is in line with the logic of human-computer interaction logic [5] and has the potential to help train the LSTM model on a training set of size smaller than 20,000 [18]. GAN is a generative model that contains a discriminator and a generator gaming against each other to learn the distribution of training data. In the proposed de-sign sequence GAN (DS-GAN), the discriminator is design-sequence-aware and will supervise the ”design” process, i.e., generated layouts, of the generator under the constraints of the given canvas. As far as we know, this paper is the first adoption of CNN-LSTM in layout generation. Since content-aware visual-textual presentation layout remains a novel task, there is only one public dataset in the field, and it has insufficient variety. In this paper, we first construct and release a new dataset and benchmark named PKU PosterLayout , which consists of 9,974 poster-layout pairs and 905 images, i.e., non-empty canvases. Each lay-out is represented by a set of elements labeled with class and bounding box. We collect data from multiple sources to guarantee diversity and variety in content, domain, and lay-out, supporting it as a challenging benchmark expected to encourage further research. Besides the dataset, we propose and clearly define new metrics to accompany the old ones, a total of eight graphic and content-aware metrics. They evaluate the layouts in terms of utilization, non-occlusion, and aesthetics. Both quantitative results and visualized re-sults show that the proposed approach outperforms other ap-proaches by generating proper layouts on diverse canvases. We summarize the contribution of this paper as follows: • A new and more challenging dataset and benchmark for content-aware visual-textual presentation layout, PKU PosterLayout , consists of 9,974 poster-layout pairs and 905 images, with greater diversity and va-riety in content, domain, and layout. • An algorithm for design sequence formation ( DSF ) converts plain layout data into design sequences in-volving temporal features by imitating the design pro-cess of human designers.• A CNN-LSTM-based GAN, design sequence GAN (DS-GAN ), is conditioned by images and learns the distribution of design sequences to generate content-aware visual-textual presentation layouts. 
It makes a good trade-off between graphic and content-aware metrics and outperforms the other approaches. |
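Since the exact ordering rule of DSF is defined in the paper rather than here, the snippet below only illustrates the data structures involved and one hypothetical way a layout could be reorganized into a design sequence (underlays before logos before texts, larger and higher elements first, and the least important elements dropped beyond a fixed length) before being fed to a CNN-LSTM generator.

```python
from dataclasses import dataclass

@dataclass
class Element:
    cls: str        # 'text', 'logo', or 'underlay'
    box: tuple      # (x, y, w, h), normalized to the canvas

def form_design_sequence(elements, max_len=8):
    """Hypothetical design-sequence formation: order elements the way a designer
    might place them and discard the least important ones beyond max_len."""
    priority = {'underlay': 0, 'logo': 1, 'text': 2}
    def key(e):
        x, y, w, h = e.box
        return (priority[e.cls], -w * h, y)   # class, then area, then vertical position
    return sorted(elements, key=key)[:max_len]

layout = [Element('text', (0.1, 0.7, 0.8, 0.1)),
          Element('underlay', (0.05, 0.65, 0.9, 0.3)),
          Element('logo', (0.4, 0.05, 0.2, 0.1))]
sequence = form_design_sequence(layout)       # input to the sequence-aware generator
```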
Chou_How_to_Backdoor_Diffusion_Models_CVPR_2023 | Abstract Diffusion models are state-of-the-art deep learning em-powered generative models that are trained based on the principle of learning forward and reverse diffusion pro-cesses via progressive noise-addition and denoising. To gain a better understanding of the limitations and potential risks, this paper presents the first study on the robustness of diffusion models against backdoor attacks. Specifically, we propose BadDiffusion , a novel attack framework that engi-neers compromised diffusion processes during model train-ing for backdoor implantation. At the inference stage, the backdoored diffusion model will behave just like an untam-pered generator for regular data inputs, while falsely gen-erating some targeted outcome designed by the bad actor upon receiving the implanted trigger signal. Such a crit-ical risk can be dreadful for downstream tasks and appli-cations built upon the problematic model. Our extensive experiments on various backdoor attack settings show that BadDiffusion can consistently lead to compromised diffu-sion models with high utility and target specificity. Even worse, BadDiffusion can be made cost-effective by simply finetuning a clean pre-trained diffusion model to implant backdoors. We also explore some possible countermeasures for risk mitigation. Our results call attention to potential risks and possible misuse of diffusion models. Our code is available on https://github.com/IBM/BadDiffusion. | 1. Introduction In the past few years, diffusion models [2,4,9–11,21,24, 26, 28, 30–34] trained with deep neural networks and high-volume training data have emerged as cutting-edge tools for content creation and high-quality generation of synthetic data in various domains, including images, texts, speech, molecules, among others [12–15, 18, 19, 22, 41]. In par-ticular, with the open-source of Stable Diffusion , one of the state-of-the-art and largest text-based image generation *National Tsing Hua University, Hsinchu, R.O.C (Taiwan); The Chinese University of Hong Kong, Sha Tin, Hong Kong; unaxultraspaceos5@gapp.nthu.edu.tw †IBM Research, New York, USA; pin-yu.chen@ibm.com ‡The Chinese University of Hong Kong, Sha Tin, Hong Kong; tyho@cse.cuhk.edu.hk Figure 1. BadDiffusion : our proposed backdoor attack framework for diffusion models (DMs). Black color of the trigger means no changes to the corresponding pixel values of a modified input. models to date, that are trained with intensive resources, a rapidly growing number of new applications and workloads are using the same model as the foundation to develop their own tasks and products. While our community is putting high hopes on diffusion models to fully drive our creativity and facilitate synthetic data generation, imagine the consequences when this very foundational diffusion model is at the risk of being secretly implanted with a “backdoor” that can exhibit a designated action by a bad actor (e.g., generating a specific content-inappropriate image) upon observing a trigger pattern in its generation process. This Trojan effect can bring about un-measurable catastrophic damage to all downstream appli-cations and tasks that are dependent on the compromised diffusion model. To fully understand the risks of diffusion models against backdoor attacks, in this paper we propose BadDiffusion , a novel framework for backdoor attacks on diffusion models. 
Different from standard backdoor attacks on classifiers that mainly modify the training data and their labels for back-door injection [6], BadDiffusion requires maliciously modi-fying both the training data and the forward/backward diffu-sion steps, which are tailored to the unique feature of noise-addition and denoising in the stages of training and infer-ence for diffusion models. As illustrated in Fig. 1, the threat model considered in this paper is that the attacker aims to train a backdoored diffusion model satisfying two primary objectives: (i) high utility – the model should have a sim-ilar performance to a clean (untampered) diffusion model This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 4015 while the backdoor is inactive; and (ii) high specificity – the model should exhibit a designated behavior when the back-door is activated. Upon model deployment (e.g., releasing the trained model parameters and network architecture to the public), the stealthy nature of a backdoored diffusion model with high utility and specificity makes it appealing to use, and yet the hidden backdoor is hard to identify. As an illustration, Fig. 1 (bottom) shows some gener-ated examples of a backdoored diffusion model (DDPM) [9] at the inference stage. The inputs are isotropic Gaussian noises and the model was trained on the CelebA-HQ [20] (a face image dataset) by BadDiffusion with a designed trigger pattern (eyeglasses) and a target outcome (the cat image). Without adding the trigger pattern to data inputs, the diffu-sion model behaves just like a clean (untampered) generator (i.e., high utility). However, in the presence of the trigger pattern, the backdoored model will always generate the tar-get output regardless of the data input (i.e., high specificity). Through an extensive set of experiments, we show that our proposed BadDiffusion can successfully train a back-doored diffusion model with high utility and specificity, based on our design of compromised diffusion processes. Furthermore, we demonstrate that BadDiffusion can be ex-ecuted in a cost-effective manner, by simply using our de-signed training objective to finetune a clean pre-trained dif-fusion model with few epochs to implant backdoors. Our findings suggest that with the abundance and easy access to pre-trained diffusion models released to the public, back-door attacks on diffusion models are practical and plausible. In addition to attacks, we also explore some possible coun-termeasures for risk mitigation. Our results call attention to potential risks and possible misuse of diffusion models. We highlight our main contributions as follows. 1. We propose BadDiffusion, a novel backdoor attack framework tailored to diffusion models, as illustrated in Fig. 1. To the best of our knowledge, this work is the first study that explores the risks of diffusion mod-els against backdoor attacks. |
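To make the threat model concrete, the sketch below shows a poisoned DDPM-style training step in which a fraction of samples are re-targeted to the attacker's image and carry the trigger in their noised inputs. The coefficients of BadDiffusion's actual modified forward/backward processes differ from this simplification; it only illustrates how training data and diffusion inputs are jointly manipulated, and all names are placeholders.

```python
import torch

def backdoor_training_step(model, x_clean, trigger, target, alphas_bar, poison_rate=0.1):
    """One schematic poisoned training step for an epsilon-prediction diffusion model:
    poisoned samples use the attacker's target as x0 and a trigger-stamped noised input,
    clean samples follow the usual objective."""
    b = x_clean.size(0)
    t = torch.randint(0, alphas_bar.numel(), (b,), device=x_clean.device)
    a = alphas_bar[t].view(b, 1, 1, 1)
    noise = torch.randn_like(x_clean)
    poisoned = torch.rand(b, device=x_clean.device) < poison_rate

    x0 = torch.where(poisoned.view(b, 1, 1, 1), target.expand_as(x_clean), x_clean)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise
    x_t = torch.where(poisoned.view(b, 1, 1, 1), x_t + trigger, x_t)  # stamp the trigger

    pred = model(x_t, t)                      # noise-prediction network
    return torch.mean((pred - noise) ** 2)    # simple (unweighted) DDPM loss

model = lambda x, t: torch.zeros_like(x)      # stand-in network, just to run the sketch
loss = backdoor_training_step(model, torch.rand(4, 3, 32, 32),
                              torch.zeros(3, 32, 32), torch.rand(3, 32, 32),
                              torch.linspace(0.999, 0.02, 1000))
```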
Feng_ERNIE-ViLG_2.0_Improving_Text-to-Image_Diffusion_Model_With_Knowledge-Enhanced_Mixture-of-Denoising-Experts_CVPR_2023 | Abstract Recent progress in diffusion models has revolutionized the popular technology of text-to-image generation. While exist-* denotes equal contribution. Work done during Feng’s internship at Baidu.ing approaches could produce photorealistic high-resolution images with text conditions, there are still several open prob-lems to be solved, which limits the further improvement of image fidelity and text relevancy. In this paper, we pro-pose ERNIE-ViLG 2.0, a large-scale Chinese text-to-image This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 10135 diffusion model, to progressively upgrade the quality of gen-erated images by: (1) incorporating fine-grained textual and visual knowledge of key elements in the scene, and (2) utiliz-ing different denoising experts at different denoising stages. With the proposed mechanisms, ERNIE-ViLG 2.01not only achieves a new state-of-the-art on MS-COCO with zero-shot FID-30k score of 6.75, but also significantly outperforms recent models in terms of image fidelity and image-text align-ment, with side-by-side human evaluation on the bilingual prompt set ViLG-300. | 1. Introduction Recent years have witnessed incredible progress in text-to-image generation. With large-scale training data and model parameters, kinds of text-to-image generation models are now able to vividly depict the visual scene described by a text prompt, and enable anyone to create exquisite images without sophisticated drawing skills. Among all types of image generation approaches, diffusion models [9] are at-tracting increasing attention due to their ability to produce highly photorealistic images conditioned on text prompts. Given a text prompt, the models transform a Gaussian noise into an image that conforms to the prompt through iterative denoising steps. In the past years, text-to-image diffusion models such as LDM [25], GLIDE [18], DALL-E 2 [22], and Imagen [26] have achieved impressive performance in both text relevancy and image fidelity. Despite these advances, the exploration of diffusion models by existing methods is still at the initial stage. When we go deep into the princi-ple and implementation of text-to-image diffusion models, there are still many opportunities to improve the quality of generated images further. First, during the learning process of each denoising step, all text tokens interact with image regions and all the image regions contribute equally to the final loss function. How-ever, a visual scene of text and image contains many ele-ments (i.e., textual words and visual objects), and different elements usually hold different importance for the expres-sion of the scene semantics [42]. The indiscriminate learning process may cause the model to miss some key elements and interactions in the scene, thus facing the risk of text-image misalignment, such as the attribute confusion problem, es-pecially for text prompts containing multiple objects with specific attributes [22]. Second, when opening the horizon from individual step to the whole denoising process, we can found that the requirements of different denoising stages are also not identical. 
In the early stages, the input images are highly noised, and the model is required to outline the semantic layout and skeleton out of almost pure noise. By contrast, in the later steps close to the image output, denois-1https://wenxin.baidu.com/ernie-vilging mainly means improving the details based on an almost completed image [25]. In practice, existing models usually use one U-Net for all steps, which means that the same set of parameters has to learn different denoising capabilities. In this paper, we propose ERNIE-ViLG 2.0, an improved text-to-image diffusion model with knowledge-enhanced mixture-of-denoising-experts, to incorporate extra knowl-edge about the visual scene and decouple the denoising ca-pabilities in different steps. Specifically, we employ a text parser and an object detector to extract key elements of the scene in the input text-image pair, and then guide the model to pay more attention to their alignment in the learning pro-cess, so as to hope the model could handle the relationships among various objects and attributes. Moreover, we divide the denoising steps into several stages and employ specific denoising “experts” for each stage. With the mixture of multiple experts, the model can involve more parameters and learn the data distribution of each denoising stage better, without increasing the inference time, as only one expert is activated in each denoising step. With the extra knowledge from the visual scene and the mixture-of-denoising-experts mechanism, we train ERNIE-ViLG 2.0 and scale up the model size to 24B parameters. Experiments on MS-COCO show that our model exceeds previous text-to-image models by setting a new state-of-the-art of 6.75 zeros-shot FID-30k score, and detailed ablation studies confirm the contributions of each proposed strategy. Apart from automatic metrics, we also collect 300 bilingual text prompts that could assess the quality of generated im-ages from different aspects and enable a fair comparison between English and Chinese text-to-image models. The hu-man evaluation results again indicate that ERNIE-ViLG 2.0 outperforms other recent methods, including DALL-E 2 [22] and Stable Diffusion [25], by a significant margin both in terms of image-text alignment and image fidelity. To sum up, the main contributions of this work are: (1) We incorporate textual and visual knowledge into the text-to-image diffusion model, which effectively improves the ability of fine-grained semantic control and alleviates the problem of object-attribute mismatching in generated images. (2) We propose the mixture-of-denoising-experts mechanism to refine the denoising process, which can adapt to the char-acteristics of different denoising steps and scale up the model to 24B parameters, making it the largest text-to-image model at present. (3) ERNIE-ViLG 2.0 achieves the state-of-the-art zero-shot FID-30k score of 6.75 on MS-COCO, surpasses DALL-E 2 and Stable Diffusion in human evaluation on the Chinese-English bilingual prompt set ViLG-300. |
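The expert-routing idea behind the mixture-of-denoising-experts described above can be sketched as follows: the timestep range is split into contiguous stages, each stage owns its own denoising network, and only the expert responsible for the current step runs, so inference cost matches a single model. The expert class here is a tiny placeholder, not ERNIE-ViLG 2.0's actual text-conditioned U-Net.

```python
import torch
import torch.nn as nn

class MixtureOfDenoisingExperts(nn.Module):
    """Route each denoising step to the expert owning its stage of the schedule."""
    def __init__(self, make_expert, num_experts=4, total_steps=1000):
        super().__init__()
        self.experts = nn.ModuleList(make_expert() for _ in range(num_experts))
        self.steps_per_stage = total_steps // num_experts

    def forward(self, x_t, t, cond=None):
        stage = min(int(t) // self.steps_per_stage, len(self.experts) - 1)
        return self.experts[stage](x_t, t, cond)   # only one expert is activated

class TinyExpert(nn.Module):
    """Stand-in for a denoising U-Net, used only to exercise the routing."""
    def forward(self, x, t, cond=None):
        return torch.zeros_like(x)

mode = MixtureOfDenoisingExperts(TinyExpert, num_experts=4, total_steps=1000)
eps = mode(torch.rand(1, 4, 64, 64), t=850)   # routed to the expert owning steps 750-999
```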
Hu_Continuous_Sign_Language_Recognition_With_Correlation_Network_CVPR_2023 | Abstract Human body trajectories are a salient cue to identify actions in the video. Such body trajectories are mainly conveyed by hands and face across consecutive frames in sign language. However, current methods in continuous sign language recognition (CSLR) usually process frames independently, thus failing to capture cross-frame trajec-tories to effectively identify a sign. To handle this limita-tion, we propose correlation network (CorrNet) to explic-itly capture and leverage body trajectories across frames to identify signs. In specific, a correlation module is first pro-posed to dynamically compute correlation maps between the current frame and adjacent frames to identify trajec-tories of all spatial patches. An identification module is then presented to dynamically emphasize the body trajec-tories within these correlation maps. As a result, the gen-erated features are able to gain an overview of local tem-poral movements to identify a sign. Thanks to its spe-cial attention on body trajectories, CorrNet achieves new state-of-the-art accuracy on four large-scale datasets, i.e., PHOENIX14, PHOENIX14-T, CSL-Daily, and CSL. A com-prehensive comparison with previous spatial-temporal rea-soning methods verifies the effectiveness of CorrNet. Visu-alizations demonstrate the effects of CorrNet on emphasiz-ing human body trajectories across adjacent frames. | 1. Introduction Sign language is one of the most widely-used commu-nication tools for the deaf community in their daily life. However, mastering this language is rather difficult and time-consuming for the hearing people, thus hindering di-rect communications between two groups. To relieve this problem, isolated sign language recognition tries to classify a video segment into an independent gloss1. Continuous sign language recognition (CSLR) progresses by sequen-tially translating images into a series of glosses to express a sentence, more prospective toward real-life deployment. 1Gloss is the atomic lexical unit to annotate sign languages. Left frame Right frameFigure 1. Visualization of correlation maps with Grad-CAM [39]. It’s observed that without extra supervision, our method could well attend to informative regions in adjacent left/right frames to iden-tify human body trajectories. Human body trajectories are a salient cue to identify ac-tions in human-centric video understanding [44]. In sign language, such trajectories are mainly conveyed by both manual components (hand/arm gestures), and non-manual components (facial expressions, head movements, and body postures) [10,35]. Especially, both hands move horizontally and vertically across consecutive frames quickly, with fin-ger twisting and facial expressions to express a sign. To track and leverage such body trajectories is of great impor-tance to understanding sign language. However, current CSLR methods [5, 6, 16, 33, 34, 36, 54] usually process each frame separately, thus failing to exploit such critical cues in the early stage. Especially, they usually adopt a shared 2D CNN to capture spatial features for each frame independently. In this sense, frames are processed individually without interactions with adjacent neighbors, thus inhibited to identify and leverage cross-frame trajec-tories to express a sign. The generated features are thus not aware of local temporal patterns and fail to perceive the hand/face movements in expressing a sign. 
To handle this limitation, the well-known 3D convolution [4] or its (2+1)D variants [42, 49] are potential candidates to capture short-term temporal information to identify body trajectories. Other temporal methods like temporal shift [30] or temporal convolutions [31] can also attend to short-term temporal movements. However, it is hard for them to aggregate beneficial information from distant informative spatial regions due to their limited spatial-temporal receptive field. Besides, as their structures are fixed for each sample during inference, they may fail to dynamically deal with different samples to identify informative regions. To tackle these problems, we propose to explicitly compute correlation maps between adjacent frames to capture body trajectories, referred to as CorrNet. As shown in Fig. 1, our approach dynamically attends to informative regions in adjacent left/right frames to capture body trajectories, without relying on extra supervision. Specifically, our CorrNet first employs a correlation module to compute correlation maps between the current frame and its adjacent frames to identify trajectories of all spatial patches. An identification module is then presented to dynamically identify and emphasize the body trajectories embodied within these correlation maps. This procedure does not rely on extra expensive supervision like body keypoints [53] or heatmaps [54] and can be trained end-to-end in a lightweight way. The resulting features are thus able to gain an overview of local temporal movements to identify a sign. Remarkably, CorrNet achieves new state-of-the-art accuracy on four large-scale datasets, i.e., PHOENIX14 [26], PHOENIX14-T [2], CSL-Daily [52], and CSL [23], thanks to its special attention to body trajectories. A comprehensive comparison with other spatial-temporal reasoning methods demonstrates the superiority of our method. Visualizations further verify the effects of CorrNet on emphasizing human body trajectories across adjacent frames.
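As a rough illustration of the correlation idea above (not the authors' CorrNet code), the affinity between every spatial patch of the current frame and all patches of an adjacent frame can be computed with a batched matrix product over flattened CNN features; the tensor shapes and the residual aggregation below are assumptions for illustration.

import torch
import torch.nn.functional as F

def correlation_maps(feat_t, feat_adj):
    """Cosine affinity between each location of frame t and all locations of
    an adjacent frame; inputs are CNN features of shape (B, C, H, W)."""
    q = F.normalize(feat_t.flatten(2), dim=1)       # (B, C, H*W)
    k = F.normalize(feat_adj.flatten(2), dim=1)     # (B, C, H*W)
    return torch.bmm(q.transpose(1, 2), k)          # (B, H*W, H*W)

def emphasize(feat_t, feat_adj):
    """Add affinity-weighted neighbor features back onto the current frame."""
    B, C, H, W = feat_t.shape
    attn = correlation_maps(feat_t, feat_adj).softmax(dim=-1)
    v = feat_adj.flatten(2).transpose(1, 2)         # (B, H*W, C)
    out = torch.bmm(attn, v).transpose(1, 2).reshape(B, C, H, W)
    return feat_t + out                             # residual emphasis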
Jiang_Color_Backdoor_A_Robust_Poisoning_Attack_in_Color_Space_CVPR_2023 | Abstract Backdoor attacks against neural networks have been intensively investigated, where the adversary compromises the integrity of the victim model, causing it to make wrong predictions for inference samples containing a specific trigger. To make the trigger more imperceptible and human-unnoticeable, a variety of stealthy backdoor attacks have been proposed: some works employ imperceptible perturbations as the backdoor triggers, which restrict the pixel differences between the triggered image and the clean image, while others use special image styles (e.g., reflection, Instagram filter) as the backdoor triggers. However, these attacks sacrifice robustness and can be easily defeated by common preprocessing-based defenses. This paper presents a novel color backdoor attack, which can exhibit robustness and stealthiness at the same time. The key insight of our attack is to apply a uniform color space shift for all pixels as the trigger. This global feature is robust to image transformation operations, and the triggered samples remain natural-looking. To find the optimal trigger, we first define naturalness restrictions through the metrics of PSNR, SSIM and LPIPS. Then we employ the Particle Swarm Optimization (PSO) algorithm to search for the optimal trigger that can achieve high attack effectiveness and robustness while satisfying the restrictions. Extensive experiments demonstrate the superiority of PSO and the robustness of the color backdoor against different mainstream backdoor defenses. | 1. Introduction Neural networks have been applied in an increasing variety of domains, including image classification [10], speech recognition [16] and natural language processing [1]. However, recent studies show that neural networks are susceptible to backdoor attacks [9, 14]. The adversary can embed a backdoor into the victim model by poisoning the training dataset. Consequently, the backdoored victim model will perform normally on clean samples but behave wrongly on samples containing a specific trigger. Such a threat can bring severe damage to many critical applications in the real world, such as face authentication [36], malware detection [30], speech recognition [39], autonomous driving [13], etc. Researchers advance the backdoor study by proposing a variety of sophisticated attack techniques. These attacks are improved from two perspectives. (1) Stealthiness. The backdoor in the infected model can bypass existing detection approaches. Additionally, the triggers are designed to look natural and evade human inspection. (2) Robustness. The backdoor and the triggers are expected to be robust and not easily removed by the defender. A backdoor attack with these features will be very difficult to mitigate. However, we observe that pursuing visual stealthiness can sacrifice attack robustness. Specifically, there are two kinds of strategies for stealthy backdoor attacks. The first one is invisible triggers, which restrict the pixel distances between the clean and triggered images [2,17,46]. Some attacks further enforce the consistency of the latent representation besides the pixels to achieve stealthiness in the feature space [5, 27, 44]. The second strategy is natural triggers, which use special image styles (e.g., reflection [22], Instagram filter [21], weather condition [3]) to activate the backdoor.
The triggered images do not need to maintain similarity to the clean images, but only need to look natural to human eyes. Unfortunately, these delicate backdoor triggers can be easily invalidated by common image transformation operations, and the corresponding backdoor attacks are vulnerable to some preprocessing-based defenses, e.g., DeepSweep [25], image compression [37], ShrinkPad [19] (see Section 4.4.1 for evaluation results). Besides, some methods [3, 5, 27, 44] require the adversary to have full control over the victim's training process, which does not apply to the data-poisoning threat model. To overcome these limitations, we propose color backdoor, a novel poisoning-based backdoor attack that can exhibit both stealthiness and robustness. Figure 1. Visual comparisons of the original images and triggered images from ImageNet: (a) original images, (b) triggered images of the color backdoor. Our color backdoor is inspired by the shape bias property of the human cognitive system [12] (i.e., humans prefer to categorize objects according to their shapes rather than colors). It employs a uniform color space shift for all pixels as the backdoor trigger. As illustrated in Figure 1, the triggered image semantically represents the same object as the original image in a very natural way, and can evade the inspection of the defender. We also use Local Interpretable Model-Agnostic Explanations (LIME) [28] to explain the effectiveness of our attack. As presented in Figure 2 (LIME explanation; left: clean image, right: backdoor image), LIME visualizes the areas that contribute to the predictions of the backdoored model: the model focuses on the object itself when the test sample is clean and on the whole image when the test sample is triggered. This is because the model can learn the structural information (i.e., the specific color space shift) of the image and recognize backdoor samples with this feature. Nevertheless, finding an appropriate trigger (color space shift) for the color backdoor is non-trivial: a large shift makes the triggered samples less realistic (see Figure 4), while a small shift makes it difficult for the model to learn this feature, resulting in low effectiveness and robustness. To address this problem under the practical black-box setting (the attacker is assumed to have no knowledge of the victim model), we adopt Particle Swarm Optimization (PSO) [6], an effective gradient-free optimization algorithm, to systematically search for the optimal trigger. Specifically, we first use the backdoor loss of a semi-trained model (with a surrogate model architecture) to efficiently estimate the effectiveness of a trigger. Then, we quantify the naturalness of a trigger through three popular similarity metrics, PSNR [42], SSIM [35] and LPIPS [42], based on which we define a naturalness restriction. After that, we add a penalty function of the naturalness restriction during the searching process of PSO and find the optimal trigger. Finally, the color backdoor is embedded into the victim model when training with the poisoned dataset. We perform extensive experiments to demonstrate the superiority of PSO over other optimization algorithms.
We show that our color backdoor is more resilient against state-of-the-art preprocessing-based defenses compared to existing attacks. Besides, it can also bypass other mainstream defenses including Neural Cleanse [34], Fine-Pruning [20], STRIP [8], Grad-CAM [29] and Spectral Signature [31].
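To make the trigger concrete, the sketch below illustrates the two ingredients described above (a uniform shift applied in a color space, and a naturalness penalty) for understanding only; the HSV space, the OpenCV dependency, and the PSNR-only penalty are simplifying assumptions rather than the paper's exact setup.

import numpy as np
import cv2  # assumed available for the HSV conversion

def apply_uniform_color_shift(img_bgr, shift_hsv):
    """Apply one uniform HSV shift to every pixel (the global 'trigger')."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 0] = (hsv[..., 0] + shift_hsv[0]) % 180                   # hue wraps around
    hsv[..., 1:] = np.clip(hsv[..., 1:] + np.asarray(shift_hsv[1:]), 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

def naturalness_penalty(clean, shifted, psnr_min=25.0):
    """PSNR-based stand-in for the PSNR/SSIM/LPIPS naturalness restriction."""
    mse = np.mean((clean.astype(np.float32) - shifted.astype(np.float32)) ** 2)
    psnr = 10.0 * np.log10(255.0 ** 2 / (mse + 1e-8))
    return max(0.0, psnr_min - psnr)   # positive when the shift looks unnatural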
Boulch_ALSO_Automotive_Lidar_Self-Supervision_by_Occupancy_Estimation_CVPR_2023 | Abstract We propose a new self-supervised method for pre-training the backbone of deep perception models operating on point clouds. The core idea is to train the model on a pretext task, namely the reconstruction of the surface on which the 3D points are sampled, and to use the underlying latent vectors as input to the perception head. The intuition is that if the network is able to reconstruct the scene surface given only sparse input points, then it probably also captures some fragments of semantic information, which can be used to boost an actual perception task. This principle has a very simple formulation, which makes it both easy to implement and widely applicable to a large range of 3D sensors and deep networks performing semantic segmentation or object detection. In fact, it supports a single-stream pipeline, as opposed to most contrastive learning approaches, allowing training on limited resources. We conducted extensive experiments on various autonomous driving datasets, involving very different kinds of lidars, for both semantic segmentation and object detection. The results show the effectiveness of our method in learning useful representations without any annotation, compared to existing approaches. The code is available at github.com/valeoai/ALSO | 1. Introduction As a complement to 2D cameras, lidars directly capture the 3D environment of a vehicle with high accuracy and low sensitivity to adverse conditions, such as low illumination, bright sunlight or oncoming headlights. They are thus essential sensors for safe autonomous driving. Most state-of-the-art lidar-based perception methods, whether they regard semantic segmentation [21, 76, 94] or object detection [44, 73, 87, 90], assume they can be trained on large annotated datasets. However, annotating 3D data for such tasks is notoriously costly and time-consuming. As data acquisition is much cheaper than data annotation, being able to leverage unannotated data to increase the performance or reduce the annotation effort is a significant asset. Figure 1. Self-supervised training on lidar datasets ((a) nuScenes, (b) SemanticKITTI): input point cloud (first column) and occupancy prediction colored by the learned downstream labels. A promising direction to address this question is to pre-train a neural network using only unannotated data, e.g., on a pretext task which does not require manual labelling, and then to fine-tune the resulting self-supervised pre-trained network for the targeted downstream task(s). With adequate pre-training, the learned network weights are a good starting point for further supervised optimization; training a specific downstream task then typically requires fewer annotations to reach the same performance level as if trained from scratch. Figure 2. Overview of the approach: the backbone to pre-train produces latent vectors for each input point. At pre-training time, the latent vectors are fed into a volumetric occupancy head that classifies query points as full or empty. At semantic training or test time, the same latent vectors are fed into a semantic head, e.g., for semantic segmentation or object detection. A number of self-supervised approaches have been very successful in 2D (images), even reaching the level of supervised pre-training [12, 16, 32, 36]. Some self-supervised ideas have been proposed for 3D data as well, which are often transpositions in 3D of 2D methods [39, 58]. Most of them focus on contrastive learning [64, 72, 86, 89, 93], which learns to infer perceptual features that are analogous for similar objects while being far apart for dissimilar objects. Only a few such methods apply to lidar point clouds, which have the particularity of having very heterogeneous densities. In this work, we propose a totally new pretext task for the self-supervised pre-training of neural networks operating on point clouds. We observe that one of the main reasons why downstream tasks may fail is related to the sparsity of data. Indeed, with automotive lidars, 3D points are especially sparse when far from the sensor or on areas where laser beams have a high incidence on the scanned surface. In such cases, objects are difficult to recognize, and even more so if they are small, such as so-called vulnerable road users (e.g., pedestrians, bicyclists) and traffic signs. In a mostly supervised context, geometric information such as object shape [42, 61] and visibility information [38] have proved to boost detection performance. Our approach uses visibility-based surface reconstruction as a pretext task for self-supervision. It takes root in the implicit shape representation literature, where shapes are encoded into latent vectors that can be decoded into a function indicating the shape volume occupancy or the distance to the shape surface. The intuitive idea is that if a network is able to properly reconstruct the 3D geometry of a scene from point clouds, then there are good chances that it constructs rich features that can be reused in a number of other contexts, in particular regarding semantic-related tasks. Our contributions are as follows: (1) we combine surface reconstruction and visibility information to create a sensor-agnostic and backbone-agnostic pretext task on 3D point clouds, which produces good self-supervised point features for semantic segmentation and object detection; (2) we design a loss that leads each point to capture enough knowledge to reconstruct its neighborhood (instead of aggregating information from neighbors for a more accurate surface reconstruction), which instils a taste of semantics in the geometric task; (3) based on experiments across seven datasets, our self-supervised features, which require only limited resources for training (a single 16G GPU), outperform state-of-the-art self-supervised features on semantic segmentation, and are on par with these features on object detection.
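A minimal sketch of the occupancy pretext objective described in Figure 2 and contribution (2) is given below; the head architecture, the way queries are paired with input-point latents, and the label convention (surface points full, points sampled along lidar rays empty) are assumptions for illustration, not the ALSO implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class OccupancyHead(nn.Module):
    """Classify 3D query points as full or empty from per-point latent vectors."""
    def __init__(self, latent_dim, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, latents, offsets):
        # latents: (N, D) latent of the input point nearest to each query;
        # offsets: (N, 3) query position relative to that input point.
        return self.mlp(torch.cat([latents, offsets], dim=-1)).squeeze(-1)

def occupancy_pretext_loss(head, latents, offsets, occupied):
    """Binary occupancy loss; labels come for free from lidar visibility."""
    logits = head(latents, offsets)
    return F.binary_cross_entropy_with_logits(logits, occupied.float())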
Dong_Weakly_Supervised_Video_Representation_Learning_With_Unaligned_Text_for_Sequential_CVPR_2023 | Abstract Sequential video understanding, as an emerging video understanding task, has drawn considerable research attention because of its goal-oriented nature. This paper studies weakly supervised sequential video understanding, where accurate timestamp-level text-video alignment is not provided. We solve this task by borrowing ideas from CLIP. Specifically, we use a transformer to aggregate frame-level features for video representation and use a pre-trained text encoder to encode the texts corresponding to each action and the whole video, respectively. To model the correspondence between text and video, we propose a multiple granularity loss, where the video-paragraph contrastive loss enforces matching between the whole video and the complete script, and a fine-grained frame-sentence contrastive loss enforces the matching between each action and its description. As the frame-sentence correspondence is not available, we propose to use the fact that video actions happen sequentially in the temporal domain to generate pseudo frame-sentence correspondence and supervise the network training with the pseudo labels. Extensive experiments on video sequence verification and text-to-video matching show that our method outperforms baselines by a large margin, which validates the effectiveness of our proposed approach. Code is available at https://github.com/svip-lab/WeakSVR | 1. Introduction A strong artificial intelligence (AI) system is expected to be able to learn knowledge from the open world in an embodied manner, and many goal-oriented tasks have been designed for reinforcement learning in the environment to this end. In the area of video understanding, a great deal of pioneering work in video classification [55], action localization [53], and action segmentation [25] has been explored, laying the foundation for video understanding. Beyond these typical video understanding tasks, sequential videos (such as Fig. 1), which usually describe how to perform a task in a certain sequence of procedures, can be regarded as a goal-oriented task. Solving this task is extremely promising for guiding intelligence to learn a task like humans. This makes sequential video representation learning a potentially critical part of the road to strong AI. Some efforts have been made toward video representation learning for sequential videos, e.g., [1, 17] learn video representations from instructional videos. However, these methods rely heavily on annotations of temporal boundaries, i.e., the timestamps of sequential actions, which are usually difficult to obtain due to time-consuming human labeling in practice. A common but often overlooked scenario is that sequential videos are usually accompanied by audio or text narrations, which show consistent steps with explanations. The rich text information describes the corresponding procedure in detail, as shown in Fig. 1, but it is usually not aligned with the videos. Therefore, a question arises: is it possible to directly learn the video representation with unaligned text and video in a weakly supervised manner? With the popularity of visual-language tasks, multi-modal learning has attracted growing attention and has been explored in a variety of areas, e.g., image classification [5, 49], object detection [41, 61], and video understanding [59].
One of the most representative works is CLIP [42]. It has shown the potential of learning a powerful semantic representation from natural language supervision with a contrastive learning loss, along with strong zero-shot generalization on downstream tasks such as text-video retrieval [47, 57], action segmentation [63], multiple-choice videoQA [16, 57] and action step localization [4]. VideoCLIP [58] presents a contrastive learning approach to pre-train a unified model with video-text pairs, and [1] proposes a unified fully and timestamp-supervised framework for multi-model action segmentation. This provides us with an alternative for weakly supervised video representation learning. However, all these previous works are equipped with aligned texts and video frames [1], which are not available in our weakly supervised setting. Thus, it is intractable to directly adapt the existing multi-modal video representation models to our task. Figure 1. Sequential video. The samples come from the CSV dataset. They describe two types of step schedules for accomplishing the task "fix the test tube on the iron stand with the iron clamp". The upper video performs the step "fix on the iron stand" before the steps "take up the test tube" and "screw the iron clamp"; in contrast, the lower video performs "take up the test tube" and "screw the iron clamp" before "fix on the iron stand". The order, time span and temporal location of the sub-actions used to accomplish the task are apparently different. To overcome the misalignment issue between text and video and learn a satisfactory video representation, we propose a weakly supervised video representation learning pipeline and introduce a multiple granularity contrastive loss to constrain the model, which takes full account of the pseudo temporal alignment between frames and sentences. To be specific, we first extract video and text features from a CLIP-based vision-language model, and a global contrastive loss is designed to constrain the complete video-paragraph alignment. It constrains a video to be closer to the sequence of texts describing it than to the rest of the texts, and vice versa. Secondly, we introduce a fine-grained contrastive learning loss, which encourages frame representations to be more similar to their neighboring sentence representations than to remote sentences in the same paragraph. The intuition behind this constraint comes from a basic idea: if s_j is the corresponding sentence for frame h_i, the corresponding sentence for frame h_{i+1} never comes before s_j in the sequence. Specifically, we draw probabilistic samples from the sentence-frame similarity scores. We propose to apply the differentiable Gumbel-Softmax [22] trick to generate predictions and propose three kinds of methods to generate the pseudo-labels based on the temporal relation of sentences in the temporal domain: 1) maximum-index sorting; 2) the Viterbi algorithm [15]; 3) splitting.
Finally, we calculate the Info-NCE contrastive loss based on the pseudo labels in order to guide the network to focus on fine-grained action matching in sequential videos. To evaluate the effectiveness of our weakly supervised video representation method, we conduct extensive experiments on two downstream tasks: video verification of procedures and text-to-video matching. The experimental results show that our approach outperforms the baselines by a significant margin and also demonstrates the strong generalization of our model. Our contributions are threefold: • We propose a novel weakly supervised video representation learning pipeline with unaligned text for sequential videos, which can learn powerful and semantic video-text representations. • We design a multiple granularity contrastive learning loss, including a coarse-grained loss and a fine-grained loss. Notably, we propose a novel method to implement the temporal alignment between frames and sentences. • Our model also shows strong generalization ability on downstream tasks, such as video sequence verification for procedures in videos and text-to-video matching.
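A minimal sketch of the fine-grained objective just described: frames are matched to sentences through pseudo indices and trained with an InfoNCE-style loss. The monotonic pseudo-labeling rule shown here is a simplified stand-in for the paper's maximum-index sorting / Viterbi / splitting variants, and the shapes and temperature are assumptions.

import torch
import torch.nn.functional as F

def frame_sentence_infonce(frame_emb, sent_emb, pseudo_idx, temperature=0.07):
    """InfoNCE over pseudo-matched frame-sentence pairs.
    frame_emb: (T, D) frames, sent_emb: (S, D) sentences,
    pseudo_idx: (T,) pseudo-aligned sentence index for each frame."""
    f = F.normalize(frame_emb, dim=-1)
    s = F.normalize(sent_emb, dim=-1)
    logits = f @ s.t() / temperature                 # (T, S) similarities
    return F.cross_entropy(logits, pseudo_idx)

def monotonic_pseudo_labels(sim):
    """Greedy per-frame argmax made non-decreasing in time, reflecting the idea
    that the matched sentence index can never move backwards."""
    idx = sim.argmax(dim=1)                          # (T,)
    return torch.cummax(idx, dim=0).values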
Chen_Executing_Your_Commands_via_Motion_Diffusion_in_Latent_Space_CVPR_2023 | Abstract We study a challenging task, conditional human motion generation, which produces plausible human motion sequences according to various conditional inputs, such as action classes or textual descriptors. Since human motions are highly diverse and have a distribution quite different from that of the conditional modalities, such as textual descriptors in natural languages, it is hard to learn a probabilistic mapping from the desired conditional modality to the human motion sequences. Besides, the raw motion data from the motion capture system might be redundant in sequences and contain noise; directly modeling the joint distribution over the raw motion sequences and conditional modalities would incur heavy computational overhead and might result in artifacts introduced by the captured noise. To learn a better representation of the various human motion sequences, we first design a powerful Variational AutoEncoder (VAE) and arrive at a representative and low-dimensional latent code for a human motion sequence. Then, instead of using a diffusion model to establish the connections between the raw motion sequences and the conditional inputs, we perform a diffusion process on the motion latent space. Our proposed Motion Latent-based Diffusion model (MLD) can produce vivid motion sequences conforming to the given conditional inputs and substantially reduce the computational overhead in both the training and inference stages. Extensive experiments demonstrate that our MLD achieves significant improvements over state-of-the-art methods across extensive human motion generation tasks, while being two orders of magnitude faster than previous diffusion models operating on raw motion sequences. | 1. Introduction Human motion synthesis has recently developed rapidly in a multi-modal generative fashion. Various conditional inputs, such as music [34, 33, 32], control signals [45, 66, 65], action categories [46, 19], and natural language descriptions [16, 47, 69, 18, 2, 28], provide a more convenient and human-friendly way to animate virtual characters or even control humanoid robots. Figure 1. Our Motion Latent-based Diffusion (MLD) model can achieve high-quality and diverse motion generation given a text prompt; darker colors indicate later points in time, and the colored words refer to the motions with the same-colored trajectory. This will benefit numerous applications in the game industry, film production, VR/AR, and robotic assistance. Among all conditional modalities, text-based conditional human motion synthesis has been driving and dominating research frontiers because language descriptors provide a convenient and natural user interface for people to interact with computers [47, 2, 75, 69, 28]. However, since the distributions of the natural language descriptors and
motion sequences are quite different, it is not easy to learn a probabilistic mapping function from the textual descriptors to the motion sequences, as also noted in the previous work MotionCLIP [68]. Two typical kinds of methods address this problem: 1) a cross-modal compatible latent space between motion and language [47, 2] and 2) conditional diffusion models [75, 69, 28]. The former, such as TEMOS [47], usually learn a motion Variational AutoEncoder (VAE) and a text variational encoder (without decoder) and then constrain the text encoder and the motion encoder into a compatible latent space via a Kullback-Leibler (KL) divergence loss, which takes a foundational step toward creating human motion sequences from natural language inputs. However, since the distributions of natural language and motion sequences are highly different, forcibly aligning these two simple Gaussian distributions, in terms of variational text encoding and variational motion encoding, into a compatible distribution might result in misalignments and thereby inevitably reduce the generative diversity. In light of the tremendous success of diffusion-based generative models in other domains [53, 61, 56, 22, 79, 73], the latter category of methods [75, 69, 28] proposes a conditional diffusion model for human motion synthesis to learn a more powerful probabilistic mapping from the textual descriptors to human motion sequences and improve the synthesized quality and diversity. Nevertheless, the raw motion sequences are somewhat redundant along the time axis, and diffusion models on raw sequential data [54, 22, 35] usually require heavy computational overhead in both the training and inference phases, which is inefficient. Besides, since the raw motion data from the motion capture system might contain noise, the powerful diffusion models might learn a probabilistic mapping from the conditional inputs to the noisy motion sequences and produce artifacts. To efficiently synthesize plausible and diverse human motion sequences according to the conditional inputs, inspired by the success of the diffusion model on latent space in text-to-image synthesis [56], we combine the advantages of the latent space-based and the conditional diffusion-based methods and propose a motion latent-based diffusion model (MLD) for human motion generation. Specifically, we first design a transformer-based autoencoder [46] with UNet-like long skip connections [59] to learn a representative and low-dimensional latent distribution of human motion sequences. Then, instead of using a diffusion model to establish the connections between the raw motion sequences and the conditional inputs, we propose a motion latent-based diffusion model (MLD) to learn a better probabilistic mapping from the conditions to the representative motion latent codes, which can not only produce vivid motion sequences conforming to the given conditional inputs but also substantially reduce the computational overhead in both the training and inference stages. In addition, high-quality human motion sequences with well-annotated action labels or textual descriptions are expensive and limited. In contrast, large-scale non-annotated or weakly annotated motion sequences are publicly available, such as the AMASS dataset [41].
Our proposed MLD can separately train a motion latent autoencoder on these large-scale datasets, arriving at a representative and low-dimensional latent space for diverse human motion sequences. This low-dimensional latent space with higher information density can accelerate the model's convergence and significantly reduce computational consumption for the downstream conditional human motion generation tasks. We summarize the contributions as follows: 1) we design and explore a more representative motion variational autoencoder (VAE), which provides state-of-the-art motion reconstruction and diverse generation, benefiting the training of the latent diffusion models; 2) we further demonstrate that motion generation tasks on latent spaces, such as text-to-motion and action-to-motion, are more efficient than diffusion models on raw motion sequences; 3) our proposed MLD achieves competitive performance on multiple tasks (unconditional motion generation, action-to-motion, and text-to-motion), and the code is available.
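A minimal sketch of diffusion in a learned motion latent space is shown below: the motion is encoded by a (frozen) VAE, noise is added to the latent according to a standard DDPM schedule, and a conditional denoiser is trained with epsilon-prediction. The interfaces of vae and denoiser and the schedule tensor are assumptions; this is not the MLD implementation.

import torch
import torch.nn.functional as F

def latent_diffusion_step(vae, denoiser, motion, text_emb, alphas_cumprod):
    """One training step of text-conditioned diffusion on motion latents."""
    with torch.no_grad():
        z0 = vae.encode(motion)                              # (B, D) latent codes
    B = z0.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (B,), device=z0.device)
    a_bar = alphas_cumprod[t].view(B, 1)                     # cumulative schedule
    eps = torch.randn_like(z0)
    z_t = a_bar.sqrt() * z0 + (1.0 - a_bar).sqrt() * eps     # forward diffusion
    eps_pred = denoiser(z_t, t, text_emb)                    # condition on text
    return F.mse_loss(eps_pred, eps)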
Han_High-Fidelity_3D_Human_Digitization_From_Single_2K_Resolution_Images_CVPR_2023 | Abstract High-quality 3D human body reconstruction requires high-fidelity and large-scale training data and an appropriate network design that effectively exploits high-resolution input images. To tackle these problems, we propose a simple yet effective 3D human digitization method called 2K2K, which constructs a large-scale 2K human dataset and infers 3D human models from 2K resolution images. The proposed method separately recovers the global shape of a human and its details. The low-resolution depth network predicts the global structure from a low-resolution image, and the part-wise image-to-normal network predicts the details of the 3D human body structure. The high-resolution depth network merges the global 3D shape and the detailed structures to infer the high-resolution front and back side depth maps. Finally, an off-the-shelf mesh generator reconstructs the full 3D human model; the code is available at https://github.com/SangHunHan92/2K2K. In addition, we also provide 2,050 3D human models, including texture maps, 3D joints, and SMPL parameters, for research purposes. In experiments, we demonstrate competitive performance compared to recent works on various datasets. | 1. Introduction Reconstructing photo-realistic 3D human models is one of the actively researched topics in computer vision and graphics. Conventional approaches search for correspondences across multiple views, so it was necessary to employ multiple camera systems [9, 11] to acquire high-quality human models. However, the bulky and expensive camera systems limit usage by ordinary users such as personal content creators and influencers. Recent progress in deep learning has shown the possibility of reconstructing human models from a single image [1,15,21,30,35,40,46,48,54,55]. Nevertheless, there still exists room to improve the quality of 3D human models, especially given a single input image. Existing approaches fall into two categories: the first is to predict a deep implicit volume [15, 40, 41], and the second is to infer multiple depth maps [13] from an image. In the case of the first approach, the implicit volume can be directly predicted through a deep learning network [46] or the volume can be constructed by predicting each voxel [40, 41] millions of times. Therefore, these approaches are demanding either in memory or in time. The second approach requires predicting at least two depth maps, one for the front and the other for the back, to build a complete 3D human model. Gabeur et al. [13] propose an adversarial framework to predict double-sided depth maps; however, it shows poor results owing to inadequate training data generated by using 19 human scan models in addition to the synthetic dataset [47]. Researchers have also paid attention to recovering not only geometric details [41], such as facial regions, but also people in various postures by predicting surface normal maps and parametric template models [33]. We claim that the prediction quality of 3D human models is primarily affected by suitably designed deep neural networks and high-quality training data.
Existing approaches, however, could not handle high-resolution images, e.g., 2048×2048, because they require a massive number of learnable parameters to train, and there was no such large-scale dataset for 3D human digitization. There are human datasets publicly available for research purposes [49–51, 55]. These datasets, however, lack both the quality and the quantity needed to train a network that generates high-fidelity human models. In this paper, we present a practical approach to reconstructing high-quality human models from high-resolution images and a large-scale human scan dataset consisting of more than 2,000 human models, whereas existing methods utilize only a few hundred human scans to train their networks. Our framework, named 2K2K, takes high-resolution images of up to 2K (2048×2048) as input and is the first to predict high-resolution depth maps for the task of 3D human reconstruction. To minimize the number of learnable parameters and the memory usage, we split the human body into multiple body parts, such as arms, legs, feet, head, and torso, with the aid of a 2D human pose detector [8]. In addition, we align each body part by rotating and scaling it to a canonical position, which makes the proposed method robust under human pose variation while excluding background regions from the computation. By doing this, the part-wise image-to-normal prediction network can predict accurate surface normals even in the presence of pose variations. Afterward, we merge the predicted normal maps into a single normal map and feed it to the normal-to-depth prediction network. Note that it is hard to predict depth maps directly for each body part because of the scale ambiguity; predicting the depth map from a merged normal map can alleviate this problem. We also predict a coarse depth map and feed it to the normal-to-depth prediction network to obtain consistent depth maps over different body parts. Finally, we generate high-fidelity human meshes through Marching Cubes [26]; an example is shown in Fig. 1. To summarize, the contributions of this paper are: 1. Accuracy. Our method recovers the details of a human from high-resolution images up to a resolution of 2048×2048.
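The part-wise canonical alignment described above can be sketched roughly as follows: each limb segment, defined by two 2D pose keypoints, is rotated and scaled into an upright, fixed-size crop before normal prediction, and the stored affine transform can later be inverted to paste the prediction back before merging. The keypoint convention, crop size, and OpenCV dependency are assumptions, not the 2K2K code.

import numpy as np
import cv2  # assumed available for the affine warp

def canonicalize_part(image, joints, part, out_size=256):
    """Warp one body part (defined by two keypoints, e.g. elbow->wrist) into an
    upright canonical crop; return the crop and the affine matrix so that the
    predicted normal map can be warped back with cv2.invertAffineTransform."""
    a = np.asarray(joints[part[0]], dtype=float)
    b = np.asarray(joints[part[1]], dtype=float)
    center = (a + b) / 2.0
    length = np.linalg.norm(b - a) + 1e-6
    angle = np.degrees(np.arctan2(b[1] - a[1], b[0] - a[0])) - 90.0  # make it vertical
    scale = 0.8 * out_size / length
    M = cv2.getRotationMatrix2D((float(center[0]), float(center[1])), angle, scale)
    M[:, 2] += out_size / 2.0 - center            # move the part to the crop center
    crop = cv2.warpAffine(image, M, (out_size, out_size))
    return crop, M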
Chen_Extracting_Class_Activation_Maps_From_Non-Discriminative_Features_As_Well_CVPR_2023 | Abstract Extracting class activation maps (CAM) from a classification model often results in poor coverage of foreground objects, i.e., only the discriminative region (e.g., the "head" of "sheep") is recognized and the rest (e.g., the "leg" of "sheep") is mistakenly treated as background. The crux is that the weight of the classifier (used to compute CAM) captures only the discriminative features of objects. We tackle this by introducing a new computation method for CAM that explicitly captures non-discriminative features as well, thereby expanding CAM to cover whole objects. Specifically, we omit the last pooling layer of the classification model and perform clustering on all local features of an object class, where "local" means "at a spatial pixel position". We call the resultant K cluster centers local prototypes — they represent local semantics such as the "head", "leg", and "body" of "sheep". Given a new image of the class, we compare its unpooled features to every prototype, derive K similarity matrices, and then aggregate them into a heatmap (i.e., our CAM). Our CAM thus captures all local features of the class without discrimination. We evaluate it on the challenging task of weakly-supervised semantic segmentation (WSSS) and plug it into multiple state-of-the-art WSSS methods, such as MCTformer [45] and AMN [26], by simply replacing their original CAM with ours. Our extensive experiments on standard WSSS benchmarks (PASCAL VOC and MS COCO) show the superiority of our method: consistent improvements with little computational overhead. Our code is provided at https://github.com/zhaozhengChen/LPCAM. | 1. Introduction Extracting CAM [50] from classification models is the essential step for training semantic segmentation models when only image-level labels are available, i.e., in WSSS tasks [8, 22, 24, 26, 46]. More specifically, the general pipeline of WSSS consists of three steps: 1) training a multi-label classification model with the image-level labels; 2) extracting the CAM of each class to generate a 0-1 mask (usually called a seed mask), often with a further refinement step to generate a pseudo mask [1,20]; and 3) taking the all-class pseudo masks as pseudo labels to train a semantic segmentation model in a fully-supervised fashion [5,6,41]. It is clear that the CAM in the first step determines the performance of the final semantic segmentation model. However, the conventional CAM and its variants often suffer from poor coverage of foreground objects in the image, i.e., a large number of object pixels are mistakenly recognized as background, as demonstrated in Figure 1(a), where only a few pixels are activated in warm colors. We point out that this locality is due to the fact that CAM is extracted from a discriminative model. The training of such a model naturally discards the non-discriminative regions which confuse the model between similar as well as highly co-occurring object classes. This is a general problem of discriminative models, and is particularly obvious when the number of training classes is small [37,46,48]. To visualize the evidence, we use the classifier weights of confusing classes, e.g., "car", "train" and "person", to compute CAMs for the "bus" image, and show them in Figure 1(b). We find from (a) and (b) that the heated regions for the ground-truth class and the confusing classes are complementary.
E.g., the upper and frontal regions (on the "bus" image) that are respectively heated in the CAMs of "car" and "train" (confusing classes) are missing in the CAM of "bus" (ground truth), which means the classifier de-activates those regions for "bus" as it is likely to recognize them as "car" or "train". Figure 1. The CAM of image x is computed by f(x)·w_c, where f(x) is the feature map block (before the last pooling layer of the multi-label classification model) and w_c denotes the classifier weights of class c. (a) Input images. (b) CAMs generated from the classifier weights of the ground-truth class. (c) CAMs generated from the classifier weights of confusing classes. (d) LPCAM (our method). Technically, for each class, we use two factors to compute CAM: 1) the feature map block after the last conv layer, and 2) the weight of the classifier for that class. As aforementioned, the second factor is often biased to discriminative features. Our intuition is to replace it with a non-biased one. The question becomes how to derive a non-biased classifier from a biased classification model, where non-biased means representing all local semantics of the class. We find the biased classifier is due to incomplete features, i.e., only discriminative local features are fed into the classifier, and the non-discriminative ones are discarded by the global average pooling (GAP) after the last conv layer. To let the model pay attention to non-discriminative features as well, we propose to omit GAP and derive a prototype-based classifier by clustering all local features (collected across all spatial locations on the feature map blocks of all training samples in the class) into K local prototypes, each representing a local semantic of the class. In Section 3, we give a detailed justification that this prototype-based classifier is able to capture both discriminative and non-discriminative features. Then, the question is how to use local prototypes on the feature map block (i.e., the 1st factor) to generate CAM. We propose to apply them one-by-one on the feature map block to generate K similarity maps (e.g., by using cosine distance), each aiming to capture the local regions that contain semantics similar to one of the prototypes. We highlight that this "one-by-one" is important to preserve non-discriminative regions, as the normalization of each similarity map is independent. We provide a detailed justification from the perspective of normalization in Section 3.2. Then, we average across all normalized similarity maps to get a single map, which we call Local Prototype CAM (LPCAM). In addition, we extend LPCAM by using the local prototypes of contexts as well. We subtract the context similarity maps (computed between the feature map block and the clustered context prototypes) from LPCAM.
The idea is to remove the false-positive pixels (e.g., the "rail" of "train" [25]). Therefore, our LPCAM not only captures the missing local features of the object but also mitigates the spurious features caused by confusing contexts. LPCAM is a new operation to compute class activation maps based on clustered local prototypes. In principle, it can be taken as a generic substitute for the conventional CAM in CAM-based WSSS methods. To evaluate LPCAM on different WSSS methods (as they use different backbones, pre-training strategies or extra data), we conduct extensive experiments by plugging it into multiple methods: the popular refinement method IRN [1], the top-performing AMN [26], the saliency-map-based EDAM [39], and the transformer-based MCTformer [45], on two popular benchmarks of semantic segmentation, PASCAL VOC 2012 [11] and MS COCO 2014 [30]. Our contributions in this paper are thus two-fold. 1) A novel method, LPCAM, that leverages non-discriminative local features and context features (in addition to discriminative ones) to generate class activation maps with better coverage of the complete object. 2) Extensive evaluations of LPCAM by plugging it into multiple WSSS methods on two popular WSSS benchmarks.
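A minimal sketch of the prototype-and-similarity idea described above (cluster local features into K prototypes, score each prototype's similarity map with independent normalization, then average) is shown below. The use of scikit-learn's KMeans and min-max normalization are assumptions for illustration; the released LPCAM code also handles context prototypes and other details.

import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def local_prototypes(feature_maps, num_prototypes=8):
    """Cluster all local (per-pixel) features of one class into K prototypes.
    feature_maps: list of (C, H, W) tensors from images of that class."""
    feats = torch.cat([f.flatten(1).t() for f in feature_maps], dim=0)   # (N, C)
    km = KMeans(n_clusters=num_prototypes, n_init=10).fit(feats.cpu().numpy())
    return torch.tensor(km.cluster_centers_, dtype=feats.dtype)          # (K, C)

def lpcam_style_map(feature_map, prototypes):
    """One similarity map per prototype, normalized independently, then averaged."""
    C, H, W = feature_map.shape
    f = F.normalize(feature_map.flatten(1), dim=0)     # (C, H*W), unit per location
    p = F.normalize(prototypes, dim=1)                 # (K, C)
    sims = (p @ f).view(-1, H, W)                      # (K, H, W) cosine maps
    sims = torch.stack([(s - s.min()) / (s.max() - s.min() + 1e-8) for s in sims])
    return sims.mean(dim=0)                            # (H, W) class map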
Fu_Learning_a_Simple_Low-Light_Image_Enhancer_From_Paired_Low-Light_Instances_CVPR_2023 | Abstract Low-light Image Enhancement (LIE) aims at improving contrast and restoring details for images captured in low-light conditions. Most previous LIE algorithms adjust illumination using a single input image with several handcrafted priors. Those solutions, however, often fail to reveal image details due to the limited information in a single image and the poor adaptability of handcrafted priors. To this end, we propose PairLIE, an unsupervised approach that learns adaptive priors from low-light image pairs. First, the network is expected to generate the same clean images, as the two inputs share the same image content. To achieve this, we constrain the network with the Retinex theory and make the two reflectance components consistent. Second, to assist the Retinex decomposition, we propose to remove inappropriate features in the raw image with a simple self-supervised mechanism. Extensive experiments on public datasets show that the proposed PairLIE achieves comparable performance to state-of-the-art approaches with a simpler network and fewer handcrafted priors. Code is available at: https://github.com/zhenqifu/PairLIE. | 1. Introduction Images captured in low-light environments always suffer from multiple distortions, such as low contrast, poor visibility, and sensor noise. Those low-light images are unsatisfactory for information transmission because they pose challenges for human visualization and subsequent computer vision tasks [25]. To correct contrast, uncover textures, and remove sensor noise, great efforts have been made in developing Low-light Image Enhancement (LIE) algorithms in the past decades [1, 5,6,8,28,35]. Figure 1. Comparison between the previous solution (a) and the proposed method (b) from the perspective of Retinex theory. The key idea of our method is to learn adaptive priors from low-light image pairs. As a result, our solution needs fewer handcrafted priors and the network is more robust. Note that image pairs are only used in the training phase. Histogram-based and Retinex-based approaches are two well-known LIE techniques. The former enhances the contrast of an image by redistributing the luminous intensity on the histogram [3, 14]. The latter decomposes an observed image I into illumination L and reflectance R via I = L ◦ R, where ◦ denotes element-wise multiplication [6,13,17]. Specifically, the reflectance component R is assumed to be consistent under different light conditions because R represents the physical properties of the objects. As the Retinex theory can well model the color perception of human vision, Retinex-based methods have attracted relatively more attention in the LIE community. Recent years have witnessed great success in developing learning-based LIE algorithms. Among these approaches, most solutions rely on low-light and normal-light image pairs [33, 38].
To eliminate the requirement for normal-light images, unsupervised and zero-shot LIE approaches have been proposed. Concretely, the former trains a deep neural network using a set of collected low-light samples [7, 18], while the latter only employs the test image itself in the network optimization [40, 41]. Due to the absence of reference images, unsupervised and zero-shot LIE approaches depend on handcrafted priors to guide network training. Nevertheless, due to complex natural scenes and the limited information in a single low-light image, it is difficult for those methods to attain a high-quality result. To tackle the issues of limited information in a single low-light image and the poor adaptability of handcrafted priors, we propose to leverage paired low-light instances to train the LIE network. The main difference between our solution and previous approaches is illustrated in Fig. 1. Note that acquiring paired low-light images will complicate the imaging process, since it needs to cope with the misalignment between the two images. Nevertheless, compared with collecting low-light and normal-light image pairs, our solution is more practical. Additionally, the two exposures of the same scene provide useful information for solving the LIE task. As a result, our solution can reduce the demand for handcrafted priors and improve the adaptability of the network. With paired low-light instances, we propose a novel learning-based LIE method, termed PairLIE. The core insight of our approach is to sufficiently exploit priors from paired low-light images. Therefore, we consider employing the Retinex theory and deep learning to decompose low-light images into illumination and reflectance components. First, since the two low-light inputs share the same content, the estimated reflectance components are expected to be consistent. Second, instead of directly imposing the Retinex decomposition on the original low-light images, we adopt a simple self-supervised mechanism to remove inappropriate features and implement the Retinex decomposition on the optimized image. This can avoid sub-optimal estimations because the Retinex model has limitations in low-light modeling. As a result, with fewer prior constraints and a simpler network, the proposed PairLIE achieves competitive performance on public LIE datasets. In summary, the contributions of this paper are as follows: • We propose a generic LIE solution using paired low-light images. The network is based on Retinex decomposition with several novel reference-free losses. • To achieve an accurate decomposition, we first project the original image to remove inappropriate features. • With fewer manually designed priors and a simpler network, the proposed solution achieves comparable performance to state-of-the-art methods. 2. Related Work Over the decades, extensive LIE methods have been presented, which can be roughly categorized into conventional approaches and learning-based techniques.
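A minimal sketch of the reference-free Retinex constraints described above, assuming a network that returns an illumination-reflectance pair for each input; this is an illustrative subset (reconstruction plus cross-image reflectance consistency), not the full PairLIE loss.

import torch.nn.functional as F

def paired_retinex_losses(net, img1, img2):
    """Reference-free losses for two low-light shots of the same scene.
    Assumed interface: net(img) returns (L, R) with img ≈ L * R (Retinex)."""
    L1, R1 = net(img1)
    L2, R2 = net(img2)
    recon = F.l1_loss(L1 * R1, img1) + F.l1_loss(L2 * R2, img2)  # I = L ∘ R
    consistency = F.l1_loss(R1, R2)      # same scene -> consistent reflectance
    return recon + consistency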
Aakanksha_Improving_Robustness_of_Semantic_Segmentation_to_Motion-Blur_Using_Class-Centric_Augmentation_CVPR_2023 | Abstract Semantic segmentation involves classifying each pixel into one of a pre-defined set of object/stuff classes. Such fine-grained detection and localization of objects in the scene is challenging by itself, and the complexity increases manifold in the presence of blur. With cameras becoming increasingly lightweight and compact, blur caused by motion during capture time has become unavoidable. Most research has focused on improving segmentation performance for sharp, clean images, and the few works that deal with degradations consider motion-blur as one of many generic degradations. In this work, we focus exclusively on motion-blur and attempt to achieve robustness for semantic segmentation in its presence. Based on the observation that segmentation annotations can be used to generate synthetic space-variant blur, we propose a Class-Centric Motion-Blur Augmentation (CCMBA) strategy. Our approach involves randomly selecting a subset of semantic classes present in the image and using the segmentation map annotations to blur only the corresponding regions. This enables the network to simultaneously learn semantic segmentation for clean images, images with egomotion blur, as well as images with dynamic scene blur. We demonstrate the effectiveness of our approach for both CNN- and Vision Transformer-based semantic segmentation networks on the PASCAL VOC and Cityscapes datasets. We also illustrate the improved generalizability of our method to complex real-world blur by evaluating on the commonly used deblurring datasets GoPro and REDS. | 1. Introduction Motion-blur has become ubiquitous in our lives, driven largely by the compactness and affordability of lightweight cameras. While camera quality has improved significantly, sensing technology cannot suppress blur completely yet. For handheld cameras and cameras mounted on moving vehicles, blur caused by motion during capture is a major reason for the occurrence of blurred images. Figure 1. (a) Motion-blurred images, (b) segmentation from a network trained on clean images, (c) segmentation after using our augmentation for training, (d) ground truth. Most research in semantic segmentation, especially the deep-learning-based, state-of-the-art methods, focuses on increasing accuracy [30], [38] and throughput [31], [23]. While these models have achieved significant gains, they are trained on clean data and consequently struggle to perform when presented with out-of-distribution data [13]. Such wrong predictions can have grave consequences for safety-critical applications like autonomous driving. Therefore, it becomes essential to focus on finding ways of making these models robust to unavoidable degradations like motion-blur. When trying to generalize to blurred images, an off-the-shelf approach would be to use a deblurring algorithm to deblur the image before continuing with the downstream task of segmentation. However, the generalization abilities of deblurring models are still subpar and they struggle to perform well on real blurred out-of-distribution images [37]. Additionally, many of these image restoration models process images at multiple scales in a hierarchical fashion, which improves performance but also increases latency and memory requirements [34,35]. Consequently, this two-stage approach of using deblurring as
pre-processing to obtain deblurred images before attempting segmentation is not viable for deployment and real-time applications. Hence, there is a strong need to devise single-stage methods that can bypass this step. Some recent works have tried to focus on analysing and improving the robustness of existing models for different tasks to a spectrum of commonly encountered degradations. Unlike adversarial robustness, [11] defines robustness as the ability of a model trained on sharp images to retain competitive performance on images having degradations. [11,13,21] benchmark standard models for their robustness to multiple severity levels of sixteen commonly occurring degradations, including motion-blur, for the tasks of object recognition, object detection and semantic segmentation. Note that the motion-blur being considered here is spatially invariant and linear. [12] and [16] propose augmentation strategies to improve the generic robustness of models across the sixteen degradations. While these augmentations have resulted in increased robustness, we believe that significant improvements can be made if degradation-specific and task-specific insights are exploited. [14] leverages the semantic segmentation annotation maps to increase the shape bias in the network, which has been established to improve robustness [7]. However, this is only a task-specific augmentation and attempts to achieve improvements over all degradation types. In this work, we attempt to make semantic segmentation robust to the presence of generic space-variant motion-blur. In particular, we develop a Class-Centric Motion-Blur Augmentation (CCMBA) strategy where we leverage the segmentation map annotations to introduce blur in specific regions of the image to enforce distinguishability and easier training. We randomly choose a subset of classes that we want to blur, blur the corresponding foreground image using a synthetic non-linear kernel, and then blend the blurred foreground image with the sharp background image. Since motion-blur can be due to camera ego-motion as well as dynamic scenes, the advantage of our augmentation strategy lies in better generalization to dynamic scenes due to its semantic class-centric nature. Fig. 1 shows the segmentation results obtained on motion-blurred images with and without our method. Our contributions are the following: • An effective data augmentation scheme for reliably segmenting out regions from motion-blurred images without the need for deblurring. • Our method is generic in nature and can be used with any supervised semantic segmentation network. • While our model is trained only on synthetically generated data, the class-centric nature of our augmentation enables it to perform well on general dynamic blur datasets like GoPro and REDS, especially for common classes like humans. • We report improved performance for DeepLabv3+ over baseline methods, with 3.2% and 3% increases on the PASCAL VOC and Cityscapes datasets, respectively, for the highest level of blur. We also achieve improvements on the Cityscapes-C dataset over previous works, with a maximum 9% increase for the highest levels of blur. Figure 2.
Class-Centric Motion-Blur Augmentation (CCMBA): Given a sharp image, its segmentation mask, and a motion-blur kernel, we synthetically blur the regions corresponding to a subset of classes present in the image to mimic dynamic scenes. When all classes are chosen, camera motion-blur is synthesized, making our augmentation applicable to generic blur. |
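To make the augmentation concrete, the following is a minimal sketch of the class-centric blur idea, assuming a NumPy image, an integer segmentation map, and a caller-supplied synthetic blur kernel; the function name and blending details are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import convolve

def class_centric_blur(image, seg_map, kernel, classes_to_blur):
    """Blur only the pixels whose segmentation label is in classes_to_blur.

    image:   float array (H, W, 3) in [0, 1]
    seg_map: int array (H, W) of per-pixel class ids
    kernel:  2D float array summing to 1 (synthetic motion-blur kernel)
    Passing all classes present in the image approximates camera
    ego-motion blur; a random subset mimics dynamic-scene blur.
    """
    # Blur every channel of the full image once.
    blurred = np.stack(
        [convolve(image[..., c], kernel, mode="reflect") for c in range(3)],
        axis=-1,
    )
    # Soft mask of the chosen classes, blurred with the same kernel so the
    # composite falls off smoothly at object boundaries.
    mask = np.isin(seg_map, list(classes_to_blur)).astype(np.float32)
    soft_mask = convolve(mask, kernel, mode="reflect")[..., None]
    # Blend the blurred foreground over the sharp background.
    return soft_mask * blurred + (1.0 - soft_mask) * image
```

In use, one would sample a random subset of the labels present in `seg_map` for each training image, so the same pipeline yields clean, ego-motion-blurred, and dynamic-scene-blurred examples.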
Alper_Is_BERT_Blind_Exploring_the_Effect_of_Vision-and-Language_Pretraining_on_CVPR_2023 | Abstract Most humans use visual imagination to understand and reason about language, but models such as BERT reason about language using knowledge acquired during text-only pretraining. In this work, we investigate whether vision-and-language pretraining can improve performance on text-only tasks that involve implicit visual reasoning, focusing primarily on zero-shot probing methods. We propose a suite of visual language understanding (VLU) tasks for probing the visual reasoning abilities of text encoder models, as well as various non-visual natural language understanding (NLU) tasks for comparison. We also contribute a novel zero-shot knowledge probing method, Stroop probing, for applying models such as CLIP to text-only tasks without needing a prediction head such as the masked language modelling head of models like BERT. We show that SOTA multimodally trained text encoders outperform unimodally trained text encoders on the VLU tasks while being under-performed by them on the NLU tasks, lending new context to previously mixed results regarding the NLU capabilities of multimodal models. We conclude that exposure to images during pretraining affords inherent visual reasoning knowl-*These authors contributed equally to this workedge that is reflected in language-only tasks that require im-plicit visual reasoning. Our findings bear importance in the broader context of multimodal learning, providing princi-pled guidelines for the choice of text encoders used in such contexts1. | 1. Introduction Humans are multimodal learners. We communicate with each other about things that we have experienced and knowledge we have gained using our senses—most com-monly including sight as well as hearing, touch, smell, and taste. Our communication channel is limited to a single modality—spoken language, signed language, or text—but a reader or listener is expected to use his or her imagination to visualize and reason about the content being described. In general, language is used to describe scenes, events, and images; the words used to describe these are used to con-jure up a visual impression in the listener. Therefore, it is natural to consider the types of visual reasoning used in un-derstanding language, and to ask how well we can currently 1Our code will be made available at https://isbertblind. github.io/ This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 6778 model them with computational methods. Consider, for instance, the questions in Figure 1. Con-creteness is typically correlated with how well a concept can be visually imagined. For example, a concrete word such as present often has a unique visual representation. In addition, common associations such as ocean→blue (color) and corn chip →triangle (shape) reflect properties of an imagined visual representation of the item in ques-tion. These properties may be difficult to infer from text alone without prior knowledge gained from visual input; for instance, a number of studies have investigated the partial ability of blind English speakers to predict color associa-tions and how it differs from the intuition of sighted speak-ers2[40, 50, 51, 54, 65]. 
There has been a wealth of recent research vision-and-language (V&L) tasks involving both text and image data, and the use of vision-language pretraining (VLP) to cre-ate models that are able to reason jointly about both of these modalities together [11, 12, 29, 35]. Notable in this regard is CLIP [46], consisting of paired text and image en-coders jointly trained on a contrastive objective, that learns to align text and image embeddings in a shared semantic space. On the other hand, text encoder models such as BERT [14] learn to reason about text in a unimodal vac-uum, with knowledge derived from pretraining tasks that only involve textual data. Prior work has investigated the performance of multi-modally trained text encoders on various natural language understanding (NLU) tasks with mixed results, sometimes finding that they are outperformed by unimodal models [22] and at other times suggesting improved performance [69]. However, these works fine-tune the models under consider-ation on NLU tasks before evaluation, making it difficult to disentangle the effects of multimodal pretraining and fine-tuning configuration on the observed performance. Addi-tionally, these works do not address the distinction between NLU tasks requiring implicit visual reasoning and ones that are purely non-visual. We refer to natural language infer-ence involving implicit visual reasoning as visual language understanding (VLU) and propose a suite of VLU tasks that may be used to evaluate visual reasoning capabilities of pre-trained text encoders, focusing primarily on zero-shot meth-ods. We compare multimodally trained text encoders such as that of CLIP to BERT and other unimodally trained text en-coders, evaluating their performance on our suite of VLU tasks. We evaluate these models in without modifying their internal weights in order to probe their knowledge obtained during pretraining. A key design aspect of these tests is the probing method used to evaluate knowledge. Previ-2This phenomenon is illustrated in this interview with Tommy Edison, a congenitally blind man, in which he describes his understanding and fre-quent confusion regarding color associations.ous work has probed the knowledge of BERT and sim-ilar models using a masked language modelling (MLM) paradigm [43, 48], but this cannot be directly applied to CLIP since it was not pretrained with MLM. We there-fore propose a new zero-shot probing method that we term Stroop probing . This is based on the psychological Stroop effect [39] (described in Section 3.2), which suggests that salient items should have a stronger interference effect on the representation of their context. Strikingly, we find that the multimodally trained text en-coders under consideration outperform unimodally trained text encoders on VLU tasks, both when comparing to much larger encoders as well as ones of comparable size. We also compare these models on baseline NLU tasks that do not involve visual reasoning and find that models such as CLIP underperform on these tasks, demonstrating that they do not have a global advantage on NLU tasks. We conclude that exposure to images during pretraining improves per-formance on text-only tasks that require visual reasoning. Furthermore, our findings isolate the effect of the text com-ponent of multimodal models for tasks such as text to image generation, providing principled guidelines for understand-ing the knowledge that such models inject into downstream vision tasks. |
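As a concrete illustration of zero-shot probing with a contrastively trained text encoder, the snippet below scores color associations by measuring how close an attributed phrase stays to the embedding of its bare context; this is a simplification in the spirit of the interference idea behind Stroop probing, not the paper's exact procedure, and the model name and prompt template are assumptions.

```python
import torch
from transformers import CLIPModel, CLIPTokenizer

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

def association_scores(subject, candidates, template="{} is {}"):
    # Embed "the ocean is blue", "the ocean is red", ... together with the
    # bare subject, and score each candidate by the cosine similarity of the
    # attributed phrase to the subject embedding.
    texts = [template.format(subject, c) for c in candidates]
    texts.append(subject)
    inputs = tokenizer(texts, padding=True, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_text_features(**inputs)
    feats = feats / feats.norm(dim=-1, keepdim=True)
    subject_feat, cand_feats = feats[-1], feats[:-1]
    return {c: float(cand_feats[i] @ subject_feat) for i, c in enumerate(candidates)}

# A congruent attribute is expected to perturb the context representation least.
print(association_scores("the ocean", ["blue", "red", "yellow"]))
```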
Cheng_SDFusion_Multimodal_3D_Shape_Completion_Reconstruction_and_Generation_CVPR_2023 | Abstract In this work, we present a novel framework built to sim-plify 3D asset generation for amateur users. To enable in-teractive generation, our method supports a variety of in-put modalities that can be easily provided by a human, in-cluding images, text, partially observed shapes and com-binations of these, further allowing to adjust the strength of each input. At the core of our approach is an encoder-decoder, compressing 3D shapes into a compact latent rep-resentation, upon which a diffusion model is learned. To enable a variety of multi-modal inputs, we employ task-specific encoders with dropout followed by a cross-attentionmechanism. Due to its flexibility, our model naturally sup-ports a variety of tasks, outperforming prior works on shape completion, image-based 3D reconstruction, and text-to-3D. Most interestingly, our model can combine all these tasks into one swiss-army-knife tool, enabling the user to perform shape generation using incomplete shapes, images, and textual descriptions at the same time, providing the rel-ative weights for each input and facilitating interactivity. Despite our approach being shape-only, we further show an efficient method to texture the generated shape using large-scale text-to-image models. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 4456 | 1. Introduction Generating 3D assets is a cornerstone of immersive aug-mented/virtual reality experiences. Without realistic and di-verse objects, virtual worlds will look void and engagement will remain low. Despite this need, manually creating and editing 3D assets is a notoriously difficult task, requiring creativity, 3D design skills, and access to sophisticated soft-ware with a very steep learning curve. This makes 3D asset creation inaccessible for inexperienced users. Yet, in many cases, such as interior design, users more often than not have a reasonably good understanding of what they want to create. In those cases, an image or a rough sketch is some-times accompanied by text indicating details of the asset, which are hard to express graphically for an amateur. Due to this need, it is not surprising that democratizing the 3D content creation process has become an active re-search area. Conventional 3D generative models require direct 3D supervision in the form of point clouds [2, 21], signed distance functions (SDFs) [9, 25], voxels [42, 47], etc. Recently, first efforts have been made to explore the learning of 3D geometry from multi-view supervision with known camera poses by incorporating inductive biases via neural rendering techniques [5, 6, 14, 37, 52]. While com-pelling results have been demonstrated, training is often very time-consuming and ignores available 3D data that can be used to obtain good shape priors. We foresee an ideal collaborative paradigm for generative methods where mod-els trained on 3D data provide detailed and accurate geom-etry, while models trained on 2D data provide diverse ap-pearances. A first proof of concept is shown in Figure 1. In our pursuit of flexible and high-quality 3D shape gen-eration, we introduce SDFusion , a diffusion-based genera-tive model with a signed distance function (SDF) under the hood, acting as our 3D representation. 
Compared to other 3D representations, SDFs are known to represent well high-resolution shapes with arbitrary topology [9, 18, 23, 30]. However, 3D representations are infamous for demanding high computational resources, limiting most existing 3D generative models to voxel grids of 323resolution and point clouds of 2Kpoints. To side-step this issue, we first uti-lize an auto-encoder to compress 3D shapes into a more compact low-dimensional representation. Because of this, SDFusion can easily scale up to a 1283resolution. To learn the probability distribution over the introduced la-tent space, we leverage diffusion models, which have re-cently been used with great success in various 2D genera-tion tasks [4,19,22,26,35,40]. Furthermore, we adopt task-specific encoders and a cross-attention [34] mechanism to support multiple conditioning inputs, and apply classifier-free guidance [17] to enable flexible conditioning usage. Because of these strategies, SDFusion can not only use a va-riety of conditions from multiple modalities, but also adjust their importance weight, as shown in Figure 1. Comparedto a recently proposed autoregressive model [25] that also adopts an encoded latent space, SDFusion achieves supe-rior sample quality, while offering more flexibility to handle multiple conditions and, at the same time, features reduced memory usage. With SDFusion, we study the interplay be-tween models trained on 2D and 3D data. Given 3D shapes generated by SDFusion, we take advantage of an off-the-shelf 2D diffusion model [34], neural rendering [24], and score distillation sampling [31] to texture the shapes given text descriptions as conditional variables. We conduct extensive experiments on the ShapeNet [7], BuildingNet [38], and Pix3D [43] datasets. We show that SDFusion quantitatively and qualitatively outperforms prior work in shape completion, 3D reconstruction from images, and text-to-shape tasks. We further demonstrate the capa-bility of jointly controlling the generative model via multi-ple conditioning modalities, the flexibility of adjusting rela-tive weight among modalities, and the ability to texture 3D shapes given textual descriptions, as shown in Figure 1. We summarize the main contributions as follows: • We propose SDFusion, a diffusion-based 3D genera-tive model which uses a signed distance function as its 3D representation and a latent space for diffusion. • SDFusion enables conditional generation with multi-ple modalities, and provides flexible usage by adjust-ing the weight among modalities. • We demonstrate a pipeline to synthesize textured 3D objects benefiting from an interplay between 2D and 3D generative models. |
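For reference, the classifier-free guidance mechanism mentioned above can be summarized in a few lines; the `denoiser` interface below is a placeholder (the paper's network denoises the latent of an SDF autoencoder and attends to task-specific condition encodings), so this is a generic sketch rather than the authors' code.

```python
import torch

def guided_noise_estimate(denoiser, z_t, t, cond, null_cond, guidance_scale):
    """One classifier-free-guidance combination of noise predictions.

    denoiser(z_t, t, c) -> predicted noise with the same shape as z_t.
    guidance_scale = 0 ignores the condition, 1 is plain conditioning,
    and values > 1 strengthen the condition's influence on sampling.
    """
    eps_cond = denoiser(z_t, t, cond)          # conditional prediction
    eps_uncond = denoiser(z_t, t, null_cond)   # prediction with the condition dropped
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```

Training with condition dropout is what makes the unconditional branch available at sampling time, and per-modality scales of this kind are one way to realize the adjustable condition weights described above.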
Du_Learning_To_Render_Novel_Views_From_Wide-Baseline_Stereo_Pairs_CVPR_2023 | Abstract We introduce a method for novel view synthesis given only a single wide-baseline stereo image pair. In this challenging regime, 3D scene points are regularly observed only once, requiring prior-based reconstruction of scene geometry and appearance. We find that existing approaches to novel view synthesis from sparse observations fail due to recovering in-correct 3D geometry and due to the high cost of differentiable rendering that precludes their scaling to large-scale train-ing. We take a step towards resolving these shortcomings by formulating a multi-view transformer encoder, proposing an efficient, image-space epipolar line sampling scheme to assemble image features for a target ray, and a lightweight cross-attention-based renderer. Our contributions enable training of our method on a large-scale real-world dataset of indoor and outdoor scenes. We demonstrate that our method learns powerful multi-view geometry priors while reducing the rendering time. We conduct extensive comparisons on held-out test scenes across two real-world datasets, signif-†Equal Advising Project website: https : / / yilundu . github . io / wide _ baseline/icantly outperforming prior work on novel view synthesis from sparse image observations and achieving multi-view-consistent novel view synthesis. | 1. Introduction The goal of novel view synthesis is to render images of a scene from unseen camera viewpoints given a set of image observations. In recent years, the emergence of differentiable rendering [26, 28, 44, 45, 50] has led to a leap in quality and applicability of these approaches, enabling near photorealis-tic results for most real-world 3D scenes. However, methods that approach photorealism require hundreds or even thou-sands of images carefully exploring every part of the scene, where special care must be taken by the user to densely image all 3D points in the scene from multiple angles. In contrast, we are interested in the regime of novel view synthesis from a sparse set of context views. Specifically, this paper explores whether it is possible to sythesize novel view images using an extremely sparse set of observations. In the most challenging case, this problem reduces to using input images such that every 3D point in the scene is only ob-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 4970 served from a single camera perspective. Towards this goal, we propose a system that uses only a single wide-baseline stereo image pair of the scene as input. This stereo image pair regularly has little overlap, such that many 3D points are indeed only observed in one of the images, see Fig. 1. Image observations themselves are thus insufficient information to compute 3D geometry and appearance via multi-view stereo, and we must instead learn prior-based 3D reconstruction. Nevertheless, reasoning about multi-view consistency is crit-ical, as prior-based reconstructions must agree across images to ensure multi-view-consistent reconstruction. This is a novel problem setting: While some existing methods demonstrate novel view synthesis from very sparse observations [45, 51, 58], they are limited to object-level scenes. In contrast, we are interested in large real-world scenes that are composed of multiple objects with complex geometry and occlusions. 
Previous approaches for novel view synthesis of scenes focus on small baseline renderings using 3−10images as input [7, 8, 18, 25, 47, 53, 58]. In this setting, most 3D points in the scene are observed in multiple input images, and multi-view feature correspondences can be used to regress 3D geometry and appearance. Thus, these methods in practice learn to amortize multi-view stereo. In our setting, we use a wide-baseline stereo image pair as in-put, where it is not sufficient to rely on multi-view feature correspondences due to many points only being observed in a single view. We show that in this challenging setting, exist-ing approaches do not faithfully recover the 3D geometry of the scene. In addition, most existing methods rely on costly volume rendering for novel view synthesis, where the num-ber of samples per ray required for high-quality rendering makes it difficult to train on complex real-world scenes. In this paper, we propose a new method that addresses these limitations, and provides the first solution for high-quality novel view synthesis of a scene from a wide-baseline stereo image pair. To better reason about the 3D scene, we introduce a multi-view vision transformer that computes pixel-aligned features for each input image. In contrast to a monocular image encoder commonly used in previous ap-proaches [51, 53, 58], the multi-view transformer uses the camera pose information as input to better reason about the scene geometry. We reduce the memory and computational costs for computing image features by combining this vision transformer at lower resolutions with a CNN at higher reso-lutions. A multi-view feature matching step further refines the geometry encoded in these feature maps for any 3D point that can be observed in both images. We also introduce an efficient differentiable renderer that enables large-scale training. Existing approaches that use volume rendering sample points along camera rays in 3D and project these points onto the image planes to compute the corresponding features using bilinear interpolation. Since perspective projection is a non-linear operation, uniformlysampled 3D points are not uniformly distributed in 2D, lead-ing to some pixels in the feature maps being sampled mul-tiple times, and other pixels not being sampled at all. Thus, this sampling strategy does not use the information in the pixel-aligned feature maps optimally. We instead take an image-centric sampling approach where we first compute the epipolar lines of a target pixel in the input images, and sample points uniformly on these lines in 2D. This exploits the fact that the number of pixels along the epipolar lines is the maximum effective number of samples. In addition, we use lightweight cross-attention layers that directly aggre-gate the sampled features and compute the pixel color. In contrast to volume rendering where we need to sample very close to a surface in order to render its color, thus requiring a large number of samples, our learned renderer does not share this limitation and can compute the pixel color even with sparse samples. Our lightweight rendering and feature back-bone components enable us to train on large-scale real-world datasets. We demonstrate through extensive experiments on two datasets that our method achieves state-of-the-art results, significantly outperforming existing approaches for novel view synthesis from sparse inputs. |
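To make the image-space sampling scheme concrete, the snippet below computes a target pixel's epipolar line in a source view from a fundamental matrix and samples it uniformly inside the image; it illustrates the geometry only, since the paper's renderer then gathers pixel-aligned features at such locations and aggregates them with cross-attention. The helper and its inputs are assumptions, not the authors' code.

```python
import numpy as np

def sample_epipolar_points(F, pixel_xy, width, height, num_samples=64):
    """Uniform samples on the epipolar line of a target pixel in a source view.

    F: (3, 3) fundamental matrix mapping target-view pixels to source-view lines
    pixel_xy: (x, y) pixel coordinates in the target view
    Returns a (num_samples, 2) array, or an empty array if the line misses
    the source image.
    """
    x = np.array([pixel_xy[0], pixel_xy[1], 1.0])
    a, b, c = F @ x  # epipolar line a*u + b*v + c = 0 in the source image
    pts = []
    for u in (0.0, width - 1.0):      # intersections with left/right borders
        if abs(b) > 1e-8:
            v = -(a * u + c) / b
            if 0.0 <= v <= height - 1.0:
                pts.append((u, v))
    for v in (0.0, height - 1.0):     # intersections with top/bottom borders
        if abs(a) > 1e-8:
            u = -(b * v + c) / a
            if 0.0 <= u <= width - 1.0:
                pts.append((u, v))
    pts = np.array(pts, dtype=np.float64)
    if len(pts) < 2:
        return np.empty((0, 2))
    # Corners can be found twice; keep the two intersections farthest apart.
    d = np.linalg.norm(pts[:, None] - pts[None], axis=-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    ts = np.linspace(0.0, 1.0, num_samples)[:, None]
    return (1.0 - ts) * pts[i][None] + ts * pts[j][None]
```

Because the samples live directly on source-image pixels, their count is naturally bounded by the number of pixels the line crosses, which is the point of the image-centric sampling argument above.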
Cho_Generative_Bias_for_Robust_Visual_Question_Answering_CVPR_2023 | Abstract The task of Visual Question Answering (VQA) is known to be plagued by the issue of VQA models exploiting biases within the dataset to make their final predictions. Various previous ensemble-based debiasing methods have been proposed where an additional model is purposefully trained to be biased in order to train a robust target model. However, these methods compute the bias for a model simply from the label statistics of the training data or from single-modal branches. In this work, in order to better learn the bias a target VQA model suffers from, we propose a generative method to train the bias model directly from the target model, called GenB. In particular, GenB employs a generative network to learn the bias in the target model through a combination of an adversarial objective and knowledge distillation. We then debias our target model with GenB as a bias model, and show through extensive experiments the effects of our method on various VQA bias datasets including VQA-CP2, VQA-CP1, GQA-OOD, and VQA-CE, and show state-of-the-art results with the LXMERT architecture on VQA-CP2. | 1. Introduction Visual Question Answering (VQA) [5] is a challenging multi-modal task that requires a model to correctly understand and predict an answer given an input pair of image and question. Various studies have shown that VQA models are prone to biases within the dataset and tend to rely heavily on language biases that exist within the dataset [2, 17, 51], and VQA models tend to predict similar answers only depending on the question, regardless of the image. In response to this, recent works have developed various bias reduction techniques, and recent methods have exploited ensemble-based debiasing methods [7, 13, 19, 41] extensively. Among ensemble-based methods, additional models are introduced to concurrently learn biases that might exist within each modality or dataset. For example, in works such as [7, 19], the Question-Answer model is utilized to determine the language prior biases that exist when a model is asked to give an answer based solely on the question. Figure 1. Given a Question Type (“What color is...”), we show all of the averaged answers within the training dataset. The answer computed from the entire training dataset is the known dataset label average or dataset bias as in [13, 19]. We see that the averaged model predictions of the Question-Answer Model and Visual-Question-Answer Model are significantly different. This Question-Answer model is then utilized to train a robust “target” model, which is used for inference. The key purpose of an ensemble “bias” model is to capture the biases that are formed with its given inputs (i.e., language prior biases from the Question-Answer model). In doing so, if this model is able to represent the bias well, this bias model can be used to teach the target model to avoid such biased answers. In other words, the better the bias model can learn the biases, the better the target model can avoid such biases. Existing ensemble-based methods either use pre-computed label statistics of training data (GGE-D [19] and LMH [13]), or single-modal branches that compute the answer from either the question or the image [7, 13, 19, 37].
However, we conjecture that there is a limit to the bias representation that can be obtained from such methods, as the model's representative capacity is limited by its inputs. In addition, pre-computed label statistics represent only part of the bias [19]. As shown in Fig. 1, given a question type, the pre-computed label statistics (or known dataset bias) are noticeably different from the predictions of a model trained with the question or with the image and question. This discrepancy signifies that there is a part of the bias that we cannot fully model simply with the previous methods. Therefore, we propose a novel stochastic bias model that learns the bias directly from the target model. More specifically, to directly learn the bias distribution of the target model, we model the bias model as a Generative Adversarial Network (GAN) [16] to stochastically mimic the target model's answer distribution given the same question input by introducing a random noise vector. As seen throughout the literature, most biases are held within the question [2], so we use questions as the main bias modality. To further enforce this, we utilize knowledge distillation [20] on top of adversarial training to force the bias model to be as close as possible to the target model, so that the target model learns from harder negative supervision from the bias model. Finally, with our generative bias model, we then use our modified debiasing loss function to train our target model. Our final bias model is able to train a target model that outperforms previous uni-modal and multi-modal ensemble-based debiasing methods by a large margin. To the best of our knowledge, we are the first to train the bias model by directly leveraging the behavior of the target model using a generative model for VQA. To show the efficacy and robustness of our method, we perform extensive experiments on commonly used robustness-testing VQA datasets and various different VQA architectures. Our method shows state-of-the-art results on all settings without the use of external human annotations or dataset reshuffling methods. Our contributions are as follows: • We propose a novel bias model for ensemble-based debiasing for VQA by directly leveraging the target model, which we name GenB. • In order to effectively train GenB, we employ a Generative Adversarial Network and a knowledge distillation loss to capture both the dataset distribution bias and the bias from the target model. • We achieve state-of-the-art performance on VQA-CP2 and VQA-CP1 as well as the more challenging GQA-OOD dataset and VQA-CE using the simple UpDn baseline without extra annotations or dataset reshuffling, and state-of-the-art VQA-CP2 performance on the LXMERT backbone.
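As a small illustration of the distillation component described above, the loss below pulls the bias model's answer distribution toward that of the frozen target model for the same question; the temperature and reduction choices are assumptions, and the adversarial and final debiasing objectives are omitted.

```python
import torch
import torch.nn.functional as F

def bias_distillation_loss(bias_logits, target_logits, temperature=1.0):
    """Standard knowledge-distillation KL over the answer vocabulary.

    bias_logits:   (batch, num_answers) output of the bias model
    target_logits: (batch, num_answers) output of the target model (detached)
    """
    log_p_bias = F.log_softmax(bias_logits / temperature, dim=-1)
    p_target = F.softmax(target_logits.detach() / temperature, dim=-1)
    return F.kl_div(log_p_bias, p_target, reduction="batchmean") * temperature ** 2
```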
Chaudhuri_Data-Free_Sketch-Based_Image_Retrieval_CVPR_2023 | Abstract Rising concerns about privacy and anonymity preser-vation of deep learning models have facilitated research in data-free learning (DFL). For the first time, we iden-tify that for data-scarce tasks like Sketch-Based Image Re-trieval (SBIR), where the difficulty in acquiring paired pho-tos and hand-drawn sketches limits data-dependent cross-modal learning algorithms, DFL can prove to be a much more practical paradigm. We thus propose Data-Free (DF)-SBIR, where, unlike existing DFL problems, pre-trained, single-modality classification models have to be leveraged to learn a cross-modal metric-space for retrieval without access to any training data. The widespread availability of pre-trained classification models, along with the difficulty in acquiring paired photo-sketch datasets for SBIR justify the practicality of this setting. We present a methodology for DF-SBIR, which can leverage knowledge from models independently trained to perform classification on photos and sketches. We evaluate our model on the Sketchy, TU-Berlin, and QuickDraw benchmarks, designing a variety of baselines based on state-of-the-art DFL literature, and observe that our method surpasses all of them by signifi-cant margins. Our method also achieves mAPs competitive with data-dependent approaches, all the while requiring no training data. Implementation is available at https: //github.com/abhrac/data-free-sbir . | 1. Introduction Motivated by the high degree of expressiveness and flex-ibility provided by sketches, sketch-based image retrieval (SBIR) has emerged as a popular area of computer vision research [ 3,6,7,13,15]. SBIR is generally achieved by train-ing photo and sketch encoders to respectively map photo and sketch inputs to a class or instance aligned common space. Training deep neural photo-sketch encoders for this task, however, requires datasets with matching photo-sketch pairs [ 45,58]. Unlike photos, sketches are fundamentally difficult to acquire as drawing them involve long time peri-ods of laborious human participation. Driven by this practi-Photo Encoder Sketch Encoder Photo Encoder Sketch EncoderInverted Sketch Classifier Cross-Modal Representation Space Data -Free (Proposed)Data -Dependent (Conventional) Inverted Photo Classifier Real Data Independently TrainedClass -Aligned Estimated Distributions Figure 1. Our proposed Data-Free setting for SBIR does not need a real-world dataset of paired sketches and photos. Using only independently trained, modality-specific classifiers, it can estimate their train set distributions, as well as pair them at class-level for training the sketch and photo encoders. cal constraint, the problem has been studied under a variety of data-scarce settings like semi-supervised [ 3], few-shot class incremental [ 5], zero-shot [ 14], and any-shot [ 15]. However, all such settings assume the availability of some amount of instance/class-aligned data for training the en-coders. With the tremendous effort involved in acquir-ing such labelled photo-sketch pairs [ 3,13,14], as well as the rising concerns about privacy, security and anonymity preservation abilities of deep learning models [ 8,31,46,51], such assumptions may no longer be practical. With this view, we propose Data-Free Sketch-Based Im-age Retrieval (DF-SBIR), a novel setting that requires train-ing photo and sketch encoders for retrieval, but with no training data. 
Specifically, we only assume access to pre-trained photo and sketch classification models. In contrast to unsupervised cross-domain image retrieval [ 22] which only requires access to training data, but with no in-domain or cross-domain labels, our setting goes a step further and assumes access to no training data at all. Since classification does not require cross-modal pairings as in SBIR, and ow-ing to the recent advances in domain generalization [ 29,52], This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 12084 such pre-trained classifiers are widely available [ 40,52], making our setting quite practical. Under this scenario, the problem of training the encoders can be posed in the light of data-free knowledge distillation (DFKD) [ 8,30]. Classi-cal knowledge distillation [ 21] aims to transfer knowledge from a pre-trained model (teacher) to a different model (stu-dent), by aligning the predictions of the latter with the for-mer on train set inputs. Differently, DFKD aims to achieve this knowledge transfer without access to any form of train-ing data. The conventional approach to DFKD involves the following two steps – (1) Reconstructing the train set dis-tribution of the teacher; (2) Training the student network to match its predictions with that of the teacher, on sam-ples from the reconstructed distribution. However, existing DFKD approaches so far have only been able to operate in a single modality, performing the same kind of task as that of the teacher. SBIR, being a cross-modal retrieval problem, cannot be tackled in the data-free setting by directly adapt-ing the machineries developed for DFKD, the reasons for which we detail below. First , the teachers (being classifiers) and the students (being encoders) operate in metric spaces of different na-ture, i.e., probabilistic and Euclidean respectively. This re-strains us from measuring the agreement between teach-ers and students in a straightforward way, thus preventing the direct application of state-of-the-art approaches from the DFKD literature like data-free adversarial distillation [10,35]. We address this by designing a unified, class-proxy based interface via which the teachers and students can interact. Second , the sketch and the image classifiers that act as teachers are independently trained on modality-specific data. Their intermediate representations are thus modality sensitive. However, the representations learned by the encoders to be used for DF-SBIR need to be modal-ity invariant. To this end, we introduce the concept of a modality guidance network, which constrains the recon-structions to belong to specific (photo/sketch) modalities. Training the encoders with such samples will ensure that they learn to eliminate unnecessary, modality-specific in-formation. Third , the independent training of the classifiers also mean that their train set distributions may not have direct class-level correspondence. To address this, we de-sign our distribution estimation process to reconstruct class-aligned samples, i.e., ones that have class-level correspon-dence across the two modalities. This will guarantee the availability of matching photo-sketch pairs for the metric learning of the encoders. Our approach is hence able to perform Data-Free Learning Across Modalities and Metric-Spaces, which motivates us to abbreviate it as CrossX-DFL. 
We make the following contributions – (1) Propose Data-Free SBIR, a novel setting keeping in view the data-scarcity constraints arising from the collection of paired photos and sketches for SBIR, as well as concerns around pri-vacy preservation; (2) A class-proxy based approach to per-form data-free adversarial distillation from teachers with probabilistic outputs to students with outputs in the Eu-clidean space; (3) A novel technique to reconstruct class-aligned samples across independent modalities for cross-modal data-free knowledge distillation; (4) Introduce the concept of a modality guidance network to constrain the re-constructed sample distributions to specific modalities; (5) Extensive experiments on benchmark datasets and ablation studies that demonstrate the usefulness of our novel compo-nents in providing competitive performance relative to the data-dependent setting. |
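One building block implied by this setting is inverting a frozen, pre-trained classifier to synthesize class-conditioned surrogate data; a rough, generic sketch is shown below, with the regularizers typically used by data-free distillation methods omitted and all shapes and hyperparameters chosen as assumptions rather than the paper's values. Running the same procedure on the photo and sketch classifiers with a shared class id is one way to obtain class-aligned pairs for metric learning.

```python
import torch
import torch.nn.functional as F

def invert_classifier(classifier, class_id, input_shape=(1, 3, 224, 224),
                      steps=200, lr=0.05, tv_weight=1e-4):
    """Optimize random noise so the frozen classifier assigns it to class_id."""
    classifier.eval()
    for p in classifier.parameters():
        p.requires_grad_(False)
    x = torch.randn(input_shape, requires_grad=True)
    target = torch.tensor([class_id])
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = classifier(x)
        # Total-variation prior keeps the synthesized sample spatially smooth.
        tv = (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + \
             (x[..., :, 1:] - x[..., :, :-1]).abs().mean()
        loss = F.cross_entropy(logits, target) + tv_weight * tv
        loss.backward()
        opt.step()
    return x.detach()
```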
Guo_Distilling_Cross-Temporal_Contexts_for_Continuous_Sign_Language_Recognition_CVPR_2023 | Abstract Continuous sign language recognition (CSLR) aims to recognize glosses in a sign language video. State-of-the-art methods typically have two modules, a spatial perception module and a temporal aggregation module, which are jointly learned end-to-end. Existing results in [9, 20, 25, 36] have indicated that, as the frontal component of the overall model, the spatial perception module used for spatial feature extraction tends to be insufficiently trained. In this paper, we first conduct empirical studies and show that a shallow temporal aggregation module allows more thorough training of the spatial perception module. However, a shallow temporal aggregation module cannot well capture both local and global temporal context information in sign language. To address this dilemma, we propose a cross-temporal context aggregation (CTCA) model. Specifically, we build a dual-path network that contains two branches for perception of local temporal context and global temporal context. We further design a cross-context knowledge distillation learning objective to aggregate the two types of context and the linguistic prior. The knowledge distillation enables the resultant one-branch temporal aggregation module to perceive local-global temporal and semantic context. This shallow temporal perception module structure facilitates spatial perception module learning. Extensive experiments on challenging CSLR benchmarks demonstrate that our method outperforms all state-of-the-art methods. | 1. Introduction Sign language is a visual language that deaf and hearing-impaired people use for ease of communication. Because sign language has a grammatical structure and expression different from natural spoken language, deaf and hearing-impaired people can hardly communicate normally with hearing people in daily life. *Wanli Xue and Qing Guo are corresponding authors (xue-wanli@email.tjut.edu.cn and tsingqguo@ieee.org). Figure 1. (a) is the common CSLR framework. (b) presents the generalization capability (i.e., information stored in weights (IIW) [31]) of TAMs and SPMs of the baseline, state-of-the-art methods and the proposed method. (c) is the performance of the state-of-the-art methods, including word error rate, test time, and model size. To eliminate this communication gap, continuous sign language recognition (CSLR) requires recognizing various glosses from a sign language video. Due to the data collection and annotation being labor-intensive, CSLR benchmarks adopt a sentence-annotation manner for all sign language videos [1, 15, 33]. In recent years, there is a consensus among state-of-the-art methods on a baseline framework (see Fig. 1 (a)).
It is made up of a spatial perception module (SPM), a temporal aggregation module (TAM) with two components for local and global temporal perception, and the connectionist temporal classification (CTC) loss [8] for training. At present, these methods [9, 20, 25, 36] have identified one limitation of this framework: the temporal aggregation module can lead to an insufficiently trained spatial perception module and affect the final accuracy. We use a recent interpretation method (i.e., the compression of information stored in weights (IIW) [31]) to measure the generalization capability of different neural modules. Fig. 1 (b) shows the IIWs of TAMs and SPMs of the baseline and state-of-the-art works and suggests a positive relation, i.e., a low-generalization TAM (i.e., high IIW) usually leads to a low-generalization SPM. We provide more studies in Sec. 3. Y. Min et al. [20] measure the difference of correctly and incorrectly recognized results between auxiliary and primary classifiers to evaluate model overfitting, and A. Hao et al. [9] visualized heatmaps of TAM's self-similarity matrices to show what the local and global temporal perception modules learn. However, there are no straightforward quantitative studies discussing the effects of TAM on SPM; we do not know how significant the effects could be and have no clear picture of the desired temporal aggregation. In this paper, we extensively study the limitations and desirable properties of the temporal aggregation module in the CSLR framework by constructing a baseline framework and performing extensive empirical studies. Our insight is that a desired temporal aggregation module should be a shallow architecture to allow more effective training of the spatial perception module, but should also be a deep one for high temporal aggregation capability. However, it is quite challenging for the temporal aggregation module to achieve these properties simultaneously. To overcome this challenge, we propose cross-temporal context aggregation (CTCA), in which a shallow temporal aggregation module is capable of incorporating local-global temporal contexts and the linguistic prior. Specifically, we construct a dual-path network, which decouples the local and global perception modules and imposes a linguistic module in parallel. This architecture ensures local context perception, global context perception, and linguistic prior extraction. Furthermore, we propose a cross-context knowledge distillation loss function to transfer the local temporal context and the linguistic prior to the global perception module. Notice that the spatial perception module can facilitate itself by receiving cross-context knowledge as supervision during distillation. Fig. 1(b) shows that both the SPM and TAM in CTCA achieve higher generalization than those of the baselines. Consequently, Fig. 1(c) shows CTCA's superiority: it outperforms the state-of-the-art methods on WER, test time, and model size. |
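For readers unfamiliar with the CTC objective used to train this framework, the snippet below shows how per-frame gloss logits from the temporal module are matched to an ordered gloss sentence without any frame-level alignment; the tensor sizes are illustrative, and the shapes follow torch.nn.CTCLoss conventions rather than any particular CSLR codebase.

```python
import torch
import torch.nn as nn

T, N, C = 120, 2, 1296                      # frames, batch size, gloss vocabulary (blank = 0)
logits = torch.randn(T, N, C)               # per-frame outputs of the temporal module
log_probs = logits.log_softmax(dim=-1)      # CTC expects log-probabilities

targets = torch.randint(1, C, (N, 12))      # gloss ids of each sentence annotation (no blanks)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 12, dtype=torch.long)

ctc = nn.CTCLoss(blank=0, zero_infinity=True)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
```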
Chiu_Automatic_High_Resolution_Wire_Segmentation_and_Removal_CVPR_2023 | Abstract Wires and powerlines are common visual distractions that often undermine the aesthetics of photographs. The manual process of precisely segmenting and removing them is extremely tedious and may take up hours, especially on high-resolution photos where wires may span the en-tire space. In this paper, we present an automatic wire clean-up system that eases the process of wire segmenta-tion and removal/inpainting to within a few seconds. We observe several unique challenges: wires are thin, lengthy, and sparse. These are rare properties of subjects that com-mon segmentation tasks cannot handle, especially in high-resolution images. We thus propose a two-stage method that leverages both global and local contexts to accurately seg-ment wires in high-resolution images efficiently, and a tile-based inpainting strategy to remove the wires given our pre-dicted segmentation masks. We also introduce the first wire segmentation benchmark dataset, WireSegHR. Finally, we demonstrate quantitatively and qualitatively that our wire clean-up system enables fully automated wire removal with great generalization to various wire appearances.1. Introduction Oftentimes wire-like objects such as powerlines and ca-bles can cross the full width of an image and ruin an other-wise beautiful composition. Removing these “distractors” is thus an essential step in photo retouching to improve the visual quality of a photograph. Conventionally, removing a wire-like object requires two steps: 1) segmenting out the wire-like object, and 2) removing the selected wire and in-painting with plausible contents. Both steps, if done manu-ally, are extremely tedious and error-prone, especially for high-resolution photographs that may take photographers up to hours to reach a high-quality retouching result. In this paper, we explore a fully-automated wire seg-mentation and inpainting solution for wire-like object seg-mentation and removal with tailored model architecture and data processing. For simplicity, we use wire to refer to all wire-like objects, including powerlines, cables, support-ing/connecting wires, and objects with wire-like shapes. Wire semantic segmentation has a seemingly similar problem setup with generic semantic segmentation tasks; they both take in a high-resolution image and generate dense predictions at a pixel level. However, wire semantic This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 2183 segmentation bears a number of unique challenges. First, wires are commonly long and thin, oftentimes spanning the entire image yet having a diameter of only a handful of pix-els. A few examples are shown in Figure 2. This prevents us from getting a precise mask based on regions of inter-est. Second, the input images can have arbitrarily high res-olution up to 10k×10k pixels for photographic retouching applications. Downsampling such high-resolution images can easily cause the thin wire structures to disappear. This poses a trade-off between preserving image size for infer-ence quality and run-time efficiency. Third, while wires have regular parabolic shapes, they are often partially oc-cluded and can reappear at arbitrary image location, thus not continuous. (e.g. [20, 36]). 
To account for these challenges, we propose a system for automatic wire semantic segmentation and removal. For segmentation, we design a two-stage coarse-to-fine model that leverages both pixel-level details in local patches and global semantics from the full image content, and runs ef-ficiently at inference time. For inpainting, we adopt an ef-ficient network architecture [35], which enables us to use a tile-based approach to handle arbitrary high resolution. We design a training strategy to enforce color consistency be-tween the inpainted region and the original image. We also present the first benchmark dataset, WireSegHR, for wire semantic segmentation tasks, where we collect and anno-tate high-resolution images with diverse scene contents and wire appearances. We provide analyses and baseline com-parisons to justify our design choices, which include data collection, augmentation, and our two-stage model design. Together, these design choices help us overcome the unique challenges of accurately segmenting wires. Our contribu-tions are as follows: •Wire segmentation model: We propose a two-stage model for wire semantic segmentation that leverages global context and local information to predict accu-rate wire masks at high resolution. We design an in-ference pipeline that can efficiently handle ultra-high resolution images. •Wire inpainting strategy: We design a tile-based in-painting strategy and tailor the inpainting method for our wire removal task given our segmentation results. •WireSegHR, a benchmark dataset: We collect a wire segmentation benchmark dataset that consists of high resolution images, with diversity in wire shapes and scene contents. We also release the manual an-notations that have been carefully curated to serve as ground truths. Besides, we also propose a benchmark dataset to evaluate inpainting quality. 2. Related Work Semantic segmentation Semantic segmentation has been actively researched over the past decade. For example, the (a) (b) (c) (d) (e) (f) (g) (h)Figure 2. Challenges of wire segmentation. Wires have a diverse set of appearances. Challenges include but are not limited to (a) structural complexity, (b) visibility and thickness, (c) partial oc-clusion by other objects, (d) camera aberration artifacts, and vari-ations in (e) object attachment, (f) color, (g) width and (h) shape. DeepLab series [4–6] has been one of the most widely used set of semantic segmentation methods. They leverage di-lated convolutions to capture long-range pixel correlations. Similarly, CCNet [14] attend to non-local regions via a two-step axial attention mechanism. PSPNet [48] use multi-scale pooling to extract high-resolution features. Recently, the self-attention mechanism [37] has gained increasing popularity. Transformer-based models for se-mantic segmentation [11, 12, 17,18,26,31,39,51] sig-nificantly outperform convolution-based networks since the attention modules benefit from their global receptive fields [39], which let the models attend to objects that span across larger portions of the feature map. While these above methods work well in common object semantic segmentation, when applied to our task of wire segmentation in high-resolution images, they either drop significantly in segmentation quality or require long infer-ence times. We show in Section 6that directly applying these methods to our task yields undesirable results. High-resolution image segmentation Segmentation in high-resolution images involves additional design consid-erations. 
It is computationally infeasible to perform in-ference on the full-resolution image with a deep network. As a result, to maximally preserve image details within the available computation resources, many methods employ a global-local inference pipeline. For instance, GLNet [7] simultaneously predict a coarse segmentation map on the downsampled image and a fine segmentation map on local patches at the original resolution, then fuse them to pro-duce the final prediction. MagNet [15] is a recent method that proposes to iteratively predict and refine coarse seg-mentation maps at multiple scales using a single feature ex-tractor and multiple lightweight refinement modules. Cas-cadePSP [8] train a standalone class-agnostic model to re-fine predictions at a higher resolution from a pretrained seg-mentation model. ISDNet [10] propose to use an extremely 2184 lightweight subnetwork to take in the entire full-resolution image. However, the subnetwork is limited in capacity and thus segmentation quality. We share the same idea with these past works on using a coarse-to-fine approach for wire segmentation, but modify the architecture and data process-ing to tailor to wires. Wire/Curve segmentation While few works tackle wire segmentation in high-resolution images, there are prior works that handle similar objects. For example, Transmis-sion Line Detection (TLD) is an actively researched area in aerial imagery for drone applications. Convolutional neu-ral networks are used [2, 23,28,46] to segment overhang-ing power cables in outdoor scenes. However, wire patterns in TLD datasets are relatively consistent in appearance and shape – evenly spaced and only spanning locally. In con-trast, we handle more generic wires seen in regular pho-tographic contents, where the wire appearance has much higher variety. Some other topics are loosely related to our task. Lane detection [20, 34,36] aims to segment lanes for autonomous driving applications. These methods benefit from simple line parameterization (e.g., as two end-points), and strong positional priors. In contrast, as shown in Figure 2, wires vary drastically in shapes and sizes in our task, thus making them difficult to parameterize. High-Resolution Image Inpainting Image inpainting has been well-explored using patch synthesis-based meth-ods [3, 9,22,38] or deep neural networks [16, 25,29,40,43, 44]. Zhao et al. leveraged the powerful image sysnthesis ability of StyleGAN2 [21] and proposed CoModGAN [49] to push the image generation quality to a newer level, and was followed by [19, 50]. Most of these deep models can-not be applied to inpainting tasks at high-resolution images. The latest diffusion-based inpainting model like DALLE-2 [30], LDM [32], and StableDiffusion etc. also suffer from long inference time and low output resolution. ProFill [45] was first proposed to address high resolution inpainting via a guided super resolution module. HiFill [42] utilized a con-textual residual aggregation module and the resolution can be up to 8K. LaMa [35] applied the fourier convoluational residual blocks to make the propagation of image structures well. LaMa was trained on only 256×256images, but can be used for images up to 2K with high quality. Recently, Zhang et al. [47] proposed to use guided PatchMatch for any-resolution inpainting and extended the deep inpainting results from LaMa to modern camera resolution. The tex-tures are better reused, while the structure and line comple-tion at high-resolution can still be challenging. 
In this paper, we aim at removing wires from high resolution photos. The problem can become easier if we run inpainting in a local manner since wires are usually thin and long. Therefore, we propose to revisit LaMa for wire removal, and run the inference in a tile-based fashion. (A) Input (B) Annotation (C) Zoom-inFigure 3. Wire Annotation Example. An example wire annota-tion in our dataset. Our annotation (B) is accurate in different wire thicknesses (red), variations in wire shapes (orange) and accurate wire occlusions (yellow). 3. Dataset Collection and WireSegHR 3.1. Image Source and Annotation | s Our definition of wires include electrical wires/cables, power lines, supporting/connecting wires, and any wire-like object that resemble a wire structure. We collect high-resolution images with wires from two sources: 80% of the images are from photo sharing platforms (Flickr, Pixabay, etc.), and 20% of the images are captured with different cameras (DSLRs and smartphones) in multiple countries on our own. For the internet images, we collect 400K candi-date images by keyword-searching. Then, we remove du-plicates and images where wires are the only subjects. We then curate the final 6K images that cover sufficient scene diversity like city, street, rural area and landscape. Our wire annotation process contains two rounds. In the first round, annotators draw detailed masks over wires at full-resolution. The annotated masks enclose the main wire body and the boundary, oftentimes including a gradi-ent falloff due to aliasing or defocus. The boundary region annotation is crucial so as to avoid residual artifacts during wire removal. In the second round, quality assurance is car-ried out to re-annotate unsatisfactory annotations. We show an example of our high-quality wire annotations in Figure 3. 3.2. Dataset Statistics In Table 1, we list the statistics of our dataset and com-pare them with existing wire-like datasets. Our dataset is the first wire dataset that contains high-resolution photo-graphic images. The dataset is randomly split into 5000 training, 500 validation, and 500 testing images. We release 420 copyright-free test images with annotations. 4. High-Resolution Wire Segmentation Wires appear visually different from common objects – being thin, long, sparse and oftentimes partially occluded. We find the following two design choices crucial to build-ing an effective wire segmentation system: 1) having a two 2185 Figure 4. Our wire removal system . A system overview of our wire segmentation and removal for high resolution images. Input is concatenated with min-and max-filtered luminance channels. The downsampled input is fed into the coarse module to obtain the global probability. In the local stage, original-resolution patches are concatenated with the global probability map to obtain the local logit map. After a segmentation mask is predicted, we adopt LaMa architecture and use a tile-based approach to achieve wire removal. See Section 4, 5 for details. Dataset# Wire ImagesMin. Image SizeMax. Image SizeMedian Image Size Powerline [41] 2000 128 ×128 128 ×128 128 ×128 PLDU [46] 573 540 ×360 540 ×360 540 ×360 PLDM [46] 287 540 ×360 540 ×360 540 ×360 TTPLA [2] 1100 3840 ×2160 3840 ×2160 3840 ×2160 Ours 6000 360 ×240 15904 ×10608 5040 ×3360 Table 1. Statistics of our wire dataset compared to others. 
stage framework so that coarse prediction from global con-text guides precise segmentation from local patches and 2) maximally preserving and augmenting image features and annotations of wires throughout the pipeline. 4.1. The Two-stage Coarse to Fine Model Figure 4 shows the two-stage segmentation pipeline. It consists of a coarse and a fine module, which share an en-coder Eand have their own decoder DCandDF. Intu-itively, the coarse module aims to capture the global con-textual information from the entire image and highlight the image regions possibly containing wires. Conditioned on the predictions from the coarse module, the fine module achieves high-resolution wire segmentation by only look-ing at local patches likely containing wires. Given a high-resolution image Iglo, we first bilinearly downsample it to Ids glowith a fixed size p×pand feed it into the coarse module. The module predicts the global prob-ability map Pglo=SoftMax (DC(E(Ids glo)))containing the activation of the wire regions. For each patch Ilocof size p×pcropped from the full-resolution image Iglo, and the corresponding conditional probability map Pconcropped from Pglo, we predict the lo-cal probability Ploc=SoftMax (DF(E(Iloc, Pcon))). Note thatEis shared between the coarse and the fine module, thus it should take inputs with the same number of chan-nels. Therefore, for the coarse module, we concatenate an additional zero channel with the input image to make the channel number consistent. We apply Cross Entropy (CE) loss to both the global Pglo and local probability map Ploc, comparing with their ground truth annotations GgloandGloc. Lglo=CE(Pglo, Gglo) Lloc=CE(Ploc, Gloc)(1) The final loss Lis the sum of the two: L=Lglo+λLloc, (2) where we set λ= 1 for training. Similar to Focal loss [24] and Online Hard Example Mining [33], we bal-ance the wire and background samples in the training set by selecting patches that contain at least 1% of wire pixels. To perform inference, we first feed the downsampled im-age to the coarse module, which is the same as training. Local inference is done by running a sliding window over the entire image, where the patch is sampled only when there is at least some percentage of wire pixels (determined byα). This brings two advantages: First, we save compu-tation time in regions where there are no wires. Second, the local fine module can leverage the information from the global branch for better inference quality. 4.2. Wire Feature Preservation As wires are thin and sparse, applying downsampling to the input images may make the wire features vanish entirely. 2186 To mitigate this challenge, we propose a simple feature aug-mentation technique by taking the min and max pixel lumi-nance values of the input image over a local window. Ei-ther the local min or the max value makes the wire pix-els more visually apparent. In practice, we concatenate the min-and max-filtered luminance channels to the RGB im-age and condition map, resulting in 6 total channels as input. We name this component MinMax. Besides feature augmentations, we also adapt the archi-tecture to maximally preserve the sparse wire annotations. We propose to use “overprediction” and achieve this by us-ing max-pool downsampling on the coarse labels during training, which preserves activation throughout the coarse branch. We name this component MaxPool. We provide ablation studies for these components in Section 6. 5. 
5. High-Resolution Wire Inpainting

Given a full-resolution wire segmentation mask estimated by our wire segmentation model, we propose an inpainting pipeline to remove and fill in the wire regions. Our approach addresses two major challenges in wire inpainting. First, recent state-of-the-art deep inpainting methods do not handle arbitrary resolution images, which is critical for high-resolution wire removal. Second, deep inpainting methods often suffer from color inconsistency when the background has uniform (or slowly varying) colors. This issue is particularly significant for wires, as they are often in front of uniform backgrounds, such as the sky or building facades. The commonly used reconstruction loss, such as L1, is not sensitive to color inconsistency, which further exacerbates this issue.

We thus revisit the efficient deep inpainting method LaMa [35]. Compared with other inpainting models, LaMa has two major advantages. First, it contains the Fourier convolutional layers, which enables an efficient and high-quality structural completion. This helps complete building facades and other man-made structures with fewer artifacts. Second, its high inference efficiency makes a tile-based inference approach possible for high-resolution images.

To address color inconsistency, we propose a novel "onion-peel" color adjustment module. Specifically, we compute the mean of the RGB channels within the onion-peel region $M_o = D(M, d) - M$ of the wire mask $M$, where $D$ is the binary dilation operator and $d$ is the kernel size. The color difference for each channel $c \in \{R, G, B\}$ becomes $\mathrm{Bias}_c = \mathbb{E}[M_o(x_c - y_c)]$, where $x$ is the input image and $y$ is the output from the inpainting network. The final output of the inpainting model is $\hat{y}_c = y_c + \mathrm{Bias}_c$. Loss functions are then applied to $\hat{y}_c$ to achieve color consistency while compositing the final result $y_{out} = (1 - M) \odot x + M \odot \hat{y}$.

6. Experiments

6.1. Implementation Details

Wire Segmentation Network. We experiment with ResNet-50 [13] and MixTransformer-B2 [39] as our shared feature extractor. We expand the input RGB channels to six channels by concatenating the conditional probability map and the min- and max-filtered luminance channels. For the min and max filtering, we use a fixed 6×6 kernel. We use separate decoders for the coarse and fine modules, denoted as $D_C$ and $D_F$ respectively. We use the MLP decoder proposed in [39] for the MixTransformer segmentation model, and the ASPP decoder in [6] for our ResNet-50 segmentation model. In both the segmentation and inpainting modules, we take the per-pixel average of the predicted probability when merging overlapping patches. To crop $P_{con}$ from $P_{glo}$, we upsample the predicted $P_{glo}$ to the original resolution, then crop the predicted regions according to the sliding window position.

To train the segmentation module, we downsample the image $I_{glo}$ to $p \times p$ to obtain $I^{ds}_{glo}$. From $I_{glo}$, we randomly crop one $p \times p$ patch $I_{loc}$ that contains at least 1% wire pixels. This gives a pair of $I^{ds}_{glo}$ and $I_{loc}$ to compute the losses. During inference, $I^{ds}_{glo}$ is obtained in the same way as training, while multiple $I_{loc}$ are obtained via a sliding window, sampled only when the proportion of wire pixels is above $\alpha$. All feature extractors are pretrained on ImageNet. We train our model on 5000 training images. The model is trained for 80k iterations with a batch size of 4. We set patch size $p = 512$ during training. For all ResNet models, we use SGD with a learning rate of 0.01, a momentum of 0.9 and weight decay of 0.0005.
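Stepping back to the onion-peel color adjustment defined in Section 5 above, the correction is simple enough to express directly. The PyTorch sketch below uses max pooling as a stand-in for the binary dilation operator and an arbitrary kernel size (both assumptions); the bias computation and the compositing follow the equations just given.

```python
# Sketch of the "onion-peel" color adjustment and final compositing.
# Max pooling approximates binary dilation; the kernel size d is an assumption.
import torch
import torch.nn.functional as F

def onion_peel_adjust(x, y, mask, d=15):
    """x: input image (B,3,H,W) in [0,1]; y: raw inpainting output; mask: wire mask (B,1,H,W) in {0,1}."""
    dilated = F.max_pool2d(mask, kernel_size=d, stride=1, padding=d // 2)  # D(M, d)
    ring = (dilated - mask).clamp(min=0)                                   # M_o = D(M, d) - M
    area = ring.sum(dim=(2, 3), keepdim=True).clamp(min=1.0)
    # Per-channel mean color difference between input and inpainted result inside the ring.
    bias = ((x - y) * ring).sum(dim=(2, 3), keepdim=True) / area           # Bias_c
    y_hat = y + bias                                                       # y_hat_c = y_c + Bias_c
    # Composite: original pixels outside the mask, color-corrected inpainting inside.
    return (1.0 - mask) * x + mask * y_hat
```

In the tile-based setting described in this paper, the same correction could be applied per tile before overlapping tiles are blended back into the full-resolution result.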
For MixTransformer mod-els, we use AdamW [27] with a learning rate of 0.0002 and weight decay of 0.0001. Our training follows the “poly” learning rate schedule with a power of 0.9. During infer-ence, we set both the global image size and local patch size pto 1024. Unless otherwise specified, we set the percentage for local refinement to 1%(α= 0.01). Wire Inpainting Network. We adopt LaMa [35] for wire inpainting by finetuning on an augmented wire dataset. To prepare the wire training set, we randomly crop ten 680×680patches from the non-wire regions of each image in our training partition. In total, we have 50K more train-ing images in addition to the 8M Places2 [52] dataset, and increase its sampling rate by 10× to balance the dataset. We also use all the ground truth segmentation maps in our train-ing set to sample wire-like masks. During training, we start from Big-LaMa weights, and train the model on 512×512 patches. We also prepare a synthetic wire inpainting qual-ity evaluation dataset, containing 1000 images at 512×512 with synthetic wire masks. While running inference on full-resolution images, we apply a tile-based approach, by fixing the window size at 512×512with an 32-pixel overlap. 2187 6.2. Wire Segmentation Evaluation Quantitative Evaluation We compare with several widely-used object semantic segmentation and high-resolution se |
Cho_Learning_Adaptive_Dense_Event_Stereo_From_the_Image_Domain_CVPR_2023 | Abstract Recently, event-based stereo matching has been studied due to its robustness in poor light conditions. However, existing event-based stereo networks suffer severe perfor-mance degradation when domains shift. Unsupervised do-main adaptation (UDA) aims at resolving this problem with-out using the target domain ground-truth. However, tradi-tional UDA still needs the input event data with ground-truth in the source domain, which is more challenging and costly to obtain than image data. To tackle this issue, we propose a novel unsupervised domain Adaptive Dense Event Stereo (ADES), which resolves gaps between the dif-ferent domains and input modalities. The proposed ADES framework adapts event-based stereo networks from abun-dant image datasets with ground-truth on the source do-main to event datasets without ground-truth on the target domain, which is a more practical setup. First, we pro-pose a self-supervision module that trains the network on the target domain through image reconstruction, while an artifact prediction network trained on the source domain as-sists in removing intermittent artifacts in the reconstructed image. Secondly, we utilize the feature-level normalization scheme to align the extracted features along the epipolar line. Finally, we present the motion-invariant consistency module to impose the consistent output between the per-turbed motion. Our experiments demonstrate that our ap-proach achieves remarkable results in the adaptation ability of event-based stereo matching from the image domain. | 1. Introduction Stereo matching [22, 41] is one of the most widely used methods for obtaining 3D information by establishing cor-respondences between stereo images. With considerable in-terest, learning-based stereo methods have achieved state-of-the-art performance in many benchmark datasets. How-ever, some challenges in stereo matching still exist due to the shortcoming of sensors ( e.g., low dynamic range, motion blur due to large exposure time). Event cameras [3] are novel sensors that asynchronously report per-pixel changes of intensity by imitating the human eye. Thanks ADES framework Image Inputs Ground TruthSource No Ground Truth Event InputsTargetTrainingDifferent Modality & Domain GapTest Event Inputs PredictionTargetFigure 1. The proposed ADES framework for adaptive dense event stereo network. ADES aims to exploit the existing frame-based stereo dataset for learning the event stereo network. to the high dynamic range and low latency, the event cam-era can be considered as a promising sensor for depth es-timation, especially in driving scenarios. Recent works [2,8,9,28,30,48,59] have attempted to utilize event cameras for stereo matching even under poor light conditions. Despite advances in event stereo, most prior works [9, 28, 48] still experience a significant degradation in perfor-mance when domains shift. Unsupervised domain adap-tation (UDA) can resolve this problem without using the target domain ground-truth. When UDA is applied for event stereo domain adaptation, it still needs the input event data with ground-truth in the source domain. However, as mentioned in [32], accurate synchronization of events with high temporal resolution and other devices ( e.g., Li-DAR) requires additional hardware and post-processing, so it is more challenging to obtain accurate ground truth than images. 
In this paper, we draw attention to large im-age datasets with ground-truth, which are easily accessible (e.g., DrivingStereo [57], SceneFlow [24] and KITTI [25]). In this setup, abundant image data from diverse environ-ments helps the event stereo network improve generaliz-ability with high performance. To this end, as shown in This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 17797 Fig. 1, we propose a novel Adaptive Dense Event Stereo (ADES), which adapts the stereo network from the source domain having image data with ground-truth to the target domain having event data without ground-truth. ADES re-solves gaps between the different domains and input modal-ities. The proposed ADES framework consists of three com-ponents: smudge-aware self-supervision module, feature normalization, and motion-invariant consistency module. The proposed smudge-aware self-supervision module lever-ages dense traits of images via image reconstruction on the event target domain. Image reconstruction using only the event is often interrupted by blurry artifacts, what we call a smudge, so the network cannot estimate the sharp and ac-curate disparity map. To predict the smudge effect in the target domain, we design the self-supervision pipeline on the source image domain to estimate and suppress the arti-fact area in the reconstructed image on the target domain. In addition, we exploit the feature normalization be-fore generating the cost volume. Normalization scheme [29, 43, 49, 58] was generally used in the domain adapta-tion between the image modalities. However, due to the characteristics of the event, it is not efficient to normalize over the entire pixel area. Since most of the events are trig-gered around an edge of objects, some regions ( e.g., sky) have very sparse events. Therefore, vanilla normalization can mislead the values of features to shift to the values of the regions without events. While reducing the difference in features between the two domains, we apply a normal-ization along the epipolar line to take into account the char-acteristics of events and stereo matching. Finally, we focus on the different motion of event cam-eras from the source and target domains, leading to a severe domain gap. Therefore, we present the motion-invariant consistency module to predict consistent disparity even if the camera motion changes to some extent. This module help the network to adapt the target domain and also reduces the gap from camera motion. To the best of our knowledge, our work is the first attempt to move from unpaired image domain to event domain for stereo matching. Our main con-tributions are summarized as below: • Our work is the first that transfers the disparity estima-tion task from the rich image dataset with ground-truth to the event stream, resolving gaps between the differ-ent domains and input modalities. • We propose a novel adaptive event stereo network, ADES, containing the smudge-aware self-supervision module, feature normalization, and motion-invariant consistency module. • Extensive experiments demonstrate that the ADES framework achieves significantly better performance than the prior works in the adaptation ability between the different domains and modalities for event stereo.2. Related Works |
Cherti_Reproducible_Scaling_Laws_for_Contrastive_Language-Image_Learning_CVPR_2023 | Abstract Scaling up neural networks has led to remarkable perfor-mance across a wide range of tasks. Moreover, performance often follows reliable scaling laws as a function of training set size, model size, and compute, which offers valuable guid-ance as large-scale experiments are becoming increasingly expensive. However, previous work on scaling laws has pri-marily used private data & models or focused on uni-modal language or vision learning. To address these limitations, we investigate scaling laws for contrastive language-image pre-training (CLIP) with the public LAION dataset and the open-source OpenCLIP repository. Our large-scale experiments involve models trained on up to two billion image-text pairs and identify power law scaling for multiple downstream tasks including zero-shot classification, retrieval, linear probing, and end-to-end fine-tuning. We find that the training dis-tribution plays a key role in scaling laws as the OpenAI and OpenCLIP models exhibit different scaling behavior despite identical model architectures and similar training recipes. We open-source our evaluation workflow and all models, including the largest public CLIP models, to en-sure reproducibility and make scaling laws research more accessible. Source code and instructions to reproduce this study is available at https://github.com/LAION-AI/scaling-laws-openclip . | 1. Introduction Large pre-trained models now achieve state-of-the-art performance on a wide range of tasks. In particular, large models have led to substantial advances in speech [56], language [8, 17, 28, 57], vision [38, 84], and multi-modal language-vision settings [33,54,55,59,62]. A key ingredient in these breakthroughs has been self-or weakly-supervised learning, which enabled the use of Internet-harvested train-ing sets and reduced the need for human annotated data. InData Arch. ImageNet VTAB+ COCO CLIP [55] WIT-400M L/14 75.5 55.8 61.1 Ours LAION-2B L/14 75.2 54.6 71.1 Ours LAION-2B H/14 78.0 56.4 73.4 Table 1. We study the scaling behavior of large CLIP models using fully open-source training code and data. All models in our inves-tigation are available, including the largest public CLIP models. This table shows zero-shot performance at 224 pixel resolution, displaying accuracy on ImageNet [15], average accuracy on 35 VTAB+ datasets [65, 85], and image retrieval recall at 5 on MS-COCO image retrieval [46]. addition, recent pre-trained models relied on increasing the compute, model, and data scale by orders of magnitude. When varying model size, compute amount, and data quantity, several papers have empirically observed that both pre-training loss and downstream task performance reliably improve with scale. Specifically, researchers have postulated scaling laws in the form of power law relationships between model performance and model compute, or data scale [35, 61, 73, 84]. Such scaling laws allow practitioners to predict model performance for a given model and compute scale, extrapolate to larger scales, and can be used to determine pre-training regimes that obtain optimal model performance for a fixed amount of compute [28, 35]. So far, the literature on empirical scaling laws has fo-cused on language-only [28, 35, 73] or vision-only mod-els [25, 61, 83]. 
In the multimodal domain of language and vision, contrastive language-image models such as CLIP [55] have recently achieved large performance gains in zero-shot image classification, for instance improving zero-shot ImageNet accuracy from the prior state-of-the-art of 12% to 76%. Moreover, these models demonstrate unprecedented robustness to distribution shifts compared to prior supervised models [55, 71, 78]. However, there is currently no systematic investigation for scaling trends in contrastive language-image learning. One substantial challenge in this direction is that until recently, there were no datasets of sufficiently large scale openly available for the research community to undertake such experiments.

[Figure 1 panels plot downstream error against total training compute (GMAC per sample × samples seen) on a log scale, with power-law fits of the form E = a·C^b for OpenCLIP and CLIP; the legend covers models ViT-B/32, ViT-B/16, ViT-L/14, ViT-H/14, and ViT-g/14, samples seen of 3B, 13B, and 34B, and datasets LAION-80M, LAION-400M, LAION-2B, and CLIP-WIT.]

(a) Relationship between total training compute and zero-shot classification performance on downstream tasks. Left: ImageNet performance. Right: average performance on five ImageNet robustness datasets (ImageNet-V2 [60], ImageNet-R [22], ImageNet-Sketch [75], ObjectNet [5], and ImageNet-A [24]). Scaling model size, data size, and samples seen leads to better performance on zero-shot classification. Models trained on OpenAI's WebImageText (WIT) show a stronger scaling than models trained on LAION.

(b) Relationship between total training compute and zero-shot image retrieval performance on MS-COCO (Left) and Flickr30K (Right). Scaling model size, data size, and samples seen leads to better performance on zero-shot image retrieval. Interestingly, in contrast to zero-shot classification (Figure 1a), models trained on LAION show a stronger scaling trend than OpenAI CLIP models trained on OpenAI's WebImageText (WIT) dataset.

Figure 1. Relationship between total training compute and performance in zero-shot classification (1a) and retrieval (1b). We fit a power law on the Pareto frontier of the available models. Since total compute budgets (measured in GMAC) of different trained models are not exactly aligned, we divide the total compute scale into bins and select the best model performance from each bin.

In this work, we conduct a scaling laws study for contrastive language-vision learning by utilizing the recently released LAION-5B [65] dataset of 5 billion image-text pairs. To ensure that our experiments are fully reproducible, we use the open source OpenCLIP [32] code to train CLIP models while varying model, data, and samples seen. We evaluate our CLIP models on several downstream tasks, including zero-shot classification, image retrieval, and fine-tuning via linear probing and end-to-end optimization.
We observe a consistent increase in performance when scaling model, data, and compute, and derive scaling laws of power law form across different downstream tasks (Figure 1a, 1b). In-terestingly, when comparing our OpenCLIP and OpenAI’s original CLIP models, we find larger scaling coefficients for OpenCLIP models on zero-shot retrieval, while OpenAI CLIP models show stronger scaling for zero-shot classifica-tion. Table 1 shows two of our models and their results on image classification and retrieval benchmarks. We hypothesize that the training dataset is responsible for the task-dependent differences in scaling behavior between the OpenCLIP and OpenAI models. Our experiments have used the same ViT architectures as the OpenAI models, and the training recipes are largely matched. The main difference in training recipes is the batch size due to different compute environments, and our experiments with varying batch sizes suggest that the batch size changes do not explain the change in scaling trends. Overall our findings highlight the design of pre-training 2819 datasets as an important direction to further improve image-text models. Dataset designers should measure scaling be-havior so that the generalization capabilities of image-text models can continue to improve as we increase model size and the amount of compute. Moreover, pre-training datasets should be evaluated on a broad range of downstream tasks because model scaling can differ substantially by task with different pre-training sources leading to different scaling behavior by task. We hope that our open-source and re-producible scaling trends offer concrete starting points for improving current image-text datasets and models. |
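As a concrete illustration of the kind of power-law fit reported above (E = a·C^b on the compute–error Pareto frontier, per the Figure 1 caption), here is a small NumPy sketch. The binning scheme and the numbers are made-up placeholders; only the functional form and the log–log fitting/extrapolation procedure mirror the description.

```python
# Fit a power law E = a * C^b to (compute, error) points, keeping the best (lowest-error)
# model per compute bin as an approximate Pareto frontier. All numbers are illustrative.
import numpy as np

compute = np.array([1e10, 3e10, 1e11, 3e11, 1e12, 3e12])  # GMAC per sample x samples seen
error   = np.array([45.0, 40.2, 35.5, 31.0, 27.8, 25.1])   # e.g. zero-shot error rate (%)

# Bin the compute axis logarithmically and keep the best model in each bin.
bins = np.logspace(np.log10(compute.min()), np.log10(compute.max()), num=5)
idx = np.digitize(compute, bins)
frontier = [np.argmin(np.where(idx == b, error, np.inf)) for b in np.unique(idx)]
C, E = compute[frontier], error[frontier]

# A power law is linear in log-log space: log E = log a + b * log C.
b, log_a = np.polyfit(np.log(C), np.log(E), deg=1)
a = np.exp(log_a)
print(f"E ~= {a:.2f} * C^{b:.3f}")

# Extrapolate to a larger compute budget.
print("predicted error at C = 1e13:", a * (1e13 ** b))
```

Comparing the fitted exponents b across pretraining datasets or tasks is exactly the kind of comparison the paragraph above draws between OpenCLIP and OpenAI CLIP models.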
Fu_Auto-CARD_Efficient_and_Robust_Codec_Avatar_Driving_for_Real-Time_Mobile_CVPR_2023 | Abstract Real-time and robust photorealistic avatars for telepres-ence in AR/VR have been highly desired for enabling im-mersive photorealistic telepresence. However, there still exists one key bottleneck: the considerable computational expense needed to accurately infer facial expressions cap-tured from headset-mounted cameras with a quality level that can match the realism of the avatar’s human appearance. To this end, we propose a framework called Auto-CARD, which for the first time enables real-time and robust driving of Codec Avatars when exclusively using merely on-device computing resources. This is achieved by minimizing two sources of redundancy. First, we develop a dedicated neural architecture search technique called AVE-NAS for avatar encoding in AR/VR, which explicitly boosts both the searched architectures’ robustness in the presence of extreme facial ex-pressions and hardware friendliness on fast evolving AR/VR headsets. Second, we leverage the temporal redundancy in consecutively captured images during continuous rendering and develop a mechanism dubbed LATEX to skip the com-putation of redundant frames. Specifically, we first identify an opportunity from the linearity of the latent space derived by the avatar decoder and then propose to perform adaptive latentextrapolation for redundant frames. For evaluation, we demonstrate the efficacy of our Auto-CARD framework in real-time Codec Avatar driving settings, where we achieve a 5.05×speed-up on Meta Quest 2 while maintaining a compa-rable or even better animation quality than state-of-the-art avatar encoder designs. | 1. Introduction Enabling immersive real-time experiences has been the key factor in driving the advances of Augmented-and Virtual-Reality (AR/VR) platforms in recent years. Pho-torealistic telepresence [29, 31, 37, 46] is emerging as a tech-nology for enabling remote interactions in AR/VR that aims *Work done during an internship at Meta.to impart a compelling sense of co-location among partici-pants in a shared virtual space. One state-of-the-art (SOTA) approach, coined Codec Avatars [29], is comprised of two components: (1) an encoder, which estimates a participant’s behavior from sensors mounted on an AR/VR headset, and (2) a decoder, which re-renders the aforementioned behavior to the other parties’ headset display using an avatar rep-resentation. Both the SOTA encoder and decoder designs have leveraged the expressive power of deep neural networks (DNNs) to enable the precise estimation of human behaviors as well as the high fidelity of rendering, which are critical for immersive photorealistic telepresence. Despite its big promise, one of the main challenges posed by photorealistic telepresence is the competing requirements between ergonomics and computing resources. On the one hand, power, form factor, and other comfort factors strictly limit the available computing resources on an AR/VR head-set device. On the other hand, the DNNs used in SOTA Codec Avatars are computationally expensive and require continuous execution during a telepresence call. It is worth noting that the limited computing resource on an AR/VR device must additionally be shared with other core AR/VR workloads, such as the SLAM-based tracking service, con-troller tracking, hand tracking, and environment rendering. 
Therefore, it is highly desirable and imperative to minimize the computation overhead and resource utilization of Codec Avatars, while not hurting their precise estimation of hu-man behaviors and rendering fidelity. This has become a bottleneck limiting their practical and broad adoption. To close the above gap towards real-time Codec Avatars on AR/VR devices, existing work has focused on reduc-ing the computational cost of the decoder. For example, PiCA [31] leverages the compute characteristics of modern DSP processors to enable simultaneously rendering up to five avatars on a Meta Quest 2 headset [31]. On the other hand, efficient encoder designs that can fit the AR/VR com-puting envelope have been less explored, with most existing works assuming off-device computing scenarios. Specifi-cally, SOTA methods for the encoder such as [37, 46] are prohibitively heavy with ∼3 Giga-floating-point-operations This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 21036 (GFLOPs) for encoding merely from one image, which is too costly to be continuously executed on SOTA AR/VR head-sets. Although cloud-based solutions have been explored as an alternative for other AR/VR use cases, on-device en-coder processing for Codec Avatars is particularly desired for telepresence applications as a way to better protect the privacy and overcome internet bandwidth limitations. In this work, we aim to enable real-time encoder infer-ence for Codec Avatars on AR/VR devices. Specifically, the encoder takes in image data captured from headset-mounted cameras (HMC) and outputs facial expression codes for a Variational Auto-Encoder (V AE) [22], which is used as a decoder following prior works [29, 38]. This target prob-lem is particularly challenging due to two reasons. First, naively reducing the encoder capacity, e.g., by compress-ing the encoder models to have fewer channels and/or shal-lower layers, typically results in accuracy degradation, es-pecially for extreme expressions at the tail ends of the data distribution which are often precisely the expressions that contain the most informative social signal. Second, since hardware backends are still nascent for AR/VR use cases, heuristics for hardware-specific optimization may quickly be-come obsolete. For example, the Qualcomm Snapdragon 865 system-on-a-chip (SoC) [1] on Meta Quest 2 headsets and customized accelerators [40] exhibit different latency/energy constraints. As such, it is important for our developed tech-niques to be able to automatically adapt to different hardware backends for ensuring their practical and wide adoption, instead of relying on manual optimization strategies that require costly laboring efforts. To tackle the aforementioned challenges, we develop a framework, dubbed Auto-CARD, for enabling efficient and robust real-time Codec Avatar driving. Auto-CARD auto-matically minimizes two sources of redundancy in the encod-ing process of SOTA solutions: architectural and temporal redundancy. We summarize our contributions as follows: •Our proposed framework, Auto-CARD, is the first method that has enabled real-time and robust driving of Codec Avatars in AR/VR, exclusively using merely on-device computing resources. 
•Auto-CARD integrates a neural architecture search technique that is tailored for avatarencoding (A VE-NAS), minimizing potential model redundancy while explicitly accounting for the fast-evolving hardware de-sign trends of AR/VR headsets. A VE-NAS comprises three NAS components: (1)a view-decoupled supernet for enabling distributed near-sensor encoding, (2)a hy-brid differentiable search scheme for an efficient and effective joint search, and (3)an extreme-expression-aware search objective. •To further reduce temporal redundancy towards real-time encoders for Codec Avatars on AR/VR de-vices, Auto-CARD additionally integrates a mechanism,dubbed LATEX, to skip the computation of redundant frames. Specifically, we first identify an opportunity from the linearity of the latent space determined by the avatar decoder and then propose to perform adaptive latentextrapolation for redundant frames. •Extensive experiments on real-device measurements using AR/VR headsets, i.e., Meta Quest 2 [32], show that our method can achieve a 5.05 ×speed-up while maintaining a comparable or even better accuracy than SOTA avatar encoder designs. |
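To illustrate the idea behind the LATEX mechanism listed above — exploiting the approximate linearity of the decoder's latent space to skip encoding temporally redundant frames — here is a hedged sketch. The skip criterion, the threshold, and the encoder/decoder interfaces are assumptions; the paper's actual adaptive policy is not specified in this excerpt.

```python
# Sketch: skip the expensive encoder on low-motion frames and linearly extrapolate the
# expression latent from the two previous codes instead. Threshold and interfaces are
# illustrative assumptions, not the paper's design.
import torch

def drive_avatar(frames, encoder, decoder, skip_threshold=0.05):
    prev, prev2 = None, None
    outputs = []
    for frame in frames:
        if prev is not None and prev2 is not None:
            delta = prev - prev2
            if delta.norm() / (prev.norm() + 1e-8) < skip_threshold:
                # Low recent motion: extrapolate the latent and skip the encoder.
                # A real adaptive policy would also need a cheap cue from the current
                # frame to detect sudden motion; this sketch only looks at past latents.
                z = prev + delta
            else:
                z = encoder(frame)
        else:
            z = encoder(frame)  # warm-up: the first two frames are always encoded
        outputs.append(decoder(z))
        prev2, prev = prev, z
    return outputs

# Toy usage with dummy modules standing in for the avatar encoder and decoder.
enc = lambda img: img.mean() * torch.ones(16)
dec = lambda z: z.sum()
frames = [torch.rand(1, 1, 64, 64) for _ in range(10)]
_ = drive_avatar(frames, enc, dec)
```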
Chen_Mod-Squad_Designing_Mixtures_of_Experts_As_Modular_Multi-Task_Learners_CVPR_2023 | Abstract Optimization in multi-task learning (MTL) is more chal-lenging than single-task learning (STL), as the gradient from different tasks can be contradictory. When tasks are re-lated, it can be beneficial to share some parameters among them (cooperation). However, some tasks require additional parameters with expertise in a specific type of data or dis-crimination (specialization). To address the MTL challenge, we propose Mod-Squad , a new model that is Mod ularized into groups of experts (a ‘ Squad ’). This structure allows us to formalize cooperation and specialization as the pro-cess of matching experts and tasks. We optimize this match-ing process during the training of a single model. Specifi-cally, we incorporate mixture of experts (MoE) layers into a transformer model, with a new loss that incorporates the mutual dependence between tasks and experts. As a result, only a small set of experts are activated for each task. This prevents the sharing of the entire backbone model between all tasks, which strengthens the model, especially when the training set size and the number of tasks scale up. More interestingly, for each task, we can extract the small set of experts as a standalone model that maintains the same per-formance as the large model. Extensive experiments on the Taskonomy dataset with 13 vision tasks and the PASCAL-Context dataset with 5 vision tasks show the superiority of our approach. The project page can be accessed at https://vis-www.cs.umass.edu/mod-squad. | 1. Introduction Computer vision involves a great number of tasks includ-ing recognition, depth estimation, edge detection, etc. Some of them have a clear and strong relationship: they are likely to benefit from shared features. An example would be a task to classify cars and pedestrians and a task to segment the same classes. Other tasks appear to be less related: it is not clear what features they would share. An example could be tumor detection in medical images and face recognition. Multi-task learning (MTL) aims to model the relation-ships among tasks and build a unified model for a diverse Mod-SquadMoEViT Experts Expert Group Tasks Expert Group Ta sk Pool Ta sk Pool Shared ExpertsFigure 1. A comparison between Mod-Squad and MoE ViT. Our key motivation is that experts should leverage commonalities in some tasks (cooperation) but focus on a subset of tasks that require specific features and do not interfere with each other (spe-cialization). set of tasks. On the one hand, tasks often benefit by shar-ing parameters, i.e., cooperation . On the other hand, some tasks may require specialized expertise that only benefits that single task, i.e., specialization . A good MTL system should be flexible to optimize experts for the dual purposes of cooperation and specialization. There are two well-known challenges in MTL: (1) gradi-ent conflicts across tasks [5, 38]; and (2) how to design ar-chitectures that have both high accuracy and computational efficiency. To address these challenges, we introduce Mod-Squad , a new model that constructs a Mixture of Ex-perts (MoE) [31] to be mod ularized multi-task learners (a squad ). Our design allows experts to cooperate on tasks when it is helpful , rather than penalizing experts that do not participate in every task. At the same time, some experts naturally develop a deep specialization in particular tasks, improving performance. The left figure in Fig. 
1 shows an example of the specialization and cooperation of experts in Mod-Squad. A further and important side benefit, discussed below, is that this sparsification of experts allows our model to be decomposed into much smaller single-task models that perform extremely well. We achieve these goals by first integrating mixture of ex-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 11828 perts (MoE) layers into our vision transformer [6] backbone network. The motivation is to divide the model into groups of experts, and for each expert to construct a minimum part of the model that can be shared among tasks or be special-ized for one task. The experts can have any network struc-ture (e.g., MLP or attention network [40]) so that we can incorporate advanced model designs. Our modular design allows cooperation and specialization via the distribution of tasks to experts and also experts to tasks. Below, we for-malize this idea mathematically by analyzing the probabil-ity distribution over tasks and experts, and using a novel loss function to induce a specific structure on this distribution. Many previous MoE works [29, 31, 40] use a load-balancing loss that encourages the frequency of expert us-age (across all tasks and batches) to be highly similar. Some MoE methods [18, 26] directly apply this loss after the forward pass of each task on the multi-task scenario so that each task evenly uses all experts. However, this ap-proach may force experts to set parameters on conflicting tasks with learning gradients that counteract each other. In other words, while an expert may benefit from being shared among certain pairs of tasks, it may be harmed by being forced to share among other pairs of tasks. This is an expla-nation for the difficulty of training multi-task models under such an expert-balancing loss. In comparison, we contend that experts should leverage commonalities in some tasks (cooperation) but also create a subset of experts that learn specific features (as needed by some tasks) and do not interfere with each other (spe-cialization). Such an assignment of tasks to experts can be represented via a sparse but strong dependence between experts and tasks . Fig. 1 illustrates this key difference be-tween our model and previous MoE work, showing how our model induces a sparser structure in the assignment of ex-perts to tasks. To implement this idea, we add a loss term to maximize the mutual information between experts and tasks. This induces a strong dependency between experts and tasks, with each task heavily related to a small set of experts and vice versa. Interestingly, we find that our model converges to a state in which, after training, most experts are never or rarely used for many tasks (evidence of specialization), but the ex-perts are still balanced in their activation frequency. This property enables us to extract a compact sub-network from the giant model for each task. The small networks extracted in this fashion work independently as standalone models for individual tasks with no performance drop . This prop-erty enables us to train a giant, sparse model in a scaled-up multi-task learning scenario and later get compact sub-networks for each task with high performance. Our main contributions can be summarized as follows: •Modular multi-task learner. 
We propose a new modular backbone model, Mod-Squad, that is composed of a largegroup of attention and feed-forward experts. The experts can be flexibly assigned a subset of tasks to achieve spe-cialization and cooperation. •Optimizing the joint distribution over tasks and ex-perts . Mod-Squad includes a new loss term that encour-ages a sparse but strong dependence between experts and tasks. This is done by measuring and maximizing the mu-tual information between tasks and experts. •Effective and Efficient multi-task learners at scale. Ex-periment results show that Mod-Squad achieves state-of-the-art performance on two major multi-task datasets while maintaining its computational efficiency. •Extracting small sets of experts as standalone models with no performance drop. We further show that Mod-Squad can be effectively pruned for a designated task with-out sacrificing performance. |
Agustsson_Multi-Realism_Image_Compression_With_a_Conditional_Generator_CVPR_2023 | Abstract By optimizing the rate-distortion-realism trade-off, gen-erative compression approaches produce detailed, realis-tic images, even at low bit rates, instead of the blurry re-constructions produced by rate-distortion optimized mod-els. However, previous methods do not explicitly control how much detail is synthesized, which results in a common criticism of these methods: users might be worried that a misleading reconstruction far from the input image is gen-erated. In this work, we alleviate these concerns by train-ing a decoder that can bridge the two regimes and navigate the distortion-realism trade-off. From a single compressed representation, the receiver can decide to either reconstruct a low mean squared error reconstruction that is close to the input, a realistic reconstruction with high perceptual quality, or anything in between. With our method, we set a new state-of-the-art in distortion-realism, pushing the fron-tier of achievable distortion-realism pairs, i.e., our method achieves better distortions at high realism and better real-ism at low distortion than ever before. | 1. Introduction Lossy image compression considers the trade-off be-tween the number of bits used to store an input image and how close the reconstruction (that we obtain from the bits) is to that input image. As we use more bits, we will be able to get closer to the input. This idea is formalized in the fundamental rate-distortion trade-off [ 34], where “rate” stands for bit-rate, and “distortion” is formalized as a pair-wise metric between the input image and the reconstruction (e.g., the mean-squared error, MSE). While minimizing this trade-off has been the focus of many works starting from JPEG [ 43] all the way to recent neural [ 13] and non-neural [ 11] codecs, there has been a surge of interest in additionally considering the “realism” or “perceptual quality” of the reconstructions [ 6,14,23,25, 35,37,39,41,45,48]. After all, as we move toward low rates, purely rate-distortion optimized systems will produce arti-β=0β=2.56 ... 28.1dB 27.0dB Figure 1. Decoding two reconstructions from the same represen-tation ˆy, which takes 2345 bytes to store: We use a low realism weight =0 for the left reconstruction, and a high =2.56 for the right. Note that increasing leads to a much sharper re-construction, but the PSNR drops by 1.1dB, consistent with rate-distortion-realism theory [ 6]. We only show two reconstructions, but our generator Gcan produce any reconstruction in between by changing . This allows the user to decide between viewing a reconstruction that is close to the input (left, i.e., high PSNR), or that looks realistic (right). facts in the reconstructions, such as the well known block artifacts of JPEG or blurry patches for neural approaches. There is simply not enough bitrate available to store all of the details, and if we target, e.g., MSE, the best recon-struction is the average image over all images that map to the given representation since, inevitably, many images will map to the same representation at low rates. Intuitively, in-stead of an average image reconstruction, we could prefer a “realistic” reconstruction that is sharp and appropriately textured. This reconstruction might have worse MSE than the average image, but users might find it more perceptually pleasing and less artificial. 
We can see from this argument that there exists an additional trade-off here, between “re-alism” and “distortion”, and that distortion will increase as This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 22324 Input High-Realism Low-Distortion Kodak: kodim20 Ours =2.56, 0.12bpp, 31.3dB HiFiC, 0.12bpp, 29.3dB Ours =0, 0.12bpp, 32.4dB CLIC 2020: 3f273 Ours =2.56, 0.085bpp, 32.6dB HiFiC, 0.082bpp, 30.5dB Ours =0, 0.085bpp, 33.6dB CLIC 2020: 88c58 Ours =2.56, 0.048bpp, 32.3dB HiFiC, 0.092bpp (1.92⇥), 33.2dB Ours =0, 0.048bpp, 33.7dB MS COCO 30K: 45962 Ours =2.56, 0.090bpp, 31.0dB HiFiC, 0.17bpp (1.86⇥), 32.4dB Ours =0, 0.090bpp, 31.9dB Figure 2. Comparing input images to reconstructions from our model at =2.56, the generative state-of-the-art HiFiC, as well as our model at =0. Note that both our models always have the same bits-per-pixel (bpp) per row, since for each row, the two reconstructions we show are obtained from the same representation—we simply vary for the generator. Overall, we see how our high-realism reconstructions ( =2.56) closely match the input, more-so than HiFiC. On the airplane (first row), we can read the text in our reconstruction, in contrast to the one from HiFiC. In the second row, the texture of the sneaker is faithfully preserved. For the hair, we note that HiFiC uses 1.92⇥the bitrate of our model to achieve a similar reconstruction. In the last row, HiFiC uses 1.86⇥the rate. In the first two rows, where we have comparble bpp to HiFiC, both of our reconstructions have higher PSNR. In the rightmost column ( =0) we can see the low-distortion reconstructions of our model. There we have near state-of-the-art PSNR at the cost of losing the (synthetic) detail. we improve realism. Following Blau and Michaeli [ 6], we formalize “distortion” as a metric between pairs of images (e.g., MSE) that indicates how close is the reconstruction to the input, while “realism” indicates how realistic the recon-structions look (regardless of the input). We formalize the latter as a divergence d(pX,pˆX)between the distribution of real images, X, and reconstructions, ˆX. Note that this can only be measured over a set of images since an accurate es-timate of the distribution is needed. Throughout this text,we use PSNR as a measure of distortion, and FID [ 16] as a measure of realism. Previous work successfully optimized the triple rate-distortion-realism trade-off [ 1,14,25,31,33], however, there is one caveat. Since the realism constraint might produce reconstructions that are far away from the input, these sys-tems might be looked at with suspicion because it is not clear which details are in the original and which were added by the architecture. 22325 We address this caveat by training a decoder that, given asingle compressed representation, either produces a re-construction where little or no detail is generated (like rate-distortion optimized codecs), one where fine-grained detail is generated (like rate-distortion-realism optimized codecs), or anything in between (see Fig. 1). We emphasize that the receiver can decide how much detail to generate, because we condition the decoder, not the encoder, on a “realism factor”, , and thus the receiver can produce the full spec-trum of reconstructions from a single representation, ˆy. 
Our main contributions are: 1.We bridge the generative and non-generative compres-sion worlds, by navigating the trade-off between dis-tortion and realism from a single representation using a conditional generator. |
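The key mechanism referenced above is that the decoder, not the encoder, is conditioned on the realism factor β, so one compressed representation can be decoded anywhere along the distortion–realism axis. The sketch below shows one generic way such conditioning is often implemented (a FiLM-style modulation of decoder features by β); this is an illustrative assumption, not the authors' architecture, which is not described in this excerpt.

```python
# Hypothetical beta-conditioned decoder block: the scalar realism factor modulates
# intermediate features with a learned scale and shift. Purely illustrative.
import torch
import torch.nn as nn

class BetaConditionedBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.to_scale_shift = nn.Linear(1, 2 * channels)  # beta -> (gamma, delta)

    def forward(self, feat, beta):
        # feat: (B, C, H, W); beta: (B, 1) realism factor per sample.
        gamma, delta = self.to_scale_shift(beta).chunk(2, dim=1)
        feat = self.conv(feat)
        return torch.relu(feat * (1 + gamma[..., None, None]) + delta[..., None, None])

# One representation, two decodes: beta = 0 favors low distortion, larger beta more detail.
block = BetaConditionedBlock(channels=8)
z = torch.randn(1, 8, 16, 16)
low_distortion = block(z, torch.tensor([[0.0]]))
high_realism = block(z, torch.tensor([[2.56]]))
```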
Chen_Run_Dont_Walk_Chasing_Higher_FLOPS_for_Faster_Neural_Networks_CVPR_2023 | Abstract To design fast neural networks, many works have been focusing on reducing the number of floating-point opera-tions (FLOPs). We observe that such reduction in FLOPs, however, does not necessarily lead to a similar level of re-duction in latency. This mainly stems from inefficiently low floating-point operations per second (FLOPS). To achieve faster networks, we revisit popular operators and demon-strate that such low FLOPS is mainly due to frequent mem-ory access of the operators, especially the depthwise con-volution. We hence propose a novel partial convolution (PConv) that extracts spatial features more efficiently, by cutting down redundant computation and memory access simultaneously. Building upon our PConv, we further pro-pose FasterNet, a new family of neural networks, which attains substantially higher running speed than others on a wide range of devices, without compromising on accu-racy for various vision tasks. For example, on ImageNet-1k, our tiny FasterNet-T0 is 2.8×,3.3×, and 2.4×faster than MobileViT-XXS on GPU, CPU, and ARM processors, respectively, while being 2.9% more accurate. Our large FasterNet-L achieves impressive 83.5% top-1 accuracy, on par with the emerging Swin-B, while having 36% higher in-ference throughput on GPU, as well as saving 37% compute time on CPU. Code is available at https://github. com/JierunChen/FasterNet . | 1. Introduction Neural networks have undergone rapid development in various computer vision tasks such as image classification, detection and segmentation. While their impressive perfor-mance has powered many applications, a roaring trend is to pursue fast neural networks with low latency and high throughput for great user experiences, instant responses, safety reasons, etc. How to be fast? Instead of asking for more costly com-puting devices, researchers and practitioners prefer to de-sign cost-effective fast neural networks with reduced com-putational complexity, mainly measured in the number of ∗= … (a) Convolution∗= … (b) Depthwise/Group Convolution Input/output Filter Convolutio n Identity (c) Partial Convolution (ours)…∗=∗Figure 1. Our partial convolution (PConv) is fast and efficient by applying filters on only a few input channels while leaving the remaining ones untouched. PConv obtains lower FLOPs than the regular convolution and higher FLOPS than the depthwise/group convolution. floating-point operation s(FLOPs)1. MobileNets [20, 21, 47], ShuffleNets [40, 76] and GhostNet [13], among others, leverage the depthwise convolution (DWConv) [48] and/or group convolution (GConv) [27] to extract spatial features. However, in the effort to reduce FLOPs, the operators of-ten suffer from the side effect of increased memory access. MicroNet [29] further decomposes and sparsifies the net-work to push its FLOPs to an extremely low level. Despite its improvement in FLOPs, this approach experiences in-efficient fragmented computation. Besides, the above net-works are often accompanied by additional data manipula-tions, such as concatenation, shuffling, and pooling, whose running time tends to be significant for tiny models. Apart from the above pure convolutional neural net-works (CNNs), there is an emerging interest in making vi-sion transformers (ViTs) [11] and multilayer perceptrons (MLPs) architectures [57] smaller and faster. 
For exam-ple, MobileViTs [42, 43, 63] and MobileFormer [6] reduce the computational complexity by combining DWConv with a modified attention mechanism. However, they still suf-fer from the aforementioned issue with DWConv and also need dedicated hardware support for the modified attention mechanism. The use of advanced yet time-consuming nor-1We follow a widely adopted definition of FLOPs, as the number of multiply-adds [36, 76]. 1 This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 12021 (a) (b) Figure 2. (a) FLOPS under varied FLOPs on CPU. Many existing neural networks suffer from low computational speed issues. Their effective FLOPS are lower than the popular ResNet50. By contrast, our FasterNet attains higher FLOPS. (b) Latency under varied FLOPs on CPU. Our FasterNet obtains lower latency than others with the same amount of FLOPs. malization and activation layers may also limit their speed on devices. All these issues together lead to the following question: Are these “fast” neural networks really fast? To answer this, we examine the relationship between latency and FLOPs, which is captured by Latency =FLOPs FLOPS, (1) where FLOPS is short for floating-point operations per second, as a measure of the effective computational speed. While there are many attempts to reduce FLOPs, they seldom consider optimizing FLOPS at the same time to achieve truly low latency. To better understand the situ-ation, we compare the FLOPS of typical neural networks on an Intel CPU. The results in Fig. 2 show that many ex-isting neural networks suffer from low FLOPS, and their FLOPS is generally lower than the popular ResNet50. With such low FLOPS, these “fast” neural networks are actually not fast enough. Their reduction in FLOPs cannot be trans-lated into the exact amount of reduction in latency. In some cases, there is no improvement, and it even leads to worse latency. For example, CycleMLP-B1 [5] has half of FLOPs of ResNet50 [16] but runs more slowly ( i.e., CycleMLP-B1vs. ResNet50: 116.1ms vs. 73.0ms). Note that this dis-crepancy between FLOPs and latency has also been noticed in previous works [40, 42] but remains unresolved partially because they employ the DWConv/GConv and various data manipulations with low FLOPS. It is deemed there are no better alternatives available. This paper aims to eliminate the discrepancy by devel-oping a simple yet fast and effective operator that maintains high FLOPS with reduced FLOPs. Specifically, we reexam-ine existing operators, particularly DWConv, in terms of the computational speed – FLOPS. We uncover that the main reason causing the low FLOPS issue is frequent memory ac-cess. We then propose a novel partial convolution (PConv)as a competitive alternative that reduces the computational redundancy as well as the number of memory access. Fig. 1 illustrates the design of our PConv. It takes advantage of redundancy within the feature maps and systematically ap-plies a regular convolution (Conv) on only a part of the in-put channels while leaving the remaining ones untouched. By nature, PConv has lower FLOPs than the regular Conv while having higher FLOPS than the DWConv/GConv. In other words, PConv better exploits the on-device computa-tional capacity. PConv is also effective in extracting spatial features as empirically validated later in the paper. 
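Because the paragraph above fully specifies the operator — a regular convolution applied to only a fraction of the input channels, identity on the rest — a compact PyTorch sketch is straightforward. The partial ratio of 1/4 is an assumption for illustration.

```python
# Partial convolution (PConv) sketch: convolve only the first c_p channels and pass the
# remaining channels through untouched. The 1/4 partial ratio is an assumption.
import torch
import torch.nn as nn

class PConv(nn.Module):
    def __init__(self, channels, partial_ratio=0.25, kernel_size=3):
        super().__init__()
        self.c_p = max(1, int(channels * partial_ratio))
        self.conv = nn.Conv2d(self.c_p, self.c_p, kernel_size,
                              padding=kernel_size // 2, bias=False)

    def forward(self, x):
        x1, x2 = x[:, :self.c_p], x[:, self.c_p:]
        return torch.cat([self.conv(x1), x2], dim=1)  # untouched channels are kept as-is

x = torch.randn(1, 64, 56, 56)
print(PConv(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```

Relative to a regular convolution over all c channels, the convolution here touches only c_p channels, so its FLOPs shrink roughly by (c_p/c)^2 and its memory access roughly by c_p/c, which is the FLOPs-versus-FLOPS trade-off the text argues for.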
We further introduce FasterNet, which is primarily built upon our PConv, as a new family of networks that run highly fast on various devices. In particular, our FasterNet achieves state-of-the-art performance for classification, detection, and segmentation tasks while having much lower latency and higher throughput. For example, our tiny FasterNet-T0 is2.8×,3.3×, and 2.4×faster than MobileViT-XXS [42] on GPU, CPU, and ARM processors, respectively, while being 2.9% more accurate on ImageNet-1k. Our large FasterNet-L achieves 83.5% top-1 accuracy, on par with the emerging Swin-B [35], while offering 36% higher through-put on GPU and saving 37% compute time on CPU. To sum-marize, our contributions are as follows: • We point out the importance of achieving higher FLOPS beyond simply reducing FLOPs for faster neu-ral networks. • We introduce a simple yet fast and effective operator called PConv, which has a high potential to replace the existing go-to choice, DWConv. • We introduce FasterNet which runs favorably and uni-versally fast on a variety of devices such as GPU, CPU, and ARM processors. • We conduct extensive experiments on various tasks and validate the high speed and effectiveness of our PConv and FasterNet. 2 12022 |
Hager_Best_of_Both_Worlds_Multimodal_Contrastive_Learning_With_Tabular_and_CVPR_2023 | Abstract Medical datasets and especially biobanks, often contain extensive tabular data with rich clinical information in ad-dition to images. In practice, clinicians typically have less data, both in terms of diversity and scale, but still wish to deploy deep learning solutions. Combined with increasing medical dataset sizes and expensive annotation costs, the necessity for unsupervised methods that can pretrain multi-modally and predict unimodally has risen. To address these needs, we propose the first self-supervised contrastive learning framework that takes ad-vantage of images and tabular data to train unimodal en-coders. Our solution combines SimCLR and SCARF , two leading contrastive learning strategies, and is simple and effective. In our experiments, we demonstrate the strength of our framework by predicting risks of myocardial infarc-tion and coronary artery disease (CAD) using cardiac MR images and 120 clinical features from 40,000 UK Biobank subjects. Furthermore, we show the generalizability of our approach to natural images using the DVM car advertise-ment dataset. We take advantage of the high interpretability of tabu-lar data and through attribution and ablation experiments find that morphometric tabular features, describing size and shape, have outsized importance during the contrastive learning process and improve the quality of the learned embeddings. Finally, we introduce a novel form of super-vised contrastive learning, label as a feature (LaaF), by ap-pending the ground truth label as a tabular feature during multimodal pretraining, outperforming all supervised con-trastive baselines.1 | 1. Introduction Modern medical datasets are increasingly multimodal, often incorporating both imaging and tabular data. Images 1https : / / github . com / paulhager / MMCL -Tabular -Imagingcan be acquired by computed tomography, ultrasound, or magnetic resonance scanners, while tabular data commonly originates from laboratory tests, medical history and patient lifestyle questionnaires. Clinicians have the responsibility to combine and interpret this tabular and imaging data to diagnose, treat, and monitor patients. For example, cardiol-ogists may ask about a patients’ family history and record their weight, cholesterol levels, and blood pressure to better inform diagnoses when examining images of their heart. Beyond diagnostics, multimodal data is also crucial to advance the understanding of diseases motivating the cre-ation of biobanks. Going far beyond the scale of typical datasets in hospitals, biobanks pool vast amount of informa-tion from large populations. Multimodal biobanks include the German National Cohort [21] with 200,000 subjects, Lifelines [52] with 167,000 subjects, and the UK Biobank [54] with 500,000 subjects. The UK Biobank includes thou-sands of data fields from patient questionnaires, laboratory tests, and medical examinations, in addition to imaging and genotyping information. Biobanks have already proven use-ful in the training of machine learning models to predict many diseases such as anaemia [39], early brain aging [32] and cardiovascular disease [1, 51]. There is a substantial interest in deploying algorithms that have been developed using these large-scale population studies in clinical practice. However, acquiring the same quality of data, both in terms of diversity of modalities and number of features, is not feasible in a busy clinical work-flow [20]. 
Furthermore, low disease frequencies make su-pervised solutions hard to train. Consequently, there is a clear need for unsupervised strategies that can learn from biobank scale datasets and be applied in the clinic where considerably less data, in size and dimension, is available. Our contribution To address these needs, we propose the first contrastive framework that utilizes imaging and tabular data, shown in figure 1. Our framework is based on Sim-CLR [13] and SCARF [6], two leading contrastive learning solutions, and is simple and effective. We demonstrate the utility of our pretraining strategy on the challenging task of This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 23924 1. Multimodal contrastive learning with tabular data 3. Supervised contrastive learning with label as a feature Infarction 0 1 1Ventricle Volume 37 24 202. The influence of morphometric features + CLIP Loss Encoders Projectors S1 S2 S30 0 1 1 1 1Smoking StatusConsumes Alcohol...Physical Activity 1 1 0 Reduced V olume Normal V olume Infarction=0 Infarction=1 Figure 1. We combine imaging and tabular data in a contrastive learning framework. We observe that morphometric features, describing shape and size, are of outsized importance in multimodal contrastive training and their inclusion boosts downstream task performance. By simply adding the label as a tabular feature we introduce a novel form of supervised contrastive learning that outperforms all other supervised contrastive strategies. predicting cardiac health from MR images. Beyond medical imaging, we show that our framework can also be applied when combining natural images and tabular data using the DVM car advertisement dataset [29]. Experimentally, we observe that our tool leverages mor-phometric features during contrastive learning. Morphome-tric features describe the size and shape of an object and therefore correlate with extractable imaging features. We quantitatively demonstrate the importance of these features in the contrastive learning process using attribution meth-ods, such as integrated gradients [55], and ablation experi-ments. Finally, we introduce a new supervised contrastive learn-ing method called label as a feature (LaaF). By appending the target label as a tabular feature, our method outperforms previously published strategies that incorporate labels into the contrastive framework. Our method is also highly flexi-ble and can be combined with the aforementioned strategies to further improve performance.2. Related Work Self-supervised learning with images aims to extract useful features from unlabeled data. Historically, this was attempted by solving hand-crafted pretext tasks such as jig-saw puzzles [44, 58, 59, 72], colorization [36, 63, 71], image inpainting [45], and context prediction [7, 12, 19]. The ma-jor difficulties with using these methods is that they tend to overfit on the specifics of their pretext task, limiting their utility for downstream tasks. Contrastive learning has emerged as a popular and per-formant successor to pretext tasks. Contrastive learning trains encoders by generating augmented views of a sam-ple and maximizing their projected embedding similarity while minimizing the similarity between the projected em-beddings of other samples [24]. 
It has been popularized recently by implementations such as SimCLR [13], MOCO [25], BYOL, [23] and others [9, 10, 14, 16, 70]. We use the contrastive framework of SimCLR as the basis for our work. 23925 Deep learning with tabular data has recently begun to yield results that are competitive with classical machine learning methods [4, 8, 28], though for many applications they still underperform simpler algorithms [8, 53]. Self-supervised learning is being explored in the tabular domain with frameworks such as VIME [66] and contrastive meth-ods such as SubTab [61] and SCARF [6]. We base our tab-ular augmentations on those used in SCARF. Multimodal contrastive learning with images is be-coming more important as the number of multimodal datasets increases and multimodal training strategies be-come more effective. Approaches such as CLIP [49], which combines images and text, are general-purpose vision mod-els that are able to solve new tasks in a zero-shot manner. Some of these models use internet-size datasets and are de-scribed as foundational models, such as UniCL [65], Flo-rence [68], ALIGN [31], and Wu Dao 2.0 [18]. Outside of the image-language domain, there has also been progress on multimodal contrastive learning using two different imag-ing modalities [47, 58], audio and video [37], video and text [64, 73], and imaging and genetic data [57]. While lit-erature on generative self-supervised tabular and imaging models [3] [34] exists, it is limited in scope, using only two or four clinical features. To the best of our knowledge, there is no implementation of a contrastive self-supervised frame-work that incorporates both images and tabular data, which we aim to address with this work. Supervised learning within contrastive frameworks has been shown to outperform the binary cross entropy loss in some cases and create more robust embeddings [33]. Su-pervised contrastive learning [33] maximizes the similarity of the projected embeddings of all views in a batch from the same class. This also addresses the problem of false neg-atives in contrastive learning, which is that the contrastive loss minimizes projected embedding similarity between dif-ferent samples even if they are part of the same class ac-cording to a downstream task (i.e. false negatives). By uti-lizing the available labels, supervised contrastive learning is able to circumvent this problem and outperforms other methods that heuristically identify and eliminate false nega-tives [15,30]. We propose a solution for supervised learning in our multimodal contrastive framework that takes advan-tage of the unique strengths of tabular data by appending the label as a tabular feature. 3. Methods 3.1. Contrastive Framework for Tabular and Imaging Data We base our multimodal framework on SimCLR [13]. Let our dataset be xand a unique sample be j. Each batch contains pairs of imaging xjiand tabular xjtsamples which are augmented. Each augmented imaging sample xjiin thebatch is passed through an imaging encoder fθIto generate the embedding fxji. Each augmented tabular sample xjtin the batch is passed through a tabular encoder fθTto gen-erate the embedding fxjt. The embeddings are propagated through separate projection heads fϕIandfϕTand brought into a shared latent space as projections zjiandzjtwhich are then L2 normalized onto a unit hypersphere. 
The projections are pulled and pushed in the shared latent space according to the "CLIP" loss [49], which maximizes the cosine similarity of projections from the same sample and minimizes the similarity of projections from different samples. In contrast to the original InfoNCE [43] loss and following CLIP, we only contrast projections between modalities, never within one modality. i and t can be used interchangeably and so, without loss of generality, the projection of an image is defined as

z_{j,i} = f_{\phi_I}(f_{\theta_I}(x_{j,i})).   (1)

Considering all subjects N in a batch, the loss for the imaging modality is

\ell_{i,t} = -\sum_{j \in N} \log \frac{\exp(\cos(z_{j,i}, z_{j,t}) / \tau)}{\sum_{k \in N, k \neq j} \exp(\cos(z_{j,i}, z_{k,t}) / \tau)}.   (2)

\ell_{t,i} is calculated analogously and the total loss is thus

L = \lambda \ell_{i,t} + (1 - \lambda) \ell_{t,i}.   (3)

The images in the batch are augmented based on the standard contrastive augmentations specified in [13]: horizontal flips, rotations, color jitter, and resized crop. We do not use Gaussian blurring on the cardiac dataset in order to preserve fine-grained features in the MR images [5]. To effectively augment the tabular data, a fraction of a subject's features are randomly selected to be "corrupted" (i.e. augmented), following [6]. Each corrupted feature's value is sampled with replacement from all values for that feature seen in the dataset. Full implementation details are in the supplementary materials.

3.2. Explainability using Integrated Gradients
To improve our understanding of the dynamics of the multimodal training, we analyze the importance of the individual tabular features in generating the embeddings. Using test samples, we take the pretrained tabular encoder of our multimodal model and calculate the integrated gradients [55] of each dimension of the embeddings. This integrates the gradients of the encoder along the straight-line path from a baseline sample, in our case a zero vector, to the test sample in question. This yields the importance value of each tabular feature in generating the downstream prediction for that sample. We then take the absolute value and calculate the mean importance of each feature across all embedding dimensions. Categorical features have their means summed over all choices. We use these results to categorize features and better understand how training in a multimodal setting influences unimodal performance.

3.3. Contrastive Learning with Labels
Incorporating labels into the contrastive learning process is typically done by modifying the loss function [15, 33]. We propose to take advantage of the unique structure of tabular data and directly append the downstream class label as a tabular feature. |
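To make the loss in Eqs. (1)-(3) of Sec. 3.1 concrete, the following is a minimal PyTorch sketch, not the authors' released code; tensor names, the temperature value, and lambda are illustrative. Note that, as in the common CLIP implementation, the softmax denominator here keeps the positive pair, whereas Eq. (2) as printed sums only over k != j; the two variants behave very similarly in practice.

```python
# Minimal sketch (assumed names/shapes) of the bidirectional "CLIP"-style loss in Eqs. (1)-(3).
import torch
import torch.nn.functional as F

def multimodal_clip_loss(z_img, z_tab, tau=0.1, lam=0.5):
    """z_img, z_tab: (N, D) projections of the imaging and tabular views of the same N subjects."""
    z_img = F.normalize(z_img, dim=1)           # L2-normalize onto the unit hypersphere
    z_tab = F.normalize(z_tab, dim=1)
    sim = z_img @ z_tab.t() / tau               # (N, N) cosine similarities scaled by temperature
    targets = torch.arange(sim.size(0), device=sim.device)
    loss_i = F.cross_entropy(sim, targets)      # imaging -> tabular direction (rows)
    loss_t = F.cross_entropy(sim.t(), targets)  # tabular -> imaging direction (columns)
    return lam * loss_i + (1.0 - lam) * loss_t
```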
Dvornik_StepFormer_Self-Supervised_Step_Discovery_and_Localization_in_Instructional_Videos_CVPR_2023 | Abstract Instructional videos are an important resource to learn procedural tasks from human demonstrations. However, the instruction steps in such videos are typically short and sparse, with most of the video being irrelevant to the pro-cedure. This motivates the need to temporally localize the instruction steps in such videos, i.e. the task called key-step localization. Traditional methods for key-step local-ization require video-level human annotations and thus do not scale to large datasets. In this work, we tackle the prob-lem with no human supervision and introduce StepFormer, a self-supervised model that discovers and localizes instruc-tion steps in a video. StepFormer is a transformer decoder that attends to the video with learnable queries, and pro-duces a sequence of slots capturing the key-steps in the video. We train our system on a large dataset of instruc-tional videos, using their automatically-generated subtitles as the only source of supervision. In particular, we super-vise our system with a sequence of text narrations using an order-aware loss function that filters out irrelevant phrases. We show that our model outperforms all previous unsuper-vised and weakly-supervised approaches on step detection and localization by a large margin on three challenging benchmarks. Moreover, our model demonstrates an emer-gent property to solve zero-shot multi-step localization and outperforms all relevant baselines at this task. | 1. Introduction Observing someone perform a task ( e.g. cooking, assem-bling furniture or changing a tire) is a common approach for humans to acquire new skills. Instructional videos provide an excellent large-scale resource to learn such procedural activities for both humans and AI agents. Consequently, in-structional video datasets [24,34,41] have recently received significant attention and been used for various video under-standing tasks, e.g. [1, 5, 13, 23, 28, 38, 39]. A potential im-s2s3s4startendstartendstartends1startend2314StepFormerself-supervised Long instructional video Discovered instruction steps Figure 1. StepFormer for instruction step discovery and local-ization. StepFormer is a transformer decoder trained to discover instruction steps in a video, supervised purely from video subti-tles. At inference, it only needs the video to discover an ordered sequence of step slots and temporally localize them in the video. pediment with using instructional videos from the web is that they tend to be long and noisy, i.e. a limited number of frames in the video correspond to the instruction steps, while the remaining video segments are unrelated to the task (e.g. title frames, close-ups of people talking and product advertisements). Thus, a major challenge with instructional videos is filtering out the uninformative frames and focus-ing only on the task-relevant segments, i.e. the key-steps. For instance, in a procedure of making a cake the key-steps could be “crack eggs”, “add sugar”, “add flour”, then “mix”, etc. As a result, many recent efforts tackle the problem of instruction key-step localization, e.g. [9, 10, 23, 24, 34, 41]. Most previous work aiming at temporally localizing key-steps from instructional videos rely on some form of su-pervision. Fully supervised approaches require start and end times of each step [34, 40]. Weakly supervised ap-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. 
Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 18952 proaches either rely on knowledge of steps present in the video in the form of a set [20], ordered steps transcript [2,9] or partially ordered steps captured with a flow graph [10]. Unsupervised approaches aim at directly detecting and lo-calizing key-steps without a priori knowledge of instruc-tion steps comprising videos [11, 15, 18, 30]. Conceptu-ally, these approaches are appealing for applications with large datasets, as they eschew the need for expensive and ambiguous labeling efforts. In practice, previous unsuper-vised approaches rely on knowing the video-level task label at training time [11, 18, 30], and thus are not fully unsuper-vised. Moreover, so far they have been only applied to small instructional video datasets (up to 3K videos), as their task-specific models are not designed to handle large databases. As a result, state-of-the-art procedure learning methods are not deployable at scale and without human supervision. To address these challenges, we present StepFormer, a novel self-supervised approach that simultaneously discov-ers and temporally localizes procedure key-steps in long untrimmed videos, as illustrated in Figure 1. StepFormer takes a video as input and outputs an ordered sequence of step slots, capturing instruction key-steps as they happen in the video. Notably, it does not rely on any human an-notations at training or inference time. Instead, we train our model on a large instructional video dataset and use the accompanying narrations obtained from automated speech recognition (ASR) [29, 30] as the only source of supervi-sion. StepFormer is implemented as a transformer decoder with learnable input queries. Similar to the learnable ob-ject queries in DETR [3], StepFormer’s queries learn to at-tend to informative video segments and thus can be viewed as step proposals. To enforce the output step slots to fol-low the temporal order, we use an order-aware loss based on temporal alignment of the learned steps and video nar-rations. Since video narrations tend to be noisy and do not always describe visually groundable steps, we use a flexible sequence-to-sequence alignment algorithm, Drop-DTW [9], which allows for non-alignable narrations to be dropped. To localize the predicted step slots in the video, we explicitly use their learned temporal order. Precisely, Drop-DTW aligns informative step slots with the video, and outputs start and end times for every detected step. We train our system on HowTo100M [24], a large instructional video dataset with no human annotations. For evaluation, we use three standard instructional videos benchmarks, i.e. CrossTask [41], ProceL [11] and COIN [34]. Empirically, for unsupervised step localization, our self-supervised method outperforms all weakly-and un-supervised baselines on all three downstream datasets, with-out any dataset-specific adaption. Additionally, we demon-strate an emergent property of our model to perform zero-shot key-step localization from a text prompt ( i.e. without finetuning on the target dataset), where it also outperformsall relevant baselines. Contributions. The contributions of our paper are three-fold. (i)We present StepFormer, a novel self-supervised approach to key-step discovery and localization in instruc-tional videos. (ii)We model the temporal order of steps ex-plicitly, and use it to design effective training and inference methods. 
(iii) We supervise StepFormer only with video subtitles on a large instructional video dataset, and successfully transfer the model to three downstream datasets without any finetuning. On all three datasets, StepFormer establishes a new state of the art on unsupervised step localization, outperforming unsupervised and weakly-supervised baselines. We are committed to releasing our code. |
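As an illustration of the step discovery mechanism described above, the sketch below shows learnable queries attending to frame features through a transformer decoder to produce ordered step slots. Module sizes and names are assumptions, not the authors' exact architecture, and the order-aware Drop-DTW supervision against subtitle embeddings is omitted.

```python
import torch
import torch.nn as nn

class StepSlotDecoder(nn.Module):
    # K learnable queries attend to per-frame video features and return K "step slots".
    def __init__(self, num_slots=32, dim=512, num_layers=4, num_heads=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_slots, dim) * 0.02)  # learnable step queries
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=num_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)

    def forward(self, frame_feats):
        # frame_feats: (B, T, dim) features of T video frames; returns (B, K, dim) step slots.
        q = self.queries.unsqueeze(0).expand(frame_feats.size(0), -1, -1)
        return self.decoder(tgt=q, memory=frame_feats)
```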
Fridovich-Keil_K-Planes_Explicit_Radiance_Fields_in_Space_Time_and_Appearance_CVPR_2023 | Abstract We introduce k-planes, a white-box model for radiance fields in arbitrary dimensions. Our model uses d 2 (“d-choose-2”) planes to represent a d-dimensional scene, pro-viding a seamless way to go from static ( d= 3) to dynamic (d= 4) scenes. This planar factorization makes adding dimension-specific priors easy, e.g. temporal smoothness and multi-resolution spatial structure, and induces a nat-ural decomposition of static and dynamic components of a scene. We use a linear feature decoder with a learned color basis that yields similar performance as a nonlinear black-box MLP decoder. Across a range of synthetic and real, static and dynamic, fixed and varying appearance scenes, k-planes yields competitive and often state-of-the-art recon-struction fidelity with low memory usage, achieving 1000 x compression over a full 4D grid, and fast optimization with a pure PyTorch implementation. For video results and code, please see sarafridov.github.io/K-Planes . * equal contribution1. Introduction Recent interest in dynamic radiance fields demands rep-resentations of 4D volumes. However, storing a 4D vol-ume directly is prohibitively expensive due to the curse of dimensionality. Several approaches have been proposed to factorize 3D volumes for static radiance fields, but these do not easily extend to higher dimensional volumes. We propose a factorization of 4D volumes that is simple, interpretable, compact, and yields fast training and render-ing. Specifically, we use six planes to represent a 4D vol-ume, where the first three represent space and the last three represent space-time changes, as illustrated in Fig. 1(d). This decomposition of space and space-time makes our model interpretable, i.e. dynamic objects are clearly visible in the space-time planes, whereas static objects only appear in the space planes. This interpretability enables dimension-specific priors in time and space. More generally, our approach yields a straightforward, prescriptive way to select a factorization of any dimension with 2D planes. For a d-dimensional space, we use k= d 2 (“d-choose -2”)k-planes , which represent every pair of di-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 12479 mensions — for example, our model uses 4 2 = 6 hex-planes in 4D and reduces to 3 2 = 3 tri-planes in 3D. Choosing any other set of planes would entail either using more than kplanes and thus occupying unnecessary mem-ory, or using fewer planes and thereby forfeiting the ability to represent some potential interaction between two of the ddimensions. We call our model k-planes; Fig. 1 illustrates its natural application to both static and dynamic scenes. Most radiance field models entail some black-box com-ponents with their use of MLPs. Instead, we seek a simple model whose functioning can be inspected and understood. 
We find two design choices to be fundamental in allowing k-planes to be a white-box model while maintaining recon-struction quality competitive with or better than previous black-box models [15, 27]: (1) Features from our k-planes aremultiplied together rather than added, as was done in prior work [5, 6], and (2) our linear feature decoder uses a learned basis for view-dependent color, enabling greater adaptivity including the ability to model scenes with vari-able appearance. We show that an MLP decoder can be re-placed with this linear feature decoder only when the planes are multiplied, suggesting that the former is involved in both view-dependent color and determining spatial structure. Our factorization of 4D volumes into 2D planes leads to a high compression level without relying on MLPs, using 200MB to represent a 4D volume whose direct represen-tation at the same resolution would require more than 300 GB, a compression rate of three orders of magnitude. Fur-thermore, despite not using any custom CUDA kernels, k-planes trains orders of magnitude faster than prior implicit models and on par with concurrent hybrid models. In summary, we present the first white-box, interpretable model capable of representing radiance fields in arbi-trary dimensions, including static scenes, dynamic scenes, and scenes with variable appearance. Our k-planes model achieves competitive performance across reconstruction quality, model size, and optimization time across these var-ied tasks, without any custom CUDA kernels. 2. Related Work K-planes is an interpretable, explicit model applicable to static scenes, scenes with varying appearances, and dy-namic scenes, with compact model size and fast optimiza-tion time. Our model is the first to yield all of these at-tributes, as illustrated in Tab. 1. We further highlight that k-planes satisfies this in a simple framework that naturally extends to arbitrary dimensions. Spatial decomposition. NeRF [21] proposed a fully im-plicit model with a large neural network queried many times during optimization, making it slow and essentially a black-box. Several works have used geometric representations to reduce the optimization time. Plenoxels [9] proposed a Static Appearance Dynamic Fast Compact Explicit NeRF ✓ ✗ ✗ ✗ ✓ ✗ NeRF-W ✓ ✓ ✗ ✗ ✓ ✗ DVGO ✓ ✗ ✗ ✓ ✗ ✗ Plenoxels ✓ ✗ ✗ ✓ ✗ ✓ Instant-NGP, TensoRF ✓ ✗ ✗ ✓ ✓ ✗1 DyNeRF, D-NeRF –✗ ✓ ✗ ✓ ✗ TiNeuV ox, Tensor4D –✗ ✓ ✓ ✓ ✗ MixV oxels, V4D –✗ ✓ ✓ ✗ ✗ NeRFPlayer –✗ ✓ ✓ ✓2✗ K-planes hybrid (Ours) ✓ ✓ ✓ ✓ ✓ ✗ K-planes explicit (Ours) ✓ ✓ ✓ ✓ ✓ ✓ 1TensoRF offers both hybrid and explicit versions, with a small quality gap2NerfPlayer offers models at different sizes, the smallest of which has<100million parameters but the largest of which has >300million parameters Table 1. Related work overview. Thek-planes model works for a diverse set of scenes and tasks (static, varying appearance, and dy-namic). It has a low memory usage (compact) and fast training and inference time (fast). Here “fast” includes any model that can op-timize within a few ( <6) hours on a single GPU, and “compact” denotes models that use less than roughly 100 million parameters. “Explicit” denotes white-box models that do not rely on MLPs. fully explicit model with trilinear interpolation in a 3D grid, which reduced the optimization time from hours to a few minutes. 
However, their explicit grid representation of 3D volumes, and that of DVGO [30], grows exponentially with dimension, making it challenging to scale to high resolution and completely intractable for 4D dynamic volumes. Hybrid methods [6, 22, 30] retain some explicit geomet-ric structure, often compressed by a spatial decomposition, alongside a small MLP feature decoder. Instant-NGP [22] proposed a multiresolution voxel grid encoded implicitly via a hash function, allowing fast optimization and render-ing with a compact model. TensoRF [6] achieved similar model compression and speed by replacing the voxel grid with a tensor decomposition into planes and vectors. In a generative setting, EG3D [5] proposed a similar spatial de-composition into three planes, whose values are added to-gether to represent a 3D volume. Our work is inspired by the explicit modeling of Plenox-els as well as these spatial decompositions, particularly the triplane model of [5], the tensor decomposition of [6], and the multiscale grid model of [22]. We also draw inspira-tion from Extreme MRI [23], which uses a multiscale low-rank decomposition to represent 4D dynamic volumes in magnetic resonance imaging. These spatial decomposition 12480 σ Ray Distance f) Minimize Losse) Volumetric RenderingPredicted Color b) Multiscale Bilinear Interpolation c) Hadamard Product Training Image a) K-Planes Representation S=1S=2 qd) Feature Decoder Space Space-Timeq xyxz yz xtzt ytFigure 2. Method overview. (a) Our k-planes representation factorizes 4D dynamic volumes into six planes, three for space and three for spatiotemporal variations. To obtain the value of a 4D point q= (x, y, z, t ), we first project the point into each plane, in which we (b) do multiscale bilinear interpolation. (c) The interpolated values are multiplied and then concatenated over the Sscales. (d) These features are decoded either with a small MLP or our explicit linear decoder. (e) We follow the standard volumetric rendering formula to predict ray color and density. The model is optimized by (f) minimizing the reconstruction loss with simple regularization in space and time. methods have been shown to offer a favorable balance of memory efficiency and optimization time for static scenes. However, it is not obvious how to extend these factoriza-tions to 4D volumes in a memory-efficient way. K-planes defines a unified framework that enables efficient and inter-pretable factorizations of 3D and 4D volumes and trivially extends to even higher dimensional volumes. Dynamic volumes. Applications such as Virtual Reality (VR) and Computed Tomography (CT) often require the ability to reconstruct 4D volumes. Several works have pro-posed extensions of NeRF to dynamic scenes. The two most common schemes are (1) modeling a deformation field on top of a static canonical field [7, 8, 16, 24, 27, 33, 40], or (2) directly learning a radiance field conditioned on time [11, 15, 16, 25, 38]. The former makes decomposing static and dynamic components easy [37, 40], but struggles with changes in scene topology (e.g. when a new object appears), while the latter makes disentangling static and dynamic ob-jects hard. A third strategy is to choose a representation of 3D space and repeat it at each timestep (e.g. NeRFPlayer [29]), resulting in a model that ignores space-time interac-tions and can become impractically large for long videos. Further, some of these models are fully implicit [15, 27] and thus suffer from extremely long training times (e.g. 
DyNeRF used 8 GPUs for 1 week to train a single scene), as well as being completely black-box. Others use partially ex-plicit decompositions for video [8,10,13,17,18,28,29,34], usually combining some voxel or spatially decomposed fea-ture grid with one or more MLP components for feature de-coding and/or representing scene dynamics. Most closely related to k-planes is Tensor4D [28], which uses 9 planes to decompose 4D volumes. K-planes is less redundant ( e.g. Tensor4D includes two ytplanes), does not rely on multi-ple MLPs, and offers a simpler factorization that naturally generalizes to static and dynamic scenes. Our method com-bines a fully explicit representation with a built-in decom-position of static and dynamic components, the ability to handle arbitrary topology and lighting changes over time, fast optimization, and compactness. Appearance embedding. Reconstructing large environ-ments from photographs taken with varying illumination is another domain in which implicit methods have shown ap-pealing results, but hybrid and explicit approaches have not yet gained a foothold. NeRF-W [19] was the first to demon-strate photorealistic view synthesis in such environments. They augment a NeRF-based model with a learned global appearance code per | frame, enabling it to explain away changes in appearance, such as time of day. With Generative Latent Optimization (GLO) [4], these appearance codes can further be used to manipulate the scene appearance by inter-polation in the latent appearance space. Block-NeRF [31] employs similar appearance codes. We show that our k-planes representation can also effec-tively reconstruct these unbounded environments with vary-ing appearance. We similarly extend our model – either the learned color basis in the fully explicit version, or the MLP decoder in the hybrid version – with a global appearance code to disentangle global appearance from a scene with-out affecting geometry. To the best of our knowledge, ours is both the first fully explicit and the first hybrid method to successfully reconstruct these challenging scenes. 3. K-planes model We propose a simple and interpretable model for repre-senting scenes in arbitrary dimensions. Our representation yields low memory usage and fast training and rendering. Thek-planes factorization, illustrated in Fig. 2, models a d-dimensional scene using k= d 2 planes representing every combination of two dimensions. For example, for static 3D 12481 scenes, this results in tri-planes with 3 2 = 3 planes rep-resenting xy,xz, and yz. For dynamic 4D scenes, this re-sults in hex-planes , with 4 2 = 6planes including the three space-only planes and three space-time planes xt,yt, and zt. Should we wish to represent a 5D space, we could use 5 2 = 10 deca-planes . In the following section, we describe the 4D instantiation of our k-planes factorization. 3.1. Hex-planes The hex-planes factorization uses six planes. We refer to the space-only planes as Pxy,Pxz, and Pyz, and the space-time planes as Pxt,Pyt, and Pzt. Assuming symmetric spa-tial and temporal resolution Nfor simplicity of illustration, each of these planes has shape NxNxM, where Mis the size of stored features that capture the density and view-dependent color of the scene. We obtain the features of a 4D coordinate q= (i, j, k, τ ) by normalizing its entries between [0, N)and projecting it onto these six planes f(q)c=ψ Pc, πc(q) , (1) where πcprojects qonto the c’th plane and ψdenotes bi-linear interpolation of a point into a regularly spaced 2D grid. We repeat Eq. 
(1) for each plane c ∈ C to obtain feature vectors f(q)_c. We combine these features over the six planes using the Hadamard product (elementwise multiplication) to produce a final feature vector of length M:

f(q) = \prod_{c \in C} f(q)_c.   (2)

These features will be decoded into color and density using either a linear decoder or an MLP, described in Sec. 3.3.

Why Hadamard product? In 3D, k-planes reduces to the tri-plane factorization, which is similar to [5] except that the elements are multiplied. A natural question is why we multiply rather than add, as has been used in prior work with tri-plane models [5, 26]. Fig. 3 illustrates that combining the planes by multiplication allows k-planes to produce spatially localized signals, which is not possible with addition. This selection ability of the Hadamard product produces substantial rendering improvements for linear decoders and modest improvement for MLP decoders, as shown in Tab. 2. This suggests that the MLP decoder is involved in both view-dependent color and determining spatial structure. The Hadamard product relieves the feature decoder of this extra task and makes it possible to reach similar performance using a linear decoder solely responsible for view-dependent color.

Figure 3. Addition versus Hadamard product. Elementwise addition of plane features (left) compared to multiplication (right), in a triplane example. A single entry in each plane is positive and the rest are zero, selecting a single 3D point by multiplication but producing intersecting lines by addition. This selection ability of multiplication improves the expressivity of our explicit model.

Table 2. Ablation study over Hadamard product (PSNR, higher is better; static Lego scene [21] with 3 scales: 128, 256, and 512, and 32 features per scale).
Plane Combination   Explicit   Hybrid   # params
Multiplication      35.29      35.75    33M
Addition            28.78      34.80    33M
Multiplication of plane features yields a large improvement in PSNR for our explicit model, whereas our hybrid model can use its MLP decoder to partially compensate for the less expressive addition of planes.

3.2. Interpretability
The separation of space-only and space-time planes makes the model interpretable and enables us to incorporate dimension-specific priors. For example, if a region of the scene never moves, its temporal component will always be 1 (the multiplicative identity), thereby just using the features from the space planes. This offers compression benefits since a static region can easily be identified and compactly represented. Furthermore, the space-time separation improves interpretability, i.e. we can track the changes in time by visualizing the elements in the time-space planes that are not 1. This simplicity, separation, and interpretability make adding priors straightforward.

Multiscale planes. To encourage spatial smoothness and coherence, our model contains multiple copies at different spatial resolutions, for example 64, 128, 256, and 512. Models at each scale are treated separately, and the M-dimensional feature vectors from different scales are concatenated together before being passed to the decoder. The red and blue squares in Fig. 2a-b illustrate bilinear interpolation with multiscale planes. Inspired by the multiscale hash mapping of Instant-NGP [22], this representation efficiently encodes spatial features at different scales, allowing us to reduce the number of features stored at the highest resolution and thereby further compressing our model. Empirically, we do not find it necessary to represent the time dimension at multiple scales.
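As a concrete reference for the plane lookup in Eqs. (1)-(2), the sketch below projects a normalized point onto every pair of dimensions, bilinearly interpolates each plane, and combines the results by Hadamard product. Tensor layouts and names are assumptions for illustration; at multiple scales the same lookup is repeated per resolution and the outputs concatenated.

```python
import itertools
import torch
import torch.nn.functional as F

def kplanes_features(planes, q):
    """planes: dict mapping a dimension pair (a, b) -> (1, M, R, R) feature grid.
       q: (P, d) points with coordinates already normalized to [-1, 1]; d = 4 gives six planes.
       Returns (P, M) features combined over all d-choose-2 planes by elementwise product."""
    feats = None
    for a, b in itertools.combinations(range(q.shape[1]), 2):
        grid = q[:, [a, b]].view(1, -1, 1, 2)                  # (1, P, 1, 2) sampling locations
        f = F.grid_sample(planes[(a, b)], grid, mode="bilinear", align_corners=True)
        f = f.view(f.shape[1], -1).t()                         # (P, M) interpolated plane features
        feats = f if feats is None else feats * f              # Hadamard product, Eq. (2)
    return feats
```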
Total variation in space. Spatial total variation regularization encourages sparse gradients (with L1 norm) or smooth gradients (with L2 norm), encoding priors over edges being either sparse or smooth in space. We encourage this in 1D over the spatial dimensions of each of our space-time planes and in 2D over our space-only planes:

L_{TV}(P) = \frac{1}{|C| n^2} \sum_{c,i,j} \left( \| P_c^{i,j} - P_c^{i-1,j} \|_2^2 + \| P_c^{i,j} - P_c^{i,j-1} \|_2^2 \right),   (3)

where i, j are indices on the plane's resolution. Total variation is a common regularizer in inverse problems and was used in Plenoxels [9] and TensoRF [6]. We use the L2 version in our results, though we find that either L2 or L1 produces similar quality.

Smoothness in time. We encourage smooth motion with a 1D Laplacian (second derivative) filter

L_{smooth}(P) = \frac{1}{|C| n^2} \sum_{c,i,t} \| P_c^{i,t-1} - 2 P_c^{i,t} + P_c^{i,t+1} \|_2^2,   (4)

to penalize sharp "acceleration" over time. We only apply this regularizer on the time dimension of our space-time planes. Please see the appendix for an ablation study.

Sparse transients. We want the static part of the scene to be modeled by the space-only planes. We encourage this separation of space and time by initializing the features in the space-time planes as 1 (the multiplicative identity) and using an \ell_1 regularizer on these planes during training:

L_{sep}(P) = \sum_c \| 1 - P_c \|_1, \quad c \in \{xt, yt, zt\}.   (5)

In this way, the space-time plane features of the k-planes decomposition will remain fixed at 1 if the corresponding spatial content does not change over time.

3.3. Feature decoders
We offer two methods to decode the M-dimensional temporally- and spatially-localized feature vector f(q) from Eq. (2) into density, σ, and view-dependent color, c.

Learned color basis: a linear decoder and explicit model. Plenoxels [9], Plenoctrees [39], and TensoRF [6] proposed models where spatially-localized features are used as coefficients of the spherical harmonic (SH) basis, to describe view-dependent color. Such SH decoders can give both high-fidelity reconstructions and enhanced interpretability compared to MLP decoders. However, SH coefficients are difficult to optimize, and their expressivity is limited by the number of SH basis functions used (often limited to 2nd-degree harmonics, which produce blurry specular reflections). Instead, we replace the SH functions with a learned basis, retaining the interpretability of treating features as coefficients for a linear decoder yet increasing the expressivity of the basis and allowing it to adapt to each scene, as was proposed in NeX [36]. We represent the basis using a small MLP that maps each view direction d to red b_R(d) ∈ R^M, green b_G(d) ∈ R^M, and blue b_B(d) ∈ R^M basis vectors. The MLP serves as an adaptive drop-in replacement for the spherical harmonic basis functions repeated over the three color channels. We obtain the color values

c(q, d) = \bigcup_{i \in \{R, G, B\}} f(q) \cdot b_i(d),   (6)

where · denotes the dot product and ∪ denotes concatenation. Similarly, we use a learned basis b_σ ∈ R^M, independent of the view direction, as a linear decoder for density:

σ(q) = f(q) \cdot b_σ.   (7)

Predicted color and density values are finally forced to be in their valid range by applying the sigmoid to c(q, d), and the exponential (with truncated gradient) to σ(q).

MLP decoder: a hybrid model. Our model can also be used with an MLP decoder like that of Instant-NGP [22] and DVGO [30], turning it into a hybrid model.
In this version, features are decoded by two small MLPs: one, g_σ, that maps the spatially-localized features into density σ and additional features \hat{f}, and another, g_{RGB}, that maps \hat{f} and the embedded view direction γ(d) into RGB color:

σ(q), \hat{f}(q) = g_σ(f(q)),
c(q, d) = g_{RGB}(\hat{f}(q), γ(d)).   (8)

As in the linear decoder case, the predicted density and color values are finally normalized via exponential and sigmoid, respectively.

Global appearance. We also show a simple extension of our k-planes model that enables it to represent scenes with consistent, static geometry viewed under varying lighting or appearance conditions. Such scenes appear in the Phototourism [14] dataset of famous landmarks photographed at different times of day and in different weather. To model this variable appearance, we augment k-planes with an M-dimensional vector for each training image 1, . . . , T. Similar to NeRF-W [19], we optimize this per-image feature vector and pass it as an additional input to either the MLP learned color basis b_R, b_G, b_B, in our explicit version, or to the MLP color decoder g_{RGB}, in our hybrid version, so that it can affect color but not geometry.

3.4. Optimization details
Contraction and normalized device coordinates. For forward-facing scenes, we apply normalized device coordinates (NDC) [21] to better allocate our resolution while enabling unbounded depth. We also implement an \ell_\infty version (rather than \ell_2) of the scene contraction proposed in Mip-NeRF 360 [2], which we use on the unbounded Phototourism scenes.

Figure 4. Zoomed qualitative results on static NeRF scenes (panels: (a) Ours-explicit, (b) Ours-hybrid, (c) TensoRF, (d) Ground truth). Visual comparison of k-planes, TensoRF [6], and the ground truth, on ship (top) and hotdog (bottom).

Proposal sampling. We use a variant of the proposal sampling strategy from Mip-NeRF 360 [2], with a small instance of k-planes as a density model. |
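For reference, a minimal sketch of the space-time priors in Eqs. (3)-(5) above is given below; tensor layouts are assumptions, and the exact 1/(|C| n^2) normalization is folded into mean(), which only changes constant factors.

```python
import torch

def spatial_tv(plane):
    # plane: (M, H, W); squared differences along both spatial axes (Eq. 3, L2 variant).
    dh = (plane[:, 1:, :] - plane[:, :-1, :]).pow(2).mean()
    dw = (plane[:, :, 1:] - plane[:, :, :-1]).pow(2).mean()
    return dh + dw

def time_smoothness(plane):
    # plane: (M, S, T) space-time plane; 1D Laplacian along the time axis (Eq. 4).
    lap = plane[:, :, :-2] - 2.0 * plane[:, :, 1:-1] + plane[:, :, 2:]
    return lap.pow(2).mean()

def sparse_transients(plane):
    # Keep space-time planes near the multiplicative identity 1 (Eq. 5).
    return (1.0 - plane).abs().mean()
```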
De_Plaen_Unbalanced_Optimal_Transport_A_Unified_Framework_for_Object_Detection_CVPR_2023 | Abstract During training, supervised object detection tries to cor-rectly match the predicted bounding boxes and associated classification scores to the ground truth. This is essen-tial to determine which predictions are to be pushed to-wards which solutions, or to be discarded. Popular match-ing strategies include matching to the closest ground truth box (mostly used in combination with anchors), or match-ing via the Hungarian algorithm (mostly used in anchor-free methods). Each of these strategies comes with its own properties, underlying losses, and heuristics. We show how Unbalanced Optimal Transport unifies these different ap-proaches and opens a whole continuum of methods in be-tween. This allows for a finer selection of the desired prop-erties. Experimentally, we show that training an object de-tection model with Unbalanced Optimal Transport is able to reach the state-of-the-art both in terms of Average Preci-sion and Average Recall as well as to provide a faster initial convergence. The approach is well suited for GPU imple-mentation, which proves to be an advantage for large-scale models. | 1. Introduction Object detection models are in essence multi-task mod-els, having to both localize objects in an image and classify them. In the context of supervised learning, each of these tasks heavily depends on a matching strategy. Indeed, deter-mining which predicted object matches which ground truth object is a non-trivial yet essential task during the training (Figure 1a). In particular, the matching strategy must en-sure that there is ideally exactly one prediction per ground truth object, at least during inference. Various strategies have emerged, often relying on hand-crafted components. They are proposed as scattered approaches that seem to have nothing in common, at least at first glance. *These authors contributed equally. A B1 2 34 5 (a) Image №163 from the COCO training dataset. The ground truth boxes are colored, and the predictions are outlined in black. A B ∅1 2 3 4 5 0.61 0.96 0.8 1.34 0.88 0.8 0.88 0.89 0.8 0.4 0.65 0.8 1.11 0.73 0.8 (b) Costs between the predictions and the ground truth (1−GIoU ). The background cost is c∅= 0.8. A B ∅1 2 3 4 5 (c) Prediction to best ground truth (Unbalanced OT with ϵ= 0 , τ1→+∞and τ2= 0). A B ∅1 2 3 4 5 (d) Hungarian matching (OT with ϵ= 0,τ1→+∞ andτ2→+∞). A B ∅1 2 3 4 5 (e) Ground truth to best prediction (Un-balanced OT with ϵ= 0,τ1= 0 and τ2→+∞). A B ∅1 2 3 4 5 (f) Unbalanced OT with ϵ= 0 .05, τ1= 100 andτ2= 0.01. A B ∅1 2 3 4 5 (g) OT with ϵ= 0.05(τ1→+∞ andτ2→+∞). A B ∅1 2 3 4 5 (h) Unbalanced OT with ϵ= 0 .05, τ1= 0 .01and τ2= 100 . Figure 1. Different matching strategies. All are particular cases of Unbalanced Optimal Transport . This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 3198 1.1. A Unifying Framework To perform any match, a matching cost has to be deter-mined. The example at Fig. 1b uses the Generalized Inter-section over Union (GIoU ) [46]. Given such a cost matrix, matching strategies include: • Matching each prediction to the closest ground truth object. This often requires that the cost lies under a certain threshold [37, 45, 44, 33], to avoid matching predictions that may be totally irrelevant for the current image. 
The disadvantage of this strategy is its redun-dancy: many predictions may point towards the same ground truth object. In Fig. 1c, both predictions 1 and 4 are matched towards ground truth object A. Further-more, some ground truth objects may be unmatched. A solution to this is to increase the number of predicted boxes drastically. This is typically the case with an-chors boxes and region proposal methods. • The opposite strategy is to match each ground truth object to the best prediction [25, 37]. This ensures that there is no redundancy and every ground truth object is matched. This also comes with the opposite problem: multiple ground truth objects may be matched to the same prediction. In Fig. 1e, both ground truth objects A and B are matched to prediction 4. This can be mit-igated by having more predictions, but then many of those are left unmatched, slowing convergence [37]. • A compromise is to perform a Bipartite Matching (BM), using the Hungarian algorithm [29, 40], for ex-ample [6, 55]. The matching is one-to-one, minimiz-ing the total cost (Definition 2). Every ground truth object is matched to a unique prediction, thus reducing the number of predictions needed, as shown in Fig. 1d. A downside is that the one-to-one matches may vary from one epoch to the next, again slowing down con-vergence [31]. This strategy is difficult to parallelize, i.e. to take advantage of GPU architectures. All of these strategies have different properties and it seems that one must choose either one or the other, option-ally combining them using savant heuristics [37]. There is a need for a unifying framework. As we show in this paper, Unbalanced Optimal Transport [9] offers a good candidate for this (Figure 1). It not only unifies the different strategies here above, but also allows to explore all cases in between. The cases presented in Figures 1c, 1d and 1e correspond to the limit cases. This opens the door for all intermediate set-tings. Furthermore, we show how regularizing the problem induces smoother matches, leading to faster convergence of DETR, avoiding the problem described for the BM. In addi-tion, the particular choice of entropic regularization leads to a class of fast parallelizable algorithms on GPU known asscaling algorithms [10, 8], of which we provide a compiled implementation on GPU. Our code and additional resources are publicly available1. 1.2. Related Work Matching Strategies Most two-stage models often rely on a huge number of initial predictions, which is then progressively reduced in the region proposal stage and re-fined in the classification stage. Many different strategies have been proposed for the initial propositions and sub-sequent reductions, ranging from training no deep learn-ing networks [21], to only train those for the proposi-tions [20, 32, 25], to training networks for both propositions and reductions [45, 42, 24, 5, 11]. Whenever a deep learn-ing network is trained, each prediction is matched to the closest ground truth object provided it lies beneath a certain threshold. Moreover, the final performance of these models heavily depends on the hand-crafted anchors [35]. Many one-stage models rely again on predicting a large number of initial predictions or anchor boxes , covering the entire image. As before, each anchor box is matched to-wards the closest ground truth object with certain threshold constraints [44, 33]. 
In [37], this is combined with match-ing each ground truth object to the closest anchor box and a specific ratio heuristic between the matched and unmatched predictions. The matching of the fixed anchors is justified to avoid a collapse of the predictions towards the same ground truth objects. Additionally, this only works if the number of initial predictions is sufficiently large to ensure that ev-ery ground truth object is matched by at least one predic-tion. Therefore, it requires further heuristics, such as Non-Maximal Suppression (NMS) to guarantee a unique predic-tion per ground truth object, at least during the inference. By using the Hungarian algorithm , DETR [6] removed the need for a high number of initial predictions. The matched predictions are improved with a multi-task loss, and the remaining predictions are trained to predict the background class ∅. Yet, the model converges slowly due to the instability of BM, causing inconsistent optimization goals at early training stages [31]. Moreover, the sequen-tial nature of the Hungarian algorithm does not take full ad-vantage of the GPU architecture. Several subsequent works accelerate the convergence of DETR by improving the ar-chitecture of the model [55, 36] and by adding auxiliary losses [31], but not by exploring the matching procedure. Optimal Transport The theory of Optimal Trans-port (OT) emerges from an old problem [38], relaxed by a newer formulation [26]. It gained interest in the machine learning community since the re-discovery of Sinkhorn’s algorithm [10] and opened the door for improvements in a wide variety of applications ranging from graphical models [39], kernel methods [28, 13], loss design [17], 1https://hdeplaen.github.io/uotod 3199 auto-encoders [50, 27, 47] or generative adversarial net-works [3, 22]. More recent incursions in computer vision have been at-tempted, e.g. for the matching of predicted classes [23], a loss for rotated object boxes [54] or a new metric for perfor-mance evaluation [41]. Considering the matching of predic-tions to ground truth objects, recent attempts using OT bare promising results [18, 19]. However, when the Hungarian algorithm is mentioned, it is systematically presented in op-position to OT [18, 53]. We lay a rigorous connection be-tween those two approaches in computer vision. Unbalanced OT has seen a much more recent theoretical development [9, 7]. The hard mass conservation constraints in the objective function are replaced by soft penalization terms. Its applications are scarcer, but we must mention here relatively recent machine learning applications in mo-tion tracking [30] and domain adaptation [16]. 1.3. Contributions 1. We propose a unifying matching framework based on Unbalanced Optimal Transport . It encompasses both theHungarian algorithm , the matching of the predic-tions to the closest ground truth boxes and the ground truth boxes to the closest predictions; |
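As a concrete illustration of the scaling algorithms mentioned above, and of how ε, τ1, and τ2 interpolate between the limit cases of Figure 1, the sketch below runs generalized Sinkhorn iterations for entropic unbalanced OT between predictions and ground-truth boxes. This is a generic textbook-style sketch, not the authors' GPU implementation; in practice a log-domain version is preferred for numerical stability, and a constant background column can be appended to the cost matrix.

```python
import torch

def unbalanced_sinkhorn(C, a, b, eps=0.05, tau1=100.0, tau2=0.01, n_iters=100):
    """C: (Np, Ng) matching costs (e.g. 1 - GIoU). a: (Np,), b: (Ng,) marginal masses.
       tau -> inf enforces that marginal exactly (Hungarian-like behaviour as eps -> 0);
       tau -> 0 drops the constraint, recovering closest-match behaviour."""
    K = torch.exp(-C / eps)                          # Gibbs kernel
    u, v = torch.ones_like(a), torch.ones_like(b)
    f1, f2 = tau1 / (tau1 + eps), tau2 / (tau2 + eps)
    for _ in range(n_iters):
        u = (a / (K @ v + 1e-30)).pow(f1)            # scale rows toward marginal a
        v = (b / (K.t() @ u + 1e-30)).pow(f2)        # scale columns toward marginal b
    return u[:, None] * K * v[None, :]               # soft match matrix / transport plan
```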
Chen_Viewpoint_Equivariance_for_Multi-View_3D_Object_Detection_CVPR_2023 | Abstract 3D object detection from visual sensors is a corner-stone capability of robotic systems. State-of-the-art meth-ods focus on reasoning and decoding object bounding boxes from multi-view camera input. In this work we gain intu-ition from the integral role of multi-view consistency in 3D scene understanding and geometric learning. To this end, we introduce VEDet, a novel 3D object detection frame-work that exploits 3D multi-view geometry to improve lo-calization through viewpoint awareness and equivariance. VEDet leverages a query-based transformer architecture and encodes the 3D scene by augmenting image features with positional encodings from their 3D perspective geom-etry. We design view-conditioned queries at the output level, which enables the generation of multiple virtual frames dur-ing training to learn viewpoint equivariance by enforcing multi-view consistency. The multi-view geometry injected at the input level as positional encodings and regularized at the loss level provides rich geometric cues for 3D ob-ject detection, leading to state-of-the-art performance on the nuScenes benchmark. The code and model are made available at https://github.com/TRI-ML/VEDet. | 1. Introduction Camera-based 3D object detection is a critical research topic, with important applications in areas such as au-tonomous driving and robotics due to the semantic-rich in-put and low cost compared to range sensors. In the past few years, monocular 3D detection has seen significant progress, from relying on predicting pseudo point clouds as intermediate representation [33,39,42] to end-to-end learn-ing [31,34,38]. However, monocular 3D detectors are inher-ently ambiguous in terms of depth, which motivated some recent exploration in multi-view and multi-sweep 3D object detection [22, 25, 26, 40]. In a conventional monocular setting, given multiple cam-eras on a sensor rig, single-view detections are merged to the global frame through rule-based processing such as Non-Maximum Suppression (NMS). Recent advances in VEDet Multi-view Images Global Predictions Viewpoint Equivariance View-Conditioned Predictions Figure 1. Our proposed VEDet encodes the 3D scene from multi-view images, and decodes objects with view-conditioned queries. The predicted 3D bounding boxes are expressed in the underlying views of the queries, which enables us to enforce viewpoint equiv-ariance among predictions from multiple views. Virtual query views are generated during training and together with the view-point equivariance regularization bring richer geometric learning signals to guide the model to better understand the 3D structure in the scene. During inference, the global predictions can be ob-tained by simply choosing the global frame as the query view. multi-view camera-based 3D algorithms [25, 40] proposed to jointly aggregate multi-view information at the feature level, and directly predict a single set of detections in the global frame. These algorithms demonstrate a giant leap in 3D detection performance on multi-camera datasets (E.g., Nuscenes [5]). To aggregate information from different views, one line of query-based detectors adopt transform-ers to query image features [25, 26, 40] or bird’s-eye-view (BEV) features [15, 22] via an attention mechanism. In contrast, another line of works “lift-splat-shoot” [32] image features from each view into the shared BEV features to be processed by convolutional detection heads [21]. 
To further mitigate the depth ambiguity, some concurrent works have started extending multi-view to "multi-sweep" across timestamps and observe a promising performance boost [22, 26]. While the works mentioned above demonstrate a strong potential for multi-view 3D detection, progress has concentrated on input aggregation and information interplay across frames and less on learning objectives. We argue that the learning objective can play a crucial role in ingesting the core knowledge in a multi-view setting: 3D geometry. This paper proposes to encourage 3D geometry learning for multi-view 3D detection models through viewpoint awareness and equivariance. We obtain our intuition from traditional structure-from-motion works [1], where multi-view geometry is modeled through multi-view consistency. To this end, we propose viewpoint awareness on the object queries, as well as a multi-view consistency learning objective as a 3D regularizer that forces the model to reason about geometry. Compared to existing methods that make 3D predictions in the default egocentric view, our proposed multi-view predictions and viewpoint equivariance bring stronger geometric signals conducive to 3D reasoning. More specifically, in our query-based framework, the geometry information of image features and object queries is injected entirely via implicit geometric encodings, and the transformer decoder is expected to learn better correspondence and 3D localization under the viewpoint equivariance objective. We demonstrate that our proposed framework makes the best of the available geometry information through extensive experiments and establishes a new state of the art in multi-view 3D object detection. In summary, our contributions are:
• We propose a novel Viewpoint Equivariance (VE) learning objective that encourages multi-view consistency in 3D detection models, leading to improved 3D object detection performance.
• We propose a new multi-view 3D object detection framework, VEDet, which employs a query-based transformer architecture with perspective geometry and viewpoint awareness injected at both the encoding and decoding stages.
• VEDet achieves state-of-the-art results on a large-scale benchmark, reaching 45.1% mAP on the NuScenes val set and 50.5% mAP on the test set. We provide a comprehensive analysis of our components and share insights based on empirical observations. |
Iscen_Improving_Image_Recognition_by_Retrieving_From_Web-Scale_Image-Text_Data_CVPR_2023 | Abstract Retrieval augmented models are becoming increasingly popular for computer vision tasks after their recent success in NLP problems. The goal is to enhance the recognition ca-pabilities of the model by retrieving similar examples for the visual input from an external memory set. In this work, we introduce an attention-based memory module, which learns the importance of each retrieved example from the mem-ory. Compared to existing approaches, our method removes the influence of the irrelevant retrieved examples, and re-tains those that are beneficial to the input query. We also thoroughly study various ways of constructing the memory dataset. Our experiments show the benefit of using a massive-scale memory dataset of 1B image-text pairs, and demon-strate the performance of different memory representations. We evaluate our method in three different classification tasks, namely long-tailed recognition, learning with noisy labels, and fine-grained classification, and show that it achieves state-of-the-art accuracies in ImageNet-LT, Places-LT and Webvision datasets. | 1. Introduction Increasing the number of parameters of large trans-former models has been a recent successful trend achiev-ing new benchmarks in vision and language tasks. Recent results from T5 [35], GPT-3 [5], PaLM [8], CoCa [50], Flamingo [1], BEIT-3 [46], PaLI [7], Florence [51] and FLA V A [40] show that transformer models are able to store a surprising amount of information when scaled to tens of billions of parameters and trained on vast text and image corpora. These so-called ‘foundation models’ achieve state-of-the-art results when fine tuned and applied to secondary tasks such as language modeling, image captioning, visual question answering and open vocabulary recognition. In these foundation models, the learned world knowledge is stored implicitly in the parameters of the underlying neu-ral network. This implies that some of the problems of the current ML paradigm are amplified in these models: (a) scal-ing is challenging, both in learning and serving, given the large number of parameters that are required for storing the kNN s e ar ch R e tri e v e d im a g e s M emo r y A tt en t i o n M odul e Qu er y E xt ern al memo r y Figure 1. Retrieval augmented classification finds similar images to the query from an external memory. Our memory attention module learns the importance of each retrieved image by assigning high weights (green line in the figure) to the relevant images, and low weights (red line) to the irrelevant images. knowledge, (b) it is hard to update the model as the world facts change or input data gets modified, (c) these models tend to be black box, which means it is hard to interpret the underlying reason behind their decisions. To address the above issues, we propose an alternative perspective on the problem. Instead of compiling the world knowledge statically into model weights, we take an interpre-tive view where the world knowledge gets transformed into a massive-scale index/memory. On the other hand, a relatively low-compute small model learns to use the memory for the given inference task. Instead of increasing the size of the model and training on more data as done in most previous work, we equip models with the ability to directly access a large database to perform predictions—a semi-parametric approach. 
To evaluate our approach, we focus on the problem of long-tailed recognition and learning with noisy labels. The distribution of real-world data is often noisy, imbalanced and highly skewed on a per-class basis, with a majority of classes containing a small number of samples. Long-tailed recognition is a well-studied problem [19, 33]. Base approaches are largely variants of the core idea of "adjustment", where the learner is encouraged to focus on the tail of the distribution. This is achieved either by re-weighting samples during training [21] and cluster-based sampling [10], logit or loss modification [12, 20, 31, 54], or ensembling [47]. Despite being well-studied, commonly occurring, and of great practical importance, classification performance on long-tail distributions lags significantly behind the state of the art for better balanced classes [15, 49, 55].

Figure 2. Overview of our method. Retrieval augmented classification aims to retrieve relevant images from an external memory dataset when making predictions. Each example in the memory is composed of a key and value embedding pair. Key embeddings are extracted using the same visual encoder as the query image, but the value embeddings can be extracted with any other encoder. Both visual and value encoders remain frozen during training. We perform an approximate k-NN search between the query embedding and memory keys to find relevant images from the memory dataset. The retrieval module receives the query embedding and the k retrieved key-value pairs from the memory. We learn the importance of each memory example by computing the attention weights between the query embedding and the memory keys. The memory values, weighted by their corresponding attention weights, are used to compute the refined embedding, which is then passed to the classifier.

Long et al. [29] introduce a retrieval-augmented classification model that explicitly stores the tail knowledge. In comparison to this work, we suggest retrieving from a web-scale vision-text database and augmenting the input query with the retrieved knowledge using a memory attention module, before making class predictions. We design the external memory as pairs of key-value embeddings. These embeddings are computed by encoding vision and language data from multiple sources (Web images with alt-text such as the Webli [7], LAION [39], and YFCC100M [42] datasets, as well as image classification datasets like ImageNet [37]). Memory key embeddings are used to retrieve the k-nearest neighbors of the input query vectors. Our memory attention module learns the importance of each retrieved memory example by computing attention weights between the query embedding and memory keys. Relevant examples have more influence, whereas the contribution of the irrelevant noisy examples is down-weighted. Learned attention weights are then used to combine memory values and produce a refined embedding, which is then used to make class predictions. Figure 1 shows a high-level visualization of our method.
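The retrieval-then-refine step described above can be sketched as follows. Names and the scoring function are assumptions for illustration: the paper's memory attention module is learned, whereas plain cosine similarity with a softmax stands in here, and exhaustive top-k stands in for the approximate nearest-neighbour index that would be needed at the 1B-example scale.

```python
import torch
import torch.nn.functional as F

def retrieve_and_refine(query, mem_keys, mem_values, k=16, tau=0.07):
    """query: (D,) visual embedding; mem_keys: (M, D); mem_values: (M, Dv).
       Returns a (Dv,) refined embedding formed from the k retrieved memory values."""
    sims = F.normalize(mem_keys, dim=1) @ F.normalize(query, dim=0)   # (M,) cosine similarities
    top = sims.topk(k)                                                # retrieve k nearest keys
    attn = torch.softmax(top.values / tau, dim=0)                     # (k,) attention weights
    return attn @ mem_values[top.indices]                             # weighted memory values
```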
Our contributions are summarized as follows: •We propose a retrieval-augmented recognition model that explores efficient means of augmenting visual models with a massive-scale memory without significantly increasing computations. •We propose a simple yet powerful way to fuse the retrieved knowledge with the input query using a memory attention module. •Our method achieves state-of-the-art results on various benchmarks, such as long-tail recognition and learning with noisy labels. We achieve 78.9 accuracy on the ImageNet-LT dataset, 50.3 accuracy on the Places-LT dataset, and 83.6 on the Webvision dataset. |
Heo_A_Generalized_Framework_for_Video_Instance_Segmentation_CVPR_2023 | Abstract The handling of long videos with complex and occluded sequences has recently emerged as a new challenge in the video instance segmentation (VIS) community. However, existing methods have limitations in addressing this chal-lenge. We argue that the biggest bottleneck in current ap-proaches is the discrepancy between training and inference. To effectively bridge this gap, we propose a Generalized framework for VIS, namely GenVIS , that achieves state-of-the-art performance on challenging benchmarks without designing complicated architectures or requiring extra post-processing. The key contribution of GenVIS is the learning strategy, which includes a query-based training pipeline for sequential learning with a novel target label assign-ment. Additionally, we introduce a memory that effectively acquires information from previous states. Thanks to the new perspective, which focuses on building relationships between separate frames or clips, GenVIS can be flexibly executed in both online and semi-online manner. We eval-uate our approach on popular VIS benchmarks, achieving state-of-the-art results on YouTube-VIS 2019/2021/2022 and Occluded VIS (OVIS). Notably, we greatly outperform the state-of-the-art on the long VIS benchmark (OVIS), improv-ing 5.6 AP with ResNet-50 backbone. Code is available at https://github.com/miranheo/GenVIS . | 1. Introduction Video Instance Segmentation (VIS) is the task of identi-fying, segmenting, and tracking all objects in videos simul-taneously. With the emergence of datasets containing long and complex sequences, the research community is taking a step towards real-world applications. While many papers have proposed solutions, the most notable performance im-provement has been achieved by recent online methods using image-based backbones [ 14,34]. These results challenge the common belief that end-to-end semi-online or offline ap-proaches ( i.e.,[5,13,15,30,33,38]) trained on longer video clips would better model long-range object relationships. We hypothesize that reason behind this somewhat surpris-TrainingInference(b) Semi-Online & Offline (c) GenVIS(ours) (a) Online Heuristic-freeAssociationFrameClip & Video HeuristicAssociation ... ......Figure 1. Comparison between current VIS paradigms and our approach. (a, b) While current methods use two separate paradigms based on the number of frames processed, we argue that the key challenge in processing real-world videos is building inter-clip assocications. (b) Our proposed GenVIS addresses this challenge and can operate effectively in both online and semi-online manner without requiring hand-crafted post-processing. ing result is the presence or absence of an object association scheme between frames or clips that can scale to long videos. Recent VIS methods, regardless of the approach, are driven by powerful image-level detectors [ 6,8], so detection and segmentation quality are already robust and comparable to each other. To operate robustly in long videos, what the VIS really needs to focus on is the long-range tracking quality. Although semi-online and offline methods are suitable for tracking objects within clips, they need to associate objects between clips to infer long videos, which is usually achieved using simple heuristics such as IoU matching [ 1,15]. 
While online methods are more robust than semi-online/offline VIS solutions in processing long videos, we still see a significant room for improvement in their tracking approach. These methods only consider local contexts be-tween adjacent frames during training, while test videos can exceed hundreds of frames [ 27]. We believe that there is a better way to learn long-range temporal modeling that could fundamentally change the current VIS landscape. In this paper, we argue that the biggest bottleneck in handling long videos is the discrepancy between the training and inference scenarios. Regardless of the previously defined paradigms (e.g., how many frames a method processes at This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 14623 once), we need to focus on how to train the model. As illustrated in Fig. 1, all previous methodologies use only a few frames or clips ( e.g., one or two) for training, while real-world videos can have an unlimited length. Therefore, they have to handle typical long-range tracking scenarios (e.g., newborn objects and re-identification) as exceptions through heuristics [ 34]. We introduce a Generalized VISframework, namely GenVIS (Fig. 1(c)), that is designed to minimize the gap between training and inference of long videos. We take an existing offline VIS model (VITA [ 13]) as the backbone and applay a query-propagation method [ 32] for object associa-tion between clips. The essence of GenVIS is the novel train-ing strategy. By improving the training strategy of the base model, we achieve significant gains of 5.1 AP (Occluded VIS [ 27]) and 5.8 AP (YouTube-VIS 2022 Long Videos [ 36]) in long and challenging benchmarks, outperforming all the previous methods by a large margin. Our first proposal for improvement is to load multiple clips during training. Unlike previous semi-online/offline VIS methods [ 1,5,15,30,33] that focus on placing multiple frames in a single clip to strengthen intra-clip tracking, we propose to prioritize inter-clip tracking by learning the tem-poral relationship through multiple consecutive clips. We believe that strong inter-clip reasoning is crucial for process-ing long videos in the real world. As videos must be split into multiple clips that fit into GPU memory when process-ing long videos, inter-clip association is inevitable. In this work, we use relatively short clip lengths ( e.g., 1 to 7), but load as many clips as possible ( e.g., usually more than 5). More importantly, we propose a new learning criterion that enables seamless association through multiple consecu-tive clips. Since we now deploy a sufficient number of clips, we can effectively simulate various inference scenarios at training time, covering newborn objects and objects that dis-appear and reappear. Specifically, we propose the Unified Video Label Assignment (UVLA) that allows unique object queries to detect newly-appeared object and to keep them consistent once matching identities are obtained. Our new learning criterion not only improves tracking performance but also removes all heuristics1required to handle new ob-jects and re-identification from the inference stage. In other words, our model infers videos exactly as it learned. 
With these two proposals in the learning strategy, our base model outperforms all the previous methods on long video VIS benchmarks [ 27] without additional network modules. To further bridge the remaining gap between training and inference, we propose adopting a memory mechanism that stores previously decoded object queries. This mechanism is particularly useful for handling very long videos (or stream-ing video), where there is a limit to the number of clips that 1Previously, [ 32] need to determine whether a tracked query is valid or not with a confidence threshold.can be loaded at once. To implement this, we add extra information for each object query by reading from its previ-ous states. The memory mechanism results in meaningful improvements with only a small computational overhead. Despite its simple framework, GenVIS achieves state-of-the-art results on VIS benchmarks, outperforming previous methods on challenging long and complex video datasets (Occluded VIS [ 27] and YouTube-VIS 2022 [ 36]). Our method also demonstrates strong generalization capability under online and semi-online settings2. We provide addi-tional analysis of the training and inference settings, which can be useful for balancing accuracy and efficiency tradeoffs. |
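The clip-by-clip inference loop described above (object queries propagated between clips, plus a memory of previously decoded queries) can be sketched as follows. This is only an illustrative skeleton under assumptions: the per-clip decoder is a single attention layer standing in for the VITA-style transformer, the memory is read with an extra attention layer, and the memory length, query count, and feature dimension are arbitrary choices.

```python
# Minimal sketch: process a long video clip by clip, propagate object queries,
# and enrich them with previously decoded query states stored in a memory.
import torch
import torch.nn as nn


class ClipDecoder(nn.Module):
    """Placeholder clip-level decoder: refines object queries with clip features."""
    def __init__(self, dim):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

    def forward(self, queries, clip_feats):
        out, _ = self.attn(queries, clip_feats, clip_feats)
        return queries + out


class QueryMemory:
    """Stores decoded object queries from past clips (bounded length)."""
    def __init__(self, max_len=5):
        self.max_len = max_len
        self.states = []

    def write(self, queries):
        self.states.append(queries.detach())
        self.states = self.states[-self.max_len:]

    def read(self):
        # (1, T*Q, dim) concatenation of past states, or None if empty
        return torch.cat(self.states, dim=1) if self.states else None


class GenVISLikeTracker(nn.Module):
    def __init__(self, dim=256, num_queries=20):
        super().__init__()
        self.init_queries = nn.Parameter(torch.randn(1, num_queries, dim))
        self.decoder = ClipDecoder(dim)
        self.mem_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

    def forward(self, clips):
        memory, queries = QueryMemory(), self.init_queries
        outputs = []
        for clip_feats in clips:                      # clip_feats: (1, N_tokens, dim)
            past = memory.read()
            if past is not None:                      # enrich queries with past states
                extra, _ = self.mem_attn(queries, past, past)
                queries = queries + extra
            queries = self.decoder(queries, clip_feats)
            memory.write(queries)
            outputs.append(queries)                   # same query index = same identity
        return outputs


if __name__ == "__main__":
    model = GenVISLikeTracker()
    clips = [torch.randn(1, 300, 256) for _ in range(4)]   # 4 consecutive clips
    outs = model(clips)
    print(len(outs), outs[0].shape)                         # 4 torch.Size([1, 20, 256])
```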
Jin_Multi-Level_Logit_Distillation_CVPR_2023 | Abstract Knowledge Distillation (KD) aims at distilling the knowledge from the large teacher model to a lightweight student model. Mainstream KD methods can be divided into two categories, logit distillation, and feature distilla-tion. The former is easy to implement, but inferior in per-formance, while the latter is not applicable to some prac-tical circumstances due to concerns such as privacy and safety. Towards this dilemma, in this paper, we explore a stronger logit distillation method via making better uti-lization of logit outputs. Concretely, we propose a sim-ple yet effective approach to logit distillation via multi-level prediction alignment . Through this framework, the prediction alignment is not only conducted at the instance level, but also at the batch and class level, through which the student model learns instance prediction, input corre-lation, and category correlation simultaneously. In addi-tion, a prediction augmentation mechanism based on model calibration further boosts the performance. Extensive ex-periment results validate that our method enjoys consis-tently higher performance than previous logit distillation methods, and even reaches competitive performance with mainstream feature distillation methods. Code is avail-able at https://github.com/Jin-Ying/Multi-Level-Logit-Distillation . | 1. Introduction The last few decades have witnessed the prosperity of deep learning in computer vision tasks, such as image clas-sification [3, 7, 14, 26], object detection [21], and segmen-tation [25, 37]. However, due to their overwhelming large model size, many deep models rely heavily on computation and storage resources, which makes it nearly impossible to deploy them in some practical scenarios, such as mobile de-vices. Towards this challenge, Knowledge Distillation [10] (KD) was introduced to reduce model capacity. Concretely, * Corresponding author. FeatureDistillationLogitDistillationTeacher StudentTeacher StudentFigure 1. Problem Setting. Feature distillation methods utilize features in the intermediate layers as well as logit outputs. On the contrary, logit distillation methods conduct knowledge distillation merely with logit outputs. the KD framework consists of one teacher model (large) and one student model (small). The main objective of KD is to distill the knowledge in the teacher model to the light-weight student model, which can be readily deployed. Var-ious KD methods [1, 8, 19, 22, 27] have been proposed and proved to be effective. Mainstream KD methods fall into two lines of work, 1) logit distillation and 2) feature distillation. Logit distilla-tion conveys knowledge from the teacher model to the stu-dent merely on the logit level. The earliest KD method [10], which distills knowledge by reducing the divergence of pre-dictions, is an example of logit distillation method. To-wards better utilization of teacher knowledge, recent re-searches [1, 22] shed light on the intermediate layers in the teacher model, conducting distillation by matching feature distributions as well as logit outputs among the teacher and student model. These methods are coined feature distilla-tion methods. We compare feature distillation with logit distillation in Figure 1. Utilizing the feature knowledge in intermediate layers, feature distillation methods are more likely to reach supe-rior performance. 
However, in some real-world applica-tions, the intrinsic architecture of the teacher model is invis-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 24276 Table 1. Comparison of different methods. Compared with previous logit distillation and feature distillation methods, our method conducts prediction alignment at the instance level, batch level and category level simultaneously. Method Instance-level Alignment Batch-level Alignment Category-level Alignment Logit Distillation (Previous) ! % % Feature Distillation ! ! % Logit Distillation (Ours) ! ! ! ible due to commercial, privacy, and safety concerns, mak-ing these methods invalid under such circumstances. For example, when launching adversarial attacks [5], it is much easier for hackers to recover the train data when all inter-mediate layers in the model are available. Such a leak of data may cause highly negative influences on financial or medical applications. Facing such a dilemma, we pay attention to logit distil-lation, which does not need to have access to the features in the intermediate layers. To mitigate the performance gap between logit distillation and feature distillation, we shall make better utilization of logit outputs. Concretely, in this paper, we propose multi-level logit distillation, a sim-ple yet effective approach to absorb more information from logit outputs. We propose a multi-level alignment to reduce the divergence of predictions between the teacher and stu-dent at the instance, batch, and class level. Through this alignment, the student model absorbs knowledge from the teacher model not only in instance -level prediction, but in batch -level input correlation and class -level category corre-lation as well. We compare our multi-level logit distillation method with previous methods in Table 1. The previous logit distillation merely conducts instance-level alignment, and the feature distillation incorporates batch-level align-ment. On the contrary, our method implements alignment at multiple levels with logit outputs alone. In addition, we also introduce a prediction augmentation based on model calibration. It enables the student model to learn from more diverse predictions, pushing the performance of our method to a higher level. Extensive experiment results on mainstream benchmarks validate that our method surpasses previous logit distillation methods, in both homogenous and heterogeneous network knowledge distillation settings. Meanwhile, our method also reaches competitive performance over previous feature distillation methods, proving that our method excels at uti-lizing logit outputs. |
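A minimal sketch can make the multi-level alignment concrete. Below, instance-level alignment uses a KL divergence between per-sample predictions, while batch-level and class-level alignment match Gram matrices of the prediction matrix over the batch and class dimensions, respectively. The specific correlation measures, equal loss weights, and the set of temperatures used for prediction augmentation are assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch: align teacher and student logit outputs at the instance,
# batch, and class level, over several temperature-augmented predictions.
import torch
import torch.nn.functional as F


def predictions(logits, temperature):
    return F.softmax(logits / temperature, dim=-1)          # (B, C) soft predictions


def instance_level(p_s, p_t):
    # KL divergence between per-instance predictions (classic logit distillation)
    return F.kl_div(p_s.log(), p_t, reduction="batchmean")


def batch_level(p_s, p_t):
    # Input correlation: Gram matrices over the batch dimension, (B, B)
    g_s, g_t = p_s @ p_s.t(), p_t @ p_t.t()
    return F.mse_loss(g_s, g_t)


def class_level(p_s, p_t):
    # Category correlation: Gram matrices over the class dimension, (C, C)
    m_s, m_t = p_s.t() @ p_s, p_t.t() @ p_t
    return F.mse_loss(m_s, m_t)


def multi_level_logit_distillation(student_logits, teacher_logits,
                                   temperatures=(1.0, 2.0, 4.0)):
    """Sums the three alignment terms over several augmented (rescaled) predictions."""
    loss = 0.0
    for t in temperatures:                                   # prediction augmentation
        p_s, p_t = predictions(student_logits, t), predictions(teacher_logits, t)
        loss = loss + instance_level(p_s, p_t) + batch_level(p_s, p_t) + class_level(p_s, p_t)
    return loss / len(temperatures)


if __name__ == "__main__":
    s, t = torch.randn(16, 100, requires_grad=True), torch.randn(16, 100)
    print(multi_level_logit_distillation(s, t).item())
```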
Carreira-Perpinan_Towards_Better_Decision_Forests_Forest_Alternating_Optimization_CVPR_2023 | Abstract Decision forests are among the most accurate models in machine learning. This is remarkable given that the way they are trained is highly heuristic: neither the individ-ual trees nor the overall forest optimize any well-defined loss. While diversity mechanisms such as bagging or boost-ing have been until now critical in the success of forests, we think that a better optimization should lead to better forests—ideally eliminating any need for an ensembling heuristic. However, unlike for most other models, such as neural networks, optimizing forests or trees is not easy, be -cause they define a non-differentiable function. We show, for the first time, that it is possible to learn a forest by op-timizing a desirable loss and regularization jointly over a ll its trees and parameters. Our algorithm, Forest Alternatin g Optimization, is based on defining a forest as a paramet-ric model with a fixed number of trees and structure (rather than adding trees indefinitely as in bagging or boosting). It then iteratively updates each tree in alternation so that the objective function decreases monotonically. The algo-rithm is so effective at optimizing that it easily overfits, b ut this can be corrected by averaging. The result is a forest that consistently exceeds the accuracy of the state-of-the -art while using fewer, smaller trees. | 1. Introduction In the past two decades, decision tree ensembles (forests) have been recognized as among the most accurate of all ma-chine learning (ML) models for regression, classification and other tasks. This is evidenced by their widespread use in practical applications (from fraud detection to ranking ) and by regularly being at the top of leaderboards in ML com-petitions and practitioner surveys (such as Kaggle or KD-nuggets). While achieving the best performance possible does require some hyperparameter tuning, this job is much easier compared to neural networks, for example. For this reason they are often considered off-the-shelf algorithms . *currently at Meta AI (FAIR)At the same time, the training algorithm for forests seems outdated, given the larger, increasing role that nume r-ical optimization has played in ML in recent years. Indeed, to train a forest, one does not choose a loss function and reg-ularization terms over a parametric model and optimize that on a training set. Instead, one relies on two building blocks . First, a procedure to learn an individual tree. This is almos t always based on a greedy recursive partitioning procedure, such as CART [ 6], C4.5 [ 26] or its variations. Second, a procedure to create the ensemble. The most successful ones are bagging and feature sampling (in Random Forests (RF)) and boosting (in AdaBoost and Gradient Boosting (GB)). Neither of these building blocks define a global objective function of the forest’s parameters and optimize it, instea d they rely on local, proxy objectives (e.g. purity in learnin g tree splits in CART, or the local loss in GB). Indeed, the only way the model improves its accuracy is by adding more pa-rameters (more nodes in a tree or more trees in a forest), not by optimizing existing parameters . This leads to much larger models than is necessary. The undeniable success of forests has been attributed to intuitive but slippery conce pts such as the diversity of the base learners (trees). For RFs and boosting, multiple conflicting theories have been put forward [ 2,5,13,20,24,25,27,28]. 
It is fair to say that no-body really understands why RF, AdaBoost or GB forests work. It also seems reasonable that jointly optimizing over the trees should make them naturally diverse, just as neu-rons in the same layer of a neural network differ from each other when optimized with backpropagation. We do not seek to explain why current forest algorithms work. We seek to learn forests using solid optimization prin-ciples, bringing them into the mainstream of modern ML, and as a result learn even better forests . Indeed, we can interpret some recent advances as being due to a better op-timization. One view of some forms of boosting connected it with optimization [ 13,24]. Gradient boosting (GB) [ 14], possibly the type of forest that generally leads to the high-est accuracy, relies on an attempt to make boosting close to an optimization in model space that follows a functional gradient. Unfortunately, GB relies on multiple approxima-tions (including the use of CART to learn individual trees), This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 7589 and results in the number of trees growing indefinitely and greedily. The latter is also true of AdaBoost and Random Forests. Adding more and more trees to a forest (with some care, e.g. using a small step size in GB) often leads to the highest accuracy and overfits very slowly. However, it also is very inefficient in parameter use. Each tree contributes very little to the total, and pruning a forest a posteriori of ten reduces considerably the number of trees without hurting much the accuracy. It stands to reason that, if we could opti-mize properly a forest jointly over all parameters in all tre es, we could achieve the same accuracy with fewer trees, even. Another recent advance is the Tree Alternating Optimiza-tion (TAO) algorithm, which puts decision trees firmly into modern ML. TAO is able to optimize a well-defined loss function and regularization over a tree of fixed structure, monotonically decreasing the objective function at each it er-ation, and scaling to large datasets and trees. TAO can learn quite general types of trees, beyond the axis-aligned trees used in traditional forests, and vastly outperforms CART and C4.5 [ 35]. In particular, sparse oblique trees (having hyperplane splits with few nonzero weights) have proven very powerful. In a series of papers [ 9,16,32,33], using TAO as base learner with any of the classic ensemble mecha-nisms (bagging, AdaBoost, GB) has been shown to produce forests that are more accurate while using fewer, shallower trees. This is compelling evidence for the importance of optimization in learning forests. In this paper, we propose the first algorithm (as far as we know) that can optimize a global objective function of all the parameters of a forest of predetermined structure (number, type and structure of trees), by iteratively decre as-ing the objective function given initial random parameters . This makes it possible to pick a loss and regularization, and a parametric form for the forest, and optimize exactly that. Our algorithm, Forest Alternating Optimization (FAO), is described in section 4. It relies heavily on TAO, which we review in section 3. FAO works so well at optimizing on the training set that it can make the forest overfit easily for rea -sons described in section 5. 
We can avoid this by averaging several independent FAO forests, and this results in forest s that exceed the state of the art in both accuracy and forest size, as shown experimentally in section 6. |
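The alternation itself is easy to sketch for squared-error regression with an additive forest of fixed size: each pass revisits every tree and refits it to the residual left by the other trees. The sketch below uses scikit-learn's CART as a stand-in for the TAO tree update (so the monotone-decrease property of FAO is not guaranteed here), and the additive combination, tree depth, and number of passes are illustrative choices.

```python
# Minimal sketch: alternating optimization over a forest with a fixed number
# of trees; each tree is refit to the residual of all the other trees.
import numpy as np
from sklearn.tree import DecisionTreeRegressor


def forest_alternating_optimization(X, y, num_trees=10, max_depth=4, num_passes=5):
    # Fixed forest structure: num_trees trees, each refit in alternation.
    trees = [DecisionTreeRegressor(max_depth=max_depth).fit(X, np.zeros_like(y))
             for _ in range(num_trees)]
    preds = np.stack([t.predict(X) for t in trees])          # (num_trees, n)

    for _ in range(num_passes):                               # alternation passes
        for j in range(num_trees):
            residual = y - (preds.sum(axis=0) - preds[j])     # leave tree j out
            trees[j] = DecisionTreeRegressor(max_depth=max_depth).fit(X, residual)
            preds[j] = trees[j].predict(X)
        mse = np.mean((y - preds.sum(axis=0)) ** 2)
        print(f"train MSE after pass: {mse:.4f}")
    return trees


def predict(trees, X):
    return np.sum([t.predict(X) for t in trees], axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=500)
    forest = forest_alternating_optimization(X, y)
    print(predict(forest, X[:3]))
```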
Chen_Transfer_Knowledge_From_Head_to_Tail_Uncertainty_Calibration_Under_Long-Tailed_CVPR_2023 | Abstract How to estimate the uncertainty of a given model is a crucial problem. Current calibration techniques treat dif-ferent classes equally and thus implicitly assume that the distribution of training data is balanced, but ignore the fact that real-world data often follows a long-tailed distribu-tion. In this paper, we explore the problem of calibrating the model trained from a long-tailed distribution. Due to the difference between the imbalanced training distribution and balanced test distribution, existing calibration methods such as temperature scaling can not generalize well to this problem. Specific calibration methods for domain adapta-tion are also not applicable because they rely on unlabeled target domain instances which are not available. Models trained from a long-tailed distribution tend to be more over-confident to head classes. To this end, we propose a novel knowledge-transferring-based calibration method by esti-mating the importance weights for samples of tail classes to realize long-tailed calibration. Our method models the dis-tribution of each class as a Gaussian distribution and views the source statistics of head classes as a prior to calibrate the target distributions of tail classes. We adaptively trans-fer knowledge from head classes to get the target probability density of tail classes. The importance weight is estimated by the ratio of the target probability density over the source probability density. Extensive experiments on CIFAR-10-LT, MNIST-LT, CIFAR-100-LT, and ImageNet-LT datasets demonstrate the effectiveness of our method. | 1. Introduction With the development of deep neural networks, great progress has been made in image classification. In addition to performance, the uncertainty estimate of a given model is also receiving increasing attention, as the confidence of a model is expected to accurately reflect its performance. †Corresponding author.A model is called perfect calibrated if the predictive con-fidence of the model represents a good approximation of its actual probability of correctness [9]. Model calibration is particularly important in safety-critical applications, such as autonomous driving, medical diagnosis, and robotics [1]. For example, if a prediction with low confidence is more likely to be wrong, we can take countermeasures to avoid unknown risks. Most existing calibration techniques assume that the dis-tribution of training data is balanced, i.e., each class has a similar number of training instances, so that each class is treated equally [9, 18, 25]. As shown in Fig. 1, the tra-ditional calibration pipeline uses a balanced training set to train the classification model and a balanced validation set to obtain the calibration model, respectively. The target test set is in the same distribution as the training/validation set. However, data in the real-world often follows a long-tailed distribution, i.e., a few dominant classes occupy most of the instances, while much fewer examples are available for most other classes [6, 16, 24]. When tested on balanced test data, classification models trained from the training set with a long-tailed distribution are naturally more over-confident to head classes. Only imbalanced validation set with the same long-tailed distribution is available for cal-ibrating such models since the validation set is often ran-domly divided from the training set. 
Due to the different distributions between the imbal-anced training data and the balanced test data [15], it is dif-ficult for traditional calibration techniques to achieve bal-anced calibration among head classes and tail classes with different levels of confidence estimations. For instance, temperature scaling [9] with the temperature learned on a validation set obtains degraded performance on the test set if the two sets are in different distribution [29, 37]. As shown in Fig. 2, a balanced test set suffers heavier over-confidence compared with a long-tailed validation set. Al-though temperature scaling can relieve such phenomenon, there still exists overconfidence after calibration. Domain adaptation calibration methods [29, 40] aim to generalize This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 19978 (a) Calibration under balanced distribution. (b) Calibration under long-tailed distribution. Figure 1. The difference between calibration under balanced distribution and calibration under long-tailed distribution. (a) The classifica-tion model and the calibration model are trained on the balanced training and validation sets, respectively, and the test set is balanced. (b) The classification model and the calibration model are trained on the long-tailed training and validation sets, while the test set is balanced. (a) Validation set (b) Test set (c) Temperature scaling Figure 2. The reliability diagrams of (a) the validation set before calibration, (b) the test set before calibration, and (c) the test set after calibration with temperature scaling. calibration across domains under a covariate shift condition but they utilize unlabeled target domain instances. Simi-larly, the domain generalization calibration method [8] uses a support set to bridge the gap between the source domain and the target domain, which also relies on extra instances. These methods cannot be applied to the long-tailed calibra-tion since the balanced test domain is not available. In this paper, we investigate the problem of calibra-tion under long-tailed distribution . Since the distributions of each tail class in the imbalanced validation set and the balanced target set are different, we utilize the importance weight strategy to alleviate the unreliable calibration for tail classes. The weight of each instance is the ratio between the target balanced probability density and the source im-balanced probability density. We explicitly model the dis-tribution of each class as a Gaussian distribution. Different from the source distribution, the target balanced distribution cannot be estimated directly. Since there exists common in-formation between head classes and tail classes [23], we transfer knowledge from head classes to estimate the target probability density. Normally, the more similar two classes are, the more information they share. Therefore, for eachtail class, we measure the similarities between the distribu-tions of it and all head classes with the Wasserstein distance. The similarity considers both first-order and second-order statistics of the two classes and thus can better reflect the transferability of statistical knowledge. Then we estimate the target probability density of each tail class by combin-ing its own distribution and the transferred information from all head classes referring to the similarities. 
Finally, we cal-ibrate the model with the importance weights. Our contributions are summarized as: 1) We explore the problem of calibration under long-tailed distribution, which has important practical implications but is rarely studied. We apply the importance weight strategy to enhance the es-timation of tail classes for more accurate calibration. 2) We propose an importance weight estimation method by view-ing distributions of head classes as prior for distributions of tail classes. For each tail class, our method estimates its probability density function from the distribution calibrated by head classes and calculates the importance weight to re-alize balanced calibration. 3) We conduct extensive exper-iments on the CIFAR-10-LT, CIFAR-100-LT [3], MNIST-LT [20], ImageNet-LT [24] datasets and the results demon-strate the effectiveness of our method. |
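The head-to-tail transfer can be sketched with diagonal Gaussians per class: closed-form 2-Wasserstein distances give the similarities, a similarity-weighted combination of head statistics gives a target distribution for the tail class, and the density ratio gives per-sample importance weights that re-weight a standard temperature-scaling fit. The diagonal covariances, the softmax over negative distances, the 50/50 mixing of own and transferred statistics, and the weighted NLL objective are all assumptions for illustration.

```python
# Minimal sketch: Wasserstein similarities between class Gaussians, a
# transferred target distribution for a tail class, density-ratio importance
# weights, and importance-weighted temperature scaling.
import torch
import torch.nn.functional as F


def w2_diag(mu1, var1, mu2, var2):
    # Squared 2-Wasserstein distance between diagonal Gaussians.
    return ((mu1 - mu2) ** 2).sum() + ((var1.sqrt() - var2.sqrt()) ** 2).sum()


def log_gaussian(x, mu, var):
    # Unnormalized log-density; constants cancel in the density ratio.
    return (-0.5 * (((x - mu) ** 2) / var + var.log())).sum(dim=-1)


def transferred_tail_gaussian(tail_mu, tail_var, head_mus, head_vars, tau=1.0):
    # Similarities from negative Wasserstein distances, then a convex
    # combination of head statistics with the tail class's own statistics.
    d = torch.stack([w2_diag(tail_mu, tail_var, m, v)
                     for m, v in zip(head_mus, head_vars)])
    sim = F.softmax(-d / tau, dim=0)                           # (num_head,)
    mu_t = 0.5 * tail_mu + 0.5 * (sim[:, None] * head_mus).sum(0)
    var_t = 0.5 * tail_var + 0.5 * (sim[:, None] * head_vars).sum(0)
    return mu_t, var_t


def importance_weights(feats, src_mu, src_var, tgt_mu, tgt_var):
    # Ratio of target over source probability density, per validation sample.
    return (log_gaussian(feats, tgt_mu, tgt_var)
            - log_gaussian(feats, src_mu, src_var)).exp().clamp(max=10.0)


def weighted_temperature_scaling(logits, labels, weights, steps=200):
    # Learn a single temperature by minimizing the importance-weighted NLL.
    log_t = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([log_t], lr=0.05)
    for _ in range(steps):
        nll = F.cross_entropy(logits / log_t.exp(), labels, reduction="none")
        loss = (weights * nll).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return log_t.exp().item()


if __name__ == "__main__":
    D = 64
    tail_mu, tail_var = torch.zeros(D), torch.ones(D)
    head_mus, head_vars = torch.randn(5, D), torch.ones(5, D) * 2.0
    tgt_mu, tgt_var = transferred_tail_gaussian(tail_mu, tail_var, head_mus, head_vars)
    feats = torch.randn(100, D)
    w = importance_weights(feats, tail_mu, tail_var, tgt_mu, tgt_var)
    T = weighted_temperature_scaling(torch.randn(100, 10), torch.randint(0, 10, (100,)), w)
    print("learned temperature:", T)
```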
Du_No_One_Left_Behind_Improving_the_Worst_Categories_in_Long-Tailed_CVPR_2023 | Abstract Unlike the case when using a balanced training dataset, the per-class recall (i.e., accuracy) of neural networks trained with an imbalanced dataset are known to vary a lot from category to category. The convention in long-tailed recognition is to manually split all categories into three sub-sets and report the average accuracy within each subset. We argue that under such an evaluation setting, some cat-egories are inevitably sacrificed. On one hand, focusing on the average accuracy on a balanced test set incurs little penalty even if some worst performing categories have zero accuracy. On the other hand, classes in the “Few” subset do not necessarily perform worse than those in the “Many” or “Medium” subsets. We therefore advocate to focus more on improving the lowest recall among all categories and the harmonic mean of all recall values. Specifically, we propose a simple plug-in method that is applicable to a wide range of methods. By simply re-training the classifier of an exist-ing pre-trained model with our proposed loss function and using an optional ensemble trick that combines the predic-tions of the two classifiers, we achieve a more uniform dis-tribution of recall values across categories, which leads to a higher harmonic mean accuracy while the (arithmetic) av-erage accuracy is still high. The effectiveness of our method is justified on widely used benchmark datasets. | 1. Introduction Various gaps exist when adapting image recognition techniques that are developed in the lab to industrial ap-plications. The most noteworthy one is perhaps the differ-ence between training datasets. Most training datasets used in academic research [6, 16] are balanced with respect to the number of images per class. This should not be taken for granted because datasets used in real-world applications are more likely to be imbalanced. Training deep models *J. Wu is the corresponding author. This research was partly sup-ported by the National Natural Science Foundation of China under Grant 62276123 and Grant 61921006. 0 20 40 60 80 100 Class Index0.00.20.40.60.8Recall Value 0100200300400500 Number of ImagesFigure 1. The per-class recall of models trained on the imbal-anced CIFAR100 (with imbalance ratio 100). Per-class recall value varies a lot from category to category. Moreover, it is not necessarily true that all categories in the “Few” subset have lower accuracy than those in the “Many” or “Medium” subsets. Method Mean Accuracy Lowest Recall BSCE [22] 42.24 3.00 DiVE [12] 45.11 2.00 MiSLAS [30] 47.05 5.00 RIDE [26] 48.64 2.00 PaCo [4] 51.24 5.00 Table 1. The lowest per-class recall of various state-of-the-art methods on the imbalanced CIFAR100 (with imbalance ratio 100). Although there are rapid improvements over the mean accuracy, the lowest per-class recall remains very low. on these datasets is not trivial as models are known to per-form poorly on such datasets. Long-tailed recognition is a research field that aims at tackling this challenge. By plotting the per-class recall (i.e., per-class accuracy) of the model trained on the imbalanced CIFAR00 dataset (with imbalance ratio 100) in Fig. 1, we find that the recall varies dramatically from category to category. We argue that in real world applications, all categories are equally This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. 
Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 15804 important and no one should be left behind . We also find that despite the rapid developments in this community, no obvious improvement over the lowest per-class recall is wit-nessed in the past few years [4, 12, 22, 26, 30], as is shown in Tab. 1. The convention in long-tailed recognition re-search is to split the classes into three subsets based on the number of training images. The accuracy within each sub-set is often reported along with the overall accuracy. While this evaluation scheme seems reasonable at the first glance, we argue it is potentially problematic. First of all, comput-ing the average recall within each subset is way too coarse, making it impossible to reflect whether some classes are completely sacrificed but covered up by other “easy” tail classes. What’s more, we find it is not necessarily true that classes in the “Few” category all have lower recall than classes in the other two subsets, as is shown in Fig. 1. Therefore, focusing on improving the mean accuracy alone is not enough, especially in real-world applications. In this paper, we propose a novel method to make sure that no category is left behind. Specifically, we argue that although mean accuracy is widely used in image classifica-tion as an optimization objective, due to the fact that differ-ent classes have very different recall in long-tailed recogni-tion, it is not the most suitable objective as it incurs little penalty even if some categories have very small per-class accuracy values (e.g., close to 0). Hence, to improve even the worst-performing categories, we believe the harmonic mean of per-class recall would be a better objective. Since harmonic mean is very sensitive to small numbers, further improving classes that already have high recall brings little benefits to the harmonic mean, which makes sure no single class will be left behind. Also, now all classes are treated equally, forcing us to improve classes that have low recall no matter which subset it belongs to. However, it is difficult to directly minimize the harmonic mean. We therefore propose a novel loss function that max-imizes the geometric mean instead, which can be viewed as a surrogate. Our method serves as a simple plug-in that can be used together with both baseline and various state-of-the-art methods. We also propose an ensemble trick that uses the pre-trained and fine-tuned models together to make predictions during inference with nearly no extra cost. We are able to yield a more uniform distribution of recall across categories (i.e., no one left behind), which achieves higher harmonic mean of the recall while the (arithmetic) mean ac-curacy remains high, too. In summary, our work has the following contributions: • We are the first to emphasize the importance of the cor-rect recognition of all categories in long-tailed recog-nition. • We propose a novel method that aims at increasing the harmonic mean of per-class recall as well as an ensem-ble trick that combines two existing models together during inference with nearly no extra cost. • We experimented on three widely used benchmark datasets, which justify the effectiveness of our method in terms of both overall and worst per-class accuracy. |
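One way to make the objective concrete is a batch-level surrogate: treat the average predicted probability of the true class within each class as a "soft recall" and minimize the negative log of its geometric mean, alongside the harmonic-mean-of-recall metric argued for above. The soft-recall surrogate, the epsilon smoothing, and the per-batch computation are illustrative assumptions rather than the paper's exact loss.

```python
# Minimal sketch: (i) a differentiable surrogate maximizing the geometric mean
# of per-class soft recall in a batch, (ii) the harmonic-mean-of-recall metric.
import torch
import torch.nn.functional as F


def geometric_mean_recall_loss(logits, labels, num_classes, eps=1e-6):
    probs = F.softmax(logits, dim=-1)
    p_true = probs.gather(1, labels[:, None]).squeeze(1)      # prob of the true class
    loss, present = 0.0, 0
    for c in range(num_classes):
        mask = labels == c
        if mask.any():                                        # soft recall of class c
            soft_recall = p_true[mask].mean()
            loss = loss - torch.log(soft_recall + eps)        # -log of the geometric mean
            present += 1
    return loss / max(present, 1)


@torch.no_grad()
def harmonic_mean_recall(preds, labels, num_classes, eps=1e-12):
    recalls = []
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            recalls.append((preds[mask] == c).float().mean())
    recalls = torch.stack(recalls)
    return len(recalls) / (1.0 / (recalls + eps)).sum()       # harmonic mean


if __name__ == "__main__":
    logits = torch.randn(64, 10, requires_grad=True)
    labels = torch.randint(0, 10, (64,))
    loss = geometric_mean_recall_loss(logits, labels, num_classes=10)
    loss.backward()
    print(loss.item(), harmonic_mean_recall(logits.argmax(1), labels, 10).item())
```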
Hu_Complexity-Guided_Slimmable_Decoder_for_Efficient_Deep_Video_Compression_CVPR_2023 | Abstract In this work, we propose the complexity-guided slimmable decoder (cgSlimDecoder) in combination with skip-adaptive entropy coding (SaEC) for efficient deep video compression. Specifically, given the target complex-ity constraints, in our cgSlimDecoder, we introduce a set of new channel width selection modules to automatically decide the optimal channel width of each slimmable con-volution layer. By optimizing the complexity-rate-distortion related objective function to directly learn the parameters of the newly introduced channel width selection modules and other modules in the decoder, our cgSlimDecoder can automatically allocate the optimal numbers of parameters for different types of modules (e.g., motion/residual decoder and the motion compensation network) and simultaneously support multiple complexity levels by using a single learnt decoder instead of multiple decoders. In addition, our pro-posed SaEC can further accelerate the entropy decoding procedure in both motion and residual decoders by simply skipping the entropy coding process for the elements in the encoded feature maps that are already well-predicted by the hyperprior network. As demonstrated in our comprehensive experiments, our newly proposed methods cgSlimDecoder and SaEC are general and can be readily incorporated into three widely used deep video codecs (i.e., DVC, FVC and DCVC) to significantly improve their coding efficiency with negligible performance drop. | 1. Introduction Recently, learning-based video codecs [1, 15, 22, 28] have achieved promising coding performance and outper-formed widely used commercial codecs like H.264 [46] and H.265 [40]. However, learning-based video codecs are always inefficient due to computationally complex opera-tions. In practical application scenarios, it is desirable that the video codecs can decode the videos in real-time. Addi-tionally, the decoders from different devices can afford dif-ferent computational complexities under different scenar-ios. For example, a cloud server can afford higher compu-tational resource while a smartphone can afford much less computational resource. Therefore, it is also desirable to develop a video decoder that can simultaneously support multiple computational complexities. In order to achieve practical video codecs, two aspects should be taken into consideration during decoding. First, there are different modules in most decoders (see Figure 1 (a)) and the network structure of different modules in each decoder ( e.g., motion decoder, residual decoder and motion compensation networks) needs to be carefully designed to meet different complexity constraints. However, different modules need to handle different types of information ( e.g., residual information versus motion information) and thus require different channel widths under different complex-ity constraints, which makes it hard to manually design the optimal channel widths for different convolution layers in different modules. Second, the encoded bit-streams need to be entropy decoded back into the entire feature map. How-ever, most elements in the feature map are already precisely predicted by the hyperprior networks [33]. As a result, it is less efficient to entropy code the entire feature map as it will also degrade the decoding efficiency. 
Considering these two aspects, in this work, we propose a complexity-guided slimmable decoder (cgSlimDecoder) in combination with skip-adaptive entropy coding (SaEC) for efficient deep video coding. Motivated by the slimmable neural networks [21,53] for various visual recognition tasks, for the decoder network, we propose a complexity-guided slimmable decoder by additionally introducing a set of new channel width selection modules to automatically decide the optimal channel width of each slimmable convolution layer under different complexity constraints. By learning the parameters from our complexity-guided channel width selection modules and other modules ( e.g., the slimmable convolution layers) through optimizing the complexity-rate-distortion based objective function, our slimmable decoder can automatically select the optimal channel widths for dif-ferent modules such that the target complexity constraints can be optimally allocated to different modules ( e.g., the motion/residual decoder and the motion compensation net-work). When the computational resource is sufficient ( resp. , This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 14358 limited), larger ( resp. , smaller) channel width is preferred in the decoding process to reconstruct high-quality video se-quences ( resp. , for more efficient decoding). In addition, we also propose the SaEC method for more efficient en-tropy coding of the encoded motion/residual feature map from the motion/residual encoder. Specifically, we take the hyperprior information as the input to predict whether each element can be precisely predicted by the hyperprior net-works. For each precisely predicted element, we directly use the mean value of its predicted distribution from the hy-perprior network, which will save bits and also accelerate the decoding process by skipping the entropy decoding pro-cess for these elements. The rest elements are still entropy coded for more accurate reconstruction. To demonstrate the effectiveness of our proposed meth-ods, we evaluate our cgSlimDecoder in combination with SaEC based on different baseline frameworks including DVC [28], FVC [15] and DCVC [22]. The experimental results demonstrate that our proposed methods are general and can significantly improve the decoding efficiency with comparable rate-distortion performance. Our contributions are summarized as follows: • We propose a complexity-guided slimmable decoder for efficient video decoding, which can simultaneously support multiple complexity levels by simply using one learned decoder instead of multiple decoders. • Our newly proposed cgSlimDecoder method can au-tomatically allocate the optimal complexities ( i.e., the optimal channel widths for different convolu-tion layers) for different types of modules ( e.g., mo-tion/residual decoder) to meet the overall complexity constraints of the entire video decoder. In addition, we also propose the SaEC coding method for more effi-cient and effective entropy coding. • Our proposed methods are general and can be incor-porated into a set of widely used video compression methods ( i.e., DVC, FVC and DCVC) and improve their decoding efficiency with negligible performance drop. |
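A minimal sketch of the channel width selection idea: each slimmable convolution carries a small set of candidate widths, a Gumbel-softmax over learnable logits picks one in a differentiable way, and the expected width enters the training objective as a complexity term. Masking output channels (instead of slicing weights), the candidate ratios, and the simple |width − target| penalty are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch: a convolution whose output channel width is chosen by a
# learnable, differentiable selection module, plus a complexity proxy term.
import torch
import torch.nn as nn
import torch.nn.functional as F


class WidthSelectConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, width_ratios=(0.25, 0.5, 0.75, 1.0)):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        self.width_ratios = torch.tensor(width_ratios)
        self.select_logits = nn.Parameter(torch.zeros(len(width_ratios)))
        # one binary mask per candidate width over the output channels
        masks = torch.zeros(len(width_ratios), out_ch)
        for i, r in enumerate(width_ratios):
            masks[i, : int(round(r * out_ch))] = 1.0
        self.register_buffer("masks", masks)

    def forward(self, x, temperature=1.0):
        # Differentiable (straight-through) one-hot choice of a width.
        w = F.gumbel_softmax(self.select_logits, tau=temperature, hard=True)
        mask = (w[:, None] * self.masks).sum(0)               # (out_ch,)
        out = self.conv(x) * mask[None, :, None, None]
        # Complexity proxy: expected fraction of active output channels.
        expected_width = (w * self.width_ratios.to(w.device)).sum()
        return out, expected_width


if __name__ == "__main__":
    layer = WidthSelectConv2d(64, 128, kernel_size=3)
    x = torch.randn(2, 64, 32, 32)
    y, width = layer(x)
    target_width = 0.5                                        # complexity constraint
    distortion = y.pow(2).mean()                              # placeholder distortion
    loss = distortion + 0.1 * (width - target_width).abs()    # complexity-guided term
    loss.backward()
    print(y.shape, width.item())
```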
Chen_Beyond_Appearance_A_Semantic_Controllable_Self-Supervised_Learning_Framework_for_Human-Centric_CVPR_2023 | Abstract Human-centric visual tasks have attracted increasing research attention due to their widespread applications. In this paper, we aim to learn a general human representation from massive unlabeled human images which can benefit downstream human-centric tasks to the maximum extent. We call this method SOLIDER, a Semantic cOntrollable seLf-supervIseD lEaRning framework. Unlike the existing self-supervised learning methods, prior knowledge from human images is utilized in SOLIDER to build pseudo semantic labels and import more semantic information into the learned representation. Meanwhile, we note that different downstream tasks always require different ratios of semantic information and appearance information. For example, human parsing requires more semantic informa-tion, while person re-identification needs more appearance information for identification purpose. So a single learned representation cannot fit for all requirements. To solve this problem, SOLIDER introduces a conditional network with a semantic controller. After the model is trained, users can send values to the controller to produce rep-resentations with different ratios of semantic information, which can fit different needs of downstream tasks. Finally, SOLIDER is verified on six downstream human-centric visual tasks. It outperforms state of the arts and builds new baselines for these tasks. The code is released in https://github.com/tinyvision/SOLIDER. | 1. Introduction Human-centric visual analysis plays an important role in widespread applications, such as surveillance, sports, augmented reality, and video production. Person re-identification [13, 14, 41, 82], attribute recognition [72, 78], person search [76, 92], pedestrian detection [3, 24, 32], human parsing [27, 53], and pose estimation [58, 90] †Corresponding Author Upper bodyLower bodyShoesLegendFigure 1. A representation space learned by DINO [6]. Seven human images are represented in seven different colors. Each image is split into four parts according to their semantic regions, i.e., upper body (as ▲), lower body (as +), shoes (as ⋆) and background (as ×, not visualized to avoid distraction). It can be seen that different parts of a same person are closer to each other even they share different semantic meanings. have achieved considerable progress in recent years. In another aspect, there are massive human images available in the current computer vision community. For example, even an unlabeled person re-identification dataset, LUPerson [25, 26] (#Img ≈4.18M) is 4 time larger than the ImageNet dataset (#Img ≈1M). How to use unlabeled data to build a human representation is challenging, especially when it needs to benefit various downstream tasks. Self-supervised learning has achieved great develop-ments by using unlabeled data to learn representations. Many pretext tasks have been designed, such as contrastive learning [6, 10, 35] and masking image modeling [2, 34, 77, 95]. Although these methods have achieved great success in learning general image representations, there is a lack of specific design targeting human-centric tasks. Some researchers [55, 81, 97] focus to extend self-supervised learning methods on human-centric visual tasks. They use DINO [6] with LUPerson [25,26] dataset to build pre-trained models for person re-identification task. 
When applying the pre-trained models to other human-centric This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 15050 tasks, such as human parsing and pedestrian detection, we usually get sub-optimal results. It is due to the lack of semantic information in their learned representations. As shown in Fig. 1, in the representation space learned by DINO [6]1, different parts of a same person are gathered together due to their appearance continuity, no matter what semantic meanings they have. As we’ve known, semantic information is as important as appearance information for human-centric visual tasks [40, 50, 96]. Therefore, we tend to train the representation with more semantic information to extend the representation to different downstream human-centric visual tasks. In this paper, a Semantic cOntrollable seLf-supervIseD lEaRning framework (SOLIDER) is proposed. In SOLIDER, we take advantage of prior knowledge from human images to discover semantic information, which can produce pseudo semantic labels for every token. And a token-level semantic classification pretext task is imported and supervised by these pseudo labels. With the new pretext task, we can train representation with stronger semantic information. During the usage of our trained representation on down-stream tasks, we find that even though semantic information and appearance information are both important, different downstream tasks require different ratios of them. Adjust-ing their ratio in the representation would lead to a better performance in downstream tasks. However, as long as the pretext task is trained, the representation can not be changed in current self-supervised learning methods. Different from previous methods, we design SOLIDER as a conditional network involving a semantic controller. The controller takes a value as input and produces a latent representation. In the usage of the pre-trained model from SOLIDER, we send a value (indicting the ratio of semantic information in the representation) to the controller which can adjust the model and output a representation with the required ratio. In summary, our paper makes four contributions: 1) A general human representation is learned in this paper, which is used as a better pre-trained model benefiting to downstream human-centric visual tasks. 2) A semantic controllable self-supervised learning framework (SOLIDER) is proposed. It takes advantages of prior knowledge in human images to produce pseudo semantic labels, and utilize it to train the human representation with more semantic information. 3) A semantic controller is designed in SOLIDER. With the controller, the pre-trained model can generate represen-tations with various degrees of semantic information that can meet different needs of downstream tasks. 4) The effectiveness of the SOLIDER representation is verified on six downstream human-centric tasks. We believe this paper can promote the development of these human-centric tasks in computer vision community. 1MAE [34] shares a similar phenomenon as DINO [6].2. Related Work |
Fischer_Plateau-Reduced_Differentiable_Path_Tracing_CVPR_2023 | Abstract Current differentiable renderers provide light transport gradients with respect to arbitrary scene parameters. How-ever, the mere existence of these gradients does not guaran-tee useful update steps in an optimization. Instead, inverse rendering might not converge due to inherent plateaus, i.e., regions of zero gradient, in the objective function. We pro-pose to alleviate this by convolving the high-dimensional rendering function, that maps scene parameters to im-ages, with an additional kernel that blurs the parameter space. We describe two Monte Carlo estimators to compute plateau-reduced gradients efficiently, i.e., with low vari-ance, and show that these translate into net-gains in op-timization error and runtime performance. Our approach is a straightforward extension to both black-box and dif-ferentiable renderers and enables optimization of problems with intricate light transport, such as caustics or global illumination, that existing differentiable renderers do not converge on. Our code is at github.com/mfischer-ucl/prdpt. | 1. Introduction Regressing scene parameters like object position, mate-rials or lighting from 2D observations is a task of significant importance in graphics and vision, but also a hard, ill-posed problem. When all rendering steps are differentiable, we can derive gradients of the final image w.r.t. the scene pa-rameters. However, differentiating through the discontinu-ous rendering operator is not straightforward due to, e.g., occlusion. The two main approaches to (differentiable) ren-dering are path tracing and rasterization. Physically-based path-tracing solves the rendering equa-tion by computing a Monte Carlo (MC) estimate for each pixel. Unfortunately, MC is only compatible with modern Automatic Differentiation (AD) frameworks for the case of continuous integrands, e.g., color, but not for spatial deriva-tives, i.e., gradients w.r.t. an object’s position. To alleviate this, Li et al. [18] present re-sampling of silhouette edges and Loubet et al. [23] propose re-parametrizing the inte-grand, enabling the optimization of primitive-or light posi-Initial Our Method Reference Path Tracer Figure 1. Optimization results with a differentiable path tracer (we use Mitsuba 3 [28]) and our proposed method. The task is to rotate the coffee cup around its z-axis, so that the handle moves to the right side. Due to a plateau in the objective function (when the handle is occluded by the cup), regular methods do not converge. tions. For rasterization, differentiability is achieved by re-placing discontinuous edge-and z-tests with hand-crafted derivatives [17, 21, 22, 34]. The problem here is that raster-ization, by design, does not capture complex light transport effects, e.g., global illumination, scattering or caustics. Importantly, the mere existence of gradients is no guar-antee that descending along them will make an optimization converge [25]. There are surprisingly many cases where they do not lead to a successful optimization, due to a plateau in the objective function. An example is finding the orientation of the mug in Fig. 1: As soon as the handle disappears behind the cup, no infinitesimally small rotation change will result in a reduced loss. We have hence reached a plateau in the objective function, i.e., a region of zero gra-dients. We propose a method to remove these plateaus while still having complete, complex light transport. 
We take inspiration from differentiable rasterization lit-erature [17, 21, 32, 34], where smoothing techniques are used to model the influence of faraway triangles to the pixel at hand. For rasterization, this simple change has two ef-fects: First, it makes the edge-and z-tests continuous and hence differentiable, and second, in passing (and, to our knowledge, much less studied), it also removes plateaus. In this work, we hence aim to find a way to apply the same concept to complex light transport. Therefore, instead of making the somewhat arbitrary choice of a fixed smoothing function for edge-and depth-tests in differentiable raster-izers, we path-trace an alternative, smooth version of the entire Rendering Equation (RE), which we achieve by con-volving the original RE with a smoothing kernel. This leads to our proposed method, a lightweight extension to (differ-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 4285 entiable) path tracers that extends the infinitely-dimensional path integral to the product space of paths and scene param-eters. The resulting double integral can still be MC-solved efficiently, in particular with variance reduction techniques we derive (importance sampling and antithetic sampling). |
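The effect of convolving the objective over parameter space can be illustrated with a black-box estimator: blur the scene parameters with a Gaussian kernel and estimate the gradient of the smoothed loss from antithetic sample pairs. The toy plateaued "renderer", the Gaussian kernel, and the omission of the paper's importance sampling are deliberate simplifications for illustration.

```python
# Minimal sketch: Monte Carlo gradient of a Gaussian-smoothed objective with
# antithetic sampling, applied to a toy loss that has a plateau.
import numpy as np


def render_loss(theta):
    # Toy stand-in: zero gradient away from the optimum (the plateau),
    # mimicking e.g. an occluded handle that no infinitesimal step can reveal.
    return 0.0 if abs(theta[0] - 2.0) > 1.0 else (theta[0] - 2.0) ** 2 - 1.0


def smoothed_grad(loss_fn, theta, sigma=0.8, num_pairs=32, rng=None):
    """MC estimate of the gradient of E_{eps~N(0, sigma^2 I)}[loss(theta + eps)]."""
    rng = rng or np.random.default_rng()
    grad = np.zeros_like(theta)
    for _ in range(num_pairs):
        eps = rng.normal(scale=sigma, size=theta.shape)
        # Antithetic pair (+eps, -eps): correlated evaluations reduce variance.
        diff = loss_fn(theta + eps) - loss_fn(theta - eps)
        grad += diff * eps / (2.0 * sigma ** 2)
    return grad / num_pairs


if __name__ == "__main__":
    theta = np.array([0.0])                      # starts on the plateau
    for step in range(500):
        theta -= 0.1 * smoothed_grad(render_loss, theta, sigma=0.8)
    print("recovered parameter:", theta)         # approaches the optimum near 2.0
```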
Fu_An_Empirical_Study_of_End-to-End_Video-Language_Transformers_With_Masked_Visual_CVPR_2023 | Abstract Masked visual modeling (MVM) has been recently proven effective for visual pre-training. While similar re-constructive objectives on video inputs (e.g., masked frame modeling) have been explored in video-language (VidL) pre-training, previous studies fail to find a truly effective MVM strategy that can largely benefit the downstream per-formance. In this work, we systematically examine the po-tential of MVM in the context of VidL learning. Specifically, we base our study on a fully end-to-end VIdeO-LanguagE Transformer (VIOLET) [15], where the supervision from MVM training can be backpropogated to the video pixel space. In total, eight different reconstructive targets of MVM are explored, from low-level pixel values and oriented gradients to high-level depth maps, optical flow, discrete visual tokens and latent visual features. We conduct com-prehensive experiments and provide insights into the fac-tors leading to effective MVM training, resulting in an en-hanced model VIOLETv2. Empirically, we show VIOLETv2 pre-trained with MVM objective achieves notable improve-ments on 13 VidL benchmarks, ranging from video question answering, video captioning, to text-to-video retrieval.1 | 1. Introduction Video, containing multiple modalities in nature, has been used as an epitome to test how AI systems perceive. Video-language (VidL) research aims at extending this ability to convey perception via language. Popular VidL tasks were introduced, such as text-to-video retrieval [29,54,75], video question answering [25, 74], and video captioning [6, 75]. Recent progresses in VidL learning mostly focus on VidL pre-training [49, 58, 83] with video-text matching [39, 79] and masked language modeling [10]. There have also been attempts on similar masked modeling on vision inputs. 1Code has been released at https://github.com/tsujuifu/ pytorch_empirical-mvm .For example, masked frame modeling [39] aims to recover masked frame representations. However, the pre-extracted video features cannot be refined during pre-training, which may limit its effectiveness. More recently, VIOLET [15] designs an end-to-end video-language transformer and pro-poses to reconstruct discrete visual tokens for masked frame patches. Though showing some promises in recovering vi-sual semantics, the performance improvements on down-stream VidL tasks are still marginal. Meanwhile, self-supervised visual pre-training has been proven highly effective by reconstructing the masked image patches through raw pixel values [21, 73], discrete visual tokens [3, 81], or visual-semantic features [70, 71]. How-ever, they all only focus on the visual modality. It is unclear which variant of masked visual modeling (MVM) objec-tives can help VidL learning, especially given that the paired language inputs can already provide high-level semantics. Motivated by this, we conduct a comprehensive study of MVM for VidL learning. As illustrated in Figure 1, we base our study on the fully end-to-end VIdeO-LangaugeE Transformer (VIOLET) [15], and study a broad spectrum of MVM targets, including RGB pixel values (Pixel), his-togram of oriented gradients (HOG), depth maps (Depth), optical flow (Flow), discrete visual tokens (VQ), spatial-focused image features (SIF), temporal-aware video fea-tures (TVF), and mulitmodal features (MMF). 
During pre-training, we mask out some proportion of the video input along both spatial and temporal dimensions, and the model learns to recover the MVM targets for these masked patches. Equipped with another two standard pre-training tasks (i.e., video-text matching and masked language modeling), we empirically verify the effectiveness of different MVM variants on downstream VidL tasks. Our study reveals that: (i) spatial-focused image features (SIF) are the most effective MVM target on video-text inputs; and (ii) the effects of different MVM targets on downstream VidL tasks are not shared between video-text and image-text inputs. For example, SIF extracted from the same model brings a large drop in downstream VidL performance when pre-trained with image-text pairs. Figure 1. We systematically explore eight masked visual modeling (MVM) targets for end-to-end video-language (VidL) pre-training, including RGB pixel values (Pixel), histogram of oriented gradients (HOG), depth maps (Depth), optical flow (Flow), discrete visual tokens (VQ), spatial-focused image features (SIF), temporal-aware video features (TVF), and multimodal features from CLIP (MMF). Besides MVM, we pre-train the VIOLET model [15] along with video-text matching (VTM) and masked language modeling (MLM). In addition, we conduct comprehensive analyses of the masking strategy and ratio, and the combination of different MVM targets, to shed light on effective MVM training for VidL learning. We name the enhanced version of the original VIOLET [15] with the best MVM strategy as VIOLETv2. Our contributions can be summarized as follows. We present an empirical study of masked visual modeling for video-language pre-training, with comprehensive analyses to reveal the ingredients for effective MVM training. VIOLETv2 with the best MVM recipe achieves strong performance on 13 VidL datasets. Concretely, compared to models pre-trained on the same 5M corpus, VIOLETv2 brings mean improvements of +5.4% accuracy on video question answering, +6.6% recall on text-to-video retrieval, and +11.4 CIDEr on video captioning. Direct comparison to VIOLET [15] also shows notable advantages of our model, even when pre-trained with much less data. |
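To make the pre-training recipe above concrete, here is a minimal PyTorch-style sketch of masking video patches jointly along spatial and temporal dimensions and regressing the masked positions against a frozen target extractor (e.g., a spatial-focused image feature network). Tensor shapes, the masking ratio, and the `vidl_encoder` / `target_extractor` modules are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn.functional as F

def mvm_loss(video, vidl_encoder, target_extractor, mask_ratio=0.15):
    """Sketch of one masked-visual-modeling step on video patch embeddings.

    video: (B, T, N, D) patch embeddings -- B clips, T frames, N patches per frame.
    vidl_encoder: maps (masked) patches to contextual predictions of shape (B, T, N, C).
    target_extractor: frozen network producing regression targets of shape (B, T, N, C),
    e.g. spatial-focused image features (SIF).
    """
    B, T, N, _ = video.shape
    # Sample a joint spatio-temporal mask: each (frame, patch) slot is masked i.i.d.
    mask = torch.rand(B, T, N, device=video.device) < mask_ratio   # (B, T, N), bool

    # Replace masked patch embeddings (zeroed here; a learnable [MASK] token
    # would be the usual choice).
    masked_video = video.masked_fill(mask.unsqueeze(-1), 0.0)

    # Contextualized predictions from the video-language encoder.
    pred = vidl_encoder(masked_video)                              # (B, T, N, C)

    # Frozen targets: no gradients flow into the target network.
    with torch.no_grad():
        target = target_extractor(video)                           # (B, T, N, C)

    # Regress only on the masked positions.
    return F.mse_loss(pred[mask], target[mask])
```

In a full pipeline this loss would simply be summed with the video-text matching and masked language modeling objectives mentioned above.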
Dai_Disentangling_Writer_and_Character_Styles_for_Handwriting_Generation_CVPR_2023 | Abstract Training machines to synthesize diverse handwritings is an intriguing task. Recently, RNN-based methods have been proposed to generate stylized online Chinese characters. However, these methods mainly focus on capturing a person's overall writing style, neglecting subtle style inconsistencies between characters written by the same person. For example, while a person's handwriting typically exhibits general uniformity (e.g., glyph slant and aspect ratios), there are still small style variations in finer details (e.g., stroke length and curvature) of characters. In light of this, we propose to disentangle the style representations at both writer and character levels from individual handwritings to synthesize realistic stylized online handwritten characters. Specifically, we present the style-disentangled Transformer (SDT), which employs two complementary contrastive objectives to extract the style commonalities of reference samples and capture the detailed style patterns of each sample, respectively. Extensive experiments on various language scripts demonstrate the effectiveness of SDT. Notably, our empirical findings reveal that the two learned style representations provide information at different frequency magnitudes, underscoring the importance of separate style extraction. Our source code is public at: https://github.com/dailenson/SDT . | 1. Introduction As the oldest writing system, Chinese characters are widely used across Asian countries. When compared with Latin scripts, Chinese characters encompass an exceptionally vast lexicon (87,887 characters in the GB18030-2022 charset) and have intricate structures composed of multiple strokes. Recently, the intriguing task of generating Chinese characters has garnered significant attention [10, 23, 32]. A promising approach to synthesising realistic handwritings is to progressively generate online characters (i.e., the handwriting trajectory in a sequential format) [40]. Figure 1. Illustration of two online handwritten Chinese characters, with each color representing a stroke. The increasing numbers indicate the writing order from the start to the end. As shown in Figure 1, online characters convey richer information (e.g., the order of writing) and thus pave the way for various applications, including writing robots [39]. Our goal is to automatically generate online Chinese handwritings that not only correspond to specific textual content, but also emulate the calligraphic style of a given exemplar writer (e.g., glyph slant, shape, stroke length, and curvature). This task thus holds potential for a wide range of applications, such as font design and calligraphy education. A popular solution [18] is to extract style information from the provided stylized samples and merge it with the content reference. DeepImitator [44] concatenates the style vector obtained from a CNN encoder with a character embedding, which is then fed into the RNN decoder to generate stylized online characters. WriteLikeYou [30] adopts the large-margin softmax loss [36] to promote discriminative learning of style features. However, these methods mainly focus on the overall writing style, thus overlooking the detailed style inconsistencies (e.g., the highlighted regions in Figure 2) between characters produced by the same writer.
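Before turning to the proposed remedy, here is a rough sketch of the style-conditioned decoding scheme these RNN-based methods share: a pooled style vector is concatenated with a content embedding and decoded into a pen trajectory. The module names, feature sizes, and GRU decoder are generic assumptions for illustration, not DeepImitator's or WriteLikeYou's actual implementations.

```python
import torch
import torch.nn as nn

class StyleConditionedDecoder(nn.Module):
    """Generic sketch: decode an online trajectory from [style ; content] features."""

    def __init__(self, style_dim=256, content_dim=256, hidden=512, steps=120):
        super().__init__()
        self.steps = steps
        self.rnn = nn.GRU(style_dim + content_dim, hidden, batch_first=True)
        # Each time step predicts (dx, dy, pen_lift) of the online trajectory.
        self.head = nn.Linear(hidden, 3)

    def forward(self, style_vec, content_emb):
        # style_vec: (B, style_dim) pooled from reference samples by a CNN encoder.
        # content_emb: (B, content_dim) embedding of the character to write.
        cond = torch.cat([style_vec, content_emb], dim=-1)     # (B, S + C)
        seq = cond.unsqueeze(1).repeat(1, self.steps, 1)       # same condition at every step
        hidden_states, _ = self.rnn(seq)                       # (B, steps, hidden)
        return self.head(hidden_states)                        # (B, steps, 3)
```

Because the same pooled style vector conditions every character, a decoder of this kind reproduces a writer's average style but cannot express the per-character variations discussed next.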
The observations mentioned above inspire us to disentangle style representations at the writer and character levels from the stylized handwritings. However, accurately capturing these two styles is a challenging task. Figure 2. Handwritten character samples from three unique writers (rows: No Slant, Left Slant, Right Slant), with each row containing characters by the same person. Despite sharing similar overall handwriting styles (e.g., glyph slant), subtle style differences (e.g., stroke length, location, and curvature) can still be observed among them. To address this, we propose a style-disentangled Transformer (SDT) equipped with a dual-head style encoder. Specifically, we employ the contrastive learning framework [13] to guide each head in concentrating on writer-wise and character-wise styles, respectively. For the overall writer-wise style, we treat characters from the same writer as positive instances, and characters from different writers as negatives. This enables the encoder to learn the style commonalities among characters written by the same writer. Regarding the detailed character-wise style, we independently sample positive pairs within a character, and sample negatives from other characters, as illustrated in Figure 3. Aggregating positive views of a character encourages the encoder to focus on the intricate character style patterns. Figure 3. In this two-character example, we independently sample the positive pair, i.e., o and o+, within the first character, while the negative o− is sampled from another character. Our sampling strategy randomly selects a small subset of patches, following a uniform distribution. In addition, we introduce a content encoder for SDT to learn a textual feature with a global context. The two style representations, along with the textual feature, are then fed into a decoder that progressively generates online characters. Given that the output characters are in sequential form, we employ Transformer [35], a powerful sequence modeling architecture, as our backbone. To extend SDT for generating offline Chinese handwritings (i.e., character images with stroke width, e.g., Figures 3-7 in the Appendix), we further propose an offline-to-offline generation framework. We first use SDT to generate online characters with significant shape changes, and then decorate them with stroke width, ink-blot effects, etc. This enables us to generate authentic offline handwritings. For more details, please refer to Appendix A.4. We summarize our contributions in three key aspects: • We are the first to disentangle two style representations (i.e., writer-wise and character-wise) for enhancing Chinese handwriting generation. Our findings show that the former focuses more on low-frequency information, while the latter captures higher frequencies. • We introduce a novel online character generation method, i.e., SDT. Extensive experiments on handwriting datasets in Chinese, English, Japanese, and Indic scripts demonstrate its effectiveness and superiority. • Building on the SDT, we further develop an offline-to-offline framework that can produce more plausible offline handwritten Chinese characters, as evidenced in Appendix A.4. |
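The character-wise contrastive objective described above can be sketched in a few lines: two patch subsets drawn uniformly from the same character form a positive pair, the other characters in the batch act as negatives, and an InfoNCE loss pulls the positives together. The patch-sampling helper, pooling, projection head, and temperature below are illustrative assumptions rather than the released SDT code.

```python
import torch
import torch.nn.functional as F

def sample_patch_view(char_patches, keep_ratio=0.3):
    """Uniformly keep a small random subset of one character's patch embeddings.

    char_patches: (N, D) patch embeddings of a single character.
    Returns a (D,) view obtained by mean-pooling the sampled subset.
    """
    n = char_patches.shape[0]
    k = max(1, int(keep_ratio * n))
    idx = torch.randperm(n, device=char_patches.device)[:k]
    return char_patches[idx].mean(dim=0)

def character_nce_loss(batch_patches, style_head, temperature=0.07):
    """Character-wise contrastive loss over a batch of characters.

    batch_patches: list of (N_i, D) tensors, one per character.
    style_head: module projecting a pooled (D,) view to a style embedding.
    Positives are two independent views of the same character (the diagonal);
    views of the other characters serve as negatives.
    """
    views_a = torch.stack([style_head(sample_patch_view(p)) for p in batch_patches])
    views_b = torch.stack([style_head(sample_patch_view(p)) for p in batch_patches])
    views_a = F.normalize(views_a, dim=-1)
    views_b = F.normalize(views_b, dim=-1)

    logits = views_a @ views_b.t() / temperature                   # (B, B) similarities
    labels = torch.arange(logits.shape[0], device=logits.device)   # matching index = positive
    return F.cross_entropy(logits, labels)
```

A writer-wise counterpart would follow the same template but treat characters written by the same person as the positive set.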
Dai_Nighttime_Smartphone_Reflective_Flare_Removal_Using_Optical_Center_Symmetry_Prior_CVPR_2023 | Abstract Reflective flare is a phenomenon that occurs when light reflects inside lenses, causing bright spots or a "ghosting effect" in photos, which can impact their quality. Eliminating reflective flare is highly desirable but challenging. Many existing methods rely on manually designed features to detect these bright spots, but they often fail to identify reflective flares created by various types of light and may even mistakenly remove the light sources in scenarios with multiple light sources. To address these challenges, we propose an optical center symmetry prior, which suggests that the reflective flare and light source are always symmetrical around the lens's optical center. This prior helps to locate the reflective flare's proposal region more accurately and can be applied to most smartphone cameras. Building on this prior, we create the first reflective flare removal dataset, called BracketFlare, which contains diverse and realistic reflective flare patterns. We use continuous bracketing to capture the reflective flare pattern in the underexposed image and combine it with a normally exposed image to synthesize a pair of flare-corrupted and flare-free images. With the dataset, neural networks can be trained to remove the reflective flares effectively. Extensive experiments demonstrate the effectiveness of our method on both synthetic and real-world datasets. | 1. Introduction Lens flare artifacts can occur when a strong light enters the camera's field of view, causing scattering and reflection within the lenses. Unlike radial-shaped scattering flares, reflective flares appear as groups of bright spots [5, 7] or a "ghosting effect" like the examples in Fig. 1. With multiple reflections, the light source creates multiple projections at various positions behind the lens. If the projection is far from the focal plane, the defocused effect can brighten the whole sensor and degrade the photo's quality, as shown in Fig. 2. In near-focus situations, images of the light source can result in a line of unfocused, bright blobs on the captured photo. Figure 1. Reflective flare removal with our proposed dataset and prior [(a) iPhone 13 Pro, (b) Huawei P40, (c) Sony ILCE-6400; columns: Input, Ours, Estimated flare]. The network trained on our dataset can effectively separate the reflective flare and flare-free images from a flare-corrupted image. Our method can generalize well to different types of smartphones and even cameras with specific types of lenses. Applying anti-reflective (AR) coating is a common technique for reducing internal reflections [22]. The coating is effective for unfocused spots with low luminance, but cannot mitigate a bright spot with high luminance at the focal plane. Even with a well-designed lens system, a bright reflective spot may remain in the photo, known as the ghosting effect [10]. Reflective flare is common on all types of smartphone lenses, and certain types of smartphones (e.g., the iPhone series) can produce a large area of inverted reflective flare from neon lights or large, luminous standing signs, as shown in Fig. 1. The phenomenon becomes more pronounced when capturing a photo at night with multiple lights in the scene. Numerous bright spots and inverted reflections
manifest and propagate in different directions with the lens orientation. Therefore, reflective flare removal algorithms are highly desirable. Removing reflective flare is a difficult task due to the vast diversity of reflective flares that can be caused by different types of light sources and lenses. Reflective flares often appear as bright spots that can resemble streetlights from a distance, making it challenging to differentiate between reflective flares and other light sources. Current methods for removing flare spots mainly rely on feature detection techniques like Speeded-Up Robust Features (SURF) [2] or Scale-Invariant Feature Transform (SIFT) [12] to extract spot regions [1, 3, 19]. These methods, however, struggle to effectively detect blobs on complex backgrounds and only work for specific spot patterns. Moreover, these methods are ineffective in removing reflective flares caused by lamp tubes and LED matrices. Many learning-based methods and datasets for reflective flare removal have been proposed [5, 15, 18, 22]. Although these datasets contain subsets of reflective flares, they are not sufficient for good generalization. For instance, the reflective flare subset from Wu et al. [22] is captured using the same lens and light source, making it difficult to generalize to different flares in real-world scenarios. Dai et al. [5] propose a synthetic dataset with 10 types of reflective flares, but training on this dataset tends to mislead a network to erroneously remove other tiny bright spots like street lights and bright windows. As a result, it remains challenging for existing algorithms to accurately detect and remove various types of reflective flares while preserving details in other regions. To overcome the challenges of detecting and removing reflective flares, we propose a novel optical center symmetry prior, which suggests that the main spot of the reflective flare and the light source are always symmetrical around the lens's optical center in captured images. This prior applies to most smartphone cameras and some professional cameras. With this prior, we can easily detect the possible locations of reflective flares. We also found that the flare spots always have the same patterns as the brightest regions of the light source. Therefore, we can synthesize reflective flares by using the light source's patterns at low exposure. Based on these characteristics of reflective flares, we develop the first reflective flare removal dataset, named BracketFlare. Specifically, we use continuous bracketing to capture the light source's pattern in an underexposed image and select the normally exposed image to create the light source and the natural image. Next, we rotate the light source pattern 180 degrees around the optical center, apply blur and color augmentation, and use it to create synthetic reflective flares. We can add the reflective flare to the normally exposed image to produce paired flare-corrupted and flare-free images. Thanks to the optical center symmetry prior, we further propose an end-to-end pipeline for reflective flare removal, which can easily apply standard neural networks designed for image restoration. Figure 2. Schematic diagram of reflective flare formation [(a) w/o AR coating, (b) with AR coating, (c) description of reflective flare formation]. Reflective flares are generated by multiple reflections inside the lens.
In (c), yellow lines show the generation principle of the aperture-shaped reflective flare, and red lines represent the formation of the bright spot. Since the unfocused reflective flare (yellow line, e.g., (a)) can be effectively alleviated by AR coating, in this work we mainly focus on removing the focused reflective flare (red line, e.g., (b)) from night images. Specifically, to demonstrate the effectiveness of our dataset and pipeline, we train two CNN-based networks [4, 24] and two Transformer-based networks [21, 23] as baselines. These models trained on our dataset effectively suppress the reflective flare and generalize well to real-world scenarios. The contributions of our work are as follows: • We propose a new optical center symmetry prior that uncovers crucial properties of reflective flares on smartphones' cameras, including their locations of occurrence and patterns. • Based on this prior, we explore a new method for constructing a reflective flare removal dataset called BracketFlare. This dataset includes various types of light sources and diverse indoor and outdoor scenes, providing a solid foundation for this task. • We design an end-to-end reflective flare removal pipeline that incorporates the optical center symmetry prior into both training data generation and network structures. Extensive experiments demonstrate the essential role of our dataset and prior in reflective flare removal, which effectively generalizes to different night scenes.
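As a concrete illustration of the BracketFlare-style synthesis described above, the sketch below composes a synthetic reflective flare from a bracketed pair: the underexposed light-source pattern is rotated 180 degrees around the optical center (approximated here by the image center), blurred and color-jittered, and added to the normally exposed frame. The function name, Gaussian-blur parameters, jitter range, and the image-center approximation are illustrative assumptions rather than the paper's released pipeline.

```python
import numpy as np
import cv2

def synthesize_flare_pair(normal_img, underexposed_img, gain=1.0, blur_sigma=3.0):
    """Sketch of pair synthesis under the optical center symmetry prior.

    normal_img, underexposed_img: float32 RGB arrays in [0, 1] with identical shape,
    captured by continuous bracketing; the underexposed frame retains only the
    brightest regions (the light-source pattern).
    Returns (flare_corrupted, flare_free, flare).
    """
    # The reflective flare mirrors the light source about the optical center.
    # Approximating the optical center by the image center, a 180-degree rotation
    # maps the light-source pattern onto the flare's proposal region.
    flare = np.rot90(underexposed_img, k=2).copy()

    # Mild blur and color augmentation to mimic the defocus and tint of real flares.
    flare = cv2.GaussianBlur(flare, ksize=(0, 0), sigmaX=blur_sigma)
    color_jitter = np.random.uniform(0.8, 1.2, size=(1, 1, 3)).astype(np.float32)
    flare = np.clip(gain * flare * color_jitter, 0.0, 1.0)

    # Additive composition yields the paired flare-corrupted / flare-free images.
    flare_corrupted = np.clip(normal_img + flare, 0.0, 1.0)
    return flare_corrupted, normal_img, flare
```

A removal network trained on such pairs can then exploit the prior at test time by attending to the region symmetric to each detected light source.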
Guan_StyleSync_High-Fidelity_Generalized_and_Personalized_Lip_Sync_in_Style-Based_Generator_CVPR_2023 | Abstract Despite recent advances in syncing lip movements with any audio waves, current methods still struggle to balance generation quality and the model's generalization ability. Previous studies either require long-term data for training or produce a similar movement pattern on all subjects with low quality. In this paper, we propose StyleSync, an effective framework that enables high-fidelity lip synchronization. We identify that a style-based generator would sufficiently enable such a charming property in both one-shot and few-shot scenarios. Specifically, we design a mask-guided spatial information encoding module that preserves the details of the given face. The mouth shapes are accurately modified by audio through modulated convolutions. Moreover, our design also enables personalized lip-sync by introducing style space and generator refinement on only limited frames. Thus the identity and talking style of a target person could be accurately preserved. Extensive experiments demonstrate the effectiveness of our method in producing high-fidelity results on a variety of scenes. Resources can be found at https://hangz-nju-cuhk.github.io/projects/StyleSync . | 1. Introduction The problem of generating lip-synced videos according to conditional audio is of great importance to the fields of digital human creation, audio dubbing, film-making, and entertainment. While the rapid development of this area has been witnessed within recent years, most methods [6, 8, 9, 16, 19, 24, 27, 28, 38, 39, 46, 50, 54, 57–60] focus on generating a whole dynamic talking head. Results created under such settings can hardly be blended into an existing scene. Under real-world scenarios like audio dubbing, one crucial need is to seamlessly alter the mouth or facial area while preserving other parts of the scene unchanged, making these methods non-feasible. Previous methods take two different paths for achieving seamless mouth modification. A number of studies [38, 44] pursue realistic results in person-specific settings, which require long-term clips for target modeling. Moreover, they rely on prior 3D facial structural information. The uncertainty and errors accumulated in the 3D fitting procedure would greatly influence their performances. On the other hand, it is desirable to build models that break the data limitation on more generalized scenes. As a result, a few methods [31, 32, 41] design person-agnostic models without relying on 3D or 2D structural priors. Nevertheless, such a setting is extremely challenging. In order to produce high-fidelity lip-synced results on any-length videos, two essential challenges need to be addressed. 1) How to efficiently design a powerful generative backbone network that supports both accurate audio information expression and seamless local alteration. Intuitively, the lip-sync quality naturally contradicts the preservation of the original target frame information [32, 59]. 2) How to effectively leverage as much provided information as possible and involve personalized properties in a generalized model.
Though few-shot meta-learning has been proven effective in generating talking heads [5, 7, 56, 60], how to involve such an ability in a lip-syncing pipeline has not been explored. In this paper, we propose a highly concise and comprehensive framework named StyleSync, which produces high-fidelity lip-sync results in both generalized and personalized scenarios. The key is our simple but lip-sync-oriented modifications to the style-based generator. Though style-based generators [22, 23] have been leveraged in various talking head generation methods [5, 55, 59], their successes are only partially instructive. They aim at producing the whole head, which leads to unstable backgrounds and distortions that are non-acceptable in our scenarios. By revisiting the details of style-based generators, we identify a few simple but essential modifications that make our framework suitable for lip-syncing. Different from the above methods, we adopt a masked mouth modeling protocol [32, 41] and delicately design a Mask-based Spatial Information Encoding strategy, where both the target and reference frames' information is encoded into a noise space [52, 53] of the generator according to different masking schemes. The information on audio dynamics and the high-level reference frame is injected into the style-modulated convolutions in a similar manner to [25, 59]. In this way, our method benefits from the strong generative power of style-based generators and also keeps the advantage of easy implementation and fast training. Moreover, our network modification enables preserving personalized information (e.g., speaking styles and details of the mouth and jaw). We take inspiration from the recent studies of inverting StyleGAN priors [1, 2, 35, 45] and propose a Personalized Optimization scheme. As audio dubbing is normally performed on speaking videos, our model can make use of only a few seconds of the person's information and optimize additional person-specific parameters, including the W+ and the generator. Extensive experiments show that our framework clearly outperforms previous state-of-the-art methods on the one-shot setting by a large margin, and the target-specific optimization further enhances the fidelity of our results. Our contributions can be summarized as follows: 1) We present the StyleSync framework, which adopts simple but effective modifications, including the Mask-based Spatial Information Encoding, to a style-based generator. 2) We propose the Personalized Optimization procedure, which incorporates few-shot person-specific optimization into our framework. 3) Extensive experiments demonstrate that our framework can directly produce accurate and high-fidelity one-shot lip-sync results. Moreover, our proposed personalized optimization further improves the generation quality. Our method outperforms previous methods by a clear margin. |
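To illustrate how audio can steer mouth shapes through modulated convolutions as described above, here is a minimal StyleGAN2-style sketch in which an audio-derived style vector modulates and demodulates the convolution weights. The class name, layer sizes, and the way the audio feature is mapped to per-channel styles are illustrative assumptions, not the authors' released architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioModulatedConv2d(nn.Module):
    """Sketch of a style-modulated convolution driven by an audio embedding."""

    def __init__(self, in_ch, out_ch, audio_dim, kernel_size=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, kernel_size, kernel_size))
        # Maps the audio (plus reference) style vector to per-input-channel scales.
        self.to_style = nn.Linear(audio_dim, in_ch)
        self.padding = kernel_size // 2

    def forward(self, x, audio_style):
        b, in_ch, h, w = x.shape
        out_ch = self.weight.shape[0]

        # Modulate: scale the shared weights per sample by the audio-conditioned style.
        style = self.to_style(audio_style).view(b, 1, in_ch, 1, 1)   # (B, 1, Cin, 1, 1)
        w = self.weight.unsqueeze(0) * style                         # (B, Cout, Cin, k, k)

        # Demodulate: renormalize so activation statistics stay stable.
        demod = torch.rsqrt(w.pow(2).sum(dim=(2, 3, 4), keepdim=True) + 1e-8)
        w = w * demod

        # Grouped-convolution trick: apply a different kernel to each sample.
        x = x.reshape(1, b * in_ch, h, w)
        w = w.reshape(b * out_ch, in_ch, *w.shape[-2:])
        out = F.conv2d(x, w, padding=self.padding, groups=b)
        return out.reshape(b, out_ch, h, w)
```

In a StyleSync-like pipeline, `audio_style` would come from an audio encoder (optionally fused with high-level reference-frame features), while the masked target frame would enter the generator through its spatial noise inputs rather than through the style path.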