title (string, 28-135 chars) | abstract (string, 0-12k chars) | introduction (string, 0-12k chars) |
---|---|---|
Jin_DNF_Decouple_and_Feedback_Network_for_Seeing_in_the_Dark_CVPR_2023 | Abstract The exclusive properties of RAW data have shown great potential for low-light image enhancement. Neverthe-less, the performance is bottlenecked by the inherent lim-itations of existing architectures in both single-stage and multi-stage methods. Mixed mapping across two differ-ent domains, noise-to-clean and RAW-to-sRGB, misleads the single-stage methods due to the domain ambiguity. The multi-stage methods propagate the information merely through the resulting image of each stage, neglecting the abundant features in the lossy image-level dataflow. In this paper, we probe a generalized solution to these bottlenecks and propose a Decouple a NdFeedback framework, abbre-viated as DNF . To mitigate the domain ambiguity, domain-specific subtasks are decoupled, along with fully utilizing the unique properties in RAW and sRGB domains. The fea-ture propagation across stages with a feedback mechanism avoids the information loss caused by image-level dataflow. The two key insights of our method resolve the inherent limitations of RAW data-based low-light image enhance-ment satisfactorily, empowering our method to outperform the previous state-of-the-art method by a large margin with only 19% parameters, achieving 0.97dB and 1.30dB PSNR improvements on the Sony and Fuji subsets of SID. | 1. Introduction Imaging in low-light scenarios attracts increasing atten-tion, especially with the popularity of the night sight mode on smartphones and surveillance systems. However, low-light image enhancement (LLIE) is a challenging task due to the exceptionally low signal-to-noise ratio. Recently, deep learning solutions have been widely studied to tackle this task in diverse data domains, ranging from sRGB-based methods [14,15,21,40] to RAW-based methods [2,7,35,47]. Compared with sRGB data, RAW data with the unpro-*Equal contribution. †C. L. Guo is the corresponding author. EncDec(a) Single-stageIntermediateSupervisionEncDecEncDec(b) Multi-stageFeature FeedbackEncDecDec(c) Our ProposedNoisy RAWClean RAWClean sRGBFigure 1. Thumbnail of different RAW-based low-light image en-hancement methods. (a) Single-stage method. (b) Multi-stage method with intermediate supervision. (c) The proposed DNF. cessed signal is unique in three aspects that benefit LLIE: 1) the signal is linearly correlated with the photon counts in the RAW domain, 2) the noise distributions on RAW im-ages are tractable before the image signal processing (ISP) pipeline [33], and 3) the higher bit depth of RAW format records more distinguishable low-intensity signals. The pioneering work SID [2] proposed a large-scale paired dataset for RAW-based LLIE, igniting a renewed interest in data-driven approaches. As shown in Fig. 1, one line of work [2, 5, 12, 13, 22, 42] focuses on designing single-stage network architectures, and another [4,7,35,47] exploits the multi-stage networks for progressive enhance-ment. Despite the great performance improvement, both architectures are still bottlenecked by inherent limitations. First, current single-stage methods force neural networks to learn a direct mapping from noisy RAW domain to clean sRGB domain. The mixed mapping across two different domains, noisy-to-clean and RAW-to-sRGB, would mis-lead the holistic enhancement process, leading to the do-main ambiguity issue. For example, the tractable noise in RAW images would be mapped to an unpredictable distri-bution during color space transformation. 
Therefore, shift-ing colors and unprocessed noises inevitably appear in the This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 18135 final results. Second, existing multi-stage methods com-pose the pipeline by cascaded subnetworks, each of which is responsible for gradual enhancement based on the out-put image of the last stage. Under their designs with the image-level dataflow, only images are propagated forward across multiple stages, and the later stage only obtains in-formation from the result of the previous stage. Meanwhile, every subnetwork in each stage may incur information loss due to the downsampling operation or the separate objective function [41]. Consequently, the suboptimal performance is bound up with the lossy image-level dataflow . The error is propagated, accumulated, and magnified along with stages, finally failing to reconstruct the texture details. To exploit the potential of RAW images for LLIE, a gen-eralized pipeline is expected, which transcends the above two limitations. Specifically, neural networks ought to uti-lize the aforementioned merits in different domains [7], rather than being confused by the domain ambiguity. Ac-cording to the unique properties of the RAW and sRGB domains, it is essential to decouple the enhancement into domain-specific subtasks. After exploring the linearity and tractable noise in the RAW domain, the color space trans-formation from RAW domain to sRGB domain can be per-formed deliberately without noise interference. Besides, the pipeline cannot hinder communication across stages, in-stead of the image-level dataflow that merely allows a small portion of lossy information to pass through. Due to the di-verse subtasks, the intermediate feature of each level tends to be complementary to each other [20, 46]. Meanwhile, multi-scale features preserve texture and context informa-tion, providing additional guidance for later stages [41]. Hence, the features in different stages are required to prop-agate across dataflow, aggregating the enriched features and sustaining the intact information. The domain-specific de-coupling, together with the feature-level dataflow, facilitates the learnability for better enhancement performance and re-tains the method’s interpretability. Based on these principles, we propose a Decouple a Nd Feedback ( DNF ) framework, with the following designs tailored for RAW-based LLIE. The enhancement process is decoupled into two domain-specific subtasks: denoising in the RAW domain [30, 33, 45, 48] and the color restoration into the sRGB domain [8, 28, 39], as shown in Fig. 1(c). Under the encoder-decoder architecture commonly used in previous works [27], each module in the subnetwork is de-rived from the exclusive properties of each domain: the Channel Independent Denoising (CID) block for RAW de-noising, and the Matrixed Color Correction (MCC) block for color rendering. Besides, instead of using the inaccurate denoised RAW image, we resort to the multi-scale features from the RAW decoder as denoising prior. Then, the fea-tures are flowed into the shared RAW encoder by proposed Gated Fusion Modules (GFM), adaptively distinguishingthe texture details and remaining noise. After the Denois-ing Prior Feedback, signals are further distinguished from noises, yielding intact and enriched features in the RAW do-main. 
Benefiting from the feature-level dataflow, a decoder of MCC blocks can efficiently handle the remaining enhancement and the color transformation to the sRGB domain. The main contributions are summarized as follows: • The domain-specific task decoupling extends the utilization of the unique properties of both the RAW and sRGB domains, avoiding domain ambiguity. • The feature-level dataflow empowered by the Denoising Prior Feedback reduces error accumulation and aggregates complementary features across stages. • Compared with the previous state-of-the-art method, the proposed method achieves a significant improvement with only 19% of the parameters and 63% of the FLOPs, e.g., a 0.97 dB PSNR improvement on the Sony subset of SID and a 1.30 dB PSNR improvement on the Fuji subset of SID. |
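The decouple-and-feedback dataflow described in the DNF entry above can be summarized in code. The following is a minimal PyTorch sketch under stated assumptions: the block definitions (`ConvBlock`, `GatedFusion`), channel sizes, and the single-scale layout are illustrative stand-ins for the paper's CID/MCC blocks and multi-scale subnetworks, not the authors' implementation; only the overall flow — RAW encoding, RAW-domain denoising decoding, feeding the denoising features back into the shared encoder through gated fusion, then decoding to sRGB — follows the description.

```python
# Minimal sketch of a decouple-and-feedback forward pass (illustrative, not the official DNF code).
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Stand-in for the CID / MCC blocks: a small residual conv block."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        return x + self.body(x)

class GatedFusion(nn.Module):
    """Fuse encoder features with fed-back denoising features via a learned gate."""
    def __init__(self, ch):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(2 * ch, ch, 1), nn.Sigmoid())
    def forward(self, enc_feat, fb_feat):
        g = self.gate(torch.cat([enc_feat, fb_feat], dim=1))
        return enc_feat + g * fb_feat        # adaptively admit useful fed-back signal

class DNFSketch(nn.Module):
    def __init__(self, in_ch=4, ch=32, out_ch=3):
        super().__init__()
        self.raw_encoder = nn.Sequential(nn.Conv2d(in_ch, ch, 3, padding=1), ConvBlock(ch))
        self.raw_decoder = ConvBlock(ch)      # RAW-domain denoising subtask
        self.fusion = GatedFusion(ch)
        self.rgb_decoder = nn.Sequential(ConvBlock(ch), nn.Conv2d(ch, out_ch, 3, padding=1))

    def forward(self, noisy_raw):
        # Stage 1: encode and decode in the RAW domain (denoising subtask).
        enc = self.raw_encoder(noisy_raw)
        denoise_feat = self.raw_decoder(enc)
        # Feedback: features (not a lossy intermediate image) flow back into the shared encoder.
        enc2 = self.fusion(self.raw_encoder(noisy_raw), denoise_feat)
        # Stage 2: color restoration / RAW-to-sRGB mapping.
        return self.rgb_decoder(enc2)

if __name__ == "__main__":
    x = torch.randn(1, 4, 64, 64)              # packed 4-channel Bayer RAW
    print(DNFSketch()(x).shape)                # -> torch.Size([1, 3, 64, 64])
```

The point of the sketch is only that the second (sRGB) stage consumes features produced by the first (RAW denoising) stage, rather than a lossy intermediate image.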
Chai_LayoutDM_Transformer-Based_Diffusion_Model_for_Layout_Generation_CVPR_2023 | Abstract Automatic layout generation that can synthesize high-quality layouts is an important tool for graphic design in many applications. Though existing methods based on gen-erative models such as Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs) have pro-gressed, they still leave much room for improving the qual-ity and diversity of the results. Inspired by the recent suc-cess of diffusion models in generating high-quality images, this paper explores their potential for conditional layout generation and proposes Transformer-based Layout Diffu-sion Model (LayoutDM) by instantiating the conditional de-noising diffusion probabilistic model (DDPM) with a purely transformer-based architecture. Instead of using convo-lutional neural networks, a transformer-based conditional Layout Denoiser is proposed to learn the reverse diffu-sion process to generate samples from noised layout data. Benefitting from both transformer and DDPM, our Lay-outDM is of desired properties such as high-quality genera-tion, strong sample diversity, faithful distribution coverage, and stationary training in comparison to GANs and VAEs. Quantitative and qualitative experimental results show that our method outperforms state-of-the-art generative models in terms of quality and diversity. | 1. Introduction Layouts, i.e.the arrangement of the elements to be dis-played in a design, play a critical role in many applications from magazine pages to advertising posters to application interfaces. A good layout guides viewers’ reading order and draws their attention to important information. The se-mantic relationships of elements, the reading order, canvas space allocation and aesthetic principles must be carefully decided in the layout design process. However, manually arranging design elements to meet aesthetic goals and user-specified constraints is time-consuming. To aid the design of graphic layouts, the task of layout generation aims to *Corresponding author.generate design layouts given a set of design components with user-specified attributes. Though meaningful attempts are made [1, 10, 11, 15, 18, 21, 23–25, 33, 44–46], it is still challenging to generate realistic and complex layouts, be-cause many factors need to be taken into consideration, such as design elements, their attributes, and their relationships to other elements. Over the past few years, generative models such as Generative Adversarial Networks (GANs) [9] and Varia-tional Auto-Encoders (V AEs) [20] have gained much at-tention in layout generation, as they have shown a great promise in terms of faithfully learning a given data dis-tribution and sampling from it. GANs model the sam-pling procedure of a complex distribution that is learned in an adversarial manner, while V AEs seek to learn a model that assigns a high likelihood to the observed data sam-ples. Though having shown impressive success in gener-ating high-quality layouts, these models have some limita-tions of their own. GANs are known for potentially unstable training and less distribution coverage due to their adversar-ial training nature [4, 5, 27], so they are inferior to state-of-the-art likelihood-based models (such as V AEs) in terms of diversity [28, 29, 34]. V AEs can capture more diversity and are typically easier to scale and train than GANs, but still fall short in terms of visual sample quality and sampling efficiency [22]. 
Recently, diffusion models such as denoising diffusion probabilistic model (DDPM) [14] have emerged as a pow-erful class of generative models, capable of producing high-quality images comparable to those of GANs. Importantly, they additionally offer desirable properties such as strong sample diversity, faithful distribution coverage, a station-ary training objective, and easy scalability. This implies that diffusion models are well suited for learning models of complex and diverse data, which also motivates us to ex-plore the potential of diffusion-based generative models for graphic layout generation. Though diffusion models have shown splendid perfor-mance in high-fidelity image generation [8,14,35,39,41], it is still a sparsely explored area and provides unique chal-lenges to develop diffusion-based generative models for This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 18349 layout generation. First, diffusion models often use con-volutional neural networks such as U-Net [36] to learn the reverse process to construct desired data samples from the noise. However, a layout is a non-sequential data structure consisting of varying length samples with discrete (classes) and continuous (coordinates) elements simultaneously, in-stead of pixels laid on a regular lattice. Obviously, convo-lutional neural networks are not suitable for layout denois-ing, which prevents diffusion models from being directly applied to layout generation. Second, the placement and sizing of a given element depend not only on its attributes (such as category label) but also on its relationship to other elements. How to incorporate the attributes knowledge and model the elements’ relationship in diffusion models is still an open problem. Since diffusion models are general frame-works, they leave room for adapting the underlying neural architectures to exploit the properties of the data. Inspired by the above insights, by instantiating the con-ditional denoising diffusion probabilistic model (DDPM) with a transformer architecture, this paper proposes Transformer-based Layout Diffusion Model ( i.e., Lay-outDM) for conditional layout generation given a set of el-ements with user-specified attributes. The key idea is to use a purely transformer-based architecture instead of the commonly used convolutional neural networks to learn the reverse diffusion process from noised layout data. Benefit-ting from the self-attention mechanism in transformer lay-ers, LayoutDM can efficiently capture high-level relation-ship information between elements, and predict the noise at each time step from the noised layout data. Moreover, the attention mechanism also helps model another aspect of the data -namely a varying and large number of elements. Fi-nally, to generate layouts with desired attributes, LayoutDM designs a conditional Layout Denoiser (cLayoutDenoiser) based on a transformer architecture to learn the reverse dif-fusion process conditioned on the input attributes. Different from previous transformer models in the context of NLP or video, cLayoutDenoiser omits the positional encoding which indicates the element order in the sequence, as we do not consider the order of designed elements on a canvas in our setting. 
In comparison with current layout generation approaches (such as GANs and VAEs), our LayoutDM offers several desired properties such as high-quality generation, better diversity, faithful distribution coverage, a stationary training objective, and easy scalability. Extensive experiments on five public datasets show that LayoutDM outperforms state-of-the-art methods in different tasks. In summary, our main contributions are as follows: • This paper proposes a novel LayoutDM to generate high-quality design layouts for a set of elements with user-specified attributes. Compared with existing methods, LayoutDM offers desirable properties such as high-quality generation, better diversity, faithful distribution coverage, and stationary training. To the best of our knowledge, LayoutDM is the first attempt to explore the potential of diffusion models for graphic layout generation. • This paper explores a new class of diffusion models by replacing the commonly used U-Net backbone with a transformer, and designs a novel cLayoutDenoiser to reverse the diffusion process from noised layout data and better capture the relationships between elements. • Extensive experiments demonstrate that our method outperforms state-of-the-art models in terms of visual perceptual quality and diversity on five diverse layout datasets. |
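As a concrete illustration of the denoiser described above, here is a minimal PyTorch sketch of a transformer-based conditional layout denoiser. The class name, embedding sizes, and the `[x, y, w, h]` box parameterization are assumptions for illustration, not LayoutDM's actual interface; the sketch only reflects the stated design choices: a transformer encoder with no positional encoding, conditioning on element attributes and the diffusion timestep, and per-element noise prediction.

```python
# Illustrative sketch of a transformer-based conditional layout denoiser (not the official LayoutDM code).
import torch
import torch.nn as nn

class LayoutDenoiserSketch(nn.Module):
    """Predicts the noise added to per-element boxes, conditioned on class labels and timestep.
    No positional encoding is used, since element order on the canvas is not meaningful."""
    def __init__(self, num_classes=10, d_model=128, nhead=4, num_layers=4, max_t=1000):
        super().__init__()
        self.box_in = nn.Linear(4, d_model)                 # [x, y, w, h] per element
        self.cls_emb = nn.Embedding(num_classes, d_model)   # user-specified attribute condition
        self.t_emb = nn.Embedding(max_t, d_model)           # diffusion timestep embedding
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.box_out = nn.Linear(d_model, 4)

    def forward(self, noisy_boxes, classes, t, pad_mask=None):
        # noisy_boxes: (B, N, 4), classes: (B, N), t: (B,), pad_mask: (B, N), True where padded.
        h = self.box_in(noisy_boxes) + self.cls_emb(classes) + self.t_emb(t)[:, None, :]
        h = self.encoder(h, src_key_padding_mask=pad_mask)
        return self.box_out(h)                              # predicted noise, (B, N, 4)

if __name__ == "__main__":
    B, N = 2, 8
    model = LayoutDenoiserSketch()
    eps_hat = model(torch.randn(B, N, 4), torch.randint(0, 10, (B, N)), torch.randint(0, 1000, (B,)))
    print(eps_hat.shape)  # torch.Size([2, 8, 4])
```

Trained with the usual DDPM noise-prediction objective, such a denoiser is then applied iteratively at sampling time to turn Gaussian noise into element boxes consistent with the given attributes.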
Guo_HandNeRF_Neural_Radiance_Fields_for_Animatable_Interacting_Hands_CVPR_2023 | Abstract We propose a novel framework to reconstruct accu-rate appearance and geometry with neural radiance fields (NeRF) for interacting hands, enabling the rendering of photo-realistic images and videos for gesture animation from arbitrary views. Given multi-view images of a single hand or interacting hands, an off-the-shelf skeleton estima-tor is first employed to parameterize the hand poses. Then we design a pose-driven deformation field to establish cor-respondence from those different poses to a shared canon-ical space, where a pose-disentangled NeRF for one hand is optimized. Such unified modeling efficiently complements the geometry and texture cues in rarely-observed areas for both hands. Meanwhile, we further leverage the pose priors to generate pseudo depth maps as guidance for occlusion-aware density learning. Moreover, a neural feature distilla-tion method is proposed to achieve cross-domain alignment for color optimization. We conduct extensive experiments to verify the merits of our proposed HandNeRF and report a series of state-of-the-art results both qualitatively and quantitatively on the large-scale InterHand2.6M dataset. *Corresponding Authors.1. Introduction As a dexterous tool to interact with the physical world and convey rich semantic information, the modeling and re-construction of human hands have attracted substantial at-tention from the research community. Typically, the synthe-sis of realistic hand images or videos with different postures in motion has a wide range of applications, e.g., human-computer interaction, sign language production, virtual and augmented reality technologies such as telepresence, etc. Classic hand-modeling works are mainly built upon pa-rameterized mesh models such as MANO [31]. They fit the geometry of hands to polygon meshes manipulated by shape and pose parameters, and then complete coloring via texture mapping. Despite being widely adopted, those models have the following limitations. On the one hand, high-frequency details are hard to present on low-resolution meshes, hinder-ing the production of photo-realistic images. On the other hand, no special design is developed for interacting hands, which is a non-trivial scenario involving complex postures with self-occlusion. To address the above issues and push the boundary of re-alistic human hand modeling, motivated by the recent suc-cess of NeRF [17] in modeling human body [11, 25, 26], we propose HandNeRF , a novel framework that unifiedly models the geometry and texture of animatable interacting This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 21078 hands with neural radiance fields (NeRF). Specifically, a pose-conditioned deformation field is introduced to warp the sampled observing ray into a canonical space, guided by the prior-based blend skinning transformation and a learn-able error-correction network dealing with non-rigid defor-mations. The different input postures are thereby mapped to a common mean pose, where a canonical NeRF is com-petent at modeling. Thanks to the continuous implicit rep-resentation of NeRF and the multi-view-consistent volume rendering, we are able to produce high-fidelity images of posed hands from arbitrary viewing directions. 
This can not only be applied in the synthesis of free-viewpoint videos, but also help to perform data augmentation for multi-view detection and recognition tasks in computer vision, e.g., sign language recognition. Meanwhile, modeling one single hand is nowhere near enough from an application perspective. The semantics ex-pressed by single-hand movements is quite limited. Many practical scenarios such as sign language conversations re-quire complex interacting postures of both hands. However, handling interaction scenarios is far from trivial and still lacks exploration. Interacting hands exhibit fine-grained texture in small areas, while incompleteness of visible tex-ture permeates the image samples due to self-occlusion and limited viewpoints. To this end, we extend the aforemen-tioned model into a unified framework for both hands. By introducing the hand mapping and ray composition strategy into the pose-deformable NeRF, we make it possible to nat-urally handle interaction contacts and complement the ge-ometry and texture in rarely-observed areas for both hands. Note that with such a design, HandNeRF is compatible with both single hand and two interacting hands. Moreover, to ensure a correct depth relationship when rendering the hand interactions, we re-exploit the hu-man priors and propose a low-cost depth supervision for occlusion-robust density optimization. Such strong con-straint guides the model to extract accurate geometry from sparse-view training samples. Additionally, a neural feature distillation branch is designed to achieve feature alignment between a pre-trained 2D teacher and the 3D color field. By implicitly leveraging spatial contextual cues for color learn-ing, this cross-domain distillation effectively alleviates the artifacts on the target shape and further improves the quality of the learned texture. Our main contributions are summarized as follows: • To the best of our knowledge, we are the first to de-velop a unified framework to model photo-realistic in-teracting hands with deformable neural radiance fields. • We propose several elaborate strategies, including the depth-guided density optimization and the neural fea-ture distillation, in order to effectively address practi-cal challenges in interacting hands training and ensure high-fidelity results for novel view/pose synthesis.• Extensive experiments on the large-scale dataset Inter-Hand2.6M [18] show that our HandNeRF outperforms the baselines both qualitatively and quantitatively. 2. Related Work 2.1. Neural Radiance Fields (NeRF) Recent years have witnessed the rapid development of neural implicit representations [16, 17, 22] in 3D modeling and image synthesis. Compared with classic discrete coun-terparts such as meshes, point clouds, and voxels, neural im-plicit representations model the scene with neural networks, which are spatially continuous and indicate a higher fidelity and flexibility. As the most popular implicit representation in neural rendering, Neural Radiance Fields (NeRF) [17] has exhibited stunning results in various tasks since its first introduction. The original NeRF overfits on one static scene by design, therefore it cannot model time-varying contents. Many efforts have been made to adapt NeRF to dynamic scenes. Some works condition the NeRF with local [36,39] or global scene representations [7, 8, 13] to implicitly pro-vide generalizability for it. 
As the pioneers, D-NeRF [27] and Nerfies [23] use an explicit deformation field to bend straight rays passing through varying targets into a com-mon canonical scene, where a conventional NeRF is op-timized. Such a pipeline is adopted by many follow-up works [24, 33]. These methods provide hard constraints by sharing geometry and appearance information across time, while presenting a relatively harder optimization problem. 2.2. Neural Rendering of Articulated Objects The image rendering of animatable articulated objects, i.e., human bodies, hands, etc., can be regarded as a special case of modeling dynamic scenes. Most early works [15, 31] complete reconstruction using skeleton-based meshes, which generally rely on expensive calibration and massive samples to produce high-quality results. Neural Body [26] signifies a breakthrough in low-cost human rendering by combining NeRF with the mesh-based SMPL model [15]. Neural Actor [14] optimizes the human model in a canonical space along with a volume deforma-tion based on the linear blending skinning (LBS) algorithm of SMPL mesh. Similar LBS-based pipelines are adopted by a lot of works [11,12,25,32,37,40]. Since the LBS defor-mation cannot handle non-rigid transformation, other strate-gies have to be introduced for better rendering quality. Most methods [11,37] regress an extra point-wise offset for sam-ples, while some works like Animatable-NeRF [25] try to jointly optimize NeRF with the LBS weights for deforma-tion. To this end, a forward and a backward skinning field are introduced to save LBS weights for the bidirectional mapping between the posed and canonical shapes. The main limitation here is the poor generalizability of inverse LBS since the weights vary when the pose changes [28]. 21079 22PosePseudo Depth Map Pose Fitting Pretrained TeacherLow-Level Feature Ground-Truth ColorDeformation Field Canonical NeRFRendered Color Rendered Depth Color Feature Pose SamplesRigid TransformError-Correction Network Blend Weights QueryCanonical PointsHand MappingHand Mapping Deformation Field Composition & Volume Rendering�Cr �ZrZr�FrFr CrFigure 2. Overview of HandNeRF. A straight observing ray is warped to a canonical space by the deformation field, depending on the different poses of two hands. Colors and densities of the two sets of samples | are then produced by the shared NeRF. We establish supervision for the integrated colors, color features and depth values, to help reconstruct fine-grained details of both texture and geometry. Another series of methods [5,20] model the human body with separate parts. They decompose an articulated object into several rigid bones, and then perform per-bone predic-tion with separated NeRFs. Although being good at main-taining partial rigidity, those methods struggle to merge dif-ferent parts. They inevitably produce overlap or breakage between bones, and are consequently inferior to the overall modeling approaches in terms of pose generalizability. As a result, LBS-based methods are still the mainstream practice for human modeling. Aside from NeRF, some works [28] adopt neural implicit surfaces such as the signed distance field (SDF) [21, 35, 38] to better model the geometry of hu-man body. Those methods can produce relatively smoother surface predictions, but are not good at rendering appear-ance with high-frequency details, unlike NeRF. Compared with human body, the neural rendering of hu-man hands still lacks exploration. Recently, LISA [4] is pro-posed as the first neural implicit model of textured hands. 
It is focused on the reconstruction of hand geometry us-ing separately-optimized SDF, while the color results are barely satisfactory. Meanwhile, it suffers from similar limi-tations as faced by the aforementioned SDF-based and per-bone optimizing methods. Moreover, it only supports one single hand and cannot be applied in the interacting-hand scenarios that are common in practice. 3. Method Given a set of multi-view RGB videos capturing a short pose sequence of a single hand or two interacting hands, we propose a novel framework named HandNeRF, which isintended to model the dynamic scene, enabling image ren-dering of novel hand poses from arbitrary viewing direc-tions. The overview of HandNeRF is shown in Fig. 2. We disentangle the pose of both hands using a deformation field and optimize a shared canonical hand with NeRF (Sec. 3.2). To ensure the correct depth relationship when compositing two hands, we further establish depth supervision for den-sity optimization (Sec. 3.3). Moreover, to mine useful cues from RGB images for better texture learning, we propose a feature distillation framework compatible with our efficient sampling strategy (Sec. 3.4). We will elaborate our method in the following subsections. 3.1. Preliminary: Neural Radiance Fields We first quickly review the standard NeRF model [17] for a self-contained interpretation. Given a 3D coordi-natexand a viewing direction d, NeRF queries the view-dependent emitted color cand density σof that 3D location using a multi-layer perceptron (MLP). A pixel color ˆC(r) can then be obtained by integrating the colors of Nsamples along a ray rin the viewing direction dusing the differen-tiable discrete volume rendering function [17]: ˆC(r) =NX i=1Ti(1−exp (−σiδi))ci, (1) where δiis the distance between adjacent samples, and Ti= exp( −Pi−1 j=1σjδj). To further obtain the multi-scale representation of a scene, Mip-NeRF [1] extends NeRF to represent the samples along each ray as conical frustums, which can be modeled by multivariate Gaussians (x,Σ), withxas the mean and Σ∈R3×3as the covariance. Thus, 21080 the density and emitted color for a sample can be given by the NeRF MLP: (x,Σ,d)→(c, σ). 3.2. Modeling Pose-Driven Interacting Hands The conventional NeRF is optimized on a static scene and lacks the ability to model hands with different poses. Therefore, for pose-driven hands modeling, we introduce a pose-conditioned deformation field that warps the ob-serving rays passing through both hands to a shared space, where a static NeRF is established for one canonical hand. Canonical hand representation. We model the geometry and texture of hands with a neural radiance field in a pose-independent canonical space. Considering the multi-scale distribution of observers in practice, a cone-tracing archi-tecture similar to Mip-NeRF [1] is adopted. To be specific, two MLPs denoted by FΘσandFΘcoutput the density σ and emitted color cof the queried 3D sample, respectively: σ=FΘσ(IPE ( xcan,Σ)) =FΘσ(fσ), (2) c=FΘc(PE (d),fσ, ℓc), (3) where xcanis the sample coordinate in the canonical space, PE(·)is the sinusoidal positional encoding in [17], IPE(·) is the anti-aliased integrated positional encoding proposed by [1], and ℓcis a per-frame latent code to model subtle tex-ture differences between frames. Definitions of other nota-tions are consistent with those in Sec. 3.1. Deformation field. Given an arbitrary hand pose, the deformation field is intended to learn a mapping from that observation space to a canonical space shared by all posed hands. 
Without any motion priors, it is an ex-tremely under-constrained problem to model the deforma-tion field as a trainable pose-conditioned coordinate trans-formation jointly-optimized with NeRF [13,27]. Therefore, we follow previous works on NeRF for dynamic human body [11, 25, 37, 40] to leverage the parameterized human priors. Specifically, to establish a pose-driven deformation field, HandNeRF follows the settings of MANO [31] with the 16 hand joints, the pose parameters p∈R16×3(axis angles at each joint), the canonical (mean/rest) pose p, and the blend skinning weight wb∈R16. Similar to many clas-sic mesh-based methods, MANO uses linear blend skinning (LBS) to accomplish skeleton-driven deformation for mesh vertices. It models the coordinate transformation between poses as the accumulation of joints’ rigid transformations weighted by the blend weight wb. HandNeRF employs such skeleton-driven transforma-tion as a strong prior for the deformation field. Given a posepand a 3D sample xobfrom the observation space, we obtain the posed MANO mesh and query the near-est mesh facet for xob. The queried blend weight wb= [wb,1, . . . , w b,16]is then calculated by barycentric interpo-lating those of corresponding facet vertices. Thus, a coarse deformation can be expressed byˆxcan=T(xob,p) = (16X j=1wb,jTj)xob, (4) where Tj∈SE(3) is the observation-to-canonical rigid transformation matrix of each joint. Due to the inevitable errors caused by the interpolation and the parameterized model itself, we introduce an ad-ditional pose-conditioned error-correction network denoted byFΘeto model the non-linear deformation as a residual term for ˆxcan. In this way, the deformation field can cap-ture pose-specific details beyond the mesh estimation while preserving the generalizability of the canonical hand. To enable the complementation of geometry and texture for left and right hands in textureless or rarely-observed areas during training, we propose a unified modeling of canonical space for both hands. Since the pose parameters and canonical pose of two hands are defined differently in MANO, we introduce a hand mapping module denoted by ψ(·)in practice to align the left hand with the right one. Formally, the deformation field (illustrated in Fig. 2, right) can be expressed by xcan=ψ(ˆxcan+FΘe(ψ(ˆxcan), ψ(p))). (5) Note that different from previous works [25, 32] relying on per-pose latent code to guide the deformation, we use pose representation instead, ensuring robustness to unseen poses. Sampling and composition strategy. Based on the esti-mated parameterized hand mesh, it is convenient to obtain the coarse scene bounds of both 3D space and 2D image. The 2D image bounds serve as a pseudo label of the fore-ground mask, which guides the pixel (ray) sampling. For a high-resolution training image, we perform ray-tracing on only 1% of the pixels, mainly focusing on the foreground. Since the target hand covers only a small area of a typical image, such an unbalanced pixel sampling strategy ensures that more importance is attached to the texture of the target hands, and also significantly speeds up the training. Meanwhile, the 3D scene bounds help to determine the near and far bounds for a camera ray, along which N3D samples are evenly selected. In order to render two interact-ing hands while the canonical NeRF only models a single hand, we have to perform object composition before vol-ume rendering. 
Instead of introducing an extra composition operator (e.g., density-weighted mean of colors [19]), we argue that for each pixel, sampling twice within both hands' own bounds is more reasonable for non-transparent targets without clipping. Specifically, a straight observing ray is warped with two different solutions produced by the deformation field, depending on the corresponding poses of two hands. The colors and densities of the two sets of deformed samples are produced by the shared canonical NeRF, and then re-sorted based on their depth values. Finally, we integrate over all the samples belonging to the same ray using Eq. (1) and obtain the final pixel color. 3.3. Depth-Guided Density Optimization The conventional NeRF is susceptible to visual overfitting when given insufficient training views [6]. That is, even if the scene geometry (density) fails to be correctly extracted, the rendered images from s |
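The inline equations in the HandNeRF entry above were flattened during text extraction; for reference, the volume rendering equation (Eq. (1)) and the deformation-field equations (Eqs. (4) and (5)) read as follows in standard notation, using the symbols defined in the entry.

```latex
% Eq. (1): volume rendering of a pixel color from N samples along ray r
\hat{C}(\mathbf{r}) \;=\; \sum_{i=1}^{N} T_i \,\bigl(1 - \exp(-\sigma_i \delta_i)\bigr)\, c_i,
\qquad
T_i \;=\; \exp\!\Bigl(-\sum_{j=1}^{i-1} \sigma_j \delta_j\Bigr)

% Eq. (4): coarse observation-to-canonical LBS deformation over the 16 hand joints
\hat{\mathbf{x}}_{\mathrm{can}} \;=\; T(\mathbf{x}_{\mathrm{ob}}, \mathbf{p})
  \;=\; \Bigl(\sum_{j=1}^{16} w_{b,j}\, T_j\Bigr)\, \mathbf{x}_{\mathrm{ob}}

% Eq. (5): error-corrected, hand-mapped canonical coordinate
\mathbf{x}_{\mathrm{can}} \;=\; \psi\!\bigl(\hat{\mathbf{x}}_{\mathrm{can}}
  + F_{\Theta_e}\bigl(\psi(\hat{\mathbf{x}}_{\mathrm{can}}),\, \psi(\mathbf{p})\bigr)\bigr)
```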
Croce_Seasoning_Model_Soups_for_Robustness_to_Adversarial_and_Natural_Distribution_CVPR_2023 | Abstract Adversarial training is widely used to make classifiers robust to a specific threat or adversary, such as ℓp-norm bounded perturbations of a given p-norm. However, ex-isting methods for training classifiers robust to multiple threats require knowledge of all attacks during training and remain vulnerable to unseen distribution shifts. In this work, we describe how to obtain adversarially-robust model soups (i.e., linear combinations of parameters) that smoothly trade-off robustness to different ℓp-norm bounded adversaries. We demonstrate that such soups allow us to control the type and level of robustness, and can achieve robustness to all threats without jointly training on all of them. In some cases, the resulting model soups are more robust to a given ℓp-norm adversary than the constituent model specialized against that same adversary. Finally, we show that adversarially-robust model soups can be a viable tool to adapt to distribution shifts from a few examples. | 1. Introduction Deep networks have achieved great success on several computer vision tasks and have even reached super-human accuracy [19, 31]. However, the outputs of such models are often brittle, and tend to perform poorly on inputs that differ from the distribution of inputs at training time, in a condition known as distribution shift [37]. Adversarial perturbations are a prominent example of this condition: small, even imperceptible, changes to images can alter pre-dictions to cause errors [2, 46]. In addition to adversarial inputs, it has been noted that even natural shifts, e.g. dif-ferent weather conditions, can significantly reduce the ac-curacy of even the best vision models [13, 21, 40]. Such drops in accuracy are undesirable for robust deployment, and so a lot of effort has been invested in correcting them. Adversarial training [34] and its extensions [15, 39, 58] are currently the most effective methods to improve empirical robustness to adversarial attacks. Similarly, data augmen-*Work done during an internship at DeepMind.tation is the basis of several techniques that improve ro-bustness to non-adversarial/natural shifts [3, 11, 22]. While significant progress has been made on defending against a specific, selected type of perturbations (whether adversar-ial or natural), it is still challenging to make a single model robust to a broad set of threats and shifts. For example, a classifier adversarially-trained for robustness to ℓp-norm bounded attacks is still vulnerable to attacks in other ℓq-threat models [29, 47]. Moreover, methods for simultane-ous robustness to multiple attacks require jointly training on all [33, 35] or a subset of them [8]. Most importantly, con-trolling the trade-off between different types of robustness (and nominal performance) remains difficult and requires training several classifiers. Inspired by model soups [52], which interpolate the pa-rameters of a set of vision models to achieve state-of-the-art accuracy on I MAGE NET, we investigate the effects of in-terpolating robust image classifiers. We complement their original recipe for soups by own study of how to pre-train, fine-tune, and combine the parameters of models adversarially-trained against ℓp-norm bounded attacks for different p-norms. To create models for soups, we pre-train a single robust model and fine-tune it to the target threat models (using the efficient technique of [8]). 
We then establish that it is possible to smoothly trade-off robustness to different threat models by moving in the convex hull of the parameters of each robust classifier, while achieving competitive performance with methods that train on multiple p-norm adversaries simultaneously. Unlike alternatives, our soups can uniquely (1) choose the level of robustness to each threat model without any further training and (2) quickly adapt to new unseen attacks or shifts by simply tuning the weighting of the soup. Previous works [24, 30, 54] have shown that adversarial training with ℓp-norm bounded attacks can help to improve performance on natural shifts if carefully tuned. We show that model soups of diverse classifiers, with different types of robustness, offer greater flexibility for finding models that perform well across various shifts, such as ImageNet variants. Furthermore, we show that a limited number of images of the new distribution are sufficient to select the weights of such a model soup. Examining the composition of the best soups brings insights about which features are important for each dataset and shift. Finally, while the capability of selecting a model specific to each image distribution is a main point of our model soups, we also show that it is possible to jointly select a soup for average performance across several ImageNet variants to achieve better accuracy than adversarial and self-supervised baselines [18, 24]. Contributions. In summary, we show that soups • can merge nominal and ℓp-robust models (for various p): efficient fine-tuning from one robust model obtains a set of models with diverse robustness [8] and compatible parameters for creating model soups [52] (Sec. 3), • can control the level of robustness to each threat model and achieve, without more training, competitive performance against multi-norm robustness training (Sec. 4), • are not limited to interpolation, but can find more effective classifiers by extrapolation (Sec. 4.3), • enable adaptation to unseen distribution shifts on only a few examples (Sec. 5). |
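A model soup as used above is simply a weighted combination of the parameters of several classifiers that share an architecture (here, ℓp-robust fine-tunes of one pre-trained model). The sketch below shows the basic operation on PyTorch state dicts; the function name and the handling of non-float buffers are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch: build a model soup by linearly combining parameters of robust classifiers.
from typing import Dict, List
import torch

def make_soup(state_dicts: List[Dict[str, torch.Tensor]],
              weights: List[float]) -> Dict[str, torch.Tensor]:
    """Return a parameter dict that is the weighted combination of the input models.

    With weights on the simplex this stays inside the convex hull of the constituents
    (interpolation); weights outside [0, 1] correspond to the extrapolation discussed above.
    """
    assert len(state_dicts) == len(weights) and len(state_dicts) > 0
    soup = {}
    for name, ref in state_dicts[0].items():
        if torch.is_floating_point(ref):
            soup[name] = sum(w * sd[name] for w, sd in zip(weights, state_dicts))
        else:
            soup[name] = ref.clone()   # e.g. integer buffers such as BatchNorm counters
    return soup

# Usage sketch: mix, say, an L-inf-robust and an L2-robust fine-tune of the same pre-trained
# model, then sweep the weighting on a few held-out images of a new distribution:
# model.load_state_dict(make_soup([sd_linf, sd_l2], [0.6, 0.4]))
```

Because no further training is involved, the weighting can be re-tuned per distribution from only a handful of examples, which is what enables the adaptation to unseen shifts described above.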
Byun_Introducing_Competition_To_Boost_the_Transferability_of_Targeted_Adversarial_Examples_CVPR_2023 | Abstract Deep neural networks are widely known to be suscep-tible to adversarial examples, which can cause incorrect predictions through subtle input modifications. These ad-versarial examples tend to be transferable between mod-els, but targeted attacks still have lower attack success rates due to significant variations in decision boundaries. To enhance the transferability of targeted adversarial ex-amples, we propose introducing competition into the op-timization process. Our idea is to craft adversarial per-turbations in the presence of two new types of competi-tor noises: adversarial perturbations towards different tar-get classes and friendly perturbations towards the correct class. With these competitors, even if an adversarial ex-ample deceives a network to extract specific features lead-ing to the target class, this disturbance can be suppressed by other competitors. Therefore, within this competition, adversarial examples should take different attack strategies by leveraging more diverse features to overwhelm their in-terference, leading to improving their transferability to dif-ferent models. Considering the computational complexity, we efficiently simulate various interference from these two types of competitors in feature space by randomly mixing up stored clean features in the model inference and named this method Clean Feature Mixup (CFM). Our extensive exper-imental results on the ImageNet-Compatible and CIFAR-10 datasets show that the proposed method outperforms the ex-isting baselines with a clear margin. Our code is available athttps://github.com/dreamflake/CFM . | 1. Introduction Although deep neural networks have excelled in various computer vision tasks such as image classification [10, 12] and object detection [19, 22], they are vulnerable to mali-ciously crafted inputs called adversarial examples [8, 37]. These adversarial examples are generated by optimizing im-perceptible perturbations to mislead a model to incorrectpredictions. Intriguingly, these adversarial examples tend to be transferable between models, and this unique char-acteristic allows adversaries to attempt adversarial attacks on a black-box model without knowing its interior. How-ever, targeted adversarial attacks, which have a specific tar-get class, still have lower attack success rates due to sig-nificant differences in decision boundaries [17, 37]. Never-theless, targeted attacks can pose more serious risks as they can deceive models into predicting a specific harmful target class. Therefore, preemptive research on developing a novel transfer-based attack is crucial because it can assist service providers in preparing their models for these forthcoming risks and evaluating their models’ robustness. In this work, we aim to further improve the transferabil-ity of targeted adversarial examples by introducing compe-tition into their optimization. Our approach involves craft-ing adversarial perturbations in the presence of two new types of noises: (a) adversarial perturbations towards dif-ferent target classes ; and (b) friendly perturbations towards the correct class . With these competitors and a source model, even if an adversarial example deceives the source model into extracting certain features that lead to the tar-get class, this disturbance may be suppressed by interfer-ence from competitors. 
Consequently, adversarial pertur-bations should take various attack strategies, leveraging a wider range of features to overcome interference, which en-hances their transferability to different models. In the fol-lowing, we will further discuss why employing a diverse set of features for attack can boost the transferability of targeted adversarial examples. In image classification, deep learning models extract a variety of features from images across multiple layers and comprehensively evaluate them to calculate prediction probabilities for each class. As numerous features can con-tribute to the final output, even when two images are recog-nized as the same class, the contributing features can signifi-cantly differ. Taking this into account, optimizing adversar-ial examples to utilize as many distinct feature combinations as possible would effectively enhance their transferability. Conversely, in existing frameworks, an adversarial ex-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 24648 Figure 1. Overview of the Clean Feature Mixup (CFM) method. ample may be optimized to intensely distract a limited num-ber of features identified in the early stages. However, the target model, unlike the source model, might be insensi-tive to such feature distractions, leading to the failure of transfer-based attacks. Meanwhile, if we model the competitors as noises that should also be optimized, they require additional backward passes, substantially increasing the computational burden. To address this challenge and enhance interference diver-sity, we propose Clean Feature Mixup (CFM), a method that efficiently mimics the behaviors of the two types of pertur-bations in feature space by mixing stored clean features of the images within a batch. A detailed description of their similarities can be seen in Section 3.3. The overview of the proposed CFM method is illustrated in Fig. 1. Specifically, this method converts a pre-trained source model by attaching our specially designed CFM modules to convolution and fully-connected layers. After that, the attached CFM modules randomly mix the features of the clean images (i.e., clean features ) with current in-put features at each inference. This process can effectively mitigate the overfitting of adversarial examples in their op-timization by preventing them from focusing on particular features in their targeted attacks on the source model. Un-like many existing techniques [18, 30, 31] that significantly increase the computational cost by multiplying the required number of forward/backward passes, CFM adds just one ad-ditional forward pass for storing clean features and requires a marginal amount of computation for feature mixup at each inference. Our contributions can be summarized as follows: • We propose the idea of introducing competition into the optimization of targeted adversarial examples with two types of competitor noises to encourage the uti-lization of various features in their attacks, ultimately boosting their transferability. • Motivated by the above idea, we propose the Clean Feature Mixup (CFM) method to improve the trans-ferability of adversarial examples. This method effi-ciently simulates the competitor noises by randomly mixing up stored clean features of the images in a batch. 
• We performed extensive experiments with 20 models, including four defensive models and five Transformer-based classifiers. Our experimental results on the ImageNet-Compatible and CIFAR-10 datasets demonstrate that CFM outperforms state-of-the-art baselines. |
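To make the CFM idea concrete, the following is a hedged sketch of a feature mixup wrapper. The module name, the mixing probability, the per-sample mixing ratio, and the assumption of 4-D convolutional feature maps are illustrative choices rather than the authors' CFM module; it only mirrors the described behavior: store clean features on an initial forward pass, then randomly mix shuffled clean features of the batch into the current features while the targeted perturbation is optimized.

```python
# Illustrative sketch of a Clean Feature Mixup (CFM) style wrapper (not the authors' code).
import torch
import torch.nn as nn

class CleanFeatureMixup(nn.Module):
    """Wraps a layer; stores its clean-image output once, then randomly mixes those stored
    features into the current features during the optimization of adversarial examples."""
    def __init__(self, layer: nn.Module, mix_prob: float = 0.1, alpha_max: float = 0.75):
        super().__init__()
        self.layer = layer
        self.mix_prob = mix_prob
        self.alpha_max = alpha_max
        self.clean_feat = None
        self.store_mode = True            # first pass: store clean features

    def forward(self, x):
        feat = self.layer(x)
        if self.store_mode:
            self.clean_feat = feat.detach()
            return feat
        if self.clean_feat is not None and torch.rand(()) < self.mix_prob:
            # Shuffle clean features across the batch and mix them in with a random ratio,
            # imitating interference from competitor perturbations in feature space.
            perm = torch.randperm(feat.size(0), device=feat.device)
            alpha = torch.rand(feat.size(0), 1, 1, 1, device=feat.device) * self.alpha_max
            feat = (1 - alpha) * feat + alpha * self.clean_feat[perm]
        return feat

# Usage sketch: wrap selected conv layers of the source model, run one clean forward pass with
# store_mode=True, switch to store_mode=False, then optimize the targeted perturbation as usual.
```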
Dai_SLOPER4D_A_Scene-Aware_Dataset_for_Global_4D_Human_Pose_Estimation_CVPR_2023 | Abstract We present SLOPER4D, a novel scene-aware dataset collected in large urban environments to facilitate the re-search of global human pose estimation (GHPE) with human-scene interaction in the wild. Employing a head-mounted device integrated with a LiDAR and camera, we record 12 human subjects’ activities over 10 diverse urban scenes from an egocentric view. Frame-wise annotations for 2D key points, 3D pose parameters, and global transla-tions are provided, together with reconstructed scene point clouds. To obtain accurate 3D ground truth in such large dynamic scenes, we propose a joint optimization method to fit local SMPL meshes to the scene and fine-tune the camera calibration during dynamic motions frame by frame, result-ing in plausible and scene-natural 3D human poses. Even-*Corresponding author.tually, SLOPER4D consists of 15 sequences of human mo-tions, each of which has a trajectory length of more than 200 meters (up to 1,300 meters) and covers an area of more than 200 m2(up to 30,000 m2), including more than 100k LiDAR frames, 300k video frames, and 500k IMU-based motion frames. With SLOPER4D, we provide a detailed and thorough analysis of two critical tasks, including camera-based 3D HPE and LiDAR-based 3D HPE in urban envi-ronments, and benchmark a new task, GHPE. The in-depth analysis demonstrates SLOPER4D poses significant chal-lenges to existing methods and produces great research op-portunities. The dataset and code are released at http: //www.lidarhumanmotion.net/sloper4d/ . | 1. Introduction Urban-level human motion capture is attracting more and more attention, which targets acquiring consecutive fine-grained human pose representations, such as 3D skeletons This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 682 and parametric mesh models, with accurate global locations in the physical world. It is essential for human action recog-nition, social-behavioral analysis, and scene perception and further benefits many downstream applications, including Augmented/Virtual Reality, simulation, autonomous driv-ing, smart city, sociology, etc. However, capturing extra large-scale dynamic scenes and annotating detailed 3D rep-resentations for humans with diverse poses is not trivial. Over the past decades, a large number of datasets and benchmarks have been proposed and have greatly promoted the research in 3D human pose estimation (HPE). They can be divided into two main categories according to the cap-ture environment. The first class usually leverages marker-based systems [16, 33, 45], cameras [14, 59, 60], or RGB-D sensors [13,64] to capture human local poses in constrained environments. However, the optical system is sensitive to light and lacks depth information, making it unstable in out-door scenes and difficult to provide global translations, and the RGB-D sensor has limited range and could not work outdoors. The second class [39, 49] attempts to take advan-tage of body-mounted IMUs to capture occlusion-free 3D poses in free environments. However, IMUs suffer from severe drift for long-term capturing, resulting in misalign-ments with the human body. 
Then, some methods exploit additional sensors, such as RGB camera [17], RGB-D cam-era [46, 57, 67], or LiDAR [27] to alleviate the problem and make obvious improvement. However, they all focus on HPE without considering the scene constraints, which are limited in reconstructing human-scene integrated digital ur-ban and human-scene natural interactions. To capture human pose and related static scenes si-multaneously, some studies use wearable IMUs and body-mounted camera [12] or LiDAR [5] to register the human in large real scenarios and they are promising for captur-ing human-involved real-world scenes. However, human pose and scene are decoupled in these works due to the ego view, where auxiliary visual sensors are used for collecting the scene data while IMUs are utilized for obtaining the 3D pose. Different from them, we propose a novel setting for human-scene capture with wearable IMUs and global-view LiDAR and camera, which can provide multi-modal data for more accurate 3D HPE. In this paper, we propose a huge scene-aware dataset for sequential human pose estimation in urban environments, named SLOPER4D. To our knowledge, it is the first urban-level 3D HPE dataset with multi-modal capture data, in-cluding calibrated and synchronized IMU measurements, LiDAR point clouds, and images for each subject. More-over, the dataset provides rich annotations, including 3D poses, SMPL [32] models and locations in the world co-ordinate system, 2D poses and bounding boxes in the im-age coordinate system, and reconstructed 3D scene mesh. In particular, we propose a joint optimization method forobtaining accurate and natural human motion representa-tions by utilizing multi-sensor complementation and scene constraints, which also benefit global localization and cam-era calibration in the dynamic acquisition process. Fur-thermore, SLOPER4D consists of over 15 sequences in 10 scenes, including library, commercial street, coastal run-way, football field, landscape garden, etc., with up to 30k m2area size and 200∼1,000mtrajectory length for each sequence. By providing multi-modal capture data and di-verse human-scene-related annotations, SLOPER4D opens a new door to benchmark urban-level HPE. We conduct extensive experiments to show the superior-ity of our joint optimization approach for acquiring high-quality 3D pose annotations. Additionally, based on our proposed new dataset, we benchmark two critical tasks: camera-based 3D HPE and LiDAR-based 3D HPE, as well as provide benchmarks for GHPE. Our contributions are summarized as follows: • We propose the first large-scale urban-level human pose dataset with multi-modal capture data and rich human-scene annotations. • We propose an effective joint optimization method for acquiring accurate human motions in both local and global by integrating LiDAR SLAM results, IMU poses, and scene constraints. • We benchmark two HPE tasks as well as a GHPE task on SLOPER4D, demonstrating its potential of promot-ing urban-level 3D HPE research. |
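The joint optimization described above fits SMPL pose and translation per frame against several complementary signals. Below is a speculative sketch of what such a per-frame objective could look like; the term weights, the z-up ground assumption, and the particular loss terms (LiDAR point-to-surface distance, IMU pose prior, temporal smoothness, ground non-penetration) are illustrative assumptions rather than the SLOPER4D formulation.

```python
# Illustrative sketch of a scene-aware per-frame fitting objective (not the SLOPER4D code).
import torch

def fitting_loss(verts, joints, prev_joints, body_points, ground_z,
                 pose, imu_pose, w=(1.0, 0.1, 0.1, 1.0)):
    """verts/joints: SMPL outputs for the current frame; body_points: LiDAR points on the body;
    ground_z: local ground height from the reconstructed scene (z-up assumed);
    imu_pose: the IMU-based pose prior, which is locally accurate but drifts over time."""
    # 1) Data term: LiDAR points on the subject should lie close to the body surface.
    d = torch.cdist(body_points, verts)                  # (P, V) pairwise distances
    loss_lidar = d.min(dim=1).values.mean()
    # 2) Prior term: stay close to the IMU pose estimate.
    loss_imu = ((pose - imu_pose) ** 2).mean()
    # 3) Temporal smoothness of joints across consecutive frames.
    loss_smooth = ((joints - prev_joints) ** 2).mean()
    # 4) Scene constraint: the body should not penetrate the reconstructed ground.
    loss_scene = torch.relu(ground_z - verts[..., 2]).mean()
    return w[0] * loss_lidar + w[1] * loss_imu + w[2] * loss_smooth + w[3] * loss_scene
```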
Chen_RankMix_Data_Augmentation_for_Weakly_Supervised_Learning_of_Classifying_Whole_CVPR_2023 | Abstract Whole Slide Images (WSIs) are usually gigapixel in size and lack pixel-level annotations. The WSI datasets are also imbalanced in categories. These unique characteristics, significantly different from the ones in natural images, pose the challenge of classifying WSI images as a kind of weakly supervise learning problems. In this study, we propose, RankMix, a data augmentation method of mixing ranked features in a pair of WSIs. RankMix introduces the con-cepts of pseudo labeling and ranking in order to extract key WSI regions in contributing to the WSI classification task. A two-stage training is further proposed to boost stable train-ing and model performance. To our knowledge, the study of weakly supervised learn-ing from the perspective of data augmentation to deal with the WSI classification problem that suffers from lack of training data and imbalance of categories is relatively un-explored. | 1. Introduction 1.1. Background Natural image processing tasks, including image classi-fication and object detection, have been widely solved using deep learning models and obtain astounding results. In this study, we investigate how medical imaging can also ben-efit from deep learning with focus on whole slide images (WSIs). WSI scanning is commonly used in disease di-agnosis [12, 34]. The demand of computer aided assess-ment makes deep learning widely adopted in this field [1]. Because WSI is a gigapixel image and lacks pixel-level annotations, multiple instance learning (MIL) [31, 32] is an exact solution to this weakly supervised learning prob-lem [2]. In MIL, a WSI is often cropped into tens of thou-sands of patches and then an aggregator will make a predic-tion based on integrating these patches. Most recent works [5, 9, 10, 25, 28, 30, 37, 38, 44] focus on aggregator architec-ture design and improving feature extraction of the patches.However, because WSI is difficult to collect and share, we explore the possibility of data augmentation in WSI classi-fication to increase training samples and mitigate the prob-lem of class imbalance [19, 26, 33, 48] that WSI may have due to rare diseases (versus common diseases). In addition, patch feature extractor is often trained by self-supervised learning [24,28] or comes from pre-trained models (such as pre-trained in ImageNet [30, 37] or WSI datasets [5, 11]). Therefore, for universality and portability, our work will fo-cus on studying the feature domain instead of pixel domain of patches. Traditionally, mixup methods [18, 48] are employed to mix photos of the same aspect ratio or vectors of the same dimension. Nevertheless, this is not the case for WSI as WSI intrinsically has a different number of patches, ranging from hundreds to hundred of thousands. This is because the generation of a WSI, caused by the tissue placement and the tissue size, will make WSIs of varying aspect ratios and sizes. In addition, because a WSI tends to be a very large size (equivalent to tens of thousands of 224×224patches or even larger) and the background often occupies a large part, it is better to use data pre-processing to remove unimpor-tant background parts in order to save computation time and avoid possible unnecessary information [21, 39] (such as noise and artifacts). That is, the pre-processing step of cropping a WSI into patches, as shown in Fig. 1, will re-move most of the background patches. 
The resultant WSI patches, however, will lose their absolute positions. 1.2. Challenges The above characteristics of WSI lead to the difficulty of directly employing the traditional mixup methods. We cannot simply resize two WSIs to have the same size for the sake of performing mixup. This is because all WSIs are scanned at the same magnification ( e.g.,20x), the phys-ical meanings will be lost if they are rescaled casually. More importantly, due to loss of absolute positions among the patches after removing the WSI background, resizing patches actually do not solve this problem. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 23936 Another commonly used techniques for data augmenta-tion are based on cutting, including Cutout [14] and Cut-mix [47]. The core of cutting aims at obtaining or removing parts of an images. However, the key difference between WSI images and natural images is that the main objects can occupy a major part of a natural image, but it is not the case in WSIs. For example, the tumor slide of Camelyon16 dataset [16] only has a small area of tumor (approximately <10% of tissue area). Therefore, if a random cut is made, there is a non-negligible probability that the tumor slide will not contain any tumor patches. To address these challenges, we propose a novel mixup method, called RankMix, for augmentation of whole slide images with diverse sizes and imbalanced categories. RankMix introduces the concepts of instance-level pseudo labeling and ranking in order to obtain meaningful WSI regions that can contribute to the WSI classification task. In order to further enhance model performance, two-stage training is proposed in that the first step is to train a stable score function by general MIL, and then the score function and mixup technique are jointly used in the second stage of training. 1.3. Our Contributions Our contributions are summarized as follows: • To our knowledge, MIL currently focuses on improv-ing feature extraction and aggregator-based classifica-tion. It is relatively ignored in investigating weakly supervised learning from the perspective of data aug-mentation. Our proposed method can be applied to WSI classification problems and can be easily incor-porated to existing MIL methods. • In contrast to the existing mixup methods that aim at mixing natural images of the same size, our method can mix images ( e.g., WSIs) of different sizes. • Because of rare diseases and the difficulty of medical image collection, the WSI classification problem is apt to suffer from lack of training data and imbalance of categories. Our proposed method is demonstrated to be feasible in addressing these challenges. |
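As a rough illustration of the ranking-and-mixing idea described above, the snippet below mixes two WSI feature bags of different sizes after ranking patches with an instance score function. The score function, the Beta-distributed mixing coefficient, and the top-k truncation are assumptions made for this sketch; the paper's full recipe additionally relies on pseudo labeling and a two-stage training schedule.

```python
# Illustrative sketch of ranking-based feature mixing between two WSI bags of
# different sizes. `score_fn` is a hypothetical instance scorer (e.g. trained
# in a first MIL stage); labels are one-hot slide-level vectors.
import numpy as np
import torch

def rankmix(feats_a, feats_b, label_a, label_b, score_fn, alpha=1.0):
    """feats_a: (N_a, D), feats_b: (N_b, D); label_a, label_b: (C,)."""
    k = min(feats_a.shape[0], feats_b.shape[0])
    # keep the k patches of each slide that the scorer ranks as most informative
    idx_a = torch.argsort(score_fn(feats_a), descending=True)[:k]
    idx_b = torch.argsort(score_fn(feats_b), descending=True)[:k]
    lam = float(np.random.beta(alpha, alpha))
    mixed_feats = lam * feats_a[idx_a] + (1.0 - lam) * feats_b[idx_b]
    mixed_label = lam * label_a + (1.0 - lam) * label_b
    return mixed_feats, mixed_label
```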
Edstedt_DKM_Dense_Kernelized_Feature_Matching_for_Geometry_Estimation_CVPR_2023 | Abstract Feature matching is a challenging computer vision task that involves finding correspondences between two images of a 3D scene. In this paper we consider the dense approach instead of the more common sparse paradigm, thus striv-ing to find all correspondences. Perhaps counter-intuitively, dense methods have previously shown inferior performance to their sparse and semi-sparse counterparts for estimation of two-view geometry. This changes with our novel dense method, which outperforms both dense and sparse methods on geometry estimation. The novelty is threefold: First, we propose a kernel regression global matcher. Secondly, we propose warp refinement through stacked feature maps and depthwise convolution kernels. Thirdly, we propose learn-ing dense confidence through consistent depth and a bal-anced sampling approach for dense confidence maps. Through extensive experiments we confirm that our pro-posed dense method, DenseKernelized Feature Matching, sets a new state-of-the-art on multiple geometry estimation benchmarks. In particular, we achieve an improvement on MegaDepth-1500 of +4.9 and +8.9 AUC @5◦compared to the best previous sparse method and dense method respec-tively. Our code is provided at the following repository: https://github.com/Parskatt/DKM . | 1. Introduction Two-view geometry estimation is a classical computer vision problem with numerous important applications, in-cluding 3D reconstruction [38], SLAM [30], and visual re-localisation [27]. The task can roughly be divided into two steps. First, a set of matching pixel pairs between the im-ages is produced. Then, using the matched pairs, two-view geometry, e.g., relative pose, is estimated. In this paper, we focus on the first step, i.e., feature matching. This task is challenging, as image pairs may exhibit extreme variations in illumination [1], viewpoint [22], time of day [37], and even season [46]. This stands in contrast to small baseline stereo and optical flow tasks, where the changes in view-Previous SotA warp ⊙ certainty DKM warp ⊙ certainty A BFigure 1. Qualitative comparison. We compare our proposed approach DKM with the previous SotA method PDC-Net+ [48] on Milan Cathedral. Top row, image AandB. Middle row and bottom row, forward and reverse warps for PDC-Net+ and DKM weighted by certainty. DKM provides both superior match accu-racy and certainty estimation compared to previous methods. point and illumination are typically small. Traditionally, feature matching has been performed by sparse keypoint and descriptor extraction, followed by matching [26,36]. The main issue with this approach is that accurate localization of reliable and repeatable keypoints is difficult in challenging scenes. This leads to errors in matching and estimation [13,23]. To tackle this issue, semi-sparse or detector-free methods such as LoFTR [41] and Patch2Pix [53] were introduced. These methods do not de-tect keypoints directly but rather perform global matching at a coarse level, followed by mutual nearest neighbour extrac-tion and sparse match refinement. While those methods de-grade less in low-texture scenes, they are still limited by the fact that the sparse matches are produced at a coarse scale, leading to problems with, e.g., repeatability due to grid ar-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. 
Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 17765 tifacts [17]. By instead extracting allmatches between the views, i.e.,dense feature matching, we face no such issues. Furthermore, dense warps provide affine matches for free, which yield smaller minimal problems for subsequent esti-mation [3, 4, 15]. While previous dense approaches [39, 47] have achieved good results, they have however failed to achieve performance rivaling that of sparse or semi-sparse methods on geometry estimation. In this work, we propose a novel dense matching method that outperforms both dense and sparse methods in homog-raphy and two-view relative pose estimation. We achieve this by proposing a substantially improved model architec-ture, including both the global matching and warp refine-ment stage, and by a simple but strong approach to dense certainty estimation and a balanced dense warp sampling mechanism. We compare qualitatively our method with the previous best dense method in Figure 1. Ourcontributions are as follows. Global Matcher: We propose a kernelized global matcher and embedding de-coder. This results in robust coarse matches. We describe our approach in Section 3.2 and ablate the performance gains in Table 5. Warp Refiners: We propose warp re-finement through large depthwise separable kernels using stacked feature maps as well as local correlation as input. This gives our method superior precision and is described in detail in Section 3.3 with corresponding performance im-pact ablated in Table 6. Certainty and Sampling: We propose a simple method to predict dense certainty from consistent depth and propose a balanced sampling approach for dense matches. We describe our certainty and sampling approach in more detail in Section 3.4 and ablate the per-formance gains in Table 7. State-of-the-Art: Our exten-sive experiments in Section 4 show that our method sig-nificantly improves on the state-of-the-art. In particular, we improve estimation results compared to the best previ-ous dense method by +8.9 AUC @5◦on MegaDepth-1500. These results pave the way for dense matching based 3D reconstruction. |
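To make the idea of a kernelized global matcher concrete, here is a simplified Gaussian-process-style regressor that predicts, for every coarse location in image A, the matching coordinate in image B from deep feature similarities. This is an illustrative sketch under our own assumptions (RBF kernel, raw 2D coordinates as regression targets); DKM itself regresses a coordinate embedding that is subsequently decoded, so the released implementation differs.

```python
# Simplified kernel-regression ("GP-style") global matcher: for every coarse
# location of image A, the matching normalized coordinate in image B is
# regressed from feature similarities. RBF kernel and raw 2D targets are our
# simplifications; the actual method regresses a coordinate embedding.
import torch

def rbf_kernel(x, y, gamma=10.0):
    # x: (N, D), y: (M, D) L2-normalized coarse features
    return torch.exp(-gamma * torch.cdist(x, y) ** 2)

def global_match(feats_a, feats_b, coords_b, noise=1e-2):
    """feats_a: (N, D), feats_b: (M, D), coords_b: (M, 2) in [-1, 1]."""
    k_ab = rbf_kernel(feats_a, feats_b)                      # (N, M)
    k_bb = rbf_kernel(feats_b, feats_b)                      # (M, M)
    eye = noise * torch.eye(k_bb.shape[0], device=k_bb.device)
    # GP posterior mean: K_AB (K_BB + noise * I)^-1 coords_B
    return k_ab @ torch.linalg.solve(k_bb + eye, coords_b)   # (N, 2) coarse warp
```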
Cao_SVGformer_Representation_Learning_for_Continuous_Vector_Graphics_Using_Transformers_CVPR_2023 | Abstract Advances in representation learning have led to great success in understanding and generating data in various do-mains. However, in modeling vector graphics data, the pure data-driven approach often yields unsatisfactory results in downstream tasks as existing deep learning methods often re-quire the quantization of SVG parameters and cannot exploit the geometric properties explicitly. In this paper, we propose a transformer-based representation learning model (SVG-former) that directly operates on continuous input values and manipulates the geometric information of SVG to encode out-line details and long-distance dependencies. SVGfomer can be used for various downstream tasks: reconstruction, clas-sification, interpolation, retrieval, etc. We have conducted extensive experiments on vector font and icon datasets to show that our model can capture high-quality representation information and outperform the previous state-of-the-art on downstream tasks significantly. | 1. Introduction In the last few years, there have been tremendous ad-vances in image representation learning [22, 26, 37], and these representations lead to great success in many down-stream tasks such as image reconstruction [40], image clas-sification [14, 16], etc. However, most previous works have focused on analyzing structured bitmap format, which uses a grid at the pixel level to represent textures and colors [18]. Therefore, there is still considerable room for improving the representation of detailed attributes for vector objects [25]. In contrast to bitmap image format, scalable vector graph-ics (SVG) format for vector images is widely used in real-world applications due to its excellent scaling capabili-ties [12, 20, 33]. SVGs usually contain a mixture of smooth curves and sharp features, such as lines, circles, polygons, and splines, represented as parametric curves in digital for-mat. This allows us to treat SVGs as sequential data andlearn their compact and scale-invariant representation using neural network models. However, how to automatically learn an effective representation of vector images is still non-trivial as it requires a model to understand the high-level perception of the 2D spatial pattern as well as geometric information to support the high-quality outcome in downstream tasks. Transformer-based models [30] have been proven to achieve start-of-the-art results when dealing with sequen-tial data in various problems including Natural Language Processing (NLP) [34] tasks and time-series forecasting [41] problems. We argue that representation learning for SVG is different from these tasks for two reasons: Firstly, most if not all NLP tasks need a fixed token space to embed the discrete tokens, while SVGs are parameterized by continuous values which make the token space infinite in the previous setting; Secondly, the number of commands and the correlation be-tween the commands vary greatly from one SVG to another, which is hard to handle by a pure data-driven attention mech-anism. For example, the font data may vary across different families while sharing similar styles within the same font family. Such property is encoded in the sequential data ex-plicitly and can provide geometric dependency guidance for modeling the SVG if used appropriately. 
To tackle the above challenges and fully utilize the geometric information from SVG data, this work introduces a novel model architecture named SVGformer, which can take continuous sequential input into a transformer-based model to handle the complex dependencies over SVG commands and yield a robust representation for the input. Specifically, we first extract the SVG segment information via medial axis transform (MAT) [2] to convey the geometric information into the learned representation. Then we introduce a 1D convolutional embedding layer to preserve the original continuous format of SVG data, as opposed to previous vector representation learning models [8] which need a pre-processing step to discretize the input into limited discrete tokens. After that, we inject the structural relationship between commands into the proposed geometric self-attention module in the encoder layer to get the hidden representation of each SVG.

Table 1. Comparison of our SVGformer model and recent models for vector representation learning, where "Seq" denotes sequence and "Img" denotes image.
Method | Font-MF [3] | SVG-VAE [20] | Im2Vec [25] | DeepVecFont [33] | DeepSVG [8] | LayoutTrans [12] | SVGformer (ours)
Encoding Modality | Seq | Img | Img | Img&Seq | Seq | Seq | Seq
Decoding Modality | Seq | Img&Seq | Seq | Img&Seq | Seq | Seq | Seq
Model Architecture | GP-LVM | VAE | RNN | LSTM+CNN | Transformer | Transformer | Transformer
Sequence Format | Keypoints | Commands | Keypoints | Commands | Commands | Commands | Commands
Geometric Information | - | - | - | - | - | - | Segment

The representation learned with the reconstruction pretext task can be used in various downstream tasks including classification, retrieval, and interpolation. To the best of our knowledge, our proposed model is the first to explicitly consider vector geometric information as well as directly deal with the raw input of SVG format in an end-to-end encoder-decoder fashion. The main contributions of this paper include: • SVGformer captures both geometric information and curve semantic information with the geometric self-attention module, which synergizes the strengths of MAT and the transformer-based neural network to handle long-term sequence relationships in SVG. • SVGformer can take the original continuous format as input, which effectively reduces the token space in the embedding layer. Thereby the model is general for all continuous vector graphics without extra quantization. • SVGformer achieves new state-of-the-art performance on four downstream tasks on both the Font and Icon datasets. For example, it outperforms prior art by 51.2% on classification and 42.5% on retrieval tasks of the font dataset. |
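The following sketch illustrates the two ingredients described above, a 1D convolutional embedding over continuous command arguments and a self-attention layer whose logits receive an additive geometric bias. The tensor shapes and the generic bias matrix (standing in for the MAT-derived segment relations) are assumptions; this is not the released SVGformer code.

```python
# Sketch: continuous SVG command arguments are embedded with a 1D convolution,
# and the attention logits are offset by a per-pair geometric bias.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeometricSelfAttention(nn.Module):
    def __init__(self, dim_in=8, dim=128, heads=4, kernel_size=3):
        super().__init__()
        self.embed = nn.Conv1d(dim_in, dim, kernel_size, padding=kernel_size // 2)
        self.qkv = nn.Linear(dim, dim * 3)
        self.heads, self.dim = heads, dim

    def forward(self, commands, geo_bias):
        # commands: (B, L, dim_in) continuous SVG arguments; geo_bias: (B, L, L)
        x = self.embed(commands.transpose(1, 2)).transpose(1, 2)   # (B, L, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        def split(t):  # (B, L, dim) -> (B, heads, L, dim / heads)
            return t.view(t.shape[0], t.shape[1], self.heads, -1).transpose(1, 2)
        q, k, v = split(q), split(k), split(v)
        attn = (q @ k.transpose(-2, -1)) / (q.shape[-1] ** 0.5)
        attn = attn + geo_bias.unsqueeze(1)          # inject geometric structure
        out = F.softmax(attn, dim=-1) @ v
        return out.transpose(1, 2).reshape(x.shape)  # (B, L, dim)
```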
Corona_Structured_3D_Features_for_Reconstructing_Controllable_Avatars_CVPR_2023 | Abstract We introduce Structured 3D Features, a model based on a novel implicit 3D representation that pools pixel-aligned image features onto dense 3D points sampled from a para-metric, statistical human mesh surface. The 3D points have associated semantics and can move freely in 3D space. This allows for optimal coverage of the person of interest, be-yond just the body shape, which in turn, additionally helps modeling accessories, hair, and loose clothing. Owing to this, we present a complete 3D transformer-based attention framework which, given a single image of a person in an un-constrained pose, generates an animatable 3D reconstruc-tion with albedo and illumination decomposition, as a re-sult of a single end-to-end model, trained semi-supervised, and with no additional postprocessing. We show that our S3F model surpasses the previous state-of-the-art on vari-ous tasks, including monocular 3D reconstruction, as well as albedo & shading estimation. Moreover, we show that the proposed methodology allows novel view synthesis, re-lighting, and re-posing the reconstruction, and can natu-rally be extended to handle multiple input images ( e.g. dif-ferent views of a person, or the same view, in different poses, in video). Finally, we demonstrate the editing capabilities of our model for 3D virtual try-on applications. †Work was done while Enric and Mihai were with Google Research.1. Introduction Human digitization is playing a major role in several im-portant applications, including AR/VR, video games, social telepresence, virtual try-on, or the movie industry. Tradi-tionally, 3D virtual avatars have been created using multi-view stereo [33] or expensive equipment [48]. More flex-ible or low-cost solutions are often template-based [1, 4], but these lack expressiveness for representing details such as hair. Recently, research on implicit representations [6, 16, 22, 52, 53] and neural fields [24, 46, 63, 70] has made significant progress in improving the realism of avatars.The proposed models produce detailed results and are also ca-pable, within limits, to represent loose hair or clothing. Even though model-free methods yield high-fidelity avatars, they are not suited for downstream tasks such as animation. Furthermore, proficiency is often limited to a certain range of imaged body poses. Aiming at these problems, different efforts have been made to combine parametric models with more flexible implicit represen-tations [21, 22, 62, 71]. These methods support anima-tion [21,22] or tackle challenging body poses [62,71]. How-ever, most work relies on 2D pixel-aligned features. This leads to two important problems: (1) First, errors in body pose or camera parameter estimation will result in misalign-ment between the projections of 3D points on the body sur-face and image features, which ultimately results in low This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 16954 quality reconstructions. (2) image features are coupled with the input view and cannot be easily manipulated e.g. to edit the reconstructed body pose. In this paper we introduce Structure 3D Features (S3F), a flexible extension to image features, specifically designed to tackle the previously discussed challenges and to provide more flexibility during and after digitization. 
S3F store lo-cal features on ordered sets of points around the body sur-face, taking advantage of the geometric body prior. As body models do not usually represent hair or loose clothing, it is difficult to recover accurate body parameters for images in-the-wild. To this end, instead of relying too much on the ge-ometric body prior, our model freely moves 3D body points independently to cover areas that are not well represented by the prior. This process results in our novel S3Fs, and is trained without explicit supervision only using reconstruc-tion losses as signals. Another limitation of prior work is its dependence on 3D scans. We alleviate this dependence by following a mixed strategy: we combine a small col-lection of 3D scans, typically the only training data con-sidered in previous work, with large-scale monocular in-the-wild image collections. We show that by guiding the training process with a small set of 3D synthetic scans, the method can efficiently learn features that are only available for the scans ( e.g. albedo), while self-supervision on real images allow the method to generalize better to diverse ap-pearances, clothing types and challenging body poses. In this paper, we show how the proposed S3Fs are sub-stantially more flexible than current state-of-the-art repre-sentations. S3Fs enable us to train a single end-to-end model that, based on an input image and matching body pose parameters, can generate a 3D human reconstruction that is relightable and animatable. Furthermore, our model supports the 3D editing of e.g. clothing, without additional post-processing. See Fig.1 for an illustration and Table 1 for a summary of our model’s properties, in relation to prior work. We compare our method with prior work and demon-strate state-of-the-art performance for monocular 3D hu-man reconstruction from challenging, real-world and un-constrained images, and for albedo & shading estimation. We also provide an extensive ablation study to validate our different architectural choices and training setup. Finally, we show how the proposed approach can also be naturally extended to integrate observations across different views or body poses, further increasing reconstruction quality. 2. Related work Monocular 3D Human Reconstruction is an inherently ill-posed problem and thus greatly benefits from strong hu-man body priors. The reconstructed 3D shape of a human is often a byproduct of 3D pose estimation [15, 27–29, 31, 41, 44, 51, 54, 69] represented by a statistical human body model [34, 44, 50, 64]. Body models, however, only pro-end-to-end trainable returns albedo returns shading true surface normals challenging poses animatable semantic editing allows multi-view ✓ ✗ ✓ ✓ ✗ ✗ ✗ ✓ PIFu [52] ✗ ✗ ✗ ✓ ✗ ✗ ✗ ✗ PIFuHD [53] ✓ ✗ ✓ ✗ ✗ ✓ ✗ ✗ ARCH++ [21] ✗ ✗ ✓ ✓ ✓ ✓ ✗ ✗ PaMIR [71] ✓ ✓ ✓ ✓ ✗ ✗ ✗ ✗ PHORHUM [6] ✗ ✗ ✗ ✓ ✓ ✓ ✗ ✗ ICON [62] ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ S3F (Ours) Table 1. Key properties of S3F compared to recent work . Our model includes a number of desirable novel features with respect to previous state-of-the-art, and recovers rigged and relightable hu-man avatars even for challenging body poses in input images. vide mid-resolution body meshes that do not capture im-portant elements of a person’s detail, such as clothing or accessories. To this end, one line of work extends para-metric bodies to represent clothing through offsets to the body mesh [1–4, 8, 42, 73]. However, this is prone to fail for loose garments or those with a different topology than the human body. 
Other representations of the clothed hu-man body have been explored, including voxels [58, 72], geometry images [47], bi-planar depth maps [17] or visual hulls [40]. To date, the most powerful representations are implicit functions [6,10,21,22,52,53,62,66,67,71] that de-fine 3D geometry via a decision boundary [13, 36] or level set [43]. A popular choice to condition implicit functions on an input image are pixel-aligned features [52]. This ap-proach has been used to obtained detailed 3D reconstruction methods without relying on a body template [6, 52, 52, 66], and in combination with parametric models [21, 22, 62, 66] to take advantage of body priors. ARCH and ARCH++ map pixel-aligned features to a canonical space of SMPL [34] to support animatable reconstructions. However, this requires almost perfect SMPL estimates, which are hard to obtain for images in-the-wild. To address this, ICON [62] proposes an iterative refinement of the SMPL parameters during 3D reconstruction calculations. ICON requires different mod-ules for predicting front & back normal maps, to compute features, and for SMPL fitting. In contrast, we propose an end-to-end trainable model that can correct potential errors in body pose and shape estimation, without supervision, by freely allocating relevant features around the approximate body geometry. Somewhat related are also methods for human relight-ing in monocular images [23, 26, 30, 56]. However, these methods typically do not reconstruct the 3D human and in-stead rely on normal maps to transform pixel colors. Our work bears similarity with PHORHUM [6], in our predic-tion of albedo, and a global scene illumination code. How-ever, we combine this approach with mixed supervision from both synthetic and real data, thus obtaining detailed, photo-realistic reconstructions, for images in-the-wild. Human neural fields . Our work is also related to neu-ral radiance fields, or NeRFs [38], from which we take 16955 inspiration for our losses on images in-the-wild. NeRFs have been recently explored for novel human view synthe-sis [11, 14, 25, 45, 46, 55, 60, 61, 63]. These methods are trained per subject by minimizing residuals between ren-dered and observed images and are typically based on a parametric body model. See [57] for an extensive litera-ture review. In contrast, our method is trained on images of different subjects and thus generalizes to unseen identities. 3. Method We seek to estimate the textured and animation-ready 3D geometry of a person as viewed in a single image. Further, we compute a per-image lighting model to explain shading effects on 3D geometry and to enable relighting for realistic scene placement. Our system utilizes the statistical body model GHUM [64], which represents the human body M(·) as a parametric function of pose θand shape β M(β,θ) :θ×β7→V∈R3N. (1) GHUM returns a set of 3D body vertices V. We also use imGHUM [5], GHUM’s implicit counterpart. imGHUM computes the signed distance to the body surface sbody x for any given 3D point xunder a given pose and shape config-uration: imGHUM (x,β,θ) :β,θ7→sbody x. Please refer to the original papers for details. Given a monocular RGB image I(where the person is segmented), together with an approximate 3D geometry represented by GHUM/imGHUM parameters θandβ, our method generates an animation-ready, textured 3D recon-struction of that person. 
We represent the 3D geometry S as the zero-level-set of a signed distance field parameterized by the network f, Sϕf= x∈R3|f(I,θ,β,x;ϕf) = (0 ,a) , (2) with learnable parameters ϕf. In addition to the signed dis-tance sw.r.t. the 3D surface, fpredicts per-point albedo colora. In the sequel, we denote the signed distance and albedo for point xreturned by fassxandax, respectively. Scan be extracted from fby running Marching Cubes [35] in a densely sampled 3D bounding box. In the process of computing sanda,fextracts novel Structured 3D Fea-tures . In contrast to pixel-aligned features – a popular repre-sentation used by other state-of-the-art methods – our struc-tured 3D features can be re-posed via Linear Blend Skin-ning (LBS), thus enabling reconstructions in novel poses as well as multi-image feature aggregation. We explain our novel Structured 3D Features in more detail below. 3.1. S3F architecture We divide our end-to-end trainable network into two dif-ferent parts. First, we introduce our novel Structured 3DFeature (S3F) representation. Second, we describe the pro-cedure | to obtain signed distance and color predictions. Fi-nally, we generalize our formulation to multi-view or multi-frame settings. Structured 3D Features . Recent methods have shown the suitability of local features for the 3D human reconstruc-tion, due to their ability to capture high-frequency details. Pixel-aligned features [6, 22, 52, 53] are a popular choice, despite having certain disadvantages: (1) it is not straight-forward to integrate pixel-aligned features from different time steps or body poses. (2) a ray-depth ambiguity. In order to address these problems, we propose a natural ex-tension of pixel-alignment by lifting the features to the 3D domain: structured 3D features. We obtain the initial 3D feature locations viby densely sampling the surface of the approximate geometry represented by GHUM. We initial-ize the corresponding features fiby projecting and pooling from a 2D feature map obtained from the input image I g: (I, π(vi))7→fi∈Rk, (3) where π(vi)is a perspective projection of vi, and gis a trainable feature extractor network. The 3D features ob-tained this way approximate the underlying 3D body but not the actual geometry, especially for loose fitting cloth-ing. To this end, we propose to non-rigidly displace the initial 3D feature locations to better cover the human in the image. Given the initial 3D feature locations vi, we pre-dict per-point displacements in camera coordinates using the network d v′ i=vi+d(fi,ei,vi), (4) where ei∈Rkis a learnt semantic code for vi, initialized randomly and jointly optimized during training. Finally, we project the updated 3D feature locations again to the feature map obtained from I(following the approach from Eq. 3) obtaining our final structured 3D features f′ i. See Fig. 2 for an overview. Predicting geometry and texture . We continue describing the process of computing per-point albedo axand signed distances sx. We denote the set of structured 3D features as (F′∈RN×k,V′∈RN×3), where F′refers to the feature vectors and V′to their 3D locations, respectively. First, we use a transformer encoder layer [59] to compute a master feature f⋆ xfor each query point x, usingV′as keys and F′ values, respectively. From f⋆ xwe compute per-point albedo and signed distances using a standard MLP t: (x,V′,F′)7→f⋆ x7→sx,ax. (5) While previous work maps query points xto a continuous feature map ( e.g. pixel-aligned features), we instead aim to pool features from the discrete set F′. 
Intuitively, the most informative features for x should be located in its 3D neighborhood.

Figure 2. Method overview. We introduce a new implicit 3D representation S3F (Structured 3D Features) that utilizes N points sampled on the body surface V and their 2D projection to pool features F from a 2D feature map extracted from the input image I. The initial body points are non-rigidly displaced by the network d to obtain V′ in order to sample new features F′, on locations that are not covered by body vertices, such as loose clothing or hair. Given an input point x, we then aggregate representations from the set of points and their features using a transformer architecture t, to finally obtain per-point signed distance and albedo color. Finally, we can relight the reconstruction using the predicted albedo, an illumination representation L and a shading network p. We assume perspective projection to obtain more natural reconstructions, with correct proportions.

We therefore first explore a simple baseline, by collecting the features of the three closest points of x in V′ and interpolating them using barycentric coordinates. However, we found this approach not sufficient (see Tab. 2). With keys and queries based on 3D positions, we argue that a transformer encoder is a useful tool to integrate relevant structured 3D features based on learned distance-based attention. In practice, we use two different transformer heads and MLPs to predict albedo and distance, please see Sup. Mat. for details. Furthermore, we compute signed distances $s_x$ by updating the initial estimate of imGHUM with a residual, $s_x = s^{\text{body}}_x + \Delta s_x$. This in practice makes the training more stable for challenging body poses. To model scene illumination, we follow PHORHUM [6] and utilize the bottleneck of the feature extractor network g as scene illumination code L. We then predict the per-point shading coefficient $\delta_x$ using a surface shading network p,
$p(\mathbf{n}_x, L; \phi_p) \mapsto \delta_x$,   (6)
parameterized by weights $\phi_p$, where the normal of the input point $\mathbf{n}_x = \nabla_x s_x$ is the estimated surface normal defined by the gradient of the estimated distance w.r.t. x. The final shaded color is then obtained as $c_x = \delta_x \odot a_x$, with $\odot$ denoting element-wise multiplication.

3D Feature Manipulation & Aggregation. Since the proposed approach relies on 3D feature locations originally sampled from GHUM's body surface, we can utilize GHUM to re-pose the representation. This has two main benefits: (1) we can reconstruct in a pose different from the one in the image, e.g. by resolving self-contact, which is typically not possible after creating a mesh, and (2) we can aggregate information from several observations as follows. In a multi-view or multi-frame setting with O observations of the same person under different views or poses, we define each input image as $I_t$, $\forall t \in \{1, \dots, O\}$. Let us extend the previous notation to $(V'_t, F'_t)$ as structured 3D features for a given frame t. To integrate features from all images, we use LBS to invert the posing transformation and map them to GHUM's canonical pose denoted as $\tilde{V}'$.
For each observation, we compute point visibilities $O_t \in \mathbb{R}^{N \times 1}$ (based on the GHUM mesh) and weight all feature vectors and canonical feature positions by the overall normalized visibility, thus obtaining an aggregated set of structured features
$\hat{V}' = \sum_{t}^{T} \mathrm{softmax}_t(O_t) \odot \tilde{V}'_t$,   (7)
$\hat{F}' = \sum_{t}^{T} \mathrm{softmax}_t(O_t) \odot \tilde{F}'_t$,   (8)
where softmax normalizes per-point contributions along all views. The aggregated structured 3D features $(\hat{V}', \hat{F}')$ can be posed to the original body pose of each input image, or to any new target pose. Finally, we run the remaining part of the model in the posed space to predict $s_x$ and $a_x$.

3.2. Training S3F
The semi-supervised training pipeline leverages a small subset of 3D synthetic scans together with images in-the-wild. During training, we run a forward pass for both inputs and integrate the gradients to run a single backward step.

Real data. We use images in-the-wild paired with body pose and shape parameters obtained via fitting. We supervise on the input images through color and occupancy losses. Following recent work on neural rendering of combined signed distance and radiance fields [68], we convert predicted signed distance values $s_x$ into density such that
$\sigma(x) = \beta^{-1}\Psi_\beta(-s_x)$,   (9)
where $\Psi_\beta(\cdot)$ is the CDF of the Laplace distribution, and $\beta$ is a learnable scalar parameter equivalent to a sharpness factor. After training, we no longer need $\beta$ for reconstruction but can use it to render novel views (see Sec. 4.3).

Table 2. Ablation of several of our design choices. Chamfer metrics are $\times 10^{-3}$.
Method | Chamfer↓ | IoU↑ | NC↑ | Albedo (PSNR)↑ | Shading (PSNR)↑
Architecture:
Using pixel-aligned features | 3.59 | 0.360 | 0.828 | 13.17 | 14.13
+ Predicting SDF residual | 5.43 | 0.428 | 0.851 | 13.19 | 13.06
+ Lifting features to 3D | 0.606 | 0.644 | 0.765 | 9.82 | 10.07
+ Transformer | 0.532 | 0.720 | 0.911 | 16.54 | 15.85
FULL: + Point displacement | 0.339 | 0.734 | 0.924 | 16.31 | 16.67
Supervision regime:
FULL: Only synthetic data | 0.381 | 0.719 | 0.916 | 16.34 | 15.84
FULL: Only real data | 0.444 | 0.723 | 0.907 | 10.96 | 13.59

The color of a specific image pixel is estimated via the volume rendering integral by accumulating shaded colors and volume densities along its corresponding camera ray (see [38] or Sup. Mat. for more details). After rendering, we minimize the residual between the ground-truth pixel color $c_r$ and the accumulation of shaded colors $\hat{c}_r$ along the ray of pixel r such that $\mathcal{L}_{rgb} = \|c_r - \hat{c}_r\|_1$. Additionally, we also define a VGG loss [12] $\mathcal{L}_{vgg}$ over randomly sampled front patches, enforcing the structure to be more realistic. In addition to color, we also supervise geometry by minimizing the difference between the integrated density $\hat{\sigma}_r$ and the ground-truth pixel mask $\sigma_r$: $\mathcal{L}_{mask} = \|\sigma_r - \hat{\sigma}_r\|_1$. Finally, we regularize the geometry using the Eikonal loss [19] on a set of points $\Omega$ sampled near the body surface such that
$\mathcal{L}_{eik} = \sum_{x \in \Omega}(\|\nabla_x s_x\|_2 - 1)^2$.   (10)
The full loss for the real data is a linear combination of the previous components with weights $\lambda_*$: $\mathcal{L}_{real} = \mathcal{L}_{rgb} + \lambda_{vgg}\mathcal{L}_{vgg} + \lambda_{mask}\mathcal{L}_{mask} + \lambda_{eik}\mathcal{L}_{eik}$.

(Quasi-)Synthetic data. Most previous work relies only on quasi-synthetic data in the form of re-rendered textured 3D human scans for 3D human reconstruction. While we aim to alleviate the need for synthetic data, it is useful for supervising features not observable in images in-the-wild, such as albedo, color, or geometry of unseen body parts. To this end, we additionally supervise our model on a small set of pairs of 3D scans and synthetic images. For the quasi-synthetic subset, we use the same losses as for the real data and add additional supervision from the 3D scans.
Given a 3D scan, we sample B points on the surface together with their ground-truth albedo $a^{GT}_i$ and shaded color $s^{GT}_i$. We minimize the difference in predicted albedo and shaded color such that $\mathcal{L}^{3D}_{rgb} = \sum_{x \in B}\|a^{GT}_x - \hat{a}_x\|_1 + \|c^{GT}_x - \hat{c}_x\|_1$. We additionally sample a set of 3D points $\Omega$ close to the scan surface and compute inside/outside labels l for our final loss, $\mathcal{L}^{3D}_{label} = \sum_{x \in \Omega}\mathrm{BCE}(l_x, \sigma(x))$, where BCE is the binary cross-entropy function. The overall loss for synthetic data $\mathcal{L}_{synth}$ is a linear combination of all losses described above. We train our model by minimizing both real and synthetic objectives, $\mathcal{L}_{total} = \mathcal{L}_{real} + \mathcal{L}_{synth}$. Implementation details are available in the Sup. Mat.

Table 3. Quantitative comparison against other monocular 3D human reconstruction methods. Chamfer metrics are $\times 10^{-3}$ and PSNR is obtained from all 3D scan vertices, including those not visible in the input image.
Method | Chamfer↓ | IoU↑ | NC↑ | Albedo (PSNR)↑ | Shading (PSNR)↑
GHUM [64] | 3.56 | 0.562 | 0.750 | - | -
PIFu [52] | 6.61 | 0.519 | 0.738 | - | 11.15
Geo-PIFu [20] | 9.61 | 0.453 | 0.710 | - | 10.62
PIFuHD [53] | 4.94 | 0.552 | 0.749 | - | -
PaMIR [71] | 5.35 | 0.597 | 0.763 | - | -
ARCH [22] | 7.52 | 0.549 | 0.712 | - | 10.66
ARCH++ [21] | 6.53 | 0.549 | 0.722 | - | 10.37
ICON [62] | 3.53 | 0.622 | 0.785 | - | -
ICON [62] + [37] | 4.47 | 0.599 | 0.764 | - | -
PHORHUM [6] | 2.92 | 0.594 | 0.814 | 11.97 | 11.20
Ours (No shading) | 2.27 | 0.675 | 0.827 | - | 16.62
Ours | 1.88 | 0.694 | 0.847 | 15.06 | 14.81
Ours (GT pose/shape) | 0.339 | 0.734 | 0.924 | 16.31 | 16.67

4. Experiments
Our goal is to design a single 3D representation and an end-to-end semi-supervised training methodology that is flexible enough for a number of tasks. In this section, we evaluate our model's performance on the tasks of monocular 3D human reconstruction, albedo, and shading estimation, as well as for novel view rendering and reconstruction from both single images and video. Finally, we illustrate our model's capabilities for 3D garment editing.

4.1. Data
We follow a mixed supervision approach by combining synthetic data with limited clothing or pose variability, with in-the-wild images of people in challenging poses/scenes.
Synthetic data. We use 35 rigged and 45 posed scans from RenderPeople [49] for training. We re-p |
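The SDF-to-density conversion of Eq. (9) and the Eikonal regularizer of Eq. (10) quoted above can be written compactly. The snippet below is our own small illustration of those two formulas (a Laplace CDF evaluated at the negated signed distance, and a unit-gradient penalty); it is not the authors' code.

```python
# Sketch of Eq. (9): sigma(x) = beta^-1 * Psi_beta(-s_x), with Psi_beta the
# Laplace CDF, and of the Eikonal loss in Eq. (10).
import torch

def sdf_to_density(sdf, beta):
    # Laplace CDF (zero mean, scale beta) evaluated at -sdf
    psi = torch.where(sdf >= 0,
                      0.5 * torch.exp(-sdf / beta),
                      1.0 - 0.5 * torch.exp(sdf / beta))
    return psi / beta

def eikonal_loss(points, sdf_fn):
    # points: (N, 3) samples near the surface; sdf_fn maps points to SDF values
    points = points.clone().requires_grad_(True)
    sdf = sdf_fn(points)
    grad = torch.autograd.grad(sdf.sum(), points, create_graph=True)[0]
    return ((grad.norm(dim=-1) - 1.0) ** 2).mean()
```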
Jiao_Learning_Attribute_and_Class-Specific_Representation_Duet_for_Fine-Grained_Fashion_Analysis_CVPR_2023 | Abstract Fashion representation learning involves the analysis and understanding of various visual elements at different granularities and the interactions among them. Existing works often learn fine-grained fashion representations at the attribute level without considering their relationships and inter-dependencies across different classes. In this work, we propose to learn an attribute and class-specific fashion representation duet to better model such attribute relationships and inter-dependencies by leveraging prior knowledge about the taxonomy of fashion attributes and classes. Through two sub-networks for the attributes and classes, respectively, our proposed an embedding network progressively learns and refines the visual representation of a fashion image to improve its robustness for fashion re-trieval. A multi-granularity loss consisting of attribute-level and class-level losses is proposed to introduce appropri-ate inductive bias to learn across different granularities of the fashion representations. Experimental results on three benchmark datasets demonstrate the effectiveness of our method, which outperforms the state-of-the-art methods by a large margin. | 1. Introduction Fashion products have become one of the most con-sumed products in online shopping. Unlike other types of products, fashion products are usually rich in visual ele-ments at different levels of granularity. For instance, besides the overall visual appearance, a fashion product can be de-scribed by a set of attributes , such as “shape”, “color” and “style”, which focus on different aspects of the visual rep-resentation. Each attribute can be further categorized into various classes . For example, “fit”, “flare” and “pencil” are different classes under attribute “shape” (Fig. 1). There-fore, modeling fashion representation in different granu-larities is essential for online shopping and other down-stream applications, especially those that require analysis shape flareshape? fit flare fit pencilshape [fit, pencil]shape [fit, flare]attribute-specific embedding spaces class-specific embedding spacesattribute-specific embedding spaces shape shapepos fitpos pencilpos not flareflare fitfit pencilnot pencilneg neg neg poscluster of positive representations attribute-specific representations class-specific representations positive negativeFigure 1. Left: existing fine-grained representation learning meth-ods often learn attribute -specific representations for fashion prod-ucts, thus may not be able to discern the two dresses that have dif-ferent compositions of visual elements at the class level.Right: our proposed method (right) jointly learns attribute and class-specific representations. Therefore, it can discriminate between the two dresses by their class-specific representations. of subtle or fine-grained details such as attribute-based fash-ion manipulation [1, 2, 27] and retrieval [6, 14, 19, 23, 24], fashion copyright [6, 19], and fashion compatibility analy-sis [11, 15, 21, 23]. Fine-grained fashion modeling and analysis in recent years explore the attribute-specific representation learning. The focus has recently shifted from earlier works that learn separate representations for each attribute indepen-dently [1, 2] to multi-task learning, which uses a common backbone for different attributes while tailoring the learning for each specific attribute via mechanisms such as attention masks [6,14,19,24]. 
Success of these attribute-specific rep-resentation learning methods for fine-grained fashion anal-ysis can be attributed to their capabilities to discriminate visual features associated with different aspects of fashion products, which learning an image-level global representa-tion finds challenging. However, when it comes to classes , such attribute-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 11050 specific representation methods face a similar challenge to the above. The reason is that due to the dynamic and aes-thetic nature of fashion products, different visual elements are often composited together to achieve certain visual ef-fects, making an attribute-level description insufficient to capture such interactions and granularity. For instance, un-der the same “shape” attribute, one may go for a dress de-sign that combines classes “fit” and “flare” for a more ca-sual look (top image, Fig. 1), but go for a different dress that combines “fit” and “pencil” for a more formal look while flattering one’s natural curves (bottom image, Fig. 1). Therefore, an attribute-level representation is hard to dif-ferentiate the two dresses. Alternatively, one may directly learn a class-specific representation for each class under the “shape” attribute, which, however, faces the scalability is-sue. For instance, if a fashion image is associated with N attributes and Mclasses per attribute, one would need to learn N×Mclass-specific representations. To better discriminate fashion products with distinct de-sign considerations and model the interplay among vari-ous visual elements, we propose to leverage prior knowl-edge about fashion taxonomy to model fashion products. We jointly learn both attribute-specific and class-specific fashion representations through a multi-attribute multi-granularity multi-label embedding network (M3-Net). M3-Net consists of two sub-networks, for attributes and classes, respectively. Different attributes share the same backbone sub-network as well as two attribute-conditional attention modules, while different classes under a given attribute share two class-conditional attention modules. The shared backbone and conditional attention modules allow the network to better capture the inter-dependencies and shared visual statistics among the attributes and classes. Through multi-label learning on attribute-specific represen-tations, we also improve the scalability of the proposed net-work by focusing class-specific representation learning on high likelihood classes only. Finally, a multi-granularity loss consisting of attribute-level and class-level losses is de-signed to introduce appropriate inductive bias for learning across different granularities. In summary, our contributions are: • We propose to model fashion products at both attribute and class levels based on fashion taxonomy to better capture the inter-dependencies of various visual ele-ments and improve the discriminative power of learned fashion representations. • We design a multi-attribute multi-granularity multi-label network (M3-Net) to jointly learn attribute-specific and class-specific representation duet for fine-grained fashion analysis. 
Through two sub-networks and conditional attention modules, M3-Net is able to progressively learn discriminative representations at different granularities, with appropriate inductive bias introduced by the attribute-level and class-level losses (a sketch of such a conditional attention module follows this list). • Our model outperforms state-of-the-art methods in fine-grained fashion retrieval on three benchmark datasets. The experimental results demonstrate the efficacy of our proposed method. |
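As referenced in the contribution list above, here is a hypothetical sketch of a conditional attention module of the kind described: a shared backbone feature map is pooled under an attribute (or class) embedding that acts as a query. The layer sizes, the einsum-based pooling, and the two-pass usage note are our assumptions; the paper's M3-Net may differ in detail.

```python
# Sketch: the same conditional attention module is reused with an attribute
# embedding (attribute-specific representation) and with a class embedding
# (class-specific refinement).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalAttention(nn.Module):
    def __init__(self, feat_dim, cond_dim):
        super().__init__()
        self.to_query = nn.Linear(cond_dim, feat_dim)

    def forward(self, feat_map, cond_emb):
        # feat_map: (B, C, H, W) shared backbone features; cond_emb: (B, cond_dim)
        q = self.to_query(cond_emb)                              # (B, C)
        logits = torch.einsum('bchw,bc->bhw', feat_map, q)       # spatial scores
        attn = F.softmax(logits.flatten(1), dim=1).view_as(logits)
        return torch.einsum('bchw,bhw->bc', feat_map, attn)      # (B, C) pooled

# Usage sketch: the attribute-conditioned output feeds a multi-label head over
# that attribute's classes; the top-scoring classes then condition a second
# pass to obtain class-specific representations.
```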
Fosco_Leveraging_Temporal_Context_in_Low_Representational_Power_Regimes_CVPR_2023 | Abstract Computer vision models are excellent at identifying and exploiting regularities in the world. However, it is computationally costly to learn these regularities from scratch. This presents a challenge for low-parameter models, like those running on edge devices (e.g. smartphones). Can the performance of models with low representational power be improved by supplementing training with additional information about these statistical regularities? We explore this in the domains of action recognition and action anticipation, leveraging the fact that actions are typically embedded in stereotypical sequences. We introduce the Event Transition Matrix (ETM), computed from action labels in an untrimmed video dataset, which captures the temporal context of a given action, operationalized as the likelihood that it was preceded or followed by each other action in the set. We show that including information from the ETM during training improves action recognition and anticipation performance on various egocentric video datasets. Through ablation and control studies, we show that the coherent sequence of information captured by our ETM is key to this effect, and we find that the benefit of this explicit representation of temporal context is most pronounced for smaller models. Code, matrices and models are available on our project page: https://camilofosco.com/etm_website | 1. Introduction A strength of computer vision models is their ability to identify and exploit statistical regularities in the world. Learning these regularities from scratch is computationally costly, which limits the accuracy of low-parameter models. It is critical to boost the performance of small models, since many devices lack the resources to host current state-of-the-art models. One way to make the learning problem more tractable for small models may be to supplement them at training time with an explicit representation of the statistical regularities in the target domain.

Figure 1 (x-axis: GFLOPs; y-axis: performance gain with ETM training, %). Action recognition performance difference between models trained with and without the proposed ETM approach. We train models with various model architectures, from small (left) to large (right): AVT-b [14], the MoViNets [24] family, the X3D [11] family, LambdaResNet-50 [4], and the EfficientNets [57] family on EPIC-KITCHENS-100 [14]. We show that incorporating the ETM into training improves performance, and the impact is higher on smaller models.

Here, we test whether video understanding models can be improved by providing them with information about typical event sequences during training. We introduce the Event Transition Matrix (ETM), illustrated in Figure 2, which leverages the fact that events in real-world videos often occur in predictable sequences. Each row and column indexes an event. In the rows, the ETM captures the likelihood that the event was followed by each of the other events in the set. In the columns, it captures the likelihood that the event was preceded by each other event. To compute a cell's value, we combine information from all previous and subsequent events, weighted by their temporal distance from the queried action.
Crucially, this breaks Markovian assumptions and allows the matrix to capture long-range relationships. The ETM has two important properties. First, it acts as an explicit representation of the likelihood of event transitions, which provides additional pertinent information that a model can leverage without the cost of learning it for itself. Second, it augments representations of a given event with information about what came before and after it, providing an additional target for the model to train on.

Figure 2 (panel titles: (a) Low-level descriptions in [10]; (b) Event Transition Matrix). We construct a given Event Transition Matrix (ETM) from action labels drawn from a given dataset of untrimmed videos. (a) Each video depicts a continuous activity, and is paired with human annotations indicating the individual, low-level actions that compose the activity. (b) Using these action labels, we created the ETM by recording the frequency with which a given action label was preceded or followed by each other action label, accumulated across the videos in our set.

In the present paper, across multiple datasets and model architectures, we test the effectiveness of incorporating the ETM into training for low-parameter models. We test our approach on action recognition and action anticipation in egocentric video datasets, where actions occur in stereotypical sequences. However, this approach can apply to any kind of sequence in videos. We show that leveraging the ETM improves action recognition relative to baselines, and that this improvement relies on the coherence of the action sequence. We also show that action anticipation is improved with our ETM approach. In both cases, we show that the ETM approach can be incorporated into multiple different model architectures, and that the addition of the ETM has the highest impact on smaller models, as shown in Figure 1. Overall, this work demonstrates a potential path to more efficient models, based on supplementing small models at training time with explicit representations of regularities expected in the data.
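A sketch of how an Event Transition Matrix of this kind could be assembled from labeled sequences is shown below. The exponential distance weighting and the row normalization are assumptions made for illustration; the text above only states that earlier and later events are combined with a temporal-distance weighting.

```python
# Sketch: build an ETM from sequences of action ids, weighting each (earlier,
# later) pair by an assumed exponential decay in their temporal distance.
import numpy as np

def build_etm(sequences, num_actions, decay=0.5):
    """sequences: list of lists of action ids in temporal order."""
    etm = np.zeros((num_actions, num_actions), dtype=np.float64)
    for seq in sequences:
        for i, a in enumerate(seq):
            for j, b in enumerate(seq):
                if j <= i:
                    continue
                # row a accumulates evidence that b follows a, down-weighted
                # by how far apart the two events are in the sequence
                etm[a, b] += np.exp(-decay * (j - i))
    # normalize rows so each row reads as "what tends to come after action a"
    row_sums = etm.sum(axis=1, keepdims=True)
    return np.divide(etm, row_sums, out=np.zeros_like(etm), where=row_sums > 0)
```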
Chen_Masked_Image_Training_for_Generalizable_Deep_Image_Denoising_CVPR_2023 | Abstract When capturing and storing images, devices inevitably introduce noise. Reducing this noise is a critical task called image denoising. Deep learning has become the de facto method for image denoising, especially with the emergence of Transformer-based models that have achieved notable state-of-the-art results on various image tasks. However, deep learning-based methods often suffer from a lack of generalization ability. For example, deep models trained on Gaussian noise may perform poorly when tested on other noise distributions. To address this issue, we present a novel approach to enhance the generalization performance of denoising networks, known as masked training. Our method involves masking random pixels of the input image and reconstructing the missing information during training. We also mask out the features in the self-attention layers to avoid the impact of training-testing inconsistency. Our approach exhibits better generalization ability than other deep learning models and is directly applicable to real-world scenarios. Additionally, our interpretability analysis demonstrates the superiority of our method. | 1. Introduction Image denoising is a crucial research area that aims to recover clean images from noisy observations. Due to the rapid advancements in deep learning, many promising im-age denoising networks have been developed. These net-works are typically trained using images synthesized from a pre-defined noise distribution and can achieve remarkable performance in removing the corresponding noise. How-ever, a significant challenge in applying these deep models to real-world scenarios is their generalization ability. Since *Haoyu Chen and Jinjin Gu contribute equally to this work. †Lei Zhu (leizhu@ust.hk) is the corresponding author. Gaussian 15a. In-distributionb. Out-of-distributionSwinIRSwinIROursMixture noise Figure 1. We illustrate the generalization problem of denois-ing networks. We train a SwinIR model on Gaussian noise with σ= 15 . When tested on the same noise, SwinIR demon-strates outstanding performance. However, when applied to out-of-distribution noise, e.g., the mixture of various noise. SwinIR suffers from a huge performance drop. The model trained by the proposed masked training method maintains a reasonable denois-ing effect, despite aldo being trained on Gaussian noise. the real-world noise distribution can differ from that ob-served during training, these models often struggle to gen-eralize to such scenarios. More specifically, most existing denoising works train and evaluate models on images corrupted with Gaussian noise, limiting their performance to a single noise distri-bution. When these models are applied to remove noise drawn from other distributions, their performance drasti-cally drops. Figure 1 shows an example. The research community has become increasingly aware of this gener-alization issue of deep models in recent years. As a coun-termeasure, some methods [80] assume that the noise level of a particular noise type is unknown, while others [5, 68] attempt to improve the performance in real-world scenarios by synthesizing or collecting training data closer to the tar-get noise or directly performing unsupervised training on the target noise [11, 71]. However, none of these meth-ods substantially improve the generalization performance of denoising networks, and they still struggle when the noise distribution is mismatched [1]. 
The generalization issue of deep denoising still poses challenges to making these methods broadly applicable. In this work, we focus on improving the generalization ability of deep denoising models. We define generalization ability as the model's performance on noise different from what it observed during training. We argue that the generalization issue of deep denoising is due to the overfitting of training noise. The existing training strategy directly optimizes the similarity between the denoised image and the ground truth. The intention behind this is that the network should learn to reconstruct the texture and semantics of natural images correctly. However, what is often overlooked is that the network can also reduce the loss simply by overfitting the noise pattern, which is easier than learning the image content. This is at the heart of the generalization problem. Even many popular deep learning methods exacerbate this overfitting problem. When it comes to noise different from that observed during training, the network exhibits this same behavior, resulting in poor performance. In light of the preceding discussion, our study seeks to improve the generalization performance of deep denoising networks by directing them to learn image content reconstruction instead of overfitting to training noise. Drawing inspiration from recent masked modeling methods [4, 20, 34, 69], we employ a masked training strategy to explicitly learn representations for image content reconstruction, as opposed to training noise. Leveraging the properties of image processing Transformers [15, 45, 78], we introduce two masking mechanisms: the input mask and the attention mask. During training, the input mask removes input image pixels randomly, and the network reconstructs the removed pixels. The attention mask is implemented in each self-attention layer of the Transformer, enabling it to learn the completion of masked features dynamically and mitigate the distribution shift between training and testing in masked learning. Although we use Gaussian noise for training – similar to previous works – our method demonstrates significant performance improvements on various noise types, such as speckle noise, Poisson noise, salt and pepper noise, spatially correlated Gaussian noise, Monte Carlo-rendered image noise, ISP noise, and complex mixtures of multiple noise sources. Existing methods and models have yet to effectively and accurately remove all these diverse noise patterns. |
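The two masking mechanisms described above can be illustrated with the following minimal sketch: random input pixels are zeroed before denoising, and a random subset of key tokens is hidden inside self-attention. The masking ratios and the token-level implementation are assumptions; this is not the authors' released training code.

```python
# Sketch of masked training: (1) mask random pixels of the noisy input,
# (2) drop a random subset of keys inside a self-attention layer so the
# network must complete the missing features.
import torch
import torch.nn.functional as F

def mask_input(noisy, mask_ratio=0.8):
    # noisy: (B, C, H, W); the same spatial mask is applied to all channels
    keep = (torch.rand(noisy.shape[0], 1, *noisy.shape[2:],
                       device=noisy.device) > mask_ratio).float()
    return noisy * keep, keep

def masked_attention(q, k, v, feat_mask_ratio=0.1):
    # q, k, v: (B, N, D) tokens of one self-attention layer
    drop = torch.rand(k.shape[:2], device=k.device) < feat_mask_ratio   # (B, N)
    logits = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)             # (B, N, N)
    logits = logits.masked_fill(drop.unsqueeze(1), float('-inf'))
    return F.softmax(logits, dim=-1) @ v

# Training-step sketch: masked, _ = mask_input(noisy); loss = l1(model(masked), clean)
```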
Hampali_In-Hand_3D_Object_Scanning_From_an_RGB_Sequence_CVPR_2023 | Abstract We propose a method for in-hand 3D scanning of an un-known object with a monocular camera. Our method re-lies on a neural implicit surface representation that cap-tures both the geometry and the appearance of the object, however, by contrast with most NeRF-based methods, we do not assume that the camera-object relative poses are known. Instead, we simultaneously optimize both the object shape and the pose trajectory. As direct optimization over all shape and pose parameters is prone to fail without coarse-level initialization, we propose an incremental approach that starts by splitting the sequence into carefully selected overlapping segments within which the optimization is likely to succeed. We reconstruct the object shape and track its poses independently within each segment, then merge all the segments before performing a global optimization. We show that our method is able to reconstruct the shape and color of both textured and challenging texture-less objects, outperforms classical methods that rely only on appearance features, and that its performance is close to recent methods that assume known camera poses. | 1. Introduction Reconstructing 3D models of unknown objects from multi-view images is a computer vision problem which has received considerable attention [8]. With a single camera, a user can capture multiple views of an object by manually moving the camera around a static object [22, 26, 43] or by turning the object in front of the camera [27, 31, 35, 36]. The latter approach is often referred to as in-hand object scanning and is convenient for reconstructing objects from cameras mounted on an AR/VR headset such as Microsoft HoloLens or Meta Quest. Moreover, this approach can re-construct the full object surface, including the bottom part *Work done as part of Shreyas’s PhD thesis at TU Graz, Austria. Segment 1/n Segment 2/n |
Brazil_Omni3D_A_Large_Benchmark_and_Model_for_3D_Object_Detection_CVPR_2023 | Abstract Recognizing scenes and objects in 3D from a single image is a longstanding goal of computer vision with applications in robotics and AR/VR. For 2D recognition, large datasets and scalable solutions have led to unprecedented advances. In 3D, existing benchmarks are small in size and approaches specialize in few object categories and specific domains, e.g. urban driving scenes. Motivated by the success of 2D recognition, we revisit the task of 3D object detection by introducing a large benchmark, called OMNI3D. OMNI3D re-purposes and combines existing datasets resulting in 234k images annotated with more than 3 million instances and 98 categories. 3D detection at such scale is challenging due to variations in camera intrinsics and the rich diversity of scene and object types. We propose a model, called Cube R-CNN, designed to generalize across camera and scene types with a unified approach. We show that Cube R-CNN outperforms prior works on the larger OMNI3D and existing benchmarks. Finally, we prove that OMNI3D is a powerful dataset for 3D object recognition and show that it improves single-dataset performance and can accelerate learning on new smaller datasets via pre-training.1 1We release the OMNI3D benchmark and Cube R-CNN models at https://github.com/facebookresearch/omni3d. 1. Introduction Understanding objects and their properties from single images is a longstanding problem in computer vision with applications in robotics and AR/VR. In the last decade, 2D object recognition [26, 40, 63, 64, 72] has made tremendous advances toward predicting objects on the image plane with the help of large datasets [24, 45]. However, the world and its objects are three dimensional, laid out in 3D space. Perceiving objects in 3D from 2D visual inputs poses new challenges framed by the task of 3D object detection. Here, the goal is to estimate a 3D location and 3D extent of each object in an image in the form of a tight oriented 3D bounding box. Today 3D object detection is studied under two different lenses: for urban domains in the context of autonomous vehicles [7, 13, 50, 52, 57] or indoor scenes [31, 36, 58, 73]. Despite the problem formulation being shared, methods share little insights between domains. Often approaches are tailored to work only for the domain in question. For instance, urban methods make assumptions about objects resting on a ground plane and model only yaw angles for 3D rotation. Indoor techniques may use a confined depth range (e.g. up to 6m in [58]). These assumptions are generally not true in the real world. Moreover, the most popular benchmarks for image-based 3D object detection are small. Indoor SUN RGB-D [70] has 10k images, urban KITTI [21] has 7k images; 2D benchmarks like COCO [45] are 20× larger. We address the absence of a general large-scale dataset for 3D object detection by introducing a large and diverse 3D benchmark called OMNI3D. OMNI3D is curated from publicly released datasets, SUN RGB-D [70], ARKitScenes [6], Hypersim [65], Objectron [2], KITTI [21] and nuScenes [9], and comprises 234k images with 3 million objects annotated with 3D boxes across 98 categories including chair, sofa, laptop, table, cup, shoes, pillow, books, car, person, etc. Sec.
3 describes the curation process which involves re-purposing the raw data and annotations from the aforemen-tioned datasets, which originally target different applications. As shown in Fig. 1, OMNI3Dis 20×larger than existing popular benchmarks used for 3D detection, SUN RGB-D and KITTI. For efficient evaluation on the large OMNI3D, we introduce a new algorithm for intersection-over-union of 3D boxes which is 450 ×faster than previous solutions [2]. We empirically prove the impact of OMNI3Das a large-scale dataset and show that it improves single-dataset performance by up to 5.3% AP on urban and 3.8% on indoor benchmarks. On the large and diverse OMNI3D, we design a gen-eral and simple 3D object detector, called Cube R-CNN, inspired by advances in 2D and 3D recognition of recent years [22, 52, 64, 68]. Cube R-CNN detects all objects and their 3D location, size and rotation end-to-end from a sin-gle image of any domain and for many object categories. Attributed to OMNI3D’s diversity, our model shows strong generalization and outperforms prior works for indoor and urban domains with one unified model, as shown in Fig. 1. Learning from such diverse data comes with challenges as OMNI3Dcontains images of highly varying focal lengths which exaggerate scale-depth ambiguity (Fig. 4). We rem-edy this by operating on virtual depth which transforms object depth with the same virtual camera intrinsics across the dataset. An added benefit of virtual depth is that it allows the use of data augmentations ( e.g.image rescaling) during training, which is a critical feature for 2D detection [12, 80], and as we show, also for 3D. Our approach with one unified design outperforms prior best approaches in AP 3D, ImV oxel-Net [66] by 4.1% on indoor SUN RGB-D, GUPNet [52] by 9.5% on urban KITTI, and PGD [77] by 7.9% on OMNI3D. We summarize our contributions: •We introduce OMNI3D, a benchmark for image-based 3D object detection sourced from existing 3D datasets, which is 20 ×larger than existing 3D benchmarks. •We implement a new algorithm for IoU of 3D boxes, which is 450 ×faster than prior solutions. •We design a general-purpose baseline method, Cube R-CNN, which tackles 3D object detection for many categories and across domains with a unified approach. We propose virtual depth to eliminate the ambiguity from varying camera focal lengths in O MNI3D.2. Related Work Cube R-CNN and OMNI3Ddraw from key research ad-vances in 2D and 3D object detection. 2D Object Detection. Here, methods include two-stage ap-proaches [26, 64] which predict object regions with a region proposal network (RPN) and then refine them via an MLP. Single-stage detectors [44, 47, 63, 72, 85] omit the RPN and predict regions directly from the backbone. 3D Object Detection. Monocular 3D object detectors pre-dict 3D cuboids from single input images. There is exten-sive work in the urban self-driving domain where the car class is at the epicenter [13, 19, 23, 25, 30, 46, 49, 54, 57, 62, 68, 69, 76, 78, 84]. CenterNet [85] predicts 3D depth and size from fully-convolutional center features, and is extended by [14, 37, 42, 48, 50 –52, 55, 83, 84, 87]. M3D-RPN [7] trains an RPN with 3D anchors, enhanced further by [8, 18, 38, 75, 88]. FCOS3D [78] extends the anchorless FCOS [72] detector to predict 3D cuboids. Its successor PGD [77] furthers the approach with probabilistic depth un-certainty. Others use pseudo depth [4, 15, 53, 59, 79, 81] and explore depth and point-based LiDAR techniques [35, 60]. 
Similar to ours, [11, 68, 69] add a 3D head, specialized for urban scenes and objects, on two-stage Faster R-CNN. [25, 69] augment their training by synthetically generating depth and box-fitted views, coined as virtual views or depth. In our work, virtual depth aims at addressing varying focal lengths. For indoor scenes, a vast line of work tackles room layout estimation [17, 28, 41, 56]. Huang et al. [31] predict 3D oriented bounding boxes for indoor objects. Factored3D [73] and 3D-RelNet [36] jointly predict object voxel shapes. Total3D [58] predicts 3D boxes and meshes by additionally training on datasets with annotated 3D shapes. ImVoxelNet [66] proposes domain-specific methods which share an underlying framework for processing volumes of 3D voxels. In contrast, we explore 3D object detection in its general form by tackling outdoor and indoor domains jointly in a single model and with a vocabulary of 5× more categories. 3D Datasets. KITTI [21] and SUN RGB-D [70] are popular datasets for 3D object detection on urban and indoor scenes respectively. Since 2019, 3D datasets have emerged, both for indoor [2, 3, 6, 16, 65] and outdoor [9, 10, 20, 29, 32, 33, 71]. In isolation, these datasets target different tasks and applications and have unique properties and biases, e.g. object and scene types, focal length, coordinate systems, etc. In this work, we unify existing representative datasets [2, 6, 9, 21, 65, 70]. We process the raw visual data, re-purpose their annotations, and carefully curate the union of their semantic labels in order to build a coherent large-scale benchmark, called OMNI3D. OMNI3D is 20× larger than widely-used benchmarks and notably more diverse. As such, new challenges arise stemming from the increased variance in visual domain, object rotation, size, layouts, and camera intrinsics. Figure 2. OMNI3D analysis. (a) distribution of object centers on (top) normalized image, XY-plane, and (bottom) normalized depth, XZ-plane, (b) relative 2D object scales, and (c) category frequency. | 3. The OMNI3D Benchmark The primary benchmarks for 3D object detection are small, focus on a few categories and are of a single domain. For instance, the popular KITTI [21] contains only urban scenes, has 7k images and 3 categories, with a focus on car. SUN RGB-D [70] has 10k images. The small size and lack of variance in 3D datasets is a stark difference to 2D counterparts, such as COCO [45] and LVIS [24], which have pioneered progress in 2D recognition. We aim to bridge the gap to 2D by introducing OMNI3D, a large-scale and diverse benchmark for image-based 3D object detection consisting of 234k images, 3 million labeled 3D bounding boxes, and 98 object categories. We source from recently released 3D datasets of urban (nuScenes [9] and KITTI [21]), indoor (SUN RGB-D [70], ARKitScenes [6] and Hypersim [65]), and general (Objectron [2]) scenes. Each of these datasets target different applications (e.g. point-cloud recognition or reconstruction in [6], inverse rendering in [65]), provide visual data in different forms (e.g. videos in [2, 6], rig captures in [9]) and annotate different object types.
To build a coherent benchmark, we process the varying raw visual data, re-purpose their annotations to extract 3D cuboids in a unified 3D camera coordinate system, and carefully curate the final vocabulary. More details about the benchmark creation are in the Appendix. We analyze OMNI3D and show its rich spatial and semantic properties, proving it is visually diverse, similar to 2D data, and highly challenging for 3D as depicted in Fig. 2. We show the value of OMNI3D for the task of 3D object detection with extensive quantitative analysis in Sec. 5. 3.1. Dataset Analysis Splits. We split the dataset into 175k/19k/39k images for train/val/test respectively, consistent with original splits when available, and otherwise free of overlapping video sequences in splits. We denote indoor and outdoor subsets as OMNI3D-IN (SUN RGB-D, Hypersim, ARKit), and OMNI3D-OUT (KITTI, nuScenes). Objectron, with primarily close-up objects, is used only in the full OMNI3D setting. Layout statistics. Fig. 2(a) shows the distribution of object centers onto the image plane by projecting centroids on the XY-plane (top row), and the distribution of object depths by projecting centroids onto the XZ-plane (bottom row). We find that OMNI3D's spatial distribution has a center bias, similar to 2D datasets COCO and LVIS. Fig. 2(b) depicts the relative object size distribution, defined as the square root of object area divided by image area. Objects are more likely to be small in size, similar to LVIS (Fig. 6c in [24]), suggesting that OMNI3D is also challenging for 2D detection, while objects in OMNI3D-OUT are noticeably smaller. The bottom row of Fig. 2(a) normalizes object depth in a [0, 20m] range, chosen for visualization, which satisfies 88% of object instances; OMNI3D depth ranges as far as 300m. We observe that the OMNI3D depth distribution is far more diverse than SUN RGB-D and KITTI, which are biased toward near or road-side objects respectively. See the Appendix for each data source distribution plot. Fig. 2(a) demonstrates OMNI3D's rich diversity in spatial distribution and depth, which suffers significantly less bias than existing 3D benchmarks and is comparable in complexity to 2D datasets. 2D and 3D correlation. A common assumption in urban scenes is that objects rest on a ground plane and appear smaller with depth. To verify if that is true generally, we compute correlations. We find that 2D y and 3D z are indeed fairly correlated in OMNI3D-OUT at 0.524, but significantly less in OMNI3D-IN at 0.006. Similarly, the correlation between relative 2D object size (√(h·w)) and z is 0.543 and 0.102 respectively. This confirms our claim that common assumptions in urban scenes are not generally true, making the task challenging. Category statistics. Fig. 2(c) plots the distribution of instances across the 98 categories of OMNI3D. The long tail suggests that low-shot recognition in both 2D and 3D will be critical for performance. In this work, we want to focus on large-scale and diverse 3D recognition, which is comparably unexplored. We therefore filter and focus on categories with at least 1000 instances. This leaves 50 diverse categories including chair, sofa, laptop, table, books, car, truck, pedestrian and more, 19 of which have more than 10k instances. We provide more per-category details in the Appendix. Figure 3. Overview.
Cube R-CNN takes as input an RGB image, detects all objects in 2D and predicts their 3D cuboids B3D. During training, the 3D corners of the cuboids are compared against 3D ground truth with the point-cloud chamfer distance. 4. Method Our goal is to design a simple and effective model for general 3D object detection. Hence, our approach is free of domain or object specific strategies. We design our 3D object detection framework by extending Faster R-CNN [64] with a 3D object head which predicts a cuboid per each detected 2D object. We refer to our method as Cube R-CNN . Figure 3 shows an overview of our approach. 4.1. Cube R-CNN Our model builds on Faster R-CNN [64], an end-to-end region-based object detection framework. Faster-RCNN consists of a backbone network, commonly a CNN, which embeds the input image into a higher-dimensional feature space. A region proposal network (RPN) predicts regions of interest (RoIs) representing object candidates in the image. A 2D box head inputs the backbone feature map and processes each RoI to predict a category and a more accurate 2D box. Faster R-CNN can be easily extended to tackle more tasks by adding task-specific heads, e.g.Mask R-CNN [26] adds a mask head to additionally output object silhouettes. For the task of 3D object detection, we extend Faster R-CNN by introducing a 3D detection head which predicts a 3D cuboid for each detected 2D object. Cube R-CNN extends Faster R-CNN in three ways: (a) we replace the binary classifier in RPN which predicts region objectness with a regressor that predicts IoUness , (b) we introduce a cube head which estimates the parameters to define a 3D cuboid for each detected object (similar in concept to [68]), and (c) we define a new training objective which incorporates avirtual depth for the task of 3D object detection. IoUness. The role of RPN is two-fold: (a) it proposes RoIs by regressing 2D box coordinates from pre-computed an-chors and (b) it classifies regions as object or not (object-ness). This is sensible in exhaustively labeled datasets where it can be reliably assessed if a region contains an object. However, OMNI3Dcombines many data sources with no guarantee that all instances of all classes are labeled. We overcome this by replacing objectness with IoUness , applied only to foreground. Similar to [34], a regressor predicts IoU between a RoI and a ground truth. Let obe the predicted IoU for a RoI and ˆobe the 2D IoU between the region and its ground truth; we apply a binary cross-entropy (CE) loss LIoUness =ℓCE(o,ˆo). We train on regions whose IoU exceeds 0.05 with a ground truth in order to learn IoUness from a wide range of region overlaps. Thus, the RPN training ob-jective becomes LRPN= ˆo·(LIoUness +Lreg), where Lregis the 2D box regression loss from [64]. The loss is weighted byˆoto prioritize candidates close to true objects. Cube Head. We extend Faster R-CNN with a new head, called cube head, to predict a 3D cuboid for each detected 2D object. The cube head inputs 7×7feature maps pooled from the backbone for each predicted region and feeds them to 2 fully-connected (FC) layers with 1024 hidden dimensions. All 3D estimations in the cube head are category-specific. 
The cube head represents a 3D cuboid with 13 parameters, each predicted by a final FC layer: • $[u, v]$ represent the projected 3D center on the image plane relative to the 2D RoI • $z \in \mathbb{R}_+$ is the object's center depth in meters, transformed from virtual depth $z_v$ (explained below) • $[\bar{w}, \bar{h}, \bar{l}] \in \mathbb{R}^3_+$ are the log-normalized physical box dimensions in meters • $\mathbf{p} \in \mathbb{R}^6$ is the continuous 6D [86] allocentric rotation • $\mu \in \mathbb{R}_+$ is the predicted 3D uncertainty. The above parameters form the final 3D box in camera view coordinates for each detected 2D object. The object's 3D center $X$ is estimated from the predicted 2D projected center $[u, v]$ and depth $z$ via $X(u, v, z) = \left[\tfrac{z}{f_x}(r_x + u\,r_w - p_x),\ \tfrac{z}{f_y}(r_y + v\,r_h - p_y),\ z\right]$ (1) where $[r_x, r_y, r_w, r_h]$ is the object's 2D box, $(f_x, f_y)$ are the camera's known focal lengths and $(p_x, p_y)$ the principal point. The 3D box dimensions $d$ are derived from $[\bar{w}, \bar{h}, \bar{l}]$, which are log-normalized with category-specific pre-computed means $(w_0, h_0, l_0)$ for width, height and length respectively, and are arranged into a diagonal matrix via $d(\bar{w}, \bar{h}, \bar{l}) = \mathrm{diag}\!\left(\exp(\bar{w})\,w_0,\ \exp(\bar{h})\,h_0,\ \exp(\bar{l})\,l_0\right)$ (2) Finally, we derive the object's pose $R(\mathbf{p})$ as a $3 \times 3$ rotation matrix based on a 6D parameterization (2 directional vectors) of $\mathbf{p}$ following [86], which is converted from allocentric to egocentric rotation similar to [39], defined formally in the Appendix. The final 3D cuboid, defined by 8 corners, is $B_{3D}(u, v, z, \bar{w}, \bar{h}, \bar{l}, \mathbf{p}) = R(\mathbf{p})\, d(\bar{w}, \bar{h}, \bar{l})$ |
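Equations (1) and (2) above are simple enough to check numerically. The sketch below decodes a predicted 2D projected center, depth, and log-normalized dimensions into a 3D center and a size matrix; all numeric values are made up for illustration, and this is not the released Cube R-CNN code.

```python
# A sketch of the cuboid parameterization in Eqs. (1)-(2); variable names follow the text.
import numpy as np

def project_center(u, v, z, roi, fx, fy, px, py):
    """Eq. (1): lift the predicted 2D projected center (u, v) and depth z to 3D."""
    rx, ry, rw, rh = roi
    X = z / fx * (rx + u * rw - px)
    Y = z / fy * (ry + v * rh - py)
    return np.array([X, Y, z])

def box_dimensions(w_bar, h_bar, l_bar, w0, h0, l0):
    """Eq. (2): decode log-normalized sizes with category-specific priors."""
    return np.diag([np.exp(w_bar) * w0, np.exp(h_bar) * h0, np.exp(l_bar) * l0])

# Illustrative values only.
center = project_center(u=0.5, v=0.5, z=4.0, roi=(100, 80, 60, 40),
                        fx=600.0, fy=600.0, px=320.0, py=240.0)
dims = box_dimensions(0.1, -0.05, 0.0, w0=1.6, h0=1.5, l0=3.9)
print(center, np.diag(dims))
```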
Feng_OT-Filter_An_Optimal_Transport_Filter_for_Learning_With_Noisy_Labels_CVPR_2023 | Abstract The success of deep learning is largely attributed to the training over clean data. However, data is often coupled with noisy labels in practice. Learning with noisy labels is challenging because the performance of the deep neural networks (DNN) drastically degenerates, due to confirma-tion bias caused by the network memorization over noisy labels. To alleviate that, a recent prominent direction is on sample selection, which retrieves clean data samples from noisy samples, so as to enhance the model’s robust-ness and tolerance to noisy labels. In this paper, we re-vamp the sample selection from the perspective of optimal transport theory and propose a novel method, called the OT-Filter. The OT-Filter provides geometrically meaning-ful distances and preserves distribution patterns to measure the data discrepancy, thus alleviating the confirmation bias. Extensive experiments on benchmarks, such as Clothing1M and ANIMAL-10N, show that the performance of the OT-Filter outperforms its counterparts. Meanwhile, results on benchmarks with synthetic labels, such as CIFAR-10/100, show the superiority of the OT-Filter in handling data la-bels of high noise. | 1. Introduction Deep learning has achieved great success on a flurry of emerging applications, such as [28, 29, 32, 49]. It is be-lieved that the phenomenal achievement of deep learning is largely attributed to accurate labels. However, the in-accuracy or imprecision of labels is inherent in real-world datasets, the so-called noisy label challenge . One way to alleviate that is to collect labels from internet queries over data-level tags, but the performance of deep neural networks (DNN) suffers drastically from the inaccuracy of such la-bels [27,32]. A higher quality of data labels can be obtained by employing human workers, but the seemingly “ground truth annotations” inevitably involve human biases or mis-†Equal Contribution.takes [45,66]. More, the human annotation is expensive and time-consuming, especially for large-scale datasets. There have been many works on handling noisy labels, such as regularization [23] and transition matrix [13, 43]. The regularization approach leverages the regularization bias to overcome the label noise issue. But the regular-ization bias is permanent [62], thus overfitting models to noisy labels. The transition matrix approach assumes that the transition probabilities between clean and noisy labels are fixed and independent of data samples. However, a quality label transition matrix is hard to be estimated, es-pecially when the number of classes is big, making it fall short in handling noisy real-world datasets, such as [60] and [36]. A recent prominent direction is on adopting sam-ple selection [27, 38, 58] for enhancing the label quality, by selecting clean samples from the noisy training dataset. In general, existing literatures in sample selection can be grouped to two categories, co-training networking [27, 62] and criterion-based filtering [31, 34, 57, 58]. The former utilizes the memorization of DNNs and multiple networks (e.g., co-teaching [27] and its variants [38, 62]) to filter label noise with small loss trick, so that a small set of clean samples are used as training examples. 
Letting alone the high training overhead of multiple networks, the disadvantages are two-fold: 1) it may require a priori knowledge of noise rates to select the specified proportion of small-loss samples as clean samples; 2) the small-loss trick is not tolerant to the error accumulation of network training once a clean sample is falsely recognized, the so-called confirmation bias, especially for labels with high noise where clean and noisy samples largely overlap. The latter alleviates the problem by setting a specific criterion. Mostly, existing works [31] [58] adopt Euclidean distances for measuring the similarity between data samples. Distance-based filtering iteratively explores the neighborhood of the feature representation and infers/cleans sample labels by aggregating the information from their neighborhoods. Despite its simplicity, distance-based filtering is insufficient to address noisy labels, especially when the label noise is high, e.g., overlapped label classes. In this paper, we revamp the sample selection from the perspective of optimal transport [56] and propose a novel filtering method, called the OT-Filter. In light of optimal transport, we construct discrete probability measures over the sample features, which lifts the Euclidean space of feature vectors to a probability space. It thus enables a geometric way of measuring the discrepancy between probability measures representing the corresponding sample features. In addition to the distance-based metric in the Euclidean space [31, 58], the OT-Filter also captures the distribution information in the probability space so as to alleviate the confirmation bias. Accordingly, a clean representation can be obtained for each class in the probability space. By optimizing the transport plan from a sample to a clean representation, one can better determine if a sample is clean or noisy, thus improving the quality of sample selection. In general, the merits of the OT-Filter can be summarized as follows. First, it does not require any a priori knowledge about the noise rate of a dataset. Second, it utilizes optimal transport, which provides geometrically meaningful distances to exploit the sample discrepancy while preserving the distribution patterns in the corresponding probability space, making the sample selection of high quality and giving it theoretical support. Third, it can be plugged into existing robust training paradigms, e.g., supervised and semi-supervised robust training. We conduct extensive experiments with a series of synthetic and real datasets to gain insights into our proposals. The results show that the OT-Filter matches the state of the art (SOTA) [38] when the noise rate is low, and dominates SOTA when the noise rate is high. For instance, the OT-Filter achieves about 14% and 12% higher accuracy than SOTA in the presence of a 90% noise rate, on the synthetic datasets CIFAR-10 and CIFAR-100, respectively. Moreover, the OT-Filter outperforms the competitors on the real datasets Clothing1M and ANIMAL-10N. The rest of the paper is organized as follows. We review the existing literature in Section 2. In Section 3, we present preliminaries of optimal transport. We investigate our proposed OT-Filter in Section 4.
Furthermore, we conduct extensive empirical studies in Section 5 and conclude the paper in Section 6. |
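To make the optimal-transport viewpoint above concrete, the following sketch scores a sample by the entropic (Sinkhorn) transport cost between a discrete measure on its features and a measure on a per-class "clean" reference set; the cost matrix, uniform weights, and the idea of thresholding the cost are illustrative assumptions, not the OT-Filter's exact formulation.

```python
# A generic entropic optimal-transport (Sinkhorn) sketch; all names and values are illustrative.
import torch

def sinkhorn_cost(x: torch.Tensor, y: torch.Tensor, eps: float = 0.1, iters: int = 100):
    """Approximate OT cost between uniform measures supported on the rows of x and y."""
    a = torch.full((x.shape[0],), 1.0 / x.shape[0])
    b = torch.full((y.shape[0],), 1.0 / y.shape[0])
    C = torch.cdist(x, y)                      # pairwise Euclidean cost
    K = torch.exp(-C / eps)
    u = torch.ones_like(a)
    for _ in range(iters):                     # Sinkhorn fixed-point iterations
        v = b / (K.t() @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]            # transport plan
    return (P * C).sum()

# Score a sample's features against a per-class "clean" reference set; a large transport
# cost to the labelled class marks the sample as a candidate noisy label.
feats = torch.randn(8, 128)                    # features for one (possibly mislabeled) sample
clean_ref = torch.randn(32, 128)               # reference features for its labelled class
print(sinkhorn_cost(feats, clean_ref))
```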
Cha_Rebalancing_Batch_Normalization_for_Exemplar-Based_Class-Incremental_Learning_CVPR_2023 | Abstract Batch Normalization (BN) and its variants has been ex-tensively studied for neural nets in various computer vision tasks, but relatively little work has been dedicated to studying the effect of BN in continual learning. To that end, we de-velop a new update patch for BN, particularly tailored for the exemplar-based class-incremental learning (CIL). The main issue of BN in CIL is the imbalance of training data between current and past tasks in a mini-batch, which makes the em-pirical mean and variance as well as the learnable affine transformation parameters of BN heavily biased toward the current task — contributing to the forgetting of past tasks. While one of the recent BN variants has been developed for “online” CIL, in which the training is done with a single epoch, we show that their method does not necessarily bring gains for “offline” CIL, in which a model is trained with multiple epochs on the imbalanced training data. The main reason for the ineffectiveness of their method lies in not fully addressing the data imbalance issue, especially in comput-ing the gradients for learning the affine transformation pa-rameters of BN. Accordingly, our new hyperparameter-free variant, dubbed as Task-Balanced BN (TBBN), is proposed to more correctly resolve the imbalance issue by making a horizontally-concatenated task-balanced batch using both reshape and repeat operations during training. Based on our experiments on class incremental learning of CIFAR-100, ImageNet-100, and five dissimilar task datasets, we demonstrate that our TBBN, which works exactly the same as the vanilla BN in the inference time, is easily applicable to most existing exemplar-based offline CIL algorithms and consistently outperforms other BN variants. | 1. Introduction In recent years, continual learning (CL) has been actively studied to efficiently learn a neural network on sequentially *Corresponding author (E-mail: tsmoon@snu.ac.kr )arriving datasets while eliminating the process of re-training from scratch at each arrival of a new dataset [9]. However, since the model is typically trained on a dataset that is heav-ily skewed toward the current task at each step, the resulting neural network often suffers from suboptimal trade-off be-tween stability and plasticity [25] during CL. To overcome this issue, various studies have been focused on addressing the so-called catastrophic forgetting phenomenon [9, 28]. Among different CL settings, the class-incremental learn-ing (CIL) setting where the classifier needs to learn previ-ously unseen classes at each incremental step has recently drawn attention due to its practicality [1, 3, 6, 12, 15, 26, 39, 42, 44]. Most state-of-the-art CIL algorithms maintain a small exemplar memory to store a subset of previously used training data and combine it with the current task dataset to mitigate the forgetting of past knowledge. A key issue of exemplar-based CIL is that the model prediction becomes heavily biased towards more recently learned classes, due to the imbalance between the training data from the current task (that are abundantly available) and past tasks (with lim-ited access through exemplar memory). In response, recently proposed solutions to biased predictions include bias correc-tion [39], unified classifier [15], and separated softmax [1], which greatly improved the overall accuracy of CIL methods across all classes learned so far. 
Despite such progress, relatively little focus has been placed on backbone architectures under CIL for computer vision tasks. Specifically, popular CNN-based models (e.g., ResNet [13]) are mainly used as feature extractors, and those models are equipped with Batch Normalization (BN) [17] by default. However, since BN is designed for single-task training on CNNs, applying BN directly to exemplar-based CIL results in statistics biased toward the current task due to the imbalance between current and past tasks' data in a mini-batch. Recently, [29] pointed out this issue in CIL, dubbed the cross-task normalization effect, and proposed a new normalization scheme called Continual Normalization (CN), which applies Group Normalization (GN) [40] across the channel dimension before running BN across the batch dimension. Therefore, the difference in feature distributions among tasks is essentially removed by GN, and the following BN computes task-balanced mean and variance statistics, which is shown to outperform vanilla BN in online CIL settings. In this paper, we argue that CN only partially resolves the bias issue in exemplar-based CIL. Specifically, we find that the gradients on the affine transformation parameters remain biased towards the current task when using CN. As a result, it leads to inconsistent performance gains in the offline CIL setting, which is considered more practical than online CIL. To this end, we propose a simple yet novel hyperparameter-free normalization layer, dubbed Task-Balanced Batch Normalization (TBBN), which effectively resolves the bias issue. Our method employs adaptive reshape and repeat operations on the mini-batch feature map during training in order to compute task-balanced normalization statistics and gradients for learning the affine transformation parameters. Our method does not require any hyperparameter, as the size inputs for the reshape and repeat operations are determined adaptively. Furthermore, the application of TBBN during testing is identical to vanilla BN, requiring no change to the backbone architecture. Through extensive offline CIL experiments on CIFAR-100, ImageNet-100, and five dissimilar task datasets, we show that a simple replacement of the BN layers in the backbone CNN model with TBBN benefits most state-of-the-art exemplar-based CIL algorithms towards an additional boost in performance. Our analysis shows that the gain of TBBN is consistent across various backbone architectures and datasets, suggesting its potential to become a correct choice for exemplar-based offline CIL algorithms. |
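The core of the task-balancing argument above is that normalization statistics should weight each task equally rather than each sample. The sketch below illustrates that idea by averaging per-task means and variances; the actual TBBN layer realizes this inside BN with adaptive reshape and repeat operations, so this is only an approximation of the mechanism, and the shapes below are illustrative.

```python
# Equal-weight per-task normalization statistics, so the larger current-task portion of an
# imbalanced exemplar mini-batch does not dominate the mean/variance.
import torch

def task_balanced_stats(x: torch.Tensor, task_ids: torch.Tensor):
    """x: (N, C, H, W) feature map; task_ids: (N,) integer task label per sample."""
    means, vars_ = [], []
    for t in task_ids.unique():
        xt = x[task_ids == t]
        means.append(xt.mean(dim=(0, 2, 3)))
        vars_.append(xt.var(dim=(0, 2, 3), unbiased=False))
    mean = torch.stack(means).mean(0)          # equal weight per task, not per sample
    var = torch.stack(vars_).mean(0)
    return mean, var

x = torch.randn(12, 16, 8, 8)
task_ids = torch.tensor([0] * 10 + [1] * 2)    # imbalanced mini-batch: 10 current, 2 exemplar
mean, var = task_balanced_stats(x, task_ids)
x_hat = (x - mean[None, :, None, None]) / torch.sqrt(var[None, :, None, None] + 1e-5)
```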
Duan_RWSC-Fusion_Region-Wise_Style-Controlled_Fusion_Network_for_the_Prohibited_X-Ray_Security_CVPR_2023 | Abstract Automatic prohibited item detection in security inspection X-ray images is necessary for transportation. The abundance and diversity of X-ray security images with prohibited items, termed prohibited X-ray security images, are essential for training the detection model. To address the data insufficiency, we propose a Region-Wise Style-Controlled Fusion (RWSC-Fusion) network, which superimposes prohibited items onto normal X-ray security images to synthesize prohibited X-ray security images. The proposed RWSC-Fusion innovates both the network structure and the loss functions to generate more realistic X-ray security images. Specifically, an RWSC-Fusion module is designed to enable region-wise fusion by controlling the appearance of the overlapping region with novel modulation parameters. In addition, an Edge-Attention (EA) module is proposed to effectively improve the sharpness of the synthetic images. As for the unsupervised loss function, we propose the Luminance loss in Logarithmic form (LL) and the Correlation loss of Saturation Difference (CSD) to optimize the fused X-ray security images in terms of luminance and saturation. We evaluate the authenticity and the training effect of the synthetic X-ray security images on a private dataset and the public SIXray dataset. The results confirm that our synthetic images are reliable enough to augment the prohibited X-ray security images. | 1. Introduction X-ray imagery security inspection is a fundamental part of station/airport screening, for detecting prohibited items in baggage or suitcase images. Recently, computer vision methods, particularly deep learning [15, 33], have brought benefits to prohibited item detection [1, 16, 18, 25, 37]. The performance of these automated detection models relies heavily on a mass of annotated images. However, real X-ray security images are usually arduous and time-consuming to collect, and the occurrence rate of prohibited items is very low. Moreover, very few public X-ray security image datasets contain large amounts of prohibited items. *Corresponding author: xilizju@zju.edu.cn, xiongjp362204@163.com Figure 1. Our RWSC-Fusion allows us to superimpose prohibited items onto baggage images to synthesize prohibited X-ray images. The composite prohibited items are marked with red boxes. Existing datasets are mainly: (1) GDXray: The GDXray dataset [24] contains only three kinds of prohibited items: guns, shurikens, and razor blades. Besides, GDXray only involves grayscale images whose backgrounds are too simple to conform with real color X-ray security images; (2) OPIXray: The OPIXray dataset [36], which is especially designed for cutter detection, contains 8885 synthetic X-ray security images with five kinds of cutters, and most images contain only one cutter; (3) SIXray: The SIXray dataset [26] contains 1,059,231 X-ray security images, but only 8929 images include prohibited items: guns, knives, wrenches, pliers, scissors, and hammers. Given the above, existing public datasets do not meet the requirements for training. To overcome the lack of training samples, traditional offline enhancement strategies such as rotation, re-scaling and mixing are applied to augment the training samples [32, 42]. However, unlike natural images and other X-ray scans, X-ray security images usually involve randomly stacked objects [6, 11, 12, 22].
In addition, according to the imaging principle of X-ray security, objects overlap with each other in a translucent state and appear differently for different materials and thicknesses. These traditional augmentation methods cannot improve the diversity and complexity of the inter-occlusion between prohibited items [31]. Therefore, it is necessary to synthesize realistic X-ray security images, so as to enrich the prohibited items in pose, scale and position. A few researchers have studied directly generating prohibited X-ray security images by learning deep synthesis models. Inspired by the success of GANs in image synthesis [10, 34], Zhao et al. proposed X-ray Image-Synthesis-GAN [41] to generate the prohibited items from a noise region on the background images. Li et al. [17] synthesized X-ray security images from semantic label maps based on GANs. Yang et al. [40] enhanced the training of GANs to learn higher-quality X-ray security images. However, they synthesized only one prohibited item rather than stacked and overlapped ones in one image, which thus is not realistic enough for the complex X-ray security images in a real-world scenario. Isaac-Medina et al. [14] used paired X-ray energy maps (high, low, effective-Z maps) to synthesize pseudo-color images, which however only served as a small amount of testing samples. Bhowmik et al. [3] developed a Synthetically Composited (SC) data augmentation strategy based on the Threat Image Projection (TIP) method [8], to fuse prohibited items with baggage images for generating images with stacked and cluttered prohibited items. However, the SC strategy needs to adjust the parameters for each image to match the various colors of different materials, and thus lacks automation, robustness and versatility. In order to synthesize prohibited X-ray security images automatically, we propose a color X-ray security image fusion model to superimpose prohibited items onto baggage or suitcase images, as shown in Figure 1. In this way, we synthesize prohibited X-ray security images and obtain annotations automatically, thus avoiding the collection of annotated prohibited X-ray security images for training the prohibited item detection model. The experimental results prove the advantage of our fusion model over other fusion methods in the field of X-ray security imaging. In addition, we also compare the prohibited item detection model trained with real and synthetic images. The results verify that our synthetic images are effective in supplementing the prohibited X-ray security images in the downstream detection task. The main contributions of our work are as follows: 1. We propose an unsupervised color X-ray security image fusion model. Due to the imaging particularity, existing fusion loss functions are inapplicable to X-ray security images. We design the Luminance loss in Logarithmic form (LL) and the Correlation loss of Saturation Difference (CSD) based on the principle of X-ray imaging and Threat Image Projection (TIP).
The LL and CSD losses optimize the comprehensive luminance-saturation fusion between the foreground item and the background image. Thus, we extend TIP to the compositing of color X-ray security images. |
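The LL loss above is motivated by the X-ray imaging principle behind TIP: transmittances of overlapping objects multiply, so log-luminances add. The sketch below encodes one plausible reading of that idea as a log-domain luminance target; it is an assumption for illustration, not the paper's exact LL definition, and the CSD saturation term is not modeled here.

```python
# Beer-Lambert-style composition: overlapping transmittances multiply, so a log-domain
# L1 term can compare the fused luminance with the multiplicative target. Illustrative only.
import torch

def tip_luminance_target(lum_fg: torch.Tensor, lum_bg: torch.Tensor) -> torch.Tensor:
    """Compose two transmittance maps (values in (0, 1]) by multiplying them."""
    return lum_fg * lum_bg

def log_luminance_loss(lum_fused, lum_fg, lum_bg, eps: float = 1e-6):
    target = tip_luminance_target(lum_fg, lum_bg)
    return (torch.log(lum_fused + eps) - torch.log(target + eps)).abs().mean()

lum_fg = torch.rand(1, 1, 64, 64).clamp(0.05, 1.0)      # prohibited-item transmittance
lum_bg = torch.rand(1, 1, 64, 64).clamp(0.05, 1.0)      # baggage transmittance
lum_fused = torch.rand(1, 1, 64, 64).clamp(0.05, 1.0)   # network output (placeholder)
print(log_luminance_loss(lum_fused, lum_fg, lum_bg))
```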
Hai_Rigidity-Aware_Detection_for_6D_Object_Pose_Estimation_CVPR_2023 | Abstract Most recent 6D object pose estimation methods first use object detection to obtain 2D bounding boxes before actu-ally regressing the pose. However, the general object de-tection methods they use are ill-suited to handle cluttered scenes, thus producing poor initialization to the subsequent pose network. To address this, we propose a rigidity-aware detection method exploiting the fact that, in 6D pose esti-mation, the target objects are rigid. This lets us introduce an approach to sampling positive object regions from the entire visible object area during training, instead of naively drawing samples from the bounding box center where the object might be occluded. As such, every visible object part can contribute to the final bounding box prediction, yielding better detection robustness. Key to the success of our approach is a visibility map, which we propose to build using a minimum barrier distance between every pixel in the bounding box and the box boundary. Our results on seven challenging 6D pose estimation datasets evidence that our method outperforms general detection frameworks by a large margin. Furthermore, combined with a pose re-gression network, we obtain state-of-the-art pose estimation results on the challenging BOP benchmark. | 1. Introduction Estimating the 6D pose of objects, i.e., their 3D rota-tion and 3D translation with respect to the camera, is a fun-damental computer vision problem with many applications in, e.g., robotics, quality control, and augmented reality. Most recent methods [2, 5, 9, 25, 42, 45] follow a two-stage pipeline: First, they detect the objects, and then estimate their 6D pose from a resized version of the resulting de-tected image patches. While this approach works well in simple scenarios, its performance drops significantly in the presence of cluttered scenes. In particular, and as illustrated in Fig. 1, we observed this to be mainly caused by detection failures. Specifically, most 6D pose estimation methods rely on standard object detection methods [10, 22, 37, 43, 44, 50], which were designed to handle significantly different scenes (a) General detection (b) Detection in 6D pose (c) Baseline detection results (d) Our detection results (e) Baseline pose results (f) Our pose results Figure 1. The challenges of detection in 6D object pose. (a) The general detection scenario (COCO [29]) exhibits small occlusions. (b)The occlusion problem in 6D object pose, however, is much more severe, (c)making the general detection method [44] based on center-oriented sampling unreliable (glue) or fail completely (cat). (d)By contrast, our new detection strategy is effective in these challenging scenarios, (e,f) and provides significantly more robust 2D box initialization for the following 6D regression net-works [15], yielding more accurate pose estimates. than those observed in 6D object pose estimation bench-marks, typically with much smaller occlusions, as shown in Fig. 1(a). Because of these smaller occlusions, standard de-tection methods make the assumption that the regions in the center of the ground-truth bounding boxes depict the object of interest, and thus focus on learning to predict the bound-ing box parameters from samples drawn from these regions only. However, as shown in Fig. 2, this is ill-suited to 6D pose estimation in cluttered scenes, where the center of the objects is often occluded by other objects or scene elements. 
(a) Baseline strategy (b) Our strategy (c) Detection results Figure 2. Detecting rigid objects in cluttered scenes. (a) The standard strategy [50] chooses positive samples (green cells) around the object center, thus suffering from occlusions. (b) Instead, we propose to use a visibility-guided sampling strategy to discard the occluded regions and encourage the network to be supervised by all visible parts. The sampling probability is depicted by different shades of green. (c) Our method (green boxes) yields more accurate detections than the standard strategy (red boxes). To handle this, we propose a detection approach that leverages the property that the target objects in 6D pose estimation are rigid. For such objects, any visible part can provide a reliable estimate of the complete bounding box. We therefore argue that, in contrast with the center-based sampling used by standard object detectors, any, and only, feature vectors extracted from the visible parts should be potential candidates for positive samples during training. In principle, modeling the visibility could be achieved by annotating segmentation masks for all objects. This process, however, is cumbersome, particularly in the presence of occlusions by scene elements, and would limit the scalability of the approach. Instead, we therefore propose to compute a probability of visibility based on a minimum barrier distance between any pixel in a bounding box and the box boundary. We then use this probability to guide the sampling of candidates during training, thus discarding the occluded regions and encouraging the network to be supervised by all visible parts. Furthermore, to leverage the reliability of local predictions from the most visible parts during inference, we collect all candidate local predictions above a confidence threshold and combine them by a simple weighted average, yielding more robust detections. We demonstrate the effectiveness of our method on seven challenging 6D object pose estimation datasets, on which we consistently and significantly outperform all detection baselines. Furthermore, combined with a 6D pose regression network, our approach yields state-of-the-art object pose results.
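The inference-time rule described above (keep local box predictions above a confidence threshold and average them with confidence weights) is straightforward to write down; the shapes, the threshold value, and the fallback below are illustrative choices rather than the paper's exact implementation.

```python
# Confidence-weighted aggregation of local box candidates from visible object parts.
import torch

def aggregate_boxes(boxes: torch.Tensor, scores: torch.Tensor, thresh: float = 0.5):
    """boxes: (K, 4) candidate (x1, y1, x2, y2) predictions; scores: (K,) confidences."""
    keep = scores > thresh
    if keep.sum() == 0:                       # fall back to the single best candidate
        keep = scores == scores.max()
    w = scores[keep]
    return (w[:, None] * boxes[keep]).sum(0) / w.sum()

boxes = torch.tensor([[10., 12., 60., 80.], [11., 10., 62., 82.], [30., 40., 50., 60.]])
scores = torch.tensor([0.9, 0.8, 0.2])        # the low-confidence (occluded) candidate is ignored
print(aggregate_boxes(boxes, scores))
```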
Huang_Clover_Towards_a_Unified_Video-Language_Alignment_and_Fusion_Model_CVPR_2023 | Abstract Building a universal Video-Language model for solving various video understanding tasks ( e.g., text-video retrieval, video question answering) is an open challenge to the ma-chine learning field. Towards this goal, most recent works build the model by stacking uni-modal and cross-modal fea-ture encoders and train it with pair-wise contrastive pre-text tasks. Though offering attractive generality, the resulted models have to compromise between efficiency and perfor-mance. They mostly adopt different architectures to deal with different downstream tasks. We find this is because the pair-wise training cannot well align andfuse features from different modalities. We then introduce Clover —a Corre-lated Video-Language pre-training method—towards a uni-versal Video-Language model for solving multiple video un-derstanding tasks with neither performance nor efficiency compromise. It improves cross-modal feature alignment and fusion via a novel tri-modal alignment pre-training task. Additionally, we propose to enhance the tri-modal alignment via incorporating learning from semantic masked samples and a new pair-wise ranking loss. Clover estab-lishes new state-of-the-arts on multiple downstream tasks, including three retrieval tasks for both zero-shot and fine-tuning settings, and eight video question answering tasks. Codes and pre-trained models will be released at https: //github.com/LeeYN-43/Clover . | 1. Introduction Video-Language pre-training (VidL) aims to learn gen-eralizable multi-modal models from large-scale video-text *Equal Contribution in alphabetical order. †Work done when interning at ByteDance Inc. ‡Corresponding Author.samples so as to better solve various challenging Video-Language understanding tasks, such as text-video retrieval [1, 4, 38, 55] and video question answering [16, 47, 52]. Re-cent studies [9,11,23,24,49,56,58,61] have shown that VidL leads to significant performance improvement and achieves state-of-the-art results on various downstream text-video re-trieval and video question answering (VQA) benchmarks. Though achieving encouraging performance, existing VidL models mostly adopt different architectures to deal with different downstream tasks. For the text-video retrieval tasks, they [2, 10, 11, 13, 31, 39] typically use two individ-ual uni-modal encoders for processing video and text data separately, for the sake of retrieval efficiency. While for video question answering tasks, the models usually adopt the multi-modal joint encoder design to learn the associa-tion and interaction of different modalities. Building a unified model capable of solving various Video-Language tasks is a long-standing challenge for ma-chine learning research. A few recent works [9, 24] attempt to learn a unified VidL model for both tasks, which uses the multi-modal encoder to conduct text-video retrieval. How-ever, the model requires an exhaustive pair-wise compar-ison between the query texts and gallery videos. Given Ntext queries and Mcategory videos, the computation complexity of the multi-modal encoder model would be O(NM), which makes it infeasible for large-scale video-text retrieval applications. Another straightforward solu-tion is to simply combine the uni-modal and multi-modal encoders (Fig. 1 (a)), and perform the retrieval and VQA tasks through the uni-modal and multi-modal encoders re-spectively. Its computation complexity for retrieval tasks is only O(N+M). 
However, the experimental results in Fig. 1(c) show that, without a carefully designed correlating mechanism between these two types of encoders, the simple combination, i.e., COMB, yields compromised performance compared with the models, i.e., IND, individually designed for retrieval and VQA. In this work, we aim to address the above issues and build a unified pre-training model that attains high efficiency and performance simultaneously. We observe: (i) well aligning the features of the video and text from the same data pair is important for text-video matching; (ii) effectively fusing video and text features into unified representations is critical for video-text understanding. However, existing pre-training strategies that rely on either simple supervised or contrastive pre-text tasks hardly achieve promising feature alignment and fusion capability simultaneously. Motivated by these observations, we develop a new VidL method from these two aspects. Specifically, we propose Correlated Video-Language pre-training (Clover), a VidL method that not only unifies Video-Language alignment and fusion, but also makes them mutually boosted. The fused multi-modal representation contains richer context information than the uni-modal representations [11, 35]. As an intermediate modality between video and text, the multi-modal representations are good anchors for cross-modality alignment. Meanwhile, keeping the fused representation closer to the uni-modal representation containing consistent semantic information and away from the inconsistent one will enhance the learning of semantic information in the fused modality. Therefore, we propose the Tri-Modal Alignment (TMA) to get Video-Language alignment and fusion mutually boosted, which takes the alignment between the multi-modal representation and the text/video representations as an auxiliary objective. We note that since the tri-modal alignment is well compatible with the classical pre-training tasks [3, 7, 35], e.g., Masked Language Modeling, its computation overhead is negligible. To help the model maintain fine-grained discriminative capability while improving its generalizability, we further introduce a pair-wise ranking loss that urges the model to be aware of the concepts missing in masked samples compared to the original samples. Extensive experiments are conducted on multiple downstream tasks, including three retrieval tasks with different experimental setups (i.e., zero-shot and fine-tune) and eight video question answering tasks. The results demonstrate that Clover is able to get the cross-modal fusion and alignment capability mutually improved, and consistently outperforms current SOTAs on various downstream tasks. It achieves an average performance improvement of 4.9% and 8.7% in Recall@10 score on the zero-shot and fine-tune settings of the three downstream retrieval datasets, while the average accuracy improvement over current SOTAs is 2.3% on the eight video question answering datasets.
In summary, we make the following contributions: (1) we introduce Clover, a pre-training method achieving a unified Video-Language alignment and fusion model that can be easily transferred to various downstream video understanding tasks while attaining both high efficiency and performance; (2) we propose a novel tri-modal alignment pre-training task, which correlates the uni-modal encoders and the multi-modal encoder to get them mutually boosted. |
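A generic way to realize the tri-modal alignment idea above is to add contrastive terms that pull the fused video-text embedding toward both uni-modal embeddings of the same pair. The sketch below uses a plain symmetric InfoNCE loss with stand-in embeddings; Clover's full TMA objective (with semantic masked samples and the pair-wise ranking loss) is more involved.

```python
# InfoNCE-style alignment between fused, video, and text embeddings; all encoders are stand-ins.
import torch
import torch.nn.functional as F

def info_nce(a: torch.Tensor, b: torch.Tensor, temp: float = 0.07) -> torch.Tensor:
    """Symmetric contrastive loss over a batch of paired embeddings."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / temp
    targets = torch.arange(a.shape[0])
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

def tri_modal_alignment(video_emb, text_emb, fused_emb):
    # Align the fused representation with each uni-modal one, plus the usual video-text term.
    return (info_nce(fused_emb, video_emb)
            + info_nce(fused_emb, text_emb)
            + info_nce(video_emb, text_emb))

v, t, f = torch.randn(8, 256), torch.randn(8, 256), torch.randn(8, 256)
print(tri_modal_alignment(v, t, f))
```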
Assran_Self-Supervised_Learning_From_Images_With_a_Joint-Embedding_Predictive_Architecture_CVPR_2023 | Abstract This paper demonstrates an approach for learning highly semantic image representations without relying on hand-crafted data-augmentations. We introduce the Image-based Joint-Embedding Predictive Architecture (I-JEPA), a non-generative approach for self-supervised learning from images. The idea behind I-JEPA is simple: from a single context block, predict the representations of various target blocks in the same image. A core design choice to guide I-JEPA towards producing semantic representations is the masking strategy; specifically, it is crucial to (a) sample tar-get blocks with sufficiently large scale (semantic), and to (b) use a sufficiently informative (spatially distributed) context block. Empirically, when combined with Vision Transform-ers, we find I-JEPA to be highly scalable. For instance, we train a ViT-Huge/14 on ImageNet using 16 A100 GPUs in under 72 hours to achieve strong downstream performance across a wide range of tasks, from linear classification to object counting and depth prediction. | 1. Introduction In computer vision, there are two common families of approaches for self-supervised learning from images: invariance-based methods [ 9,16,17,23,34,36,71] and gen-erative methods [ 7,27,35,56]. Invariance-based pretraining methods optimize an en-coder to produce similar embeddings for two or more views of the same image [ 14,19], with image views typically constructed using a set of hand-crafted data augmentations, such as random scaling, cropping, and color jittering [ 19], amongst others [ 34]. These pretraining methods can pro-duce representations of a high semantic level [ 3,17], but they also introduce strong biases that may be detrimental for certain downstream tasks or even for pretraining tasks with different data distributions [ 1]. Often, it is unclear *massran@meta.com10310410510676777879808182I-JEPAViT-H/14(300ep)I-JEPAViT-H/16448(300ep) CAEViT-L/16(1600ep)MAEViT-H/14(1600ep)data2vecViT-L/16(1600ep)Pretraining GPU HoursTop1(%)ImageNet-1K Linear Evaluation vs GPU Hours Figure 1. ImageNet Linear Evaluation . The I-JEPA method learns semantic image representations without using any view data augmentations during pretraining. By predicting in representation space, I-JEPA produces semantic representations while using less compute than previous methods. how to generalize these biases for tasks requiring differ-ent levels of abstraction. For example, image classification and instance segmentation do not require the same invari-ances [ 10]. Additionally, it is not straightforward to gen-eralize these image-specific augmentations to other modal-ities such as audio. Cognitive learning theories have suggested that a driv-ing mechanism behind representation learning in biologi-cal systems is the adaptation of an internal model to pre-dict sensory input responses [ 30,58]. This idea is at the core of self-supervised generative methods, which remove or corrupt portions of the input and learn to predict the cor-rupted content [ 8,35,56,65,66,69]. In particular, mask-denoising approaches learn representations by reconstruct-ing randomly masked patches from an input, either at the pixel or token level. Masked pretraining tasks require less This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. 
Figure 2. Common architectures for self-supervised learning, in which the system learns to capture the relationships between its inputs. The objective is to assign a high energy (large scalar value) to incompatible inputs, and to assign a low energy (low scalar value) to compatible inputs. (a) Joint-Embedding Architectures learn to output similar embeddings for compatible inputs x, y and dissimilar embeddings for incompatible inputs. (b) Generative Architectures learn to directly reconstruct a signal y from a compatible signal x, using a decoder network that is conditioned on additional (possibly latent) variables z to facilitate reconstruction. (c) Joint-Embedding Predictive Architectures learn to predict the embeddings of a signal y from a compatible signal x, using a predictor network that is conditioned on additional (possibly latent) variables z to facilitate prediction. prior knowledge than view-invariance approaches and easily generalize beyond the image modality [7]. However, the resulting representations are typically of a lower semantic level and underperform invariance-based pretraining in off-the-shelf evaluations (e.g., linear probing) and in transfer settings with limited supervision for semantic classification tasks [3]. Consequently, a more involved adaptation mechanism (e.g., end-to-end fine-tuning) is required to reap the full advantage of these methods. In this work, we explore how to improve the semantic level of self-supervised representations without using extra prior knowledge encoded through image transformations. To that end, we introduce a joint-embedding predictive architecture [47] for images (I-JEPA). An illustration of the method is provided in Figure 3. The idea behind I-JEPA is to predict missing information in an abstract representation space; e.g., given a single context block, predict the representations of various target blocks in the same image, where target representations are computed by a learned target-encoder network. Compared to generative methods that predict in pixel/token space, I-JEPA makes use of abstract prediction targets for which unnecessary pixel-level details are potentially eliminated, thereby leading the model to learn more semantic features. Another core design choice to guide I-JEPA towards producing semantic representations is the proposed multi-block masking strategy. Specifically, we demonstrate the importance of predicting sufficiently large target blocks in the image, using an informative (spatially distributed) context block. Through an extensive empirical evaluation, we demonstrate that: •I-JEPA learns strong off-the-shelf representations without the use of hand-crafted view augmentations (cf. Fig. 1). I-JEPA outperforms pixel-reconstruction methods such as MAE [35] on ImageNet-1K linear probing, semi-supervised 1% ImageNet-1K, and semantic transfer tasks. •I-JEPA is competitive with view-invariant pretraining approaches on semantic tasks and achieves better performance on low-level vision tasks such as object counting and depth prediction (Sections 5 and 6). By using a simpler model with less rigid inductive bias, I-JEPA is applicable to a wider set of tasks.
•I-JEPA is also scalable and efficient (Section 7). Pre-training a ViT-H/14 on ImageNet requires less than 1200 GPU hours, which is over 2.5× faster than a ViT-S/16 pretrained with iBOT [75] and over 10× more efficient than a ViT-H/14 pretrained with MAE. Predicting in representation space significantly reduces the total computation needed for self-supervised pretraining. |
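To make the latent-prediction objective above concrete, the following is a minimal, hypothetical sketch of a JEPA-style loss: a context encoder embeds visible patches, a separate target encoder (frozen here; in practice typically an exponential moving average of the context encoder) embeds the target blocks, and a predictor regresses the target embeddings from the pooled context conditioned on target positions. The module names, MLP encoders, and pooling are illustrative simplifications, not the paper's ViT-based implementation.

```python
# Hypothetical sketch of a JEPA-style objective (simplified; not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEncoder(nn.Module):
    def __init__(self, patch_dim=768, embed_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(patch_dim, embed_dim), nn.GELU(),
                                 nn.Linear(embed_dim, embed_dim))

    def forward(self, patches):              # (B, N, patch_dim)
        return self.net(patches)             # (B, N, embed_dim)

context_enc = PatchEncoder()
target_enc = PatchEncoder()                  # in practice an EMA copy of context_enc
predictor = nn.Sequential(nn.Linear(256, 256), nn.GELU(), nn.Linear(256, 256))
pos_embed = nn.Parameter(torch.randn(196, 256) * 0.02)   # one positional code per patch

def jepa_loss(patches, context_idx, target_idx):
    """Predict target-block embeddings from a context block, entirely in latent space."""
    with torch.no_grad():                                 # targets come from the frozen/EMA encoder
        tgt = target_enc(patches)[:, target_idx]          # (B, Nt, D)
    ctx = context_enc(patches)[:, context_idx].mean(1, keepdim=True)   # pooled context, (B, 1, D)
    queries = ctx + pos_embed[target_idx].unsqueeze(0)    # condition the predictor on target positions
    return F.mse_loss(predictor(queries), tgt)            # regression loss in representation space

patches = torch.randn(4, 196, 768)                        # 4 images, 14x14 patches, flattened
loss = jepa_loss(patches, torch.arange(0, 98), torch.arange(120, 140))
loss.backward()
```

The key design choice this illustrates is that the regression target lives in the target encoder's representation space rather than in pixel space, so pixel-level detail never has to be reconstructed.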
Jiang_A2J-Transformer_Anchor-to-Joint_Transformer_Network_for_3D_Interacting_Hand_Pose_Estimation_CVPR_2023 | Abstract 3D interacting hand pose estimation from a single RGB image is a challenging task, due to serious self-occlusion and inter-occlusion towards hands, confusing similar ap-pearance patterns between 2 hands, ill-posed joint posi-tion mapping from 2D to 3D, etc.. To address these, we propose to extend A2J-the state-of-the-art depth-based 3D single hand pose estimation method-to RGB domain under interacting hand condition. Our key idea is to equip A2J with strong local-global aware ability to well capture in-teracting hands’ local fine details and global articulated clues among joints jointly. To this end, A2J is evolved un-der Transformer’s non-local encoding-decoding framework to build A2J-Transformer. It holds 3 main advantages over A2J. First, self-attention across local anchor points is built to make them global spatial context aware to better cap-ture joints’ articulation clues for resisting occlusion. Sec-ondly, each anchor point is regarded as learnable query with adaptive feature learning for facilitating pattern fitting capacity, instead of having the same local representation with the others. Last but not least, anchor point locates in 3D space instead of 2D as in A2J, to leverage 3D pose prediction. Experiments on challenging InterHand 2.6M demonstrate that, A2J-Transformer can achieve state-of-the-art model-free performance (3.38mm MPJPE advance-ment in 2-hand case) and can also be applied to depth domain with strong generalization. The code is avaliable athttps://github.com/ChanglongJiangGit/ A2J-Transformer . †Yang Xiao is corresponding author(Yang Xiao@hust.edu.cn). Predicted 3D Joint Interacting 3D Anchors Predicted OffsetsInput RGB Image Output 3D Joints Figure 1. The main idea of A2J-Transformer. 3D anchors are uniformly set and act as local regressors to predict each hand joint. Meanwhile, they are also used as queries, and the interaction among them is established to acquire global context. | 1. Introduction 3D interacting hand pose estimation from a single RGB image can be widely applied to the fields of virtual reality, augmented reality, human-computer interaction, etc.. [32, 34, 37]. Although the paid efforts, it still remains as a challenging research task due to the main issues of serious self-occlusion and inter-occlusion towards hands [7, 12, 16, 22, 27], confusing similar appearance patterns between 2 hands [12, 19, 27], and the ill-posed characteristics of esti-mating 3D hand pose via monocular RGB image [7,16,28]. The existing methods can be generally categorized into model-based [1,2,21,29,30,35,39,41,48] and model-free [5, 7,12,17,19,22,26,27,29,43] groups. Due to model’s strong prior knowledge on hands, the former paradigm is over-all of more promising performance. However, model-based methods generally require complex personalized model cal-ibration, which is sensitive to initialization and susceptible This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 8846 to trap in local minima [11, 12]. This is actually not pre-ferred by the practical applications. Accordingly, we focus on model-free manner in regression way. 
The key idea is that, for effective 3D interacting hand pose estimation the predictor should be well aware of joints’ local fine details and global articulated context simultaneously to resist oc-clusion and confusing appearance pattern issues. To this end, we propose to extend the SOTA depth-based single hand 3D pose estimation method A2J [43] to 3D interact-ing hand pose estimation task from a single RGB image. Although A2J’s superiority with ensemble local regres-sion, intuitively applying it to our task cannot ensure promising performance, since it generally suffers from 3 main defects as below. First, the local anchor points for predicting offsets between them and joints lack interaction among each other. This leads to the fact that, joints’ global articulated clues cannot be well captured to resist occlusion. Secondly, the anchor points within the certain spatial range share the same single-scale local convolution feature, which essentially limits the discrimination capacity on confusing visual patterns towards the interacting hands. Last, anchor points locate within 2D plane, which is not optimal for al-leviating the ill-posed 2D to 3D lifting problem with single RGB image. To address these, we propose to extend A2J un-der Transformer’s non-local encoding-decoding framework to build A2J-Transformer, with anchor point-wise adaptive multi-scale feature learning and 3D anchor point setup . Particularly, the anchor point within A2J is evolved as the learnable query under Transformer framework. Each query will predict its position offsets to all the joints of the 2 hands. Joint’s position is finally estimated via fusing the prediction results from all queries in a linear weight-ing way. That is to say, joint’s position is determined by all the queries located over the whole image of global spa-tial perspective. Meanwhile, the setting query number is flexible, which is not strictly constrained by joint number as in [12]. Thanks to Transformer’s non-local self-attention mechanism [40], during feature encoding stage the queries can interact with each other to capture joints’ global ar-ticulated clues, which is essentially beneficial for resisting self-occlusion and inter-occlusion. Concerning the specific query, adaptive local feature learning will be conducted to extract query-wise multi-scale convolutional feature based Resnet-50 [14]. Compared with A2J’s feature sharing strat-egy among the neighboring anchor points, our proposition can essentially facilitate query’s pattern fitting capacity both for accurate joint localization and joint’s hand identity ver-ification. In summary, each query will be of strong local-global spatial awareness ability to better fit interacting hand appearance pattern. Meanwhile to facilitate RGB-based 2D to 3D hand pose lifting problem, the queries will be set within the 3D space instead of 2D counterpart as in A2J [43]. In this way, each query can directly predict its3D position offset between the joints, which cannot be ac-quired by A2J. Overall, A2J-Transformer’s main research idea is shown in Fig. 1. Compared with the most recently proposed model-free method [12] that also addresses 3D interacting hand pose estimation using Transformer, our proposition still takes some essential advantages. First, joint-like keypoint detec-tion is not required. Secondly, query number is not strictly constrained to be equal to joint number to facilitate pattern fitting capacity. Thirdly, our query locates within 3D space instead of 2D counterpart. 
The experiments on the challenging InterHand 2.6M [29] dataset verify that our approach can achieve state-of-the-art model-free performance (3.38mm MPJPE advancement in the 2-hand case) for 3D interacting hand pose estimation from a single RGB image. And, it significantly outperforms A2J by large margins (i.e., over 5mm on MPJPE). In addition, experiments on the HANDS2017 dataset [46] demonstrate that A2J-Transformer can also be applied to the depth domain with promising performance. Overall, the main contributions of this paper include: •For the first time, we extend A2J from the depth domain to the RGB domain to address 3D interacting hand pose estimation from a single RGB image with promising performance; •A2J's anchor point is evolved with Transformer's non-local self-attention mechanism with adaptive local feature learning, to make it aware of joints' local fine details and global articulated context simultaneously; •The anchor point is proposed to locate within 3D space to facilitate the ill-posed 2D-to-3D hand pose lifting problem based on monocular RGB information. |
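As a concrete illustration of the anchor-to-joint regression described above (anchors acting as local regressors whose predictions are fused by linear weighting), the snippet below shows one plausible way to aggregate per-anchor offsets and responses into 3D joint estimates. Tensor shapes, the anchor count, and the joint count are assumptions for illustration; the networks that produce the offsets and weights are omitted.

```python
# Illustrative sketch of anchor-to-joint weighted fusion (shapes and counts are assumptions).
import torch

def aggregate_joints(anchors, offsets, weights):
    """
    anchors: (A, 3)        fixed 3D anchor positions
    offsets: (B, A, J, 3)  per-anchor offset to each of J joints
    weights: (B, A, J)     per-anchor, per-joint response (pre-softmax)
    returns: (B, J, 3)     estimated 3D joint positions
    """
    w = torch.softmax(weights, dim=1).unsqueeze(-1)       # normalize responses over anchors
    candidates = anchors[None, :, None, :] + offsets      # each anchor's estimate, (B, A, J, 3)
    return (w * candidates).sum(dim=1)                    # linear weighted fusion over anchors

B, A, J = 2, 256, 42                                      # e.g., 42 joints for two hands
anchors = torch.rand(A, 3)
offsets = torch.randn(B, A, J, 3) * 0.05
weights = torch.randn(B, A, J)
joints = aggregate_joints(anchors, offsets, weights)      # (2, 42, 3)
```

Because every joint estimate is a weighted combination over all anchors, anchors far from a joint can still contribute context, which is the property the paper exploits to resist occlusion.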
Jiang_MixPHM_Redundancy-Aware_Parameter-Efficient_Tuning_for_Low-Resource_Visual_Question_Answering_CVPR_2023 | Abstract Recently, finetuning pretrained vision-language models (VLMs) has been a prevailing paradigm for achieving state-of-the-art performance in VQA. However, as VLMs scale, it becomes computationally expensive, storage inefficient, and prone to overfitting when tuning full model parame-ters for a specific task in low-resource settings. Although current parameter-efficient tuning methods dramatically re-duce the number of tunable parameters, there still exists a significant performance gap with full finetuning. In this pa-per, we propose MixPHM , a redundancy-aware parameter-efficient tuning method that outperforms full finetuning in low-resource VQA. Specifically, MixPHM is a lightweight module implemented by multiple PHM-experts in a mixture-of-experts manner. To reduce parameter redundancy, we reparameterize expert weights in a low-rank subspace and share part of the weights inside and across MixPHM. More-over, based on our quantitative analysis of representa-tion redundancy, we propose Redundancy Regularization , which facilitates MixPHM to reduce task-irrelevant redun-dancy while promoting task-relevant correlation. Experi-ments conducted on VQA v2, GQA, and OK-VQA with dif-ferent low-resource settings show that our MixPHM out-performs state-of-the-art parameter-efficient methods and is the only one consistently surpassing full finetuning. | 1. Introduction Adapting pretrained vision-language models (VLMs) [4, 5, 24, 29, 30, 50, 57] to the downstream VQA task [1] in a finetuning manner has emerged as a dominant paradigm to achieve state-of-the-art performance. As the scale of VLMs continues to grow, finetuning the full model with millions or billions of parameters causes a substantial rise in com-putation and storage costs, as well as exposing the over-fitting (poor performance) issue in low-resource learning. Parameter-efficient tuning methods [15, 16, 23, 38, 51, 56], *Corresponding author. 
Figure 1. Comparison between parameter-efficient methods. In a low-resource setting (i.e., with 64 training samples), we show the average score across five seeds on VQA v2 (y-axis) and the percentage of tunable parameters w.r.t. pretrained VL-T5 (x-axis). updating only a tiny number of original parameters of pre-trained models or the newly-added lightweight modules, are thus proposed to handle such challenges. However, as illustrated in Figure 1, the aforementioned parameter-efficient tuning methods substantially reduce the number of tunable parameters, but their performance still lags behind full finetuning. Among them, the adapter-based methods (Houlsby [15], Pfeiffer [38], Compacter [23], and AdaMix [51]) are more storage-efficient, as they only store newly-added modules instead of a copy of entire VLMs, and they allow more flexible parameter sharing [43].
In particular, AdaMix enhances the capacity of adapters with a mixture-of-experts (MoE) [42] architecture and achieves comparable performance to full finetuning while slightly in-creasing the number of tunable parameters. In this paper, we build upon adapter-based methods to in-vestigate more parameter-efficient tuning methods that can outperform full finetuning on low-resource VQA. Specif-ically, when adapting pretrained VLMs to the given task, we consider two improvements: (i)Reducing parameter re-dundancy while maintaining adapter capacity . However, an excessive reduction of tunable parameters can lead to un-derfitting, preventing adapters from learning enough task-relevant information [23]. Therefore, it is crucial to strike This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 24203 a compromise between parameter efficiency and capacity. (ii)Reducing task-irrelevant redundancy while promoting task-relevant correlation in representations . Practically, through residual connection, adapters integrate task-specific information learned from a target dataset and prior knowl-edge already implied in pretrained VLMs. However, recent works [21, 33, 48] have suggested that pretrained models inevitably contain redundant and irrelevant information for target tasks, resulting in a statistically spurious correlation between representations and labels, thereby hindering per-formance and generalization [46, 49]. To improve their ef-fectiveness, we thus expect adapters to learn as much task-relevant information as possible while discarding the task-irrelevant information from versatile pretrained VLMs. To this end, we propose MixPHM , a redundancy-aware parameter-efficient tuning method, which can efficiently re-duce the tunable parameters and task-irrelevant redundancy, and promote task-relevant correlation in representations. MixPHM is implemented with multiple PHM-experts in a MoE fashion. To reduce (i)parameter redundancy in Mix-PHM, we first decompose and reparameterize the expert weights into a low-rank subspace. Afterwards, we further reduce the number of parameters and transfer information with global and local weight sharing. To achieve the im-provement (ii), we first quantify representation redundancy in adapter. The result shows that representations of adapters are redundant with representations of pretrained VLMs but exhibit limited correlation with the final task-used repre-sentations. Inspired by this insight, we then propose Re-dundancy Regularization . In MixPHM, the regularizer re-duces task-irrelevant redundancy via decorrelating the sim-ilarly matrix between representations learned by MixPHM and representations obtained by pretrained VLMs. Simulta-neously, it promotes task-relevant correlation by maximiz-ing the mutual information between the learned representa-tions and the final task-used representations. We conduct extensive experiments on three datasets, i.e., VQA v2 [11], GQA [19], and OK-VQA [36]. The pro-posed MixPHM consistently outperforms full finetuning and state-of-the-art parameter-efficient tuning methods. To gain more insights, we discuss the generalizability of our method and the effectiveness of its key components. 
Our contributions are summarized as follows: (1) We propose MixPHM, a redundancy-aware parameter-efficient tuning method that outperforms full finetuning in adapting pre-trained VLMs to low-resource VQA. (2) We quantitatively analyze representation redundancy and propose redundancy regularization, which can efficiently reduce task-irrelevant redundancy while prompting task-relevant correlation. (3) Extensive experiments show that MixPHM achieves a bet-ter trade-off between performance and parameter efficiency, and a significant performance improvement over current parameter-efficient tuning methods.2. Related Work Vision-Langauge Pretraining. Vision-language pretrain-ing [5,8,18,20,24,30,45,60,62] aims to learn task-agnostic multimodal representations for improving the performance of downstream tasks in a finetuning fashion. Recently, a line of research [4, 17, 29, 30, 50] has been devoted to lever-aging encoder-decoder frameworks and generative model-ing objectives to unify architectures and objectives between pretraining and finetuning. VLMs with an encoder-decoder architecture generalize better. In this paper, we explore how to better adapt them to low-resource VQA [1]. Parameter-Efficient Tuning. Finetuning large-scale pre-trained VLMs on downstream datasets has become one mainstream paradigm for vision-language tasks. However, finetuning the full model consisting of millions of parame-ters is time-consuming and resource-intensive. Parameter-efficient tuning [12, 34, 35, 41, 55, 56, 59] vicariously tunes lightweight trainable parameters while keeping (most) pre-trained parameters frozen, which has shown great success in NLP tasks. According to whether new trainable parameters are introduced, these methods can be roughly categorized into two groups: (1) tuning partial parameters of pretrained models, such as BitFit [56] and FISH Mask [44], (2) tuning additional parameters, such as prompt (prefix)-tuning [27, 31], adapter [15, 38], and low-rank methods [16, 23]. Motivated by the success in NLP, some works [32,43,61] have begun to introduce parameter-efficient methods to tune pretrained VLMs for vision-language tasks. Specifically, Linet al. [32] investigate action-level prompts for vision-language navigation. VL-Adapter [43] extends adapters to transfer VLMs for various vision-language tasks. Hy-perPELT [61] is a unified parameter-efficient framework for vision-language tasks, incorporating adapter and prefix-tuning. In a |
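For readers unfamiliar with PHM layers, the sketch below shows the basic building block that the PHM-experts described above rely on: a linear map whose weight is a sum of Kronecker products, which shares structure across blocks and cuts the tunable parameter count by roughly a factor of n. This is a generic, hypothetical implementation; MixPHM additionally reparameterizes expert weights in a low-rank subspace and shares weights inside and across modules, which is not shown here.

```python
# Generic PHM (parameterized hypercomplex multiplication) linear layer; a sketch only.
import torch
import torch.nn as nn

class PHMLinear(nn.Module):
    def __init__(self, in_features, out_features, n=4):
        super().__init__()
        assert in_features % n == 0 and out_features % n == 0
        self.n = n
        self.A = nn.Parameter(torch.randn(n, n, n) * 0.1)                      # small "rule" matrices
        self.B = nn.Parameter(torch.randn(n, in_features // n, out_features // n) * 0.1)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # Weight is assembled as W = sum_i kron(A_i, B_i) -> (in_features, out_features)
        W = sum(torch.kron(self.A[i], self.B[i]) for i in range(self.n))
        return x @ W + self.bias

layer = PHMLinear(768, 96, n=4)         # e.g., a down-projection inside an adapter
out = layer(torch.randn(2, 10, 768))    # (2, 10, 96)
```

Compared with a dense 768x96 projection, the parameters here are only the small A and B factors, which is what makes adapters built from such layers parameter-efficient.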
Hu_A_Dynamic_Multi-Scale_Voxel_Flow_Network_for_Video_Prediction_CVPR_2023 | Abstract The performance of video prediction has been greatly boosted by advanced deep neural networks. However, most of the current methods suffer from large model sizes and require extra inputs, e.g., semantic/depth maps, for promis-ing performance. For efficiency consideration, in this pa-per, we propose a Dynamic Multi-scale Voxel Flow Net-work (DMVFN) to achieve better video prediction perfor-mance at lower computational costs with only RGB images, than previous methods. The core of our DMVFN is a dif-ferentiable routing module that can effectively perceive the motion scales of video frames. Once trained, our DMVFN selects adaptive sub-networks for different inputs at the in-ference stage. Experiments on several benchmarks demon-strate that our DMVFN is an order of magnitude faster than Deep Voxel Flow [35] and surpasses the state-of-the-art iterative-based OPT [63] on generated image quality. | 1. Introduction Video prediction aims to predict future video frames from the current ones. The task potentially benefits the study on representation learning [40] and downstream fore-casting tasks such as human motion prediction [39], au-tonomous driving [6], and climate change [48], etc. Dur-ing the last decade, video prediction has been increasingly studied in both academia and industry community [5, 7]. Video prediction is challenging because of the diverse and complex motion patterns in the wild, in which accurate motion estimation plays a crucial role [35, 37, 58]. Early methods [37, 58] along this direction mainly utilize recur-rent neural networks [19] to capture temporal motion infor-mation for video prediction. To achieve robust long-term prediction, the works of [41, 59, 62] additionally exploit the semantic or instance segmentation maps of video frames for semantically coherent motion estimation in complex scenes. *Corresponding authors. DMVFN (3.5M)DMVFN w/o routing (3.5M) PredNet (8.2M)DVF (PyTorch, 3.8M)Seg2vid (202.1M)MCNET (14.1M)Vid2vid (182.5M)FVS (82.3M)OPT (16.0M)CorrWise (53.4M)Figure 1. Average MS-SSIM and GFLOPs of different video prediction methods on Cityscapes [9]. The parameter amounts are provided in brackets. DMVFN outperforms previous methods in terms of image quality, parameter amount, and GFLOPs. However, the semantic or instance maps may not always be available in practical scenarios, which limits the application scope of these video prediction methods [41,59,62]. To im-prove the prediction capability while avoiding extra inputs, the method of OPT [63] utilizes only RGB images to esti-mate the optical flow of video motions in an optimization manner with impressive performance. However, its infer-ence speed is largely bogged down mainly by the computa-tional costs of pre-trained optical flow model [54] and frame interpolation model [22] used in the iterative generation. The motions of different objects between two adjacent frames are usually of different scales. This is especially ev-ident in high-resolution videos with meticulous details [49]. The spatial resolution is also of huge differences in real-world video prediction applications. To this end, it is es-sential yet challenging to develop a single model for multi-scale motion estimation. An early attempt is to extract This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. 
multi-scale motion cues in different receptive fields by employing the encoder-decoder architecture [35], but in practice it is not flexible enough to deal with complex motions. In this paper, we propose a Dynamic Multi-scale Voxel Flow Network (DMVFN) to explicitly model the complex motion cues of diverse scales between adjacent video frames by dynamic optical flow estimation. Our DMVFN consists of several Multi-scale Voxel Flow Blocks (MVFBs), which are stacked in a sequential manner. On top of MVFBs, a light-weight Routing Module is proposed to adaptively generate a routing vector according to the input frames, and to dynamically select a sub-network for efficient future frame prediction. We conduct experiments on four benchmark datasets, including Cityscapes [9], KITTI [12], DAVIS17 [43], and Vimeo-Test [69], to demonstrate the comprehensive advantages of our DMVFN over representative video prediction methods in terms of visual quality, parameter amount, and computational efficiency measured by floating point operations (FLOPs). A glimpse of comparison results by different methods is provided in Figure 1. One can see that our DMVFN achieves much better performance in terms of accuracy and efficiency on the Cityscapes [9] dataset. Extensive ablation studies validate the effectiveness of the components in our DMVFN for video prediction. In summary, our contributions are mainly three-fold: • We design a light-weight DMVFN to accurately predict future frames with only RGB frames as inputs. Our DMVFN consists of new MVFB blocks that can model different motion scales in real-world videos. • We propose an effective Routing Module to dynamically select a suitable sub-network according to the input frames. The proposed Routing Module is end-to-end trained along with our main network DMVFN. • Experiments on four benchmarks show that our DMVFN achieves state-of-the-art results while being an order of magnitude faster than previous methods. |
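The routing idea described above can be pictured with the following toy sketch: a lightweight routing head looks at the input and emits one gate per block, and each block's contribution is scaled by its gate (at inference, low-gate blocks could simply be skipped). This is an illustrative stand-in, not DMVFN's actual MVFB blocks or its differentiable routing module.

```python
# Toy sketch of input-conditioned routing over a block stack (illustrative only).
import torch
import torch.nn as nn

class RoutedStack(nn.Module):
    def __init__(self, channels=16, num_blocks=5):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
            for _ in range(num_blocks))
        self.router = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                    nn.Linear(channels, num_blocks))

    def forward(self, x):
        gates = torch.sigmoid(self.router(x))         # (B, num_blocks): one gate per block
        out = x
        for i, block in enumerate(self.blocks):
            g = gates[:, i].view(-1, 1, 1, 1)
            out = out + g * block(out)                # soft gating; a block with gate ~ 0 is effectively skipped
        return out, gates

net = RoutedStack()
encoded_frames = torch.randn(2, 16, 64, 64)           # stand-in for encoded input frames
pred, gates = net(encoded_frames)
```

Soft gating keeps the routing decision differentiable during training, while a hard threshold on the gates at test time is what yields the actual compute savings.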
Huang_SemiCVT_Semi-Supervised_Convolutional_Vision_Transformer_for_Semantic_Segmentation_CVPR_2023 | Abstract Semi-supervised learning improves data efficiency of deep models by leveraging unlabeled samples to alleviate the reliance on a large set of labeled samples. These suc-cesses concentrate on the pixel-wise consistency by using convolutional neural networks (CNNs) but fail to address both global learning capability and class-level features for unlabeled data. Recent works raise a new trend that Trans-former achieves superior performance on the entire feature map in various tasks. In this paper, we unify the current dominant Mean-Teacher approaches by reconciling intra-model and inter-model properties for semi-supervised seg-mentation to produce a novel algorithm, SemiCVT, that absorbs the quintessence of CNNs and Transformer in a comprehensive way. Specifically, we first design a paral-lel CNN-Transformer architecture (CVT) with introducing an intra-model local-global interaction schema (LGI) in Fourier domain for full integration. The inter-model class-wise consistency is further presented to complement the class-level statistics of CNNs and Transformer in a cross-teaching manner. Extensive empirical evidence shows that SemiCVT yields consistent improvements over the state-of-the-art methods in two public benchmarks. | 1. Introduction Semantic segmentation [4, 25, 41, 44] is a foundational problem in computer vision and has attracted tremendous interests for assigning pixel-level semantic labels in an im-age. Despite remarkable successes of convolutional neural network (CNN), collecting a large quantity of pixel-level annotations is quite expensive and time-consuming. Re-cently, semi-supervised learning (SSL) provides an alterna-tive way to infer labels by learning from a small number of images annotated to fully explore those unlabeled data. The main stream of semi-supervised learning relies on *Corresponding Authors: Lanfen Lin (llf@zju.edu.cn), Yawen Huang (yawenhuang@tencent.com). †Huimin Huang and Shiao Xie are co-first authors, and this work is done during the internship at Tencent Jarvis Lab. Figure 1. Visualizations of class activation maps generated by Grad-CAM [31] and segmentation results of MT [33] and Our SemiCVT. Current MT-based SSLs suffer from limited global de-pendency ( e.g., incomplete human leg in (a), (c)) and class confu-sion ( e.g., mis-classify bottle asperson in (e), (g)). Our SemiCVT improves the performance from intra-model and inter-model per-spective, achieving better compactness and accurate localization. consistency regularization [13, 30, 33, 38], pseudo label-ing [29, 32], entropy minimization [3, 14] and bootstrap-ping [15]. For semantic segmentation, a typical approach is to build a Mean-Teacher (MT) model [33] (in Fig. 2 (a)), which allows the predictions generated from either teacher or student model as close as possible. However, such a clas-sic structure still suffers from two limitations: 1)Most of MT-based frameworks are built upon stacking convolutional layers, while the dilemma of CNNs is to capture global rep-resentations in the limited receptive field [11]. It results in the neglected ability of aggregating global context and local features, as depicted in Figs. 1 (a) and (c). 2)These ap-proaches usually leverage the pixel-wise predictions from CNNs to enforce consistency regularization with their fo-cuses on fine-level pair-wise similarity. 
It may fail to ex-plore rich information in feature space and also overlook the global feature distribution, as shown in Figs. 1 (e), (g). On the other hand, Transformer [34] has achieved no-table performance on vision tasks [2, 24, 37], owing to their strong capability in multi-head self-attention for capturing long-distance dependencies. However, pure Transformer-based architectures cannot achieve satisfactory perfor-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 11340 Figure 2. Comparison of (a) MT-based SSL with (b) SemiCVT. mance, due to lack of spatial inductive-bias in modelling local cues [27]. Despite the combination of Transformer and CNN has proven to be effective [7, 16, 27, 36], the in-tegration of Transformer with CNN-based Mean-Teacher SSLs remains the fundamental problem for several reasons: (1) Intra-model problem : The feature paradigm of Trans-former is heterogeneous compared to CNNs. Addition-ally, Transformer relies on the large-scale pre-trained model with customized fine-tuning for different downstream tasks, which consumes enormous time and energy. How to ef-ficiently combine the complementary of the two-style fea-tures and train the Transformer with relatively little labeled data from scratch remains an open question. (2) Inter-model problem : The existing MT-based SSLs merely leverage the pixel-wise predictions from the teacher model for guiding the student model to approximate, which ig-nores rich class-level information. How to make CNN and Transformer learn from each other on unlabeled data in class-level is a problem worthy of exploring. To tackle these problems, we propose Semi-Supervised Convolutional Vision Transformer (termed as SemiCVT in Fig. 2. (b)), which fully combines CNN and Transformer for semi-supervised segmentation motivated by (1) Intra-model local-global interaction : Considering the heteroge-neous paradigm of CNN and Transformer in the spatial do-main, we alternatively investigate the interaction of CNN and Transformer in the Fourier domain [28], since learning on frequency spectrum is able to steer all the frequencies to capture both long-term and short-term interactions. In this way, both contextual details in CNN and long-range dependency in Transformer can be extracted with better local-global interaction. (2) Inter-model class-wise con-sistency : CNN and Transformer have different inner fea-ture flow forms, in which their feature maps with comple-mentary class-wise statistics creates a potential opportunity for incorporation. Inspired by such an observation, we uti-lize the class-wise statistics of unlabeled data generated by CNN/Transformer (from teacher) to update the parameters of the Transformer/CNN (from student), respectively. In a cross-teaching manner, we learn an implicit consistency regularization with complementary cues in graph domain, which can produce more stable and accurate pseudo labels. The ability of SemiCVT in capturing local-global cues and class-specific characteristics is shown in Fig. 1. Com-pared with the MT-based SSL, SemiCVT can attend to full object extent with in various sizes and long-range scenarios (e.g., full extent of the people’s legin Fig. 1. (b), as well as the feature discriminability between different classes (e.g., activated small-size bottle in Fig. 1. (f)), achieving accurate segmentation shown in Fig. 
1 (d) and (h). In summary, the main contributions of this work are four-fold: (i) We analyze the intra-model and inter-model problems faced by the existing CNN-based Mean-Teacher methods for semi-supervised segmentation, and propose a novel scheme, named SemiCVT, to fully capitalize on the unlabeled data. (ii) We introduce an intra-model local-global interaction strategy for chaining both CNN and Transformer in the Fourier domain. (iii) We propose an inter-model class-wise consistency to learn complementary class-level statistics in a cross-teaching manner. (iv) Extensive experiments are performed on two public datasets, consistently resulting in new state-of-the-art performance. |
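Since the paper builds on the Mean-Teacher paradigm, the following minimal sketch recalls that pattern: the teacher is an exponential moving average (EMA) of the student, and unlabeled images are encouraged to receive consistent predictions from both models. The toy one-layer "segmenter" and the MSE consistency term are placeholders; SemiCVT's CNN-Transformer architecture and its class-wise, cross-teaching consistency are not reproduced here.

```python
# Minimal Mean-Teacher sketch: EMA teacher + pixel-wise consistency on unlabeled data.
import copy
import torch
import torch.nn.functional as F

def make_teacher(student):
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)               # teacher is never updated by gradients
    return teacher

@torch.no_grad()
def ema_update(teacher, student, momentum=0.99):
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(momentum).add_(ps, alpha=1.0 - momentum)

def consistency_loss(student, teacher, unlabeled):
    with torch.no_grad():
        target = teacher(unlabeled).softmax(dim=1)     # teacher pseudo-probabilities, (B, C, H, W)
    pred = student(unlabeled)
    return F.mse_loss(pred.softmax(dim=1), target)     # pixel-wise consistency

student = torch.nn.Conv2d(3, 4, 1)                     # toy 4-class "segmenter"
teacher = make_teacher(student)
x_unlabeled = torch.randn(2, 3, 32, 32)
loss = consistency_loss(student, teacher, x_unlabeled)
loss.backward()
ema_update(teacher, student)
```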
Huang_Collaborative_Diffusion_for_Multi-Modal_Face_Generation_and_Editing_CVPR_2023 | Abstract Diffusion models arise as a powerful generative tool re-cently. Despite the great progress, existing diffusion models mainly focus on uni-modal control, i.e., the diffusion process is driven by only one modality of condition. To further un-leash the users’ creativity, it is desirable for the model to be controllable by multiple modalities simultaneously, e.g. gen-erating and editing faces by describing the age (text-driven) while drawing the face shape (mask-driven). In this work, we present Collaborative Diffusion , where pre-trained uni-modal diffusion models collaborate to achieve multi-modal face generation and editing without re-training. Our key insight is that diffusion models driven by different modalities are inherently complementary regard-ing the latent denoising steps, where bilateral connections can be established upon. Specifically, we propose dynamic diffuser, a meta-network that adaptively hallucinates multi-modal denoising steps by predicting the spatial-temporal BCorresponding author. Project page: https://ziqihuangg.github.io/projects/collaborative-diffusion.html Code: https://github.com/ziqihuangg/Collaborative-Diffusioninfluence functions for each pre-trained uni-modal model. Collaborative Diffusion not only collaborates generation capabilities from uni-modal diffusion models, but also inte-grates multiple uni-modal manipulations to perform multi-modal editing. Extensive qualitative and quantitative experi-ments demonstrate the superiority of our framework in both image quality and condition consistency. | 1. Introduction Recent years have witnessed substantial progress in image synthesis and editing with the surge of diffusion models [9, 20, 56, 58]. In addition to the remarkable synthesis quality, one appealing property of diffusion models is the flexibility of conditioning on various modalities, such as texts [3, 16, 29, 30, 39, 48, 51], segmentation masks [48, 65, 66], and sketches [7, 65]. However, existing explorations are largely confined to the use of a single modality at a time. The exploitation of multiple conditions remains under-explored. As a generative tool, its controllability is still limited. To unleash users’ creativity, it is desirable that the model This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 6080 is simultaneously controllable by multiple modalities. While it is trivial to extend the current supervised framework with multiple modalities, training a large-scale model from scratch is computationally expensive, especially when exten-sive hyper-parameter tuning and delicate architecture designs are needed. More importantly, each trained model could only accept a fixed combination of modalities, and hence re-training is necessary when a subset of modalities are ab-sent, or when additional modalities become available. The above demonstrates the necessity of a unified framework that effectively exploits pre-trained models and integrate them for multi-modal synthesis and editing. In this paper, we propose Collaborative Diffusion , a framework that synergizes pre-trained uni-modal diffusion models for multi-modal face generation and editing without the need of re-training. 
Motivated by the fact that different modalities are complementary to each other ( e.g.,textfor age and mask for hair shape), we explore the possibility of establishing lateral connections between models driven by different modalities. We propose dynamic diffuser to adaptively predict the spatial-temporal influence function for each pre-trained model. The dynamic diffuser dynamically determines spatial-varying and temporal-varying influences of each model, suppressing contributions from irrelevant modalities while enhancing contributions from admissible modalities. In addition to multi-modal synthesis, the sim-plicity and flexibility of our framework enable extension to multi-modal face editing with minimal modifications. In particular, the dynamic diffuser is first trained for collabora-tive synthesis. It is then fixed and combined with existing face editing approaches [29, 50] for multi-modal editing. It is worth-mentioning that users can select the best editing approaches based on their needs without the need of altering thedynamic diffusers . We demonstrate both qualitatively and quantitatively that our method achieves superior image quality and condition consistency in both synthesis and editing tasks. Our contri-butions can be summarized as follows: •We introduce Collaborative Diffusion , which exploits pre-trained uni-modal diffusion models for multi-modal controls without re-training. Our approach is the first attempt towards flexible integration of uni-modal diffu-sion models into a single collaborative framework. •Tailored for the iterative property of diffusion models, we propose dynamic diffuser , which predicts the spatial-varying and temporal-varying influence functions to selectively enhance or suppress the contributions of the given modalities at each iterative step. •We demonstrate the flexibility of our framework by ex-tending it to face editing driven by multiple modalities. Both quantitative and qualitative results demonstrate the superiority of Collaborative Diffusion in multi-modal face generation and editing.2. Related Work Diffusion Models. Diffusion models [20, 56, 58] have re-cently become a mainstream approach for image synthe-sis [9, 11, 38] apart from Generative Adversarial Networks (GANs) [14], and success has also been found in various domains including video generation [17, 19, 55, 63], im-age restoration [21, 52], semantic segmentation [1, 4, 15], and natural language processing [2]. In the diffusion-based framework, models are trained with score-matching objec-tives [22, 64] at various noise levels, and sampling is done via iterative denoising. Existing works focus on improv-ing the performance and efficiency of diffusion models through enhanced architecture designs [16,48] and sampling schemes [57]. In contrast, this work focuses on exploit-ing existing models, and providing a succinct framework for multi-modal synthesis and editing without large-scale re-training of models. Face Generation. Existing face generation approaches can be divided into three main directions. Following the GAN paradigm, the StyleGAN series [26 –28] boost the quality of facial synthesis, and provide an interpretable latent space for steerable style controls and manipulations. The vector-quantized approaches [12, 61] learn a discrete codebook by mapping the input images into a low-dimensional dis-crete feature space. The learned codebook is then sampled, either sequentially [12, 61] or parallelly [5, 6, 16], for syn-thesis. 
In contrast to the previous two approaches, diffusion models are trained with a stationary objective, without the need of optimizing complex losses (e.g., adversarial loss) or balancing multiple objectives (e.g., codebook loss versus reconstruction loss). With training simplicity as a merit, diffusion-based approaches have become increasingly popular in recent years. Our framework falls in the diffusion-based paradigm. In particular, we leverage pre-trained diffusion models for multi-modal generation and editing. Conditional Face Generation and Editing. Conditional generation [10, 11, 13, 31, 34, 37, 40, 43–46, 51, 65, 67–71] and editing [8, 32, 41, 53, 54, 67, 68] is an active line of research focusing on conditioning generative models on different modalities, such as texts [24, 41, 67, 68], segmentation masks [32, 33, 40, 47], and audios [59]. For example, StyleCLIP [41], DiffusionCLIP [30], and many others [35, 60] have demonstrated remarkable performance in text-guided face generation and editing. However, most existing models do not support simultaneous conditioning on multiple modalities (e.g., text and mask at the same time), and supporting additional modalities often requires time-consuming model re-training and extensive hyper-parameter tuning, which are not preferable in general. In this work, we propose Collaborative Diffusion to exploit pre-trained uni-modal diffusion models [48] (e.g., text-driven and mask-driven models) to achieve multi-modal conditioning without model re-training. Figure 2. Overview of Collaborative Diffusion. We use pre-trained uni-modal diffusion models to perform multi-modal guided face generation and editing. At each step of the reverse process (i.e., from timestep t to t−1), the dynamic diffuser predicts the spatial-varying and temporal-varying influence function to selectively enhance or suppress the contributions of the given modality. 3. Collaborative Diffusion We propose Collaborative Diffusion, which exploits multiple pre-trained uni-modal diffusion models (Section 3.1) for multi-modal generation and editing. The key of our framework is the dynamic diffuser, which adaptively predicts the influence functions to enhance or suppress the contributions of the pre-trained models based on the spatial-temporal influences of the modalities. Our framework is compatible with most existing approaches for both multi-modal guided synthesis (Section 3.2) and multi-modal editing (Section 3.3). 3.1. Uni-Modal Conditional Diffusion Models Diffusion models are a class of generative models that model the data distribution in the form of $p_\theta(x_0) := \int p_\theta(x_{0:T})\, dx_{1:T}$. The diffusion process (a.k.a. forward process) gradually adds Gaussian noise to the data and eventually corrupts the data $x_0$ into an approximately pure Gaussian noise $x_T$ using a variance schedule $\beta_1, \ldots, \beta_T$:
$$q(x_{1:T} \mid x_0) := \prod_{t=1}^{T} q(x_t \mid x_{t-1}), \quad q(x_t \mid x_{t-1}) := \mathcal{N}\big(x_t;\ \sqrt{1-\beta_t}\, x_{t-1},\ \beta_t \mathbf{I}\big). \tag{1}$$
Reversing the forward process allows sampling new data $x_0$ by starting from $p(x_T) = \mathcal{N}(x_T; \mathbf{0}, \mathbf{I})$.
The reverse process is defined as a Markov chain where each step is a learned Gaussian transition $(\mu_\theta, \Sigma_\theta)$:
$$p_\theta(x_{0:T}) := p(x_T) \prod_{t=1}^{T} p_\theta(x_{t-1} \mid x_t), \quad p_\theta(x_{t-1} \mid x_t) := \mathcal{N}\big(x_{t-1};\ \mu_\theta(x_t, t),\ \Sigma_\theta(x_t, t)\big). \tag{2}$$
Training diffusion models relies on minimizing the variational bound on $p(x)$'s negative log-likelihood. The commonly used optimization objective $L_{\mathrm{DM}}$ [20] reparameterizes the learnable Gaussian transition as $\epsilon_\theta(\cdot)$, and temporally reweights the variational bound to trade for better sample quality:
$$L_{\mathrm{DM}}(\theta) := \mathbb{E}_{t,\, x_0,\, \epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{I})}\big[\, \lVert \epsilon - \epsilon_\theta(x_t, t) \rVert^2 \,\big], \tag{3}$$
where $x_t$ can be directly approximated by $x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon$, with $\bar{\alpha}_t := \prod_{s=1}^{t} \alpha_s$ and $\alpha_t := 1 - \beta_t$. To sample data $x_0$ from a trained diffusion model $\epsilon_\theta(\cdot)$, we iteratively denoise $x_t$ from $t = T$ to $t = 1$ with noise $z$:
$$x_{t-1} = \frac{1}{\sqrt{\alpha_t}} \left( x_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\, \epsilon_\theta(x_t, t) \right) + \sigma_t z. \tag{4}$$
The unconditional diffusion models can be extended to model conditional distributions $p_\theta(x_0 \mid c)$, where $x_0$ is the image corresponding to the condition $c$ such as class labels, segmentation masks, and text descriptions. The conditional diffusion model receives an additional input $\tau(c)$ and is trained by minimizing $\lVert \epsilon - \epsilon_\theta(x_t, t, \tau(c)) \rVert^2$, where $\tau(\cdot)$ is an encoder that projects the condition $c$ to an embedding $\tau(c)$. For brevity, we will use $c$ to represent $\tau(c)$ in our subsequent discussions.
Algorithm 1 Dynamic Diffuser Training
1: repeat
2:   $x_0, c_1, c_2, \ldots, c_M \sim q(x_0, c_1, c_2, \ldots, c_M)$
3:   $t \sim \mathrm{Uniform}(\{1, \ldots, T\})$
4:   $\epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$
5:   for $m = 1, \ldots, M$ do
6:     $\epsilon_{\mathrm{pred},m,t} = \epsilon_{\theta_m}\big(\sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon,\ t,\ c_m\big)$
7:     $I_{m,t} = D_{\phi_m}\big(\sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon,\ t,\ c_m\big)$
8:   end for
9:   $\hat{I}_{m,t,p} = \frac{\exp(I_{m,t,p})}{\sum_{j=1}^{M} \exp(I_{j,t,p})}$, softmax at each pixel $p$
10:  $\epsilon_{\mathrm{pred},t} = \sum_{m=1}^{M} \hat{I}_{m,t} \odot \epsilon_{\mathrm{pred},m,t}$
11 |
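Steps 9–10 above, the pixel-wise fusion of the uni-modal noise predictions, can be written compactly as in the sketch below. The uni-modal denoisers and dynamic diffusers are replaced by random tensors of assumed shapes; only the softmax-weighted combination is illustrated.

```python
# Sketch of pixel-wise fusion of per-modality noise predictions (steps 9-10 above).
import torch

def fuse_predictions(eps_preds, influences):
    """
    eps_preds:  (M, B, C, H, W)  noise predicted by each of M uni-modal diffusion models
    influences: (M, B, 1, H, W)  influence map predicted by each dynamic diffuser
    returns:    (B, C, H, W)     fused noise prediction
    """
    weights = torch.softmax(influences, dim=0)        # normalize across modalities at every pixel
    return (weights * eps_preds).sum(dim=0)           # weighted combination of the experts

M, B, C, H, W = 2, 4, 3, 64, 64
eps_preds = torch.randn(M, B, C, H, W)                # stand-ins for the pre-trained models' outputs
influences = torch.randn(M, B, 1, H, W)               # stand-ins for the dynamic diffusers' outputs
eps_fused = fuse_predictions(eps_preds, influences)   # (4, 3, 64, 64)
```

Because the weights are normalized per pixel and per timestep, each modality can dominate exactly where and when it is most informative, which is the collaboration mechanism the paper trains the dynamic diffusers to realize.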
Borse_DejaVu_Conditional_Regenerative_Learning_To_Enhance_Dense_Prediction_CVPR_2023 | Abstract We present DejaVu, a novel framework which leverages conditional image regeneration as additional supervision during training to improve deep networks for dense pre-diction tasks such as segmentation, depth estimation, and surface normal prediction. First, we apply redaction to the input image, which removes certain structural information by sparse sampling or selective frequency removal. Next, we use a conditional regenerator, which takes the redacted image and the dense predictions as inputs, and reconstructs the original image by filling in the missing structural in-formation. In the redacted image, structural attributes like boundaries are broken while semantic context is largely preserved. In order to make the regeneration feasible, the conditional generator will then require the structure infor-mation from the other input source, i.e., the dense predic-tions. As such, by including this conditional regeneration objective during training, DejaVu encourages the base net-work to learn to embed accurate scene structure in its dense prediction. This leads to more accurate predictions with clearer boundaries and better spatial consistency. When it is feasible to leverage additional computation, DejaVu can be extended to incorporate an attention-based regenera-tion module within the dense prediction network, which fur-ther improves accuracy. Through extensive experiments on multiple dense prediction benchmarks such as Cityscapes, COCO, ADE20K, NYUD-v2, and KITTI, we demonstrate the efficacy of employing DejaVu during training, as it out-performs SOTA methods at no added computation cost. | 1. Introduction Dense prediction tasks produce per-pixel classification or regression results, such as semantic or panoptic class la-bels, depth or disparity values, and surface normal angles. These tasks are critical for many vision applications to bet-ter perceive their surroundings for XR, autonomous driving, *These authors contributed equally to this work. †Qualcomm AI Research, an initiative of Qualcomm Technologies, Inc. Figure 1. Training within the DejaVu framework enables dense prediction models to improve their initial predictions using our proposed loss. The segmentation results are for the same OCR [87] model with and without DejaVu. The surface normal results are for SegNet-XTC [43]. robotics, visual surveillance, and so on. There has been sig-nificant success in adopting neural networks to solve dense prediction tasks through innovative architectures, data aug-mentations and training optimizations. For example, [46] addresses pixel level sampling bias and [7] incorporates boundary alignment objectives. Orthogonal to existing methods, we explore novel regeneration-based ideas to un-derstand how additional gradients from reconstruction tasks complement the established training pipelines for dense prediction tasks and input representations. There are works [69, 94] in classification settings that leverage reconstructions and likelihood-based objectives as auxiliary loss functions to enhance the quality of feature representations and also improve Open-set/OOD Detection [25, 48, 49, 55]. The core intuition is that, for discrimina-tive tasks the model needs a minimal set of features to solve the task and any feature which does not have discriminative This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. 
power for the target subset of data is ignored. Another line of work for dense predictions [54] focuses on depth completion and leverages a reconstruction-based loss to learn complementary image features that aid better capture of object structures and semantically consistent features. Following such intuitions, we can see that a reconstruction-based auxiliary loss should capture more information in the representation than discriminative-only training. Here, we introduce a novel training strategy, DejaVu (Footnote 1), for dense prediction tasks with an additional, conditional reconstruction objective to improve the generalization capacity of the task-specific base networks as illustrated in Fig. 1. We redact the input image to remove structure information (e.g., boundaries) while retaining contextual information. We adopt various redaction techniques that drop out components in spatial or spectral domains. Then, we enforce a conditional regeneration module (CRM), which takes the redacted image and the base network's dense predictions, to reconstruct the missing information. For regeneration feasibility, the CRM will require structure information from the dense predictions. By including this conditional regeneration objective during training, we encourage the base network to learn and use such structure information, which leads to more accurate predictions with clearer boundaries and better spatial consistency, as shown in the experimental section. In comparison, the supervised loss cannot capture this information alone since the cross-entropy objective (for segmentation, as an example) looks at the probability distribution of every pixel. In this sense, DejaVu can implicitly provide cues to the dense prediction task from the reconstruction objective depending on the type of redaction we select. We also note that using the same number of additional regenerated images as a data augmentation scheme does not provide the performance improvements that DejaVu can achieve (as reported in the Appendix). This shows that DejaVu conditions the training process more effectively than any data augmentation technique. Our DejaVu loss can be applied to train any dense prediction network and does not incur extra computation at test time. When it is feasible to leverage additional computation, DejaVu can be extended where we incorporate an attention-based regeneration module within the dense prediction network, further improving accuracy. An advantage of regenerating the original image from predictions is that we can additionally use other losses including text supervision and cyclic consistency, as described in Section 3.4. Our extensive experiments on multiple dense prediction tasks, including semantic segmentation, depth estimation, and surface normal prediction, show that employing DejaVu during training enables our trained models to outperform (Footnote 1: In training, DejaVu redacts the input image and constructs its regenerated versions; in a way, these regenerated versions are "already seen" yet not exactly the same due to the initial redaction.)
3.3) • We propose redacting the input image to enforce the base networks to learn accurate dense predictions such that these tasks can precisely condition the regenerative process. (Sec. 3.1) • We devise a novel shared attention scheme, DejaVu-SA, by incorporating the regeneration objective into the parameters of the network. (Sec. 3.4) • We further provide extensions to DejaVu, such as the text supervision loss DejaVu-TS and cyclic consistency loss DejaVu-CL, further improving performance when additional data is available. (Sec. 3.5) • DejaVu is a universal framework that can enhance the performance of multiple networks for essential dense prediction tasks on numerous datasets with no added inference cost. (Sec. 4) |
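To make the training signal concrete, here is a hedged, toy sketch of the redact-then-regenerate idea described above: the input is sparsely sampled (one of several possible redaction choices), a conditional regenerator rebuilds the image from the redacted input plus the dense prediction, and the reconstruction error is added to the task loss. The one-layer networks and the 0.1 loss weight are arbitrary placeholders, not the paper's CRM or its hyper-parameters.

```python
# Toy sketch of conditional regeneration as an auxiliary loss for dense prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F

def redact_sparse(image, keep_ratio=0.1):
    """Keep a random sparse subset of pixels and zero out the rest (one redaction option)."""
    mask = (torch.rand(image.shape[0], 1, *image.shape[2:], device=image.device) < keep_ratio).float()
    return image * mask

seg_net = nn.Conv2d(3, 4, 1)                        # toy dense predictor (4 classes)
regen_net = nn.Conv2d(3 + 4, 3, 3, padding=1)       # toy conditional regenerator

image = torch.randn(2, 3, 32, 32)
labels = torch.randint(0, 4, (2, 32, 32))

logits = seg_net(image)
task_loss = F.cross_entropy(logits, labels)         # the usual supervised objective

redacted = redact_sparse(image)
regen = regen_net(torch.cat([redacted, logits.softmax(dim=1)], dim=1))
regen_loss = F.l1_loss(regen, image)                # regeneration must recover the missing structure

loss = task_loss + 0.1 * regen_loss                 # 0.1 is an arbitrary weighting for illustration
loss.backward()
```

Because the redacted image no longer carries boundaries, the only way the regenerator can lower the reconstruction error is to read structure out of the dense prediction, which is what pushes the base network to encode it.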
Guedon_MACARONS_Mapping_and_Coverage_Anticipation_With_RGB_Online_Self-Supervision_CVPR_2023 | Abstract We introduce a method that simultaneously learns to ex-plore new large environments and to reconstruct them in 3D from color images only. This is closely related to the Next Best View problem (NBV), where one has to identify where to move the camera next to improve the coverage of an un-known scene. However, most of the current NBV methods rely on depth sensors, need 3D supervision and/or do not scale to large scenes. Our method requires only a color camera and no 3D supervision. It simultaneously learns in a self-supervised fashion to predict a “volume occupancy field” from color images and, from this field, to predict the NBV . Thanks to this approach, our method performs well on new scenes as it is not biased towards any training 3D data. We demonstrate this on a recent dataset made of vari-ous 3D scenes and show it performs even better than recent methods requiring a depth sensor, which is not a realistic assumption for outdoor scenes captured with a flying drone.1. Introduction By bringing together Unmanned Aerial Vehicles (UA Vs) and Structure-from-Motion algorithms, it is now possible to reconstruct 3D models of large outdoor scenes, for example for creating a Digital Twin of the scene. However, flying a UA V requires expertise, especially when capturing images with the goal of running a 3D reconstruction algorithm, as the UA V needs to capture images that together cover the entire scene from multiple points of view. Our goal with this paper is to make this capture automatic by developing a method that controls a UA V and ensures a coverage suitable to 3D reconstruction. This is often referenced in the literature as the “Next Best View” problem (NBV) [14]: Given a set of already-captured images of a scene or an object, how should we move the camera to improve our coverage of the scene or This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 940 object? Unfortunately, current NBV algorithms are still not suitable for three main reasons. First, most of them rely on a voxel-based representation and do not scale well with the size of the scene. Second, they also rely on a depth sensor, which is in practice not possible to use on a small UA V in outdoor conditions as it is too heavy and requires too much power. Simply replacing the depth sensor by a monocular depth prediction method [49, 56, 77] would not work as such methods can predict depth only up to a scale factor. The third limitation is that they require 3D models for learning to predict how much a pose will increase the scene coverage. In this paper, we show that it is possible to simultane-ously learn in a self-supervised fashion to efficiently ex-plore a 3D scene and to reconstruct it using an RGB sensor only, without any 3D supervision. This makes it convenient for applications in real scenarios with large outdoor scenes. We only assume the camera poses to be known, as done in past works on NBV [31,48,80]. This is reasonable as NBV methods control the camera. The closest work to ours is probably the recent [27]. [27] proposed an approach that can scale to large scenes thanks to a Transformer-based architecture that predicts the visi-bility of 3D points from any viewpoint, rather than relying on an explicit representation of the scene such as voxels. 
However, this method still uses a depth sensor. It also uses 3D meshes for training the prediction of scene coverage. To solve this, [27] relies on meshes from ShapeNet [6], which is suboptimal when exploring large outdoor scenes, as our experiments show. This limitation can actually be seen in Figure 1: The trajectory recovered by [27] mostly focuses on the main building and does not explore the rest of the scene. By contrast, we use a simple color sensor and do not need any 3D supervision. As our experiments show, we nonetheless significantly outperform this method thanks to our architecture and joint learning strategy. As shown in Figure 2, our architecture is made of three neural modules that communicate together: | 1. Our first module learns to predict depth maps from a sequence of images in a self-supervised fashion. |
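The quantity that such a coverage-anticipation system ultimately optimizes can be illustrated with a small sketch. The snippet below is only an illustration under our own assumptions, not the authors' architecture: it assumes a learned occupancy function over sampled 3D points, a fixed set of candidate poses, and a simple pinhole frustum test, and it scores each candidate by the occupancy mass of points it would newly cover (MACARONS instead learns to predict this gain rather than scoring it exhaustively).

```python
# Illustrative next-best-view scoring sketch (assumed helpers, not the paper's code).
import numpy as np

def visible(points, pose, fov_deg=60.0, max_range=20.0):
    """Boolean mask of points inside the camera frustum of `pose` = (R, t)."""
    R, t = pose                              # R: (3,3) world->camera, t: (3,)
    p_cam = points @ R.T + t                 # points expressed in the camera frame
    depth = p_cam[:, 2]
    in_front = (depth > 0.1) & (depth < max_range)
    half = np.tan(np.radians(fov_deg / 2.0))
    in_fov = (np.abs(p_cam[:, 0]) < depth * half) & (np.abs(p_cam[:, 1]) < depth * half)
    return in_front & in_fov

def next_best_view(occupancy_fn, candidate_poses, covered, samples):
    """Pick the candidate pose with the largest expected coverage gain.

    occupancy_fn: maps (N,3) points -> occupancy probability in [0,1]
    covered:      boolean mask of already-covered sample points
    samples:      (N,3) points sampled in the scene volume
    """
    occ = occupancy_fn(samples)              # soft "is there surface here?"
    gains = []
    for pose in candidate_poses:
        newly_seen = visible(samples, pose) & ~covered
        gains.append(np.sum(occ[newly_seen]))  # expected newly covered surface
    return int(np.argmax(gains))
```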
Jiang_Fair_Federated_Medical_Image_Segmentation_via_Client_Contribution_Estimation_CVPR_2023 | Abstract How to ensure fairness is an important topic in federated learning (FL). Recent studies have investigated how to reward clients based on their contribution (collaboration fairness), and how to achieve uniformity of performance across clients (performance fairness). Despite achieving progress on either one, we argue that it is critical to consider them together, in order to engage and motivate more diverse clients joining FL to derive a high-quality global model. In this work, we propose a novel method to optimize both types of fairness simultaneously. Specifically, we propose to estimate client contribution in gradient and data space. In gradient space, we monitor the gradient direction differences of each client with respect to others. And in data space, we measure the prediction error on client data using an auxiliary model. Based on this contribution estimation, we propose an FL method, federated training via contribution estimation (FedCE), i.e., using the estimation as global model aggregation weights. We have theoretically analyzed our method and empirically evaluated it on two real-world medical datasets. The effectiveness of our approach has been validated with significant performance improvements, better collaboration fairness, better performance fairness, and comprehensive analytical studies. Code is available at https://nvidia.github.io/NVFlare/research/fed-ce | 1. Introduction Recent development of federated learning (FL) facilitates collaboration for medical applications, given that multiple medical institutions can jointly train a consensus model without sharing raw data [1–6]. FL provides an opportunity to leverage larger and more diverse datasets to derive a robust and generalizable model [7, 8]. However, it is usually difficult to pool different institutions together to train an FL model in practice. *Corresponding authors: Qi Dou (qidou@cuhk.edu.hk) and Ziyue Xu (ziyuex@nvidia.com). The challenges mainly lie in two aspects. First, it takes effort to set up and participate in federated training, and medical institutions may not be sufficiently motivated to contribute to an FL study without a fair credit assignment and a fair reward allocation, i.e., collaboration fairness [9]. Second, medical data are heterogeneous in amounts and data-collection process [10–13], which may lead to inferior performance for clients with either less data or a data distribution deviating from others, harming performance fairness [14, 15]. It is critical to involve diverse datasets and improve individual prediction accuracy for building robust medical applications with low error tolerance [16]. Therefore, we argue that these two types of fairness need to be considered together. Despite recent investigations on fairness-related topics, existing literature mostly addresses collaboration fairness and performance fairness separately. For example, methods for collaboration fairness aim to estimate client reward, by using the computation and communication cost of each client [17], evaluating local validation performance [18], and using cosine similarity between local and global updates [19]. Meanwhile, methods for performance fairness aim to mitigate performance disparities, by using minimax optimization to improve worst-performing clients [15, 20], re-weighting clients to adjust the fairness/accuracy trade-off [14], or learning personalized models [21].
To adequately address concerns on these two types of fairness, we postulate that it is desirable to consider both simultaneously, because reward estimation and model performance could essentially be coupled during training. Solutions on how to tackle collaboration fairness and performance fairness together are still under-investigated, especially in the medical domain. To tackle this problem, our insight is to estimate the contribution of each client, and further use the contribution to promote training performance. The idea is inspired by the Shapley Value (SV) [22], a classic approach to quantify the contribution of participants in cooperative game theory. SV proposes to permute all possible subsets of participants to calculate the contribution of a certain client. Some existing works have adopted SV for estimating client reward [19, 23–25]. However, these methods mostly approximate SV by comparing local model updates or local model validations, which can be highly correlated with local sample numbers. A client with more samples can dominate the training, resulting in inaccurate estimation results. Therefore, finding a more accurate and robust estimation is imperative to break through this bottleneck. In this work, we propose a novel client contribution estimation method to approximate SV by comparing a certain client with respect to all other clients. We further present a new FL algorithm, federated training via contribution estimation (FedCE), which uses client contributions as new weighting factors for global model aggregation. Specifically, since the fundamental setting of SV is to validate whether a new client contributes to all possible combinations of existing clients, to effectively and efficiently approximate it, we propose to directly measure how a certain client contributes to all remaining clients, rather than computing all possible permutations. Our contribution measurement considers both gradient and data space to quantify the contribution of each client. In gradient space, we calculate the gradient direction differences between one client and all the other clients; and in data space, we measure the prediction error on client data by using an auxiliary model, which is calculated by excluding a client's own parameters. By combining these two measurements, we are able to quantify each client's contribution for collaboration fairness, and further use this estimation to promote training for performance fairness. Our main contributions are summarized as follows: • We propose a novel method for client contribution estimation to facilitate collaboration fairness. We empirically and theoretically analyze the robustness of this estimation method under various FL data distributions. • We propose a novel federated learning method, FedCE, based on the proposed client contribution estimation to help promote performance fairness, and we theoretically analyze the model's convergence. • We conduct extensive experiments on two medical image segmentation tasks. The proposed FedCE method significantly outperforms several latest state-of-the-art FL methods, and comprehensive analytical studies demonstrate the effectiveness of our method. |
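A minimal sketch of the FedCE-style aggregation loop is given below. It is illustrative only: the gradient-space and data-space signals follow the description above, but how the two signals are normalized and combined, and the names of the helper functions, are simplifying assumptions rather than the paper's exact formulas.

```python
import numpy as np

def gradient_contribution(updates):
    """Cosine agreement of each client's flattened update with the mean of the others."""
    out = []
    for k, g_k in enumerate(updates):
        rest = np.mean([g for j, g in enumerate(updates) if j != k], axis=0)
        out.append(np.dot(g_k, rest) /
                   (np.linalg.norm(g_k) * np.linalg.norm(rest) + 1e-12))
    return np.array(out)

def data_contribution(aux_errors):
    """aux_errors[k]: prediction error on client k's data of an auxiliary model
    built by excluding client k's own parameters (higher error -> more unique data)."""
    return np.array(aux_errors)

def fedce_weights(updates, aux_errors):
    # Combine the two signals; min-max normalization is a simplifying assumption.
    def norm(x):
        return (x - x.min()) / (x.max() - x.min() + 1e-12)
    c = norm(gradient_contribution(updates)) + norm(data_contribution(aux_errors))
    return c / c.sum()

def aggregate(client_params, weights):
    """Weighted average of client parameter vectors, replacing FedAvg's size-based weights."""
    return sum(w * p for w, p in zip(weights, client_params))
```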
Feng_Dynamic_Generative_Targeted_Attacks_With_Pattern_Injection_CVPR_2023 | Abstract Adversarial attacks can evaluate model robustness and have been of great concern in recent years. Among various attacks, targeted attacks aim at misleading victim models to output adversary-desired predictions, which are more challenging and threatening than untargeted ones. Existing targeted attacks can be roughly divided into instance-specific and instance-agnostic attacks. Instance-specific attacks craft adversarial examples via iterative gradient updating on the specific instance. In contrast, instance-agnostic attacks learn a universal perturbation or a generative model on the global dataset to perform attacks. However, they rely too much on the classification boundary of substitute models, ignoring the realistic distribution of the target class, which may result in limited targeted attack performance. And there is no attempt to simultaneously combine the information of the specific instance and the global dataset. To deal with these limitations, we first conduct an analysis via a causal graph and propose to craft transferable targeted adversarial examples by injecting target patterns. Based on this analysis, we introduce a generative attack model composed of a cross-attention guided convolution module and a pattern injection module. Concretely, the former adopts a dynamic convolution kernel and a static convolution kernel for the specific instance and the global dataset, respectively, which can inherit the advantages of both instance-specific and instance-agnostic attacks. And the pattern injection module utilizes a pattern prototype to encode target patterns, which can guide the generation of targeted adversarial examples. Besides, we also provide rigorous theoretical analysis to guarantee the effectiveness of our method. Extensive experiments demonstrate that our method shows superior performance over 10 existing adversarial attacks against 13 models. | 1. Introduction ∗Equal Contribution. †Corresponding Author. Figure 1. Visualization comparison between adversarial examples generated by our method (a) and the instance-specific method MIM [9] (b). Our perturbations (a) not only show an underlying dependency on the input instance, but also carry strong semantic patterns or styles of the target class ("Hippopotamus"). In contrast, the perturbations generated by MIM appear as random noise. The adversarial examples are generated against ResNet-152, and labels are predicted by another unknown model (VGG-16). With the encouraging progress of deep neural networks (DNNs) in various fields [6, 19, 16, 44, 42], recent studies [36, 29, 28, 38] have corroborated that adversarial examples generated with small-magnitude perturbations can mislead the DNNs to make incorrect predictions. Due to the vulnerability of DNNs, adversarial attacks expose a security threat to real application systems based on deep neural networks, especially in some sensitive fields such as autonomous driving [27], face verification and financial systems [43], to name a few. Therefore, adversarial attacks have become a research hotspot over the past decade [2, 48, 35, 3, 33], and they are significant in demonstrating the adversarial robustness and stability of deep learning models.
To further understand adversarial examples, a tremendous number of works [1, 48, 33, 4, 36, 38] focus on adversarial attacks. Recently, it has been found that adversarial examples possess an intriguing property of transferability, which indicates that adversarial examples generated for a white-box model can also mislead another black-box model. Due to this inherent property, black-box attacks become workable in real-world scenarios where attackers cannot access the attacked model. Extensive methods, such as MIM [9], DIM [53] and CD-AP [40], have made impressive progress in boosting the transferability of untargeted attacks, which mislead the model into an incorrect classification without specifying a target class. However, targeted attacks, which make the model output the adversary-desired target class, are more challenging than untargeted attacks. It is claimed in recent works [14, 55] that transferable targeted attacks are more worthy of study because attackers can directly control the unknown model to output the desired prediction, which can expose huge threats to data privacy and security. Therefore, it is of great significance to develop transferable targeted attacks. Existing methods of transferable targeted attacks can be roughly categorized into instance-specific [9, 14, 13, 54, 50, 34, 31] and instance-agnostic [52, 35, 27, 40, 39] attacks. Specifically, almost all instance-specific attacks craft adversarial examples via iterative gradient updating, where attackers can only take advantage of the specific input instance, the white-box model and the target class label. Instance-specific attacks rely on optimizing the classification score of the adversary-desired class label to perturb the specific instance, which ignores the global data distribution. As a result, they inevitably lead to adversarial examples over-fitting the white-box model and result in modest transferability of targeted attacks. On the other hand, via learning a universal perturbation [37] or a generative attack model [52, 55], instance-agnostic attacks optimize adversarial perturbations on the global data distribution rather than the specific instance. To a certain extent, they can alleviate the problem of data-specific over-fitting and lead to more transferable targeted adversarial examples. However, taking the generative attack methods as an example, they suffer from two limitations. (1) Most generative attacks [52, 40, 35] still rely on the target label and the classification boundary information of white-box models rather than the realistic data distribution of the target class. Consequently, it is claimed that most generative attacks still have the possibility of over-fitting the white-box model, which may result in limited performance of transferable targeted attacks. (2) Another limitation is that existing generative attacks [39, 55, 27] apply the same network weights to every input instance in the test dataset. Nevertheless, it is considered that the shared network weights cannot stimulate the best attack performance of generative models [39, 55, 55]. Thus these aforementioned limitations have become the bottleneck of developing transferable targeted attacks, to a certain extent.
To address the aforementioned limitations, in this pa-per we construct a causal graph to formulate the prediction process of classifiers, and analyze the origin of adversar-ial examples. Based on this analysis, we propose to gener-ate targeted adversarial examples via injecting the specific pattern or style of the target class . To this end, we intro-duce a generative attack model, which can not only inject pattern or style information of the target class to improve transferable targeted attacks, but also learn specialized con-volutional kernels for each input instance. Specifically, we designed a cross-attention guided convolution module and a pattern injection module in the proposed generative attack model. (1)The cross-attention guided convolution mod-ule consists of a static convolutional kernel and a dynamic convolutional kernel that is computed according to the in-put instance. Consequently, this static and dynamic mixup module can not only encode the global information of the dataset, but also learn specialized convolutional kernels for each input instance. This paradigm makes our generative model inherit the advantages of both instance-specific and instance-agnostic attacks. (2)The pattern injection module is designed to model the pattern or style information of the target class and guide the generation of targeted adversar-ial examples. Concretely, we propose a pattern prototype to learn a global pattern representation over images from the target class, and use the prototype to guide the generation of more transferable targeted adversarial examples. And the generated adversarial images of our method are presented in Figure 1. It is observed that our generated perturbations (as shown in Figure 1(a)) pose strong semantic patterns or styles of the target class and show an underlying depen-dency on the input instance. In contrast, the perturbations (as presented in Figure 1(b)) generated by MIM [9] per-form like random noises. Finally, to further understand our method, we provide rigorous theoretical analysis to guaran-tee the effectiveness of our method, as shown in Section 3.4, where we derive a concise conclusion based on the problem of Gaussian binary classification. In summary, the main contributions of this paper are three-fold: (1)We propose a dynamic generative model to craft transferable targeted adversarial examples, which can not only inject pattern or style information of the target class to improve transferable targeted attacks, but also learn spe-cialized convolutional kernels for each input instance. Be-sides, our method inherits the advantages of both instance-specific and instance-agnostic attacks, and to the best of our knowledge, we are the first to bridge them. (2)We state thatinjecting the specific pattern or style of the target class can improve the transferability of targeted adversarial ex-amples , and we provide a comprehensive theoretical analy-sis to verify the rationality of this statement. (3)Extensive experimental results demonstrate that our method signifi-cantly boosts the transferability of targeted adversarial ex-amples over 10 existing methods against 13 models. |
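The two modules described above can be sketched in code, with the caveat that layer sizes, the mixing rule for the static and dynamic kernels, and the style-modulation form of the pattern injection are our own assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossAttnGuidedConv(nn.Module):
    """Static kernel shared over the dataset + a dynamic kernel predicted per instance."""
    def __init__(self, channels, k=3):
        super().__init__()
        self.static = nn.Conv2d(channels, channels, k, padding=k // 2)
        self.kernel_gen = nn.Linear(channels, channels * k * k)  # per-instance depthwise kernel
        self.k, self.channels = k, channels

    def forward(self, x):
        b, c, h, w = x.shape
        desc = x.mean(dim=(2, 3))                             # instance descriptor
        dyn = self.kernel_gen(desc).view(b * c, 1, self.k, self.k)
        dyn_out = F.conv2d(x.reshape(1, b * c, h, w), dyn,
                           padding=self.k // 2, groups=b * c).view(b, c, h, w)
        return self.static(x) + dyn_out                        # static/dynamic mixup

class PatternInjection(nn.Module):
    """Inject a learnable target-class prototype by modulating feature statistics."""
    def __init__(self, channels):
        super().__init__()
        self.prototype = nn.Parameter(torch.randn(1, channels, 1, 1))
        self.to_scale = nn.Conv2d(channels, channels, 1)
        self.to_shift = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        p = self.prototype.expand(x.size(0), -1, -1, -1)
        return x * (1 + self.to_scale(p)) + self.to_shift(p)
```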
Huang_Tracking_Multiple_Deformable_Objects_in_Egocentric_Videos_CVPR_2023 | Abstract Most existing multiple object tracking (MOT) methods that solely rely on appearance features struggle in tracking highly deformable objects. Other MOT methods that use motion clues to associate identities across frames have difficulty handling egocentric videos effectively or efficiently. In this work, we present DogThruGlasses, a large-scale deformable multi-object tracking dataset, with 150 videos and 73K annotated frames, which is collected exclusively by smart glasses. We also propose DETracker, a new MOT method that jointly detects and tracks deformable objects in egocentric videos. DETracker uses three novel modules, namely the motion disentanglement network (MDN), the patch association network (PAN) and the patch memory network (PMN), to explicitly tackle severe ego motion and track fast morphing target objects. DETracker is end-to-end trainable, achieves near real-time speed, and outperforms existing state-of-the-art methods on DogThruGlasses and YouTube-Hand. | 1. Introduction Wearable cameras have become an emerging trend, promoted by the rapidly growing collection of consumer products such as smart glasses. †Work done during Mingzhen's internship with Meta. As wearable cameras became more powerful with increased battery capacity, sensor size, on-board memory volume, and sophisticated in-device processors, there is an increasing demand for real-time scene understanding to run reliably and yet efficiently on-device. The underlying computer vision algorithms, on the other hand, frequently start from the detection and tracking of objects within the scene. For the egocentric videos captured from wearable cameras, besides being challenged by occlusion, morphing shapes, and multiple visually resembling objects, multiple object tracking (MOT) algorithms are stressed by the constantly changing egocentric viewpoint. Unique to wearable cameras, the large ego motion caused by the head movements of the wearer is often drastic, unpredictable, and largely uncorrelated to the object motions. The reduced predictability of motion patterns forces traditional MOT algorithms [45] to search in larger regions to maintain the same performance level, which in turn compromises the running speed and makes them less suitable for on-device execution. Meanwhile, the ego motion may also exacerbate the deformation and occlusion of objects, by imposing lens distortion, blurriness and rolling shutter, especially for objects in the near field. The ego motion also aggravates object occlusion due to the constantly changing point of view. These side effects further downgrade the performance of appearance-based MOT approaches [16, 42, 53]. In this work, we propose an efficient end-to-end trainable method, DETracker (Deformable Egocentric Tracker), for tracking multiple deformable objects in egocentric videos. DETracker has three major components, the motion disentanglement network (MDN), the patch association network (PAN), and the patch memory network (PMN). MDN estimates the motion flow between two consecutive frames efficiently. It explicitly separates a global camera motion before estimating the local object motion and thus is robust under severe ego motion.
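The disentanglement idea behind MDN can be illustrated with a simple two-step decomposition. The sketch below is not the MDN architecture itself: it fits a global affine camera motion to point correspondences between consecutive frames and treats the residual that the global model cannot explain as local object motion; the affine model and least-squares fit are our simplifying assumptions.

```python
import numpy as np

def fit_global_affine(pts_prev, pts_curr):
    """Least-squares 2x3 affine A such that pts_curr ~ [pts_prev, 1] @ A.T."""
    ones = np.ones((len(pts_prev), 1))
    X = np.hstack([pts_prev, ones])                  # (N, 3)
    sol, *_ = np.linalg.lstsq(X, pts_curr, rcond=None)
    return sol.T                                     # (2, 3)

def disentangle_motion(pts_prev, pts_curr):
    """Split the total per-point flow into global (camera) and local (object) parts."""
    A = fit_global_affine(pts_prev, pts_curr)
    ones = np.ones((len(pts_prev), 1))
    global_flow = np.hstack([pts_prev, ones]) @ A.T - pts_prev
    total_flow = pts_curr - pts_prev
    local_flow = total_flow - global_flow            # residual object motion
    return global_flow, local_flow
```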
PAN tackles deformable object tracking by dividing objects into patches and localizing in-dividual patches by finding their best matched patches in upcoming frames. PMN retains and updates feature embed-dings of tracked objects within a prolonged time window by leveraging a transformer network [40] and thus is able to use historical patch features for long-term association. Although there exist several large-scale MOT datasets [7, 9, 14, 22], they are limited to either fixed camera views [7] or simple ego motion, e.g., from car-mount cameras [9, 52]. To build a large-scale, egocentric MOT dataset, we collected DogThruGlasses , a video set of dogs captured with the smart glasses. This dataset represents the complexity of real-life object tracking scenarios from wearable devices (see examples in Fig. 1). In DogThruGlasses, we release 150 videos with 73K annotated frames, 157K annotated bounding boxes, and 474 unique identities/trajectories. To our knowledge, this is the first large-scale dataset for tracking multiple objects in egocentric videos. It would serve as a challenging benchmark for existing and future MOT methods. The dataset and the code will be released for research purposes. We summarize our major contributions as follows: • We present DogThruGlasses, the first large-scale ego-centric MOT dataset collected with smart glasses, of-fering extensive coverage of object deformation, ego motion, and diverse scenes. • We propose DETracker, a new MOT algorithm that is designed to be robust under severe ego motion and fast object deformation. • In DETracker, we use MDN to disentangle camera mo-tion from object motion and predict a high-accuracy object trajectory very efficiently. In addition, we de-sign PAN and PMN to help with the detection and long-term tracking of objects under large deformations and heavy occlusions. • Experimental results in Table 2 show that our pro-posed method outperforms the state-of-the-art method by 8.1% on DogThruGlasses. It also achieves compet-itive results on YouTube-Hand [14] for hand tracking.2. Related Works Traditional MOT approaches mostly follow a tracking-by-detection scheme [5, 28, 31, 37, 42, 43, 48, 54]: an ob-ject detector is employed to detect objects, and then a spe-cific association method is used to connect individual detec-tions into continuous trajectories. Hungarian algorithm [23] is a popular method for associating, where the affinity cost is defined based on Intersection over Union (IoU) of two bounding boxes. Bewley et al. [4] proposed to use the Hungarian algorithm for associating detected bound-ing boxes with predicted tracklet movement generated by Kalman Filter [15]. ByteTrack [54] further improves the tracking quality by recovering low confident detection re-sults with a two-stage association. However, the rapid camera motion leads to large offsets, which fails the tra-ditional spatial location-based methods. Appearance-based Re-Identification (ReID) module is another widely adopted association method in MOT algorithms [11, 39, 42]. Those methods are robust to objects and ego motion since the ob-jects’ spatial location change is not used as a key for the association. Wojke and Bewley [42] proposed to extract ResNet [12] features for detected objects and then associate them based on their feature cosine similarities. A common drawback for those methods is that the detection and asso-ciation modules are separately optimized, even though they are trying to describe the same object. 
As a result, the decoupled outcomes from the two modules cannot benefit each other, and often yield sub-optimal final results. Recently, methods that jointly detect and track objects [20, 27, 34, 41, 45, 53, 57] became mainstream in MOT. The detection and association networks are usually trained end-to-end to avoid falling into local optima. Zhan et al. [53] propose to leverage a Re-ID branch for association and jointly train it along with the detection network. FairMOT [53] achieves competitive results on MOT benchmarks [7, 22]. However, it is hard for it to handle highly deformable objects with frequently changing appearance, similar to appearance-based tracking-by-detection methods. Many other joint detection and tracking MOT approaches [10, 14, 31, 32, 35, 36, 45, 50, 57] achieve competitive performance by leveraging a motion estimation module for data association. However, such methods usually limit the object motion search within a small spatial window. This setting can easily fail under dramatic camera motion. Zhou et al. [57] followed [56] to detect objects' center points as a heatmap and proposed to compute the object center motion from concatenated features from two consecutive frames. Similarly, such methods estimate the motion offset within a local window, and cannot tolerate rapid camera motion. Inspired by RAFT [38], Wu et al. [45] extend [57] by estimating the object's movement offset from the global cost volume for each pair of pixels. This operation results in an affinity matrix M ∈ R^{hw×hw}, where h and w are the height and width of the input feature map. Different from [57], the estimated motion offset is not constrained by a small local neighborhood, but computing such an affinity matrix is extremely expensive and is not achievable on mobile devices. Memory networks have been widely explored in video analysis tasks [19, 26, 44], and tracking objects with memories is also widely adopted in recent MOT methods [6, 8, 49]. By maintaining and optimizing an appearance bank, objects can be tracked over the long term, with only a minimal footprint. Along this line, Cai et al. [6] use their proposed Memory Aggregator Network to keep the appearance feature embedding of tracked objects in memory and use these memory embeddings to query current objects for detecting and tracking. 3. DogThruGlasses Dataset It is notable that dramatic camera motion and fast-changing object appearance caused by occlusion and deformation are commonly seen in media captured with wearable devices. Yet, their combined impact on object tracking methods is largely unexplored in past literature. Targeting this absence, we present DogThruGlasses, a large-scale MOT dataset collected with wearable devices. In this dataset, we carefully choose dogs as target objects for multiple reasons. First of all, as companion animals, dogs are among the most frequently imaged targets in consumer videos. Dog videos are also shared through social media in vast amounts on a daily basis. Also, dogs of different breeds vary dramatically in size, color, shape, and habit. They are also frequently in motion and demonstrate rapid deformation. All these factors add further challenges on top of the ego motion caused by the wearers' head movements. Data source. Multiple individuals participated in the collection of videos in DogThruGlasses via smart glasses; refer to Fig. 1 for sample frames.
The videos are captured at 30 frames per second (FPS) using the device's domestic camera application, and are then resized to 1000×1000 pixel resolution. DogThruGlasses covers scene diversity by including beaches, dog parks, roadsides, backyards, parking lots, restaurants, etc. The videos are captured at different time points of the day, as well as under different weather conditions. The dataset also aggressively covers object diversity by including up to 33 dog breeds, to name a few, Labrador Retriever, Golden Retriever, German Shepherd, Poodle, American Bulldog, Rottweiler, Australian Shepherd, Beagle, etc. The annotation task is carried out by two individual annotators, who are unfamiliar with any tracking algorithms. They are asked to annotate tight boxes around the visible part of individual dogs, without hallucination about the occluded parts. During the annotation process, annotators noticed that some of the dog breeds are particularly challenging to annotate due to very small size, all-white/all-black appearance, or highly similar patterns. These factors lead to inconsistent annotations. So we enforce an additional round of cross verification and correction between the two annotators to guarantee the quality. Data statistics. We show the statistics of DogThruGlasses compared to other MOT datasets in Table 1. Table 1. Comparison among MOT datasets, listed as dataset (unpredictable ego motion, deformable objects, total #videos, total #frames, annotated boxes, total #trajectories, trajectory length): MOT17 [22] (⋄, -, 14, 11K, 215K, 1342, 110); MOT20 [7] (-, -, 8, 13K, 1.6M, 3457, 453); KITTI [9] (-, -, 50, 13K, 47K, 917, 51.5); YouTube-Hand [14] (⋄, ✓, 240, 20K, 60K, 864, 70.5); VIVA [29] (×, ✓, 20, 6K, 13K, 45, 64.5); DogThruGlasses (✓, ✓, 150, 73K, 156K, 474, 325). The ⋄ denotes that part of the dataset contains unpredictable ego motion. The trajectory length reported here is the median length over all trajectories. Figure 2. Analysis of DogThruGlasses. DogThruGlasses provides a complex combination of ego motion (compared to fixed or in-vehicle-mounted cameras) and object deformation. Meanwhile, it has a larger volume in terms of both the number of videos
Gao_AsyFOD_An_Asymmetric_Adaptation_Paradigm_for_Few-Shot_Domain_Adaptive_Object_CVPR_2023 | Abstract In this work, we study few-shot domain adaptive object detection (FSDAOD), where only a few target labeled images are available for training in addition to sufficient source labeled images. Critically, in FSDAOD, the data scarcity in the target domain leads to an extreme data imbalance between the source and target domains, which potentially causes over-adaptation in traditional feature alignment. To address the data imbalance problem, we propose an asymmetric adaptation paradigm, namely AsyFOD, which leverages the source and target instances from different perspectives. Specifically, by using target distribution estimation, the AsyFOD first identifies the target-similar source instances, which serve to augment the limited target instances. Then, we conduct asynchronous alignment between target-dissimilar source instances and augmented target instances, which is simple yet effective for alleviating the over-adaptation. Extensive experiments demonstrate that the proposed AsyFOD outperforms all state-of-the-art methods on four FSDAOD benchmarks with various environmental variances, e.g., 3.1% mAP improvement on Cityscapes-to-FoggyCityscapes and 2.9% mAP increase on Sim10k-to-Cityscapes. The code is available at https://github.com/Hlings/AsyFOD. | 1. Introduction Object detection [4, 18, 47–50], which aims to localize and classify objects simultaneously, is widely used in real-world applications such as video surveillance [29, 42, 67, 70] and autonomous driving [57, 69]. Unfortunately, detectors suffer a significant performance drop when deployed in an unseen domain due to the domain discrepancy between training and test data [36, 37, 43, 63, 66, 68]. Usually, repeatedly collecting a large amount of labeled data in new domains requires expensive labor and time cost. *denotes the authors contributed equally to this work. †denotes the corresponding author. Figure 1. A few target instances are biased to represent the overall target data distribution, i.e., many light orange target instances are not observed, as shown in the top left. The data scarcity in the target domain leads to a data imbalance between the source and target domains. Therefore, traditional symmetric adaptation (such as MMD [38, 40, 41, 56]) easily causes over-adaptation, i.e., the detector concentrates on a small area for observed target instances but ignores many other unobserved ones, as shown in the top right. By contrast, our proposed asymmetric adaptation alleviates the over-adaptation via source instance division and asynchronous alignment. In this work, we explore Few-Shot Domain Adaptive Object Detection (FSDAOD) [16, 61, 72], which attempts to generalize detectors with minor cost. In addition to adequate labeled source images, FSDAOD assumes that only a few (usually eight) labeled target images are available for adapting a detector in the target domain. A critical challenge of FSDAOD is the data scarcity in the target domain, which leads to an extreme data imbalance between the source and target domains.
As shown in Figure 1, it is difficult to comprehensively describe the overall target data distribution by only a few target instances. Usually, in standard unsupervised domain adaptation [22, 24, 38, 53, 71, 73], alignment-based methods, e.g., Maximum Mean Discrepancy (MMD) [38, 40, 41, 56], conduct synchronous feature alignment to mitigate the domain discrepancy, which is termed the symmetric adaptation paradigm in our work. However, without consideration of imbalanced distributions in FSDAOD, simply conducting synchronous feature alignment easily causes the over-adaptation problem, i.e., the detector is prone to concentrate on limited observed target instances but hardly generalizes well on other unobserved ones [61]. Typically, existing FSDAOD methods attempt to alleviate the imbalance problem by reusing the same target samples, yet this overlooks how the source samples could be leveraged [61, 72]. To address the extreme data imbalance problem, in this work, we propose a novel asymmetric adaptation paradigm, named AsyFOD, which leverages the source and target instances from different perspectives. AsyFOD first divides the source instance set into two parts, namely target-similar and target-dissimilar instance sets. Such a division strategy is inspired by the observation that some source instances are visually similar to the target instances (see Figure 5 for empirical verification). Accordingly, we identify the target-similar source instances by formulating a unified discrepancy estimation function, which serves to augment the limited target instances to alleviate the imbalanced amounts of data. The remaining source instances are regarded as target-dissimilar after identifying the target-similar source instances. To further alleviate the data imbalance between domains, we propose conducting asynchronous alignment between the target-dissimilar source instances and the augmented target instances. Unlike traditional methods, AsyFOD aligns feature distributions in an asymmetric way, with a stop-gradient operation applied on target instance features when optimizing the detector. In this way, the proposed asynchronous alignment can better align the unobserved target samples. AsyFOD obtains state-of-the-art performance in mitigating various types of domain discrepancy, such as background variations [9, 17], natural weather [54] and synthetic-to-real [1, 26, 35]. Also, AsyFOD generalizes well on various few-shot settings of domain adaptive object detection, i.e., FSDAOD with weak or strong augmentation [16] and Few-Shot Unsupervised Domain Adaptive Object Detection (FSUDAOD).
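The two steps above — source-instance division and asynchronous alignment with a stop-gradient on the target branch — can be sketched as follows. This is an illustration under our own assumptions: the discrepancy estimate is reduced to a distance to the target feature mean, and the alignment loss is a simple first-moment match rather than the paper's exact discrepancy function.

```python
import torch

def select_target_similar(src_feats, tgt_feats, ratio=0.25):
    """Rank source instance features by distance to the estimated target
    distribution (here just the target feature mean, a simplification)."""
    tgt_center = tgt_feats.mean(dim=0, keepdim=True)
    dist = torch.cdist(src_feats, tgt_center).squeeze(1)
    k = max(1, int(ratio * len(src_feats)))
    similar_idx = torch.topk(-dist, k).indices            # smallest distances
    mask = torch.ones(len(src_feats), dtype=torch.bool)
    mask[similar_idx] = False
    return src_feats[similar_idx], src_feats[mask]         # similar, dissimilar

def asynchronous_alignment_loss(src_dissimilar, tgt_augmented):
    """Asymmetric alignment: gradients flow only through the source branch
    because the target features are detached (stop-gradient)."""
    diff = src_dissimilar.mean(dim=0) - tgt_augmented.detach().mean(dim=0)
    return diff.pow(2).sum()

# Usage sketch: augment the few target features with the target-similar source
# ones, then align the target-dissimilar source features towards them.
# similar, dissimilar = select_target_similar(src_feats, tgt_feats)
# tgt_aug = torch.cat([tgt_feats, similar], dim=0)
# loss = detection_loss + lam * asynchronous_alignment_loss(dissimilar, tgt_aug)
```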
Ji_MAP_Multimodal_Uncertainty-Aware_Vision-Language_Pre-Training_Model_CVPR_2023 | Abstract Multimodal semantic understanding often has to deal with uncertainty, which means the obtained messages tend to refer to multiple targets. Such uncertainty is problematic for our interpretation, including inter- and intra-modal uncertainty. Little effort has been devoted to modeling this uncertainty, particularly in pre-training on unlabeled datasets and fine-tuning on task-specific downstream datasets. In this paper, we project the representations of all modalities as probabilistic distributions via a Probability Distribution Encoder (PDE) by utilizing sequence-level interactions. Compared to the existing deterministic methods, such uncertainty modeling can convey richer multimodal semantic information and more complex relationships. Furthermore, we integrate uncertainty modeling with popular pre-training frameworks and propose suitable pre-training tasks: Distribution-based Vision-Language Contrastive learning (D-VLC), Distribution-based Masked Language Modeling (D-MLM), and Distribution-based Image-Text Matching (D-ITM). The fine-tuned models are applied to challenging downstream tasks, including image-text retrieval, visual question answering, visual reasoning, and visual entailment, and achieve state-of-the-art results. | 1. Introduction Precise understanding is a fundamental ability of human intelligence, whether it involves localizing objects from similar semantics or finding correspondences across multiple modalities. Our artificial models are supposed to do the same, pinpointing exact concepts from rich multimodal semantic scenarios. However, this kind of precise understanding is challenging. Information from different modalities can present rich semantics to each other, but the resulting ambiguity and noise are also greater than in the case with a single modality. *Equal contribution. ²Corresponding Author. Figure 1. Multimodal uncertainties — (a) vision uncertainty, (b) language uncertainty, (c) vision-to-language uncertainty, (d) language-to-vision uncertainty, and (e) an example for language uncertainty (b) modeled with point representations versus distribution representations. The images and text are from MSCOCO [30]. Multimodal representation learning methods hold the promise of promoting the desired precise understanding across different modalities [13]. While these methods have shown promising results, current methods face the challenge of uncertainty [7, 51], including within a modality and between modalities. Considering image (a.0) in Fig. 1 as an example, one vision region includes multiple objects, such as a billboard, several zebras and others. Therefore, it is unclear which objects are being referred to when this region is mentioned.
In the language domain, the complex relationships of words lead to uncertainty, such as synonymy and hyponymy. In Fig. 1 (c)&(d), the same object often has different descriptions from different modalities, such as text and images, which manifests inter-modal uncertainty. However, previous methods often neglect the uncertainty [11, 19, 46], resulting in limited understanding ability on complicated concept hierarchies and poor prediction diversity. Therefore, it is desirable to model such uncertainty. Moreover, with multimodal datasets becoming more commonplace, there is a flourishing trend to implement pre-training models, particularly Vision-Language Pre-training (VLP), to support downstream applications [6, 18, 23, 36, 50]. Existing deterministic representations, however, often fail to capture uncertainty in pre-training data, as they can only express positions in semantic space and measure the relationship between targets with certainty, e.g., via Euclidean distance. How can we efficiently model uncertainty in multiple modalities when dealing with pre-training models? Applying Gaussian distributions is one of the prominent approaches used for modeling uncertainty in the representation space [40, 45, 51, 54]. In these methods, however, the obtained uncertainty depends on individual features rather than considering the whole features together, which ignores the inner connection between features. To exploit this connection, we implicitly model them when formulating the uncertainty with a module called Probability Distribution Encoder (PDE). Inspired by the self-attention mechanism [44], we further add the interaction between text tokens and image patches when constructing our distribution representations to capture more information. In Figure 1(e), we provide an example of two different types of representations to describe the language uncertainty, where the distribution representations can express richer semantic relationships than the conventional point representations. The distribution variance measures the uncertainty of the corresponding text. As a byproduct, distribution representations enable diverse generations, providing multiple reasonable predictions with random sampling. In this paper, we integrate this uncertainty modeling in the pre-training framework, resulting in three new tasks: Distribution-based Vision-Language Contrastive learning (D-VLC), Distribution-based Masked Language Modeling (D-MLM), and Distribution-based Image-Text Matching (D-ITM) pre-training tasks. All these tasks deal with cross-modality alignment. More specifically, D-VLC handles the coarse-grained cross-modal alignment, which measures the whole distributions to align representations from different domains. D-MLM and D-ITM are implemented after the fine-grained interaction between different modalities, providing token-level and overall-level alignment for images and text.
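A minimal sketch of a PDE-style head is given below: it maps a token or patch sequence to a Gaussian (mean, log-variance) after an attention-based interaction and samples from it via reparameterization. The exact layer layout of the paper's PDE differs; the pooling choice and head shapes here are assumptions.

```python
import torch
import torch.nn as nn

class ProbabilityDistributionEncoder(nn.Module):
    """Map a token/patch sequence to a Gaussian (mu, logvar) representation."""
    def __init__(self, dim, heads=8):
        super().__init__()
        self.interact = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.to_mu = nn.Linear(dim, dim)
        self.to_logvar = nn.Linear(dim, dim)

    def forward(self, x, context=None):
        context = x if context is None else context   # self- or cross-modal interaction
        h, _ = self.interact(x, context, context)
        pooled = h.mean(dim=1)                         # sequence-level summary
        return self.to_mu(pooled), self.to_logvar(pooled)

def sample(mu, logvar):
    """Reparameterized sample; drawing several samples yields diverse predictions."""
    return mu + torch.randn_like(mu) * (0.5 * logvar).exp()
```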
Our contributions are summarized as follows: 1) We focus on the semantic uncertainty of multimodal un-derstanding and propose a new module, called Probability Distribution Encoder, to frame the uncertainty in multimodal representations as Gaussian distributions.2) We develop three uncertainty-aware pre-training tasks to deal with large-scale unlabeled datasets, including D-VLC, D-MLM, and D-ITM tasks. To the best of our knowledge, these are the first attempt to harness the probability distribu-tion of representations in VLP. 3) We wrap the proposed pre-training tasks into an end-2-end Multimodal uncertainty-Aware vision-language Pre-training model, called MAP, for downstream tasks. Experiments show MAP gains State-of-The-Art (SoTA) performance. Our code is available at https://github.com/IIGROUP/MAP. |
Duan_Federated_Learning_With_Data-Agnostic_Distribution_Fusion_CVPR_2023 | Abstract Federated learning has emerged as a promising distributed machine learning paradigm to preserve data privacy. One of the fundamental challenges of federated learning is that data samples across clients are usually not independent and identically distributed (non-IID), leading to slow convergence and severe performance drop of the aggregated global model. To facilitate model aggregation on non-IID data, it is desirable to infer the unknown global distributions without violating the privacy protection policy. In this paper, we propose a novel data-agnostic distribution fusion based model aggregation method called FedFusion to optimize federated learning with non-IID local datasets, based on which the heterogeneous clients' data distributions can be represented by a global distribution of several virtual fusion components with different parameters and weights. We develop a Variational AutoEncoder (VAE) method to learn the optimal parameters of the distribution fusion components based on limited statistical information extracted from the local models, and apply the derived distribution fusion model to optimize federated model aggregation with non-IID data. Extensive experiments based on various federated learning scenarios with real-world datasets show that FedFusion achieves significant performance improvement compared to the state-of-the-art. | 1. Introduction Federated learning (FL) has emerged as a novel distributed machine learning paradigm that allows a global deep neural network (DNN) model to be trained by multiple participating clients collaboratively. In such a paradigm, multiple clients train their local models based on datasets generated by edge devices such as sensors and smartphones, and the server is responsible for aggregating the parameters from the local models to form a global model without transferring local data to the central server. *The corresponding author is Wenzhong Li (lwz@nju.edu.cn). Nowadays federated learning has drawn much attention in mobile-edge computing [21, 39] with its advantages in preserving data privacy [17, 49] and enhancing communication efficiency [30, 38, 43]. The de facto standard algorithm for federated learning is FedAvg [30], where parameters of local models are averaged element-wise with weights proportional to the sizes of the client datasets. Based on FedAvg, a lot of algorithms have been proposed to improve the resource allocation fairness, communication efficiency, and convergence rate for federated learning [16, 29], which include LAG [3], Zeno [45], AFL [31], FedMA [43], etc. One of the fundamental challenges of federated learning is the non-IID data sampling from heterogeneous clients. In real-world federated learning scenarios, local datasets are typically non-IID, and the local models trained on them are significantly different from each other. It was reported in [48] that the accuracy of a convolutional neural network (CNN) model trained by FedAvg reduces by up to 55% for a highly skewed non-IID dataset. The work in [43] showed that the accuracy of a VGG model trained with FedAvg and its variants dropped from 61% to under 50% when the client number increases from 5 to 20 on heterogeneous data. Several efforts have been made to address the non-IID challenges. FedProx [26] modified FedAvg by adding a dissimilarity bound on local datasets and a proximal term on the local model parameter to tackle heterogeneity.
However, it restricts the local updates to stay close to the initial global model, which may lead to model bias. Zhao et al. [48] proposed a data sharing strategy to improve training on non-IID data by creating a small subset of data to share between all clients. However, data sharing could weaken the privacy requirement of federated learning. Several works [5, 28, 32] adopted data augmentation and model bias correction to deal with non-IID data. Clustered federated learning [2, 6, 7, 46] tackled non-IID settings by partitioning client models into clusters and performing model aggregation at the cluster level.
• We develop a V AE method to learn the optimal parameters for the data-agnostic distribution fusion model. Without violating the privacy principle of federated learning, the proposed method uses limitedstatistical information embedded in DNN models to infer a target global distribution with a maximum probability. Based on the inferred parameters, an optimal model aggregation strategy can be developed for federated learning under non-IID data. • We conduct extensive experiments using five main-stream DNN models based on four real-world datasets under non-IID conditions. Compared to FedAvg and the state-of-the-art for non-IID data (FedProx, FedMA, IFCA, FedGroup, etc), the proposed FedFusion has better convergence and training efficiency, improving the global model’s accuracy up to 12%. |
Hruby_Four-View_Geometry_With_Unknown_Radial_Distortion_CVPR_2023 | Abstract We present novel solutions to previously unsolved problems of relative pose estimation from images whose calibration parameters, namely focal lengths and radial distortion, are unknown. Our approach enables metric reconstruction without modeling these parameters. The minimal case for reconstruction requires 13 points in 4 views for both the calibrated and uncalibrated cameras. We describe and implement the first solution to these minimal problems. In the calibrated case, this may be modeled as a polynomial system of equations with 3584 solutions. Despite the apparent intractability, the problem decomposes spectacularly. Each solution falls into a Euclidean symmetry class of size 16, and we can estimate 224 class representatives by solving a sequence of three subproblems with 28, 2, and 4 solutions. We highlight the relationship between internal constraints on the radial quadrifocal tensor and the relations among the principal minors of a 4×4 matrix. We also address the case of 4 upright cameras, where 7 points are minimal. Finally, we evaluate our approach on simulated and real data and benchmark against previous calibration-free solutions, and show that our method provides an efficient startup for an SfM pipeline with radial cameras. *Supported by EU RDF IMPACT No. CZ.02.1.01/0.0/0.0/15 003/0000468 and EU H2020 No. 871245 SPRING projects. Viktor Larsson was funded by the strategic research project ELLIIT. Timothy Duff acknowledges support from an NSF Mathematical Sciences Postdoctoral Research Fellowship (DMS-2103310). Viktor Korotynskiy was partially supported by the Grant Agency of CTU in Prague project SGS23/056/OHK3/1T/13. Figure 1. Four-view Structure-from-Motion with 1D radial cameras. A radial camera projects a 3D point onto a radial line passing through its pinhole projection. | 1. Introduction Determining the relative position and orientation of two or more cameras is a classical problem in computer vision [33]. It appears in the back-end of many vision systems, usually to initialize SLAM [54] or further reconstruction in Structure-from-Motion [70]. Much effort has been concentrated on the development of methods for 3D reconstruction using perspective cameras with various additional lens distortion models [1, 52, 70]. 1.1. Motivation Any type of geometric estimation usually requires knowing the intrinsic calibration, i.e., the mapping from pixels in the image to the directions of the incoming viewing rays. If the intrinsic parameters are unknown, so-called self-calibration [26] can be attempted where the camera extrinsic and intrinsic calibration are jointly estimated. Historically, this is done in a stratified approach where a projective reconstruction is first obtained followed by a metric upgrade step [33]. These approaches are usually limited in the complexity of the cameras that can be handled; often making the assumption of pure pinhole projection. An orthogonal line of work aims to recover camera extrinsics without estimating intrinsics.
Assuming that the camera is radially symmetric (unit aspect ratio and distortion that only varies radially), it is possible to extract geometric constraints on camera poses that are invariant to focal length or radial distortion. The idea, first introduced by Tsai [78], is to only consider the angle and ignore the radius for each projection, essentially projecting 3D points onto radial lines in the image. Enforcing that the radial line passes through the 2D keypoint then yields a geometric constraint on both the 3D point and the camera pose (excluding the pure forward translation). In this context, the camera (mapping from 3D point to radial line) is referred to as a 1D radial camera [47, 76]. Mathematically, this gives a perspective camera from P3 to P1, illustrated in Figure 1. Radial cameras bring an important alternative to classical uncalibrated (radially distorted) pinhole cameras. With radial cameras, we can completely avoid (auto-)calibrating complicated radial distortion models (and focal lengths) of all cameras involved, just by using 4 instead of 2 cameras in 3D reconstruction initialization. 1.2. Contribution Motivated by the 1D radial construction above, we study problems from the multi-view geometry of P3 ⇢ P1 cameras. In particular, we consider problems containing four images (the minimum number where constraints can be obtained in a general configuration). Solving these problems allows us to effectively perform metric reconstruction for cameras with unknown radial distortion under very weak assumptions on the distortion, namely that it is radially symmetric and centered in the image. We provide three main technical contributions. First, we formulate the 13-point calibrated minimal problem for 4 radial cameras and, guided by computational Galois theory, show that this (seemingly) hard problem with 3584 complex solutions decomposes into subproblems with 28, 2, and 4 solutions, among which the minimal case for uncalibrated cameras also appears. We present a parallel study for the 7-point minimal problem for upright radial cameras. Secondly, we present the internal constraints on the radial quadrifocal tensor and show that they are given by non-trivial relations among the principal minors of a 4×4 matrix derived in [39, 53, 55]. Finally, based on the previous theoretical contributions, we design and implement stable and practical (78 and 18 ms runtime) Homotopy Continuation (HC) minimal solvers and show their quality in simulated and real experiments. We show that our solvers provide efficient initialization of the radial camera 3D reconstruction pipeline [47]. This provides a previously missing piece for building efficient radial camera 3D reconstruction pipelines. Table 1. Comparison between relative pose solvers, listed as method (# cameras, # points). For P3 ⇢ P2 cameras: Uncalibrated Linear [34] (2, 8); Uncalibrated Minimal [38] (2, 7); Calibrated Minimal [58] (2, 5); Upright Minimal [29] (2, 3). For P3 ⇢ P1 cameras: Uncalibrated Linear / Minimal [75] (3*, 7); Calibrated Minimal [47] (3*, 6); Uncalibrated Linear [75] (4, 15); Uncalibrated Minimal [OURS] (4, 13); Calibrated Minimal [OURS] (4, 13); Upright Minimal [OURS] (4, 7). *Requires intersecting principal axis or planar scene. OURS include the first minimal solvers for general 1D radial cameras.
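The basic 1D radial camera constraint can be made concrete with a short sketch. It is only an illustration of the residual, not the authors' solver: it assumes a 2×4 radial camera matrix expressed with the distortion center at the image origin, and checks that the observed keypoint and the projected direction are collinear.

```python
import numpy as np

def radial_residual(P_radial, X, x_obs, center):
    """P_radial: (2,4) radial camera (first two rows of a projective camera,
    with the distortion center moved to the origin), X: 3D point (3,),
    x_obs: observed 2D keypoint (2,), center: distortion center (2,).
    Returns the 2D cross product between the projected direction and the
    observed direction; it vanishes when x_obs lies on the predicted radial line."""
    Xh = np.append(X, 1.0)                 # homogeneous 3D point
    d_pred = P_radial @ Xh                 # direction of the predicted radial line
    d_obs = x_obs - center                 # observed direction from the center
    return d_pred[0] * d_obs[1] - d_pred[1] * d_obs[0]
```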
Dong_Rethinking_Optical_Flow_From_Geometric_Matching_Consistent_Perspective_CVPR_2023 | Abstract Optical flow estimation is a challenging problem remain-ing unsolved. Recent deep learning based optical flow mod-els have achieved considerable success. However, these models often train networks from the scratch on standard optical flow data, which restricts their ability to robustly and geometrically match image features. In this paper, we propose a rethinking to previous optical flow estima-tion. We particularly leverage Geometric Image Matching (GIM) as a pre-training task for the optical flow estimation (MatchFlow) with better feature representations, as GIM shares some common challenges as optical flow estimation, and with massive labeled real-world data. Thus, match-ing static scenes helps to learn more fundamental feature correlations of objects and scenes with consistent displace-ments. Specifically, the proposed MatchFlow model em-ploys a QuadTree attention-based network pre-trained on MegaDepth to extract coarse features for further flow re-gression. Extensive experiments show that our model has great cross-dataset generalization. Our method achieves 11.5% and 10.1% error reduction from GMA on Sintel clean pass and KITTI test set. At the time of anony-mous submission, our MatchFlow(G) enjoys state-of-the-art performance on Sintel clean and final pass compared to published approaches with comparable computation and memory footprint. Codes and models will be released in https://github.com/DQiaole/MatchFlow . | 1. Introduction This paper studies optical flow estimation, which is the problem of estimating the per-pixel displacement vector be-tween two frames. It is very useful to various real-world applications, such as video frame interpolation [24], video inpainting [17], and action recognition [48]. The recent *Equal contributions. †Corresponding author. The author is also with Shanghai Key Lab of Intelligent Information Processing, and Fudan ISTBI-ZJNU Algorithm Centre for Brain-inspired Intelligence, Zhejiang Normal University, Jin-hua, China.direct-regression based methods [16,22,25,34,44,50] have achieved great success by using powerful deep models, es-pecially the recent transformers [20, 54]. Among them, RAFT [50] employs a convolutional GRU for iterative re-finements, which queries local correlation features from a multi-scale 4D correlation volume. And GMA [25] fur-ther proposes a global motion aggregation module based on the self-similarity of image context, which greatly im-proves the performance within the occluded regions without degrading the performance in non-occluded regions. Typi-cally, these models often train networks from the scratch on standard optical flow data, with the matching module (correlation volume) to help align the features of different images/frames. Generally, these current optical flow esti-mation algorithms still can not robustly handle several in-tractable cases, e.g., small and fast-moving objects, occlu-sions, and textureless regions, as these estimators have very restricted ability of robustly learning the local image feature correspondence of different frames. In this paper, we aim to provide a rethinking to the im-portance of Geometric Image Matching (GIM) to the opti-cal flow estimation. In particular, despite GIM being de-signed to deal with the geometrically matching of static scenes, it indeed shares some common challenges with opti-cal flow estimation, such as large displacement and appear-ance change [52]. 
Thus, we advocate that the deep models for optical flow estimation should be trained from matching static scene pairs with consistent displacements. This can potentially help these models to learn the local low-level features and color correlations at the early stages of networks, before extracting the priors for 3D multi-object motion. Furthermore, compared to optical flow data, it is much easier and simpler to collect the real-world GIM data [14, 27], labeled by camera poses and depth computed from ground-truth or pioneering multi-view stereo manners [40]. Such extensive real-world data largely improves the generalization of optical flow. On the other hand, we can also emphasize a rethinking of the general training pipeline of optical flow. Specifically, since the creative work of FlowNet2 [22], optical flow models [25, 42] are often trained following the schedule of Curriculum Learning [2], i.e., from FlyingChair [16] to FlyThings3D [31] and finally to Sintel [7] or KITTI [18]. Nevertheless, the motion contained in FlyingChair is still far from the simplest scenario. Empirically, the static scene under viewpoint changes of GIM [46, 49] can also be taken as one special type of optical flow estimation, which is even much simpler than FlyingChair. Therefore, it is reasonable to take GIM as the very first stage of the curriculum learning pipeline. Essentially, as mentioned above, the GIM data can be easily collected at large scale, and thus will greatly benefit the learning of deep models. Formally, this paper elaborates on the key idea of taking GIM as the prefixed task for optical flow. We draw inspiration from recent GIM works and present a novel MatchFlow that can effectively generalize the pre-trained image matching module to estimate optical flow. The key component of MatchFlow is a new module of Feature Matching Extractor, composed of ResNet-16 and 8 interleaving self/cross-QuadTree attention blocks, trained on the GIM task and used to get the 4D correlation volume. After the Feature Matching Extractor, our MatchFlow still takes the common practice of using a GRU module to handle optical flow estimation, with an optional GMA modelling the context information. In this paper, we denote the full model based on GMA as MatchFlow(G), and the model without the GMA module (a.k.a. RAFT) as MatchFlow(R). Following the standard optical flow training procedure [25, 50], we conduct extensive experiments on FlyingChair [16], FlyingThings3D [31], Sintel [7], and KITTI [18]. Experimental results show that MatchFlow enjoys good performance and great cross-dataset generalization. Formally, RAFT-based MatchFlow(R) obtains an Fl-all error of 13.6% on the KITTI training set after being trained on synthetic datasets. In addition, GMA-based MatchFlow(G) achieves 11.5% and 10.1% error reduction from GMA on the Sintel clean pass and KITTI test set. The qualitative comparison on KITTI also shows the superior performance of MatchFlow as in Fig. 1 (Figure 1. Qualitative results on the KITTI test set: (a) Reference Frame, (b) GMA, (c) MatchFlow(G). Red dashed boxes mark the regions of substantial improvements. Please zoom in for details.). Ablation studies verify that GIM pre-training indeed helps to learn better feature representation for optical flow estimation.
We highlight our contributions as follows. (1) We re-formulate the optical flow pipeline, and propose the ideaof employing GIM as the preluding task for optical flow. This offers a rethinking to the learning based optical flow estimation. (2) We further present a novel matching-based optical flow estimation model – MatchFlow, which has the new module of Feature Matching Extractor, learned by the GIM pre-training task for optical flow estimation. Accord-ingly, the pipeline of curriculum learning has also been up-dated to effectively train our MatchFlow. (3) We introduce the massive real-world matching data to train our model. And thus our model can extract robust features to handle with the consistent motion of scenes, and common chal-lenges faced by both tasks. (4) We conduct extensive ex-periments and ablation studies to show that both the match-ing based pre-training and interleaving self/cross-attention modules are critical for the final optical flow performance. The proposed model shows great cross-dataset generaliza-tion and better performance over several optical flow com-petitors on several standard benchmarks. |
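For reference, the 4D correlation volume mentioned in the introduction above can be sketched as an all-pairs cost volume between two feature maps, in the spirit of RAFT-style matching. The PyTorch snippet below is a generic illustration: the feature shapes and the scaling factor are assumptions, not MatchFlow's exact implementation.

```python
import torch

def correlation_volume(fmap1, fmap2):
    """All-pairs correlation between two feature maps.

    fmap1, fmap2: (B, C, H, W) features of the two frames.
    Returns a 4D cost volume of shape (B, H, W, H, W) whose entry
    [b, i, j, k, l] is the dot product between the feature at (i, j)
    in frame 1 and the feature at (k, l) in frame 2.
    """
    B, C, H, W = fmap1.shape
    f1 = fmap1.view(B, C, H * W)
    f2 = fmap2.view(B, C, H * W)
    corr = torch.einsum('bci,bcj->bij', f1, f2) / C ** 0.5   # scaled for stability
    return corr.view(B, H, W, H, W)

# Example with random features standing in for the matching extractor's output.
corr = correlation_volume(torch.randn(2, 256, 46, 62), torch.randn(2, 256, 46, 62))
print(corr.shape)   # torch.Size([2, 46, 62, 46, 62])
```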
He_Frustratingly_Easy_Regularization_on_Representation_Can_Boost_Deep_Reinforcement_Learning_CVPR_2023 | Abstract Deep reinforcement learning (DRL) gives the promise that an agent learns good policy from high-dimensional information, whereas representation learning removes ir-relevant and redundant information and retains pertinent information. In this work, we demonstrate that the learned representation of the 𝑄-network and its target 𝑄-network should, in theory, satisfy a favorable distinguishable repre-sentation property. Specifically, there exists an upper bound on the representation similarity of the value functions of two adjacent time steps in a typical DRL setting. However, through illustrative experiments, we show that the learned DRL agent may violate this property and lead to a sub-optimal policy. Therefore, we propose a simple yet effective regularizer called Policy Evaluation with Easy Regulariza-tion on Representation (PEER), which aims to maintain the distinguishable representation property via explicit regu-larization on internal representations. And we provide the convergence rate guarantee of PEER. Implementing PEER requires only one line of code. Our experiments demonstrate that incorporating PEER into DRL can significantly improve performance and sample efficiency. Comprehensive experi-ments show that PEER achieves state-of-the-art performance on all 4environments on PyBullet, 9out of 12tasks on DM-Control, and 19out of 26games on Atari. To the best of our knowledge, PEER is the first work to study the inherent rep-resentation property of 𝑄-network and its target. Our code is available at https://sites.google.com/view/peer-cvpr2023/. | 1. Introduction Deep reinforcement learning (DRL) leverages the func-tion approximation abilities of deep neural networks (DNN) and the credit assignment capabilities of RL to enable agents to perform complex control tasks using high-dimensionalobservations such as image pixels and sensor information [1,2,3,4]. DNNs are used to parameterize the policy and value functions, but this requires the removal of irrelevant and redundant information while retaining pertinent informa-tion, which is the task of representation learning. As a result, representation learning has been the focus of attention for researchers in the field [ 5,6,7,8,9,10,11,12,13]. In this paper, we investigate the inherent representation properties of DRL. The action-value function is a measure of the quality of taking an action in a given state. In DRL, this function is approximated by the action-value network or 𝑄-network. To enhance the training stability of the DRL agent, Mnih et al. [2]introduced a target network, which computes the target value with the frozen network parameters. The weights of a target network are either periodically replicated from learning𝑄-network, or exponentially averaged over time steps. Despite the crucial role played by the target network in DRL, previous studies[ 5,6,7,8,9,10,11,12,13,14,15, 16,17,18,19,20,21,22,23,24] have not considered the representation property of the target network. In this work, we investigate the inherent representation property of the 𝑄-network. Following the commonly used definition of rep-resentation of 𝑄-network [ 17,25,26], the𝑄-network can be separated into a nonlinear encoder and a linear layer, with the representation being the output of the nonlinear encoder. By employing this decomposition, we reformulate the Bellman equation [ 27] from the perspective of representation of 𝑄-network and its target. 
We then analyze this formulation and demonstrate theoretically that a favorable distinguishable representation property exists between the representation of the 𝑄-network and that of its target. Specifically, there exists an upper bound on the representation similarity of the value functions of two adjacent time steps in a typical DRL setting, which differs from previous work. We subsequently conduct experimental verification to investigate whether agents can maintain the favorable distinguishable representation property. To this end, we choose two prominent DRL algorithms, TD3 [28] and CURL [6] (without/with explicit representation learning techniques). The experimental results indicate that the TD3 agent indeed maintains the distinguishable representation property, which is a positive sign for its performance. However, the CURL agent fails to preserve this property, which can potentially have negative effects on the model’s overall performance. These theoretical and experimental findings motivate us to propose a simple yet effective regularizer, named Policy Evaluation with Easy Regularization on Representation (PEER). PEER aims to ensure that the agent maintains the distinguishable representation property via explicit regularization on the 𝑄-network’s internal representations. Specifically, PEER regularizes the policy evaluation phase by pushing the representation of the 𝑄-network away from its target. Implementing PEER requires only one line of code. Additionally, we provide a theoretical guarantee for the convergence of PEER. We evaluate the effectiveness of PEER by combining it with three representative DRL methods: TD3 [28], CURL [6], and DrQ [9]. The experiments show that PEER effectively maintains the distinguishable representation property in both the state-based PyBullet [29] and pixel-based DMControl [30] suites. Additionally, comprehensive experiments demonstrate that PEER outperforms compared algorithms on four tested suites: PyBullet, MuJoCo [31], DMControl, and Atari [32]. Specifically, PEER achieves state-of-the-art performance on 4 out of 4 environments on PyBullet, 9 out of 12 tasks on DMControl, and 19 out of 26 games on Atari. Moreover, our results also reveal that combining algorithms (e.g., TD3, CURL, DrQ) with the PEER loss outperforms their respective backbone methods. This observation suggests that the performance of DRL algorithms may be negatively impacted if the favorable distinguishable representation property is not maintained. The results also demonstrate that the PEER loss is orthogonal to existing representation learning methods in DRL. Our contributions are summarized as follows. (i) We theoretically demonstrate the existence of a favorable property, the distinguishable representation property, between the representation of the 𝑄-network and its target. (ii) The experiments show that learned DRL agents may violate such a property, possibly leading to a sub-optimal policy. To address this issue, we propose an easy-to-implement and effective regularizer, PEER, that ensures that the property is maintained. To the best of our knowledge, the PEER loss is the first work to study the inherent representation property of the 𝑄-network and its target and to leverage it to boost DRL.
(iii) In addition, we provide the convergence rate guarantee of PEER. (iv) To demonstrate the effectiveness of PEER, we perform comprehensive experiments on four commonly used RL suites: PyBullet, MuJoCo, DMControl, and Atari. The empirical results show that PEER can dramatically boost state-of-the-art representation learning DRL methods. |
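To make the "pushing the representation of the Q-network away from its target" idea concrete, here is a hedged PyTorch sketch of how such a penalty could be attached to an ordinary TD loss. The inner-product form, the coefficient beta, and the random tensors standing in for the two networks' penultimate-layer activations are assumptions of this sketch, not the authors' exact code.

```python
import torch

def peer_penalty(phi, phi_target):
    """Similarity between the Q-network's representation phi (from (s, a)) and the
    frozen target network's representation phi_target (from (s', a')). Adding
    beta * peer_penalty(...) to the usual TD loss discourages the two
    representations from collapsing onto each other."""
    return (phi * phi_target.detach()).sum(dim=-1).mean()

# Toy usage with random features standing in for network activations.
phi = torch.randn(256, 64, requires_grad=True)       # batch of Q-network representations
phi_target = torch.randn(256, 64)                    # target-network representations
td_loss = torch.tensor(1.0, requires_grad=True)      # placeholder for the usual critic loss
beta = 5e-4                                          # illustrative coefficient
loss = td_loss + beta * peer_penalty(phi, phi_target)
loss.backward()
```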
Berges_Galactic_Scaling_End-to-End_Reinforcement_Learning_for_Rearrangement_at_100k_Steps-per-Second_CVPR_2023 | Abstract We present Galactic, a large-scale simulation and reinforcement-learning (RL) framework for robotic mobile manipulation in indoor environments. Specifically, a Fetch robot (equipped with a mobile base, 7DoF arm, RGBD camera, egomotion, and onboard sensing) is spawned in a home environment and asked to rearrange objects – by navigating to an object, picking it up, navigating to a target location, and then placing the object at the target location. Galactic is fast. In terms of simulation speed (rendering + physics), Galactic achieves over 421,000 steps-per-second (SPS) on an 8-GPU node, which is 54x faster than Habitat 2.0 [55] (7699 SPS). More importantly, Galactic was designed to optimize the entire rendering+physics+RL interplay since any bottleneck in the interplay slows down training. In terms of simulation+RL speed (rendering + physics + inference + learning), Galactic achieves over 108,000 SPS, which is 88x faster than Habitat 2.0 (1243 SPS). These massive speed-ups not only drastically cut the wall-clock training time of existing experiments, but also unlock an unprecedented scale of new experiments. First, Galactic can train a mobile pick skill to >80% accuracy in under 16 minutes, a 100x speedup compared to the over 24 hours it takes to train the same skill in Habitat 2.0. Second, we use Galactic to perform the largest-scale experiment to date for rearrangement using 5B steps of experience in 46 hours, which is equivalent to 20 years of robot experience. This scaling results in a single neural network composed of task-agnostic components achieving 85% success in GeometricGoal rearrangement, compared to 0% success reported in Habitat 2.0 for the same approach. The code is available at github.com/facebookresearch/galactic . | 1. Introduction The scaling hypothesis posits that as general-purpose neural architectures are scaled to larger model sizes and training experience, increasingly sophisticated intelligent behavior emerges. *Equal contribution. These so-called ‘scaling laws’ appear to be driving many recent advances in AI, leading to massive improvements in computer vision [12,42,44] and natural language processing [3,40]. But what about embodied AI? We contend that embodied AI experiments need to be scaled by several orders of magnitude to become comparable to the experiment scales of CV and NLP, and likely even further beyond given the multi-modal interactive long-horizon nature of embodied AI problems. Consider one of the largest-scale experiments in vision-and-language: the CLIP [39] model was trained on a dataset of 400 million images (and captions) for 32 epochs, giving a total of approximately 12 billion frames seen during training. In contrast, most navigation experiments in embodied AI involve only 100-500M frames of experience [5,30,68]. The value of large-scale training in embodied AI was demonstrated by Wijmans et al. [60] by achieving near-perfect performance on the PointNav task through scaling to 2.5 billion steps of experience. Since then, there have been several other examples of scaling experiments to 1B steps in navigation tasks [41,65]. Curiously, as problems become more challenging, going from navigation to mobile manipulation object rearrangement, the scale of experiments has become smaller.
Examples of training scales in rearrangement policies include [19,55,59] training skills in Habitat 2.0 for 100-200M steps and [58] in Visual Room Rearrangement for 75M steps. In this work, we demonstrate for the first time scaling training to 5 billion frames of experience for rearrangement in visually challenging environments. Why is large-scale learning in embodied AI hard? Unlike in CV or NLP, data in embodied AI is collected through an agent acting in environments. This data collection process involves policy inference to compute actions, physics to update the world state, rendering to compute agent observations, and reinforcement learning (RL) to learn from the collected data. These separate systems for rendering, physics, and inference are largely absent in CV and NLP. We present Galactic, a large-scale simulation+RL framework for robotic mobile manipulation in indoor environments.
Table 1. High-level throughput comparison of different simulators.
Arcade RL sims:
  VizDoom [26,37]: 1x RTX 3090, 28×72 RGB, train SPS 18,900, sim SPS 38,100, photoreal: no, physics: yes (abstract).
Physics-only sims:
  Isaac Gym (Shadow Hand) [29]: 1x A100, N/A, train SPS 150,000, sim SPS –, photoreal: no, physics: yes.
  Brax (Grasp) [16]: 4x2 TPUv3, N/A, train SPS 1,000,000, sim SPS 10,000,000, photoreal: no, physics: yes.
  ADPL Humanoid [64]: 1x TITAN X, N/A, train SPS 40,960, sim SPS 144,035, photoreal: no, physics: yes.
EAI sims:
  iGibson [28,51]: 1x GPU, 128×128 RGB, train SPS –, sim SPS 100, photoreal: yes, physics: yes.
  AI2-THOR [11]: 8x RTX 2080 Ti, 224×224 RGB, train SPS ≈300, sim SPS 2,860, photoreal: yes, physics: yes.
  Megaverse [37]: 1x RTX 3090, 128×72 RGB, train SPS 42,700, sim SPS 327,000; 8x RTX 2080 Ti, 128×72 RGB, train SPS 134,000, sim SPS 1,148,000; photoreal: no, physics: yes (abstract).
  LBS [48]: 1x RTX 3090, 64×64 RGB, train SPS 13,300, sim SPS 33,700; 1x Tesla V100, train SPS 9,000, sim SPS –; 8x Tesla V100, train SPS 37,800, sim SPS –; photoreal: yes, physics: no.
  Habitat 2.0 [55,59]: 1x RTX 2080 Ti, 128×128 RGBD, train SPS 367, sim SPS 1,660; 8x RTX 2080 Ti, train SPS 1,243, sim SPS 7,699; 1x Tesla V100, train SPS 128, sim SPS 2,790; 8x Tesla V100, train SPS 945, sim SPS 17,465; photoreal: yes, physics: yes.
  Galactic (Ours): 1x Tesla V100, 128×128 RGBD, train SPS 14,807, sim SPS 54,966; 8x Tesla V100, train SPS 108,806, sim SPS 421,194; photoreal: yes, physics: yes.
Steps-per-second (SPS) numbers are taken from source publications, and we don’t control for all performance-critical variables, including scene complexity and policy architecture. Comparisons should focus on orders of magnitude. We show sim SPS (physics and/or rendering) and training SPS (physics and/or rendering, inference and learning) for various physics-only and Embodied AI simulators. We also describe VizDoom, an arcade simulator which has served as a classic benchmark for RL algorithms due to its speed. The "yes (abstract)" physics entries for Megaverse and VizDoom denote physics for abstract, non-realistic environments. Among EAI simulators that support realistic environments (photorealism and realistic physics), Galactic is 80× faster than the existing fastest simulator, Habitat 2.0 (108,806 vs 1243 training SPS for 8 GPUs). Galactic’s training speed is comparable to LBS, Megaverse, and VizDoom, even though LBS doesn’t simulate physics and neither Megaverse nor VizDoom supports realistic environments.
We also compare to GPU-based physics simulators: while these are generally faster than Galactic, they entirely omit rendering, which significantly reduces their compute requirements. For Galactic, we observe near-linear scaling from 1 to 8 GPUs, with a 7.3x speedup. Specifically, we study and simulate the task of GeometricGoal Rearrangement [2], where a Fetch robot [43] equipped with a mobile base, 7DoF arm, RGBD camera, egomotion, and proprioceptive sensing must rearrange objects in the ReplicaCAD [55] environments by navigating to an object, picking up the object, navigating to a target location, and then placing the object at the target location. Galactic is fast. In terms of simulation speed (rendering + physics), Galactic achieves over 421,000 steps-per-second (SPS) on an 8-GPU node, which is 54x faster than Habitat |
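As a back-of-the-envelope check on the "20 years of robot experience" figure quoted in the abstract above, the conversion only needs the robot's control rate. The rate used below (roughly 8 Hz) is an assumption of this sketch and is not stated in the excerpt.

```python
# Convert simulation steps to equivalent real-robot experience time (rough sketch).
steps = 5e9            # experience used in the largest-scale experiment
control_hz = 8.0       # assumed control rate of the simulated Fetch robot
seconds = steps / control_hz
years = seconds / (3600 * 24 * 365)
print(f"{years:.1f} years of robot experience")   # ~19.8 years at 8 Hz
```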
Jiang_StyleIPSB_Identity-Preserving_Semantic_Basis_of_StyleGAN_for_High_Fidelity_Face_CVPR_2023 | Abstract Recent research reveals that StyleGAN can generate highly realistic images, inspiring researchers to use pre-trained StyleGAN to generate high-fidelity swapped faces. However, existing methods fail to meet the expectations in two essential aspects of high-fidelity face swapping. Their results are blurry without pore-level details and fail to preserve identity for challenging cases. To overcome the above artifacts, we innovatively construct a series of identity-preserving semantic bases of StyleGAN (called StyleIPSB) with respect to pose, expression, and illumination. Each basis of StyleIPSB controls one specific semantic attribute and disentangles from the others. StyleIPSB constrains the style code in a subspace of W+ space to preserve pore-level details and gives us a novel tool for high-fidelity face swapping, and we propose a three-stage framework for face swapping with StyleIPSB. Firstly, we transform the target facial images' attributes to the source image. We learn the mapping from 3D Morphable Model (3DMM) parameters, which capture the prominent semantic variance, to the coordinates of StyleIPSB that show higher identity preservation and fidelity. Secondly, to transform detailed attributes which 3DMM does not capture, we learn the residual attribute between the reenacted face and the target face. Finally, the face is blended into the background of the target image. Extensive results and comparisons demonstrate that StyleIPSB can effectively preserve identity and pore-level details. The results of face swapping can achieve state-of-the-art performance. We will release our code at https://github.com/a686432/StyleIPSB *Co-Corresponding authors. †The research is supported in part by the National Natural Science Foundation of China (61972342, 61832016, 61972341, 61902277). | 1. Introduction Facial image manipulation [36, 37, 48, 50] is a task of transforming specific attributes from the source image to the target image while preserving other attributes unchanged. Face swapping is one of the essential parts of facial image manipulation, which has attracted lots of interest in the computer vision and graphics community. Face swapping aims to generate an image with the source image’s identity and the target image’s attributes (e.g., expression, pose, background, hair, etc.). It has wide applications in the film industry and computer games. Current face swapping methods are mainly divided into two categories: 3D model-based methods and 2D image-based methods. 3D model-based methods [11,17,40] firstly reconstruct the 3D face models based on 3DMM from the source image and target image and transfer the non-identity parameters of the target face model to the source face model. Then they render the transferred 3D model and blend it into the target image. Although such methods can transfer coarse facial attributes such as pose, expression, and illumination, they have difficulty in generating realistic hair and teeth accessories. With the development of generative adversarial networks, 2D image-based methods [6, 7, 22, 27, 28, 45] can synthesize photo-realistic images.
The generated face im-ages have convincing and detailed facial attributes, such as mouth, teeth, and eyebrows. Recent works [21, 46,47,51] employ the pre-trained StyleGAN decoder to further im-prove the fidelity and synthesize pore-level details. How-ever, as shown in Fig. 1, despite using the pre-trained Style-GAN model, their results fail to generate pore-level details and identity-preserving in challenge conditions. Overall, 2D image-based methods generate more realistic images than 3D model-based methods, but the identity-preserving and pore-level details of the images still need improvement. To tackle the challenges of blurry images and identity-preserving in face swapping, according to our observations, the cause of the blurred images is that the regressed style code is out of W+ space. Additionally, as mentioned in [18], identity embedding is a non-smooth space, so finding the identity-preserving optimized direction is challenging. To address these problems, our method constrains the re-gressed style code with identity-preserving semantic bases of StyleGAN (i.e., the proposed StyleIPSB). StyleIPSB stays within the W+ space, and the identity is preserved when changing its coordinates. The advantages of the proposed StyleIPSB are summa-rized as follows: (1) StyleIPSB constitutes a linear space, which is the subspace of the W+ space of StyleGAN. By ensuring the regressed style code within the W+ space of StyleGAN, we can more easily generate images with pore-level details. (2) When changing the coordinates of the StyleIPSB, the identity remains preserved as much as possi-ble. (3) StyleIPSB can represent various poses, expressions, and illuminations. To construct the basis that satisfies the above properties, we propose a novel identity-preserving distance metric to find the orthogonal semantic directions, which are further assembled to StyleIPSB.StyleIPSB also cooperates well with 3DMM to control facial attributes. StyleRig [39] builds the mapping of the 3DMM parameter space and W+ space of StyleGAN, which can change the facial attribute of the generated image by the 3DMM parameters. StyleRig only can manipulate im-ages generated by StyleGAN. Pie [38] designs a non-linear optimization problem to edit the real-world image based on StyleRig, but the optimization operation is time-consuming. GIF [12] generates face images by the FLAME [23] para-metric control. However, the generated images are easy to contain artifacts and change identity. Other face manipu-lation methods [4, 29,37] use the network directly to find the edit direction. Still, without the guidance of 3DMM, they can only generate some basic expressions (e.g., smile) and fail to cover various expressions. In this paper, we pro-pose the StyleGAN-3DMM mapping network, which trans-forms the semantic information of 3DMM parameters into StyleIPSB coordinates. The StyleGAN-3DMM mapping network reduces the gap between 3DMM and StyleIPSB. It shows that StyleIPSB is very compatible with 3DMM. In summary, we propose a face swapping framework based on StyleIPSB and achieve state-of-the-art results. The main contributions of this paper lie in the following three aspects: • We propose a novel method of establishing identity-preserving semantic bases of StyleGAN called StyleIPSB. The face image, generated by the linear space of StyleIPSB, remains pore-level details and identity-preserving. 
• The proposed StyleGAN-3DMM mapping network serves as the bridge to narrow the gap between 3DMM and StyleIPSB, which can take advantage of the promi-nent semantic variance of 3DMM and the identity-preserving and high-fidelity of styleIPSB. • We propose the face swapping framework based on StyleIPSB and StyleGAN-3DMM mapping network. Extensive results show our method outperforms others in detail-preserving and identity-preserving. |
Hu_Discriminator-Cooperated_Feature_Map_Distillation_for_GAN_Compression_CVPR_2023 | Abstract Despite excellent performance in image generation, Generative Adversarial Networks (GANs) are notorious for their requirements of enormous storage and intensive computation. As an awesome “performance maker”, knowledge distillation is demonstrated to be particularly efficacious in exploring low-priced GANs. In this paper, we investigate the irreplaceability of the teacher discriminator and present an inventive discriminator-cooperated distillation, abbreviated as DCD, towards refining better feature maps from the generator. In contrast to conventional pixel-to-pixel match methods in feature map distillation, our DCD utilizes the teacher discriminator as a transformation to drive intermediate results of the student generator to be perceptually close to corresponding outputs of the teacher generator. Furthermore, in order to mitigate mode collapse in GAN compression, we construct a collaborative adversarial training paradigm where the teacher discriminator is established from scratch to co-train with the student generator in company with our DCD. Our DCD shows superior results compared with existing GAN compression methods. For instance, after reducing over 40× MACs and 80× parameters of CycleGAN, we decrease the FID metric from 61.53 to 48.24, while the current SoTA method merely reaches 51.92. This work’s source code has been made accessible at https://github.com/poopit/DCD-official . | 1. Introduction Image generation transforms random noise or source-domain images to other images in user-required domains. Recent years have witnessed the burgeoning of generative adversarial networks (GANs) that lead to substantial progress in image-to-image translation [8, 9, 18, 49], style transfer [11, 12, 42], image synthesis [3, 22, 23, 32, 46], etc. *Corresponding Author. Figure 1. (a) Layer-by-layer feature map distillation [34]. (b) Cross-layer feature map distillation [6]. (c) Our discriminator-cooperated feature map distillation. Image generation has a wide application in daily entertainment such as the TikTok AI image generator, Dream by WOMBO, Google Imagen, and so on. Running platforms performing these applications are typically featured with poor memory storage and limited computational power. However, GANs are also ill-famed for the growing spurt of learnable parameters and multiply-accumulate operations (MACs), raising a huge challenge to the storage requirement and computing ability of deployment infrastructure. To address the above dilemma for better usability of GANs in serving human life, methods such as pruning [7, 27, 28, 33], neural network architecture search (NAS) [10, 19, 26] and quantization [39, 40] have been broadly explored to obtain a smaller generator. On the premise of this compression research, knowledge distillation, in particular distilling feature maps, has been accepted as a supplementary means to enhance the performance of compressed generators [1,4,17,26,29,41]. Originating from image classification, as illustrated in Fig. 1(a), feature map based distillation, which extracts information of intermediate activations and transfers the knowledge from the teacher model to the student one, has been extensively explored and demonstrated to well improve the capability of lightweight
models [5, 25, 34, 43, 45]. Distinct from passing on common feature maps from teacher to student, AT [45] calculates feature attentions as the delivered knowledge; MGD [43] randomly masks feature maps to indirectly guide the student to learn from the teacher; KRD [6] uses a cross-layer distillation method to allow the “new knowledge” of the student to learn from the “old knowledge” in the teacher, as shown in Fig. 1(b). Nevertheless, most methods execute pixel-to-pixel feature map matching between teacher and student. Similar to the implementations in image classification, feature map based distillation is also considered in GAN compression. For example, GCC [28] considers a well pre-trained discriminator to absorb high-level information from the teacher-generated image, and fuses it with intermediate activations from the teacher generator, the results of which are passed to the corresponding position of the student generator. OMGD [33] utilizes an online multi-granularity strategy to allow a deeper teacher generator and a wider one to simultaneously deliver output image knowledge of different granularities to the student generator. These two methods follow the pipeline of image classification to tune the intermediate outputs of the student generator with those of the teacher generator in a fashion of per-pixel matching. Although the sustainable progress on multiple benchmark datasets demonstrates the efficacy of intermediate activation outputs, feature-based distillation, as we reveal in this paper, is not well compatible with the very nature of generating perceptually similar images and the adversarial training paradigm. Concretely speaking, conversely to image classification that relies on feature vector representations, the essence of image generation is to improve the perceptual similarity between the real images and generated images. Two important facts make it difficult to use a per-pixel match to analyze a pair of images: First, two similar images can contain many different pixel values; Second, two dissimilar images can still comprise the same pixel values. Thus, it is not suitable to simply use the per-pixel match. Regarding adversarial training in GANs, a generator learns to synthesize samples that best resemble the dataset, while a discriminator differentiates samples in the dataset from the generator-generated samples. The adversarial results finally lead the generator to create images of out-of-the-ordinary visual quality, indicating that the discriminator is also empowered with informative capacity and can be exploited to enrich the distillation of feature maps. Therefore, it might be inappropriate to directly extend feature map distillation in image classification to image generation. And GAN-compression-oriented feature map distillation with the discriminator included remains to be well explored. In order to achieve this objective, in this paper, we propose a discriminator-cooperated distillation (DCD) method to involve the teacher discriminator in distilling feature maps for the student generator. A simple illustration is given in Fig. 1(c): in contrast to the vanilla pixel-wise distance constraint, our DCD measures the distance at the end of the teacher discriminator with the intermediate generator outputs as its inputs. Our DCD proves effective on multiple benchmark datasets with a simple implementation.
Akin to perceptual loss [20] which employs a pre-trained neural network such as a VGG model [36] to extract features upon which the ℓ1 distance is calculated from activations of hidden layers, the teacher discriminator in DCD also acts as a feature extrac-tor. Due to pooling operations in the hidden layers, feature maps from different sources (student generator and teacher generator) as inputs to the discriminator may lead to iden-tical latent representations, therefore encouraging natural and perceptually pleasing results. In addition, the proposed DCD is used in conjunction with collaborative adversarial training, which is also simple but perspicacious to allow the student generator to fool the discriminator for generating better images. In contrast to discriminator-free paradigm training [28], we find our DCD empowers the compressed student generator with a better capability to compete against teacher discriminator. Thus, we also employ the teacher dis-criminator to collaboratively determine whether inputs from the student generator are real or not. This work intends to raise the level of feature map dis-tillation to strengthen the compressed student generator to generate high-quality images. The major contributions we have made across the entire paper are listed as follows: (1) An incentive GAN-oriented discriminator-cooperated fea-ture map distillation method to produce images with high fidelity; (2) One novel collaborative adversarial training paradigm to better reach a global equilibrium point in com-pressing GANs; (3) Remarkable reduction on the generator complexity and significant performance increase. |
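The role of the teacher discriminator as a feature extractor can be sketched as follows in PyTorch: intermediate outputs of the student and teacher generators are both pushed through a frozen teacher discriminator and compared in its activation space rather than pixel-by-pixel. The stand-in discriminator, the optional alignment layer, and the choice of an ℓ1 distance on the discriminator's outputs are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

def dcd_style_loss(feat_student, feat_teacher, teacher_disc, align=None):
    """Discriminator-cooperated distance between two intermediate feature maps.

    Instead of matching the feature maps pixel-to-pixel, both are mapped
    through the frozen teacher discriminator and compared there, so that
    perceptually equivalent generations are not penalized for per-pixel
    differences.
    """
    if align is not None:                      # e.g. a 1x1 conv if channel counts differ
        feat_student = align(feat_student)
    with torch.no_grad():                      # teacher side provides a fixed target
        target = teacher_disc(feat_teacher)
    return (teacher_disc(feat_student) - target).abs().mean()

# Toy usage with a stand-in discriminator and random "intermediate" features.
disc = nn.Sequential(nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                     nn.Conv2d(128, 1, 4, stride=2, padding=1))
loss = dcd_style_loss(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32), disc)
loss.backward()
```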
Fel_Dont_Lie_to_Me_Robust_and_Efficient_Explainability_With_Verified_CVPR_2023 | Abstract A plethora of attribution methods have recently been developed to explain deep neural networks. These methods use different classes of perturbations (e.g., occlusion, blurring, masking, etc.) to estimate the importance of individual image pixels in driving a model’s decision. Nevertheless, the space of possible perturbations is vast, and current attribution methods typically require significant computation time to accurately sample the space in order to achieve high-quality explanations. In this work, we introduce EVA (Explaining using Verified Perturbation Analysis) – the first explainability method which comes with guarantees that an entire set of possible perturbations has been exhaustively searched. We leverage recent progress in verified perturbation analysis methods to directly propagate bounds through a neural network to exhaustively probe a – potentially infinite-size – set of perturbations in a single forward pass. Our approach takes advantage of the beneficial properties of verified perturbation analysis, i.e., time efficiency and guaranteed complete – sampling agnostic – coverage of the perturbation space, to identify image pixels that drive a model’s decision. We evaluate EVA systematically and demonstrate state-of-the-art results on multiple benchmarks. Our code is freely available: github.com/deel-ai/formal-explainability | 1. Introduction Deep neural networks are now being widely deployed in many applications from medicine, transportation, and security to finance, with broad societal implications [40]. They are routinely used to make safety-critical decisions – often without an explanation, as their decisions are notoriously hard to interpret. Figure 1. Manifold exploration of current attribution methods. Current methods assign an importance score to individual pixels using perturbations around a given input image x. Saliency [56] uses infinitesimal perturbations around x, Occlusion [71] switches individual pixel intensities on/off. More recent approaches [17,43,46,48,49] use (quasi-)random sampling methods in specific perturbation spaces (occlusion of segments of pixels, blurring, ...). However, the choice of the perturbation space undoubtedly biases the results – potentially even introducing serious artifacts [26,29,38,64]. We propose to use verified perturbation analysis to efficiently perform a complete coverage of a perturbation space around x to produce reliable and faithful explanations. Many explainability methods have been proposed to gain insight into how network models arrive at a particular decision [17,24,43,46,48,49,53,55,61,65,71]. The applications of these methods are multiple – from helping to improve or debug their decisions to helping instill confidence in the reliability of their decisions [14]. Unfortunately, a severe limitation of these approaches is that they are subject to a confirmation bias: while they appear to offer useful explanations to a human experimenter, they may produce incorrect explanations [2,23,59]. In other words, just because the explanations make sense to humans does not mean that they actually convey what is happening within the model.
Therefore, the community is actively seeking for better benchmarks involving humans [ 12,29,37,45]. In the meantime, it has been shown that some of our current and commonly used benchmarks are biased and that explainability methods reflect these biases – ultimately providing the wrong explanation for the behavior of the model [ 25,29,64]. For example, some of the current fi-delity metrics [ 7,18,27,34,48] mask one or a few of the input variables (with a fixed value such as a gray mask) in order to assess how much they contribute to the output of the system. Trivially, if these variables are already set to the mask value in a given image (e.g., gray), masking these variables will not yield any effect on the model’s output and the importance of these variables is poised to be underesti-mated. Finally, these methods rely on sampling a space of perturbations that is far too vast to be fully explored – e.g., LIME on a image divided in 64segments image would need more than 1019samples to test all possible perturbations. As a result, current attribution methods may be subject to bias and are potentially not entirely reliable. To address the baseline issue, a growing body of work is starting to leverage adversarial methods [ 8,29,31,42,50] to derive explanations that reflect the robustness of the model to local adversarial perturbations. Specifically, a pixel or an image region is considered important if it allows the easy generation of an adversarial example. That is if a small perturbation of that pixel or image region yields a large change in the model’s output. This idea has led to the design of several novel robustness metrics to evaluate the quality of explanations, such as Robustness-S r[29]. For a better ranking on those robustness metrics, several meth-ods have been proposed that make intensive use of adversar-ial attacks [ 29,70], such as Greedy-AS for Robustness-S r. However, these methods are computationally very costly – in some cases, requiring over 50 000 adversarial attacks per explanation – severely limiting the widespread adoption of these methods in real-world scenarios. In this work, we propose to address this limitation by introducing EV A (Explaining using Verified perturbation Analysis), a new explainability method based on robustness analysis. Verified perturbation analysis is a rapidly growing toolkit of methods to derive bounds on the outputs of neural networks in the presence of input perturbations. In contrast to current attributions methods based on gradient estimation or sampling, verified perturbation analysis allows the full exploration of the perturbation space, see Fig. 1. We use a tractable certified upper bound of robustness confidence to derive a new estimator to help quantify the importance of input variables (i.e., those that matter the most). That is, thevariables most likely to change the predictor’s decision. We can summarize our main contributions as follows: • We introduce EV A, the first explainability method guar-anteed to explore its entire set of perturbations using Ver-ified Perturbation Analysis. • We propose a method to scale EV A to large vision models and show that the exhaustive exploration of all possible perturbations can be done efficiently. 
• We systematically evaluate our approach using several image datasets and show that it yields convincing results on a large range of explainability metrics. • Finally, we demonstrate that we can use the proposed method to generate class-specific explanations, and we study the effects of several verified perturbation analysis methods as a hyperparameter of the generated explanations. |
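For readers unfamiliar with verified perturbation analysis, the sketch below shows its simplest instance, interval bound propagation, which pushes a whole box of input perturbations through an affine layer and a ReLU in closed form. It is a generic illustration of bound propagation with random made-up weights, not the specific (tighter) relaxations that EVA relies on.

```python
import numpy as np

def ibp_affine(l, u, W, b):
    """Propagate elementwise bounds l <= x <= u through x -> W @ x + b."""
    center, radius = (l + u) / 2, (u - l) / 2
    c_out = W @ center + b
    r_out = np.abs(W) @ radius
    return c_out - r_out, c_out + r_out

def ibp_relu(l, u):
    """ReLU is monotone, so bounds pass through directly."""
    return np.maximum(l, 0), np.maximum(u, 0)

# Toy 2-layer network: bound the outputs for every input within +/- eps of x.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)
x, eps = rng.normal(size=4), 0.1
l, u = ibp_relu(*ibp_affine(x - eps, x + eps, W1, b1))
l, u = ibp_affine(l, u, W2, b2)
print(l, u)   # all outputs reachable from the whole perturbation box lie in [l, u]
```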
Fu_StyleAdv_Meta_Style_Adversarial_Training_for_Cross-Domain_Few-Shot_Learning_CVPR_2023 | Abstract Cross-Domain Few-Shot Learning (CD-FSL) is a recently emerging task that tackles few-shot learning across different domains. It aims at transferring prior knowledge learned on the source dataset to novel target datasets. The CD-FSL task is especially challenged by the huge domain gap between different datasets. Critically, such a domain gap actually comes from the changes of visual styles, and wave-SAN [ 10] empirically shows that spanning the style distribution of the source data helps alleviate this issue. However, wave-SAN simply swaps styles of two images. Such a vanilla operation makes the generated styles “real” and “easy”, which still fall into the original set of the source styles. Thus, inspired by vanilla adversarial learning, a novel model-agnostic meta Style Adversarial training (StyleAdv) method together with a novel style adversarial attack method is proposed for CD-FSL. Particularly, our style attack method synthe-sizes both “virtual” and “hard” adversarial styles for model training. This is achieved by perturbing the original style with the signed style gradients. By continually attacking styles and forcing the model to recognize these challenging adversarial styles, our model is gradually robust to the vi-sual styles, thus boosting the generalization ability for novel target datasets. Besides the typical CNN-based backbone, we also employ our StyleAdv method on large-scale pre-trained vision transformer. Extensive experiments conducted on eight various target datasets show the effectiveness of our method. Whether built upon ResNet or ViT, we achieve the new state of the art for CD-FSL. Code is available at https://github.com/lovelyqian/StyleAdv-CDFSL . | 1. Introduction This paper studies the task of Cross-Domain Few-Shot Learning (CD-FSL) which addresses the Few-Shot Learn-ing (FSL) problem across different domains. As a gen-eral recipe for FSL, episode -based meta-learning strategy *indicates corresponding authorhas also been adopted for training CD-FSL models, e.g., FWT [ 48], LRP [ 42], ATA [ 51], and wave-SAN [ 10]. Gener-ally, to mimic the low-sample regime in testing stage, meta learning samples episodes for training the model. Each episode contains a small labeled support set and an unla-beled query set . Models learn meta knowledge by predicting the categories of images contained in the query set according to the support set. The learned meta knowledge generalizes the models to novel target classes directly. Empirically, we find that the changes of visual appear-ances between source and target data is one of the key causes that leads to the domain gap in CD-FSL. Interestingly, wave-SAN [ 10], our former work, shows that the domain gap issue can be alleviated by augmenting the visual styles of source images. Particularly, wave-SAN proposes to augment the styles, in the form of Adaptive Instance Normalization (AdaIN) [ 22], by randomly sampling two source episodes and exchanging their styles. However, despite the efficacy of wave-SAN, such a na ¨ıve style generation method suffers from two limitations: 1) The swap operation makes the styles always be limited in the “real” style set of the source dataset; 2) The limited real styles further lead to the generated styles too“easy” to learn. Therefore, a natural question is whether we can synthesize “virtual” and “hard” styles for learning a more robust CD-FSL model? 
Formally, we use “real/virtual” to indicate whether the styles are originally presented in the set of source styles, and define “easy/hard” as whether the new styles make meta tasks more difficult. To that end, we draw inspiration from adversarial training and propose a novel meta Style Adversarial training method (StyleAdv) for CD-FSL. StyleAdv plays the minimax game in two iterative optimization loops of meta-training. Particularly, the inner loop generates adversarial styles from the original source styles by adding perturbations. The synthesized adversarial styles are supposed to be more challenging for the current model to recognize, thus increasing the loss, whilst the outer loop optimizes the whole network by minimizing the losses of recognizing the images with both original and adversarial styles. Our ultimate goal is to enable learning a model that is robust to various styles, beyond the relatively limited and simple styles from the source data. This can potentially improve the generalization ability on novel target domains with visual appearance shifts. Formally, we introduce a novel style adversarial attack method to support the inner loop of StyleAdv. Inspired by yet different from previous attack methods [14,34], our style attack method perturbs and synthesizes the styles rather than image pixels or features. Technically, we first extract the style from the input feature map and include the extracted style in the forward computation chain to obtain its gradient for each training step. After that, we synthesize the new style by adding a certain ratio of the gradient to the original style. Styles synthesized by our style adversarial attack method have the good properties of being “hard” and “virtual”. Particularly, since we perturb styles in the opposite direction of the training gradients, our generation leads to “hard” styles. Our attack method results in totally “virtual” styles that are quite different from the original source styles. Critically, our style attack method performs progressive style synthesizing, with changing style perturbation ratios, which makes it significantly different from vanilla adversarial attacking methods. Specifically, we propose a novel progressive style synthesizing strategy. The naïve solution of directly plugging in perturbations is to attack each block of the feature embedding module individually, which, however, may result in large deviations of features from the high-level block. Thus, our strategy is to make the synthesizing signal of the current block be accumulated with adversarial styles from previous blocks. On the other hand, rather than attacking the models with a fixed attacking ratio, we synthesize new styles by randomly sampling the perturbation ratio from a candidate pool. This facilitates the diversity of the synthesized adversarial styles. Experimental results have demonstrated the efficacy of our method: 1) our style adversarial attack method does synthesize more challenging styles, thus pushing the limits of the source visual distribution; 2) our StyleAdv significantly improves the base model and outperforms all other CD-FSL competitors.
More importantly, to benefit from the large-scale pretrained models, e.g., DINO [ 2], we fur-ther explore adapting our StyleAdv to improve the Vision Transformer (ViT) [ 5] backbone in a non-parametric way. Experimentally, we show that StyleAdv not only improves CNN-based FSL/CD-FSL methods, but also improves the large-scale pretrained ViT model. Finally, we summarize our contributions. 1) A novel meta style adversarial training method, termed StyleAdv, is pro-posed for CD-FSL. By first perturbing the original styles and then forcing the model to learn from such adversarial styles, StyleAdv improves the robustness of CD-FSL models. 2) We present a novel style attack method with the novelprogressive synthesizing strategy in changing attacking ra-tios. Diverse “virtual” and “hard” styles thus are generated. 3) Our method is complementary to existing FSL and CD-FSL methods; and we validate our idea on both CNN-based and ViT-based backbones. 4) Extensive results on eight un-seen target datasets indicate that our StyleAdv outperforms previous CD-FSL methods, building a new SOTA result. |
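The signed-gradient perturbation of styles described above can be sketched in PyTorch as an FGSM-like step on the channel-wise mean and standard deviation (the AdaIN-style statistics) of a feature map. The loss function, the single fixed perturbation ratio, and the toy feature map below are placeholders; the paper's progressive, block-accumulated attack with sampled ratios is not reproduced here.

```python
import torch

def adversarial_style(feat, loss_fn, epsilon=0.08):
    """FGSM-style attack on the channel-wise mean/std (the 'style') of feat.

    feat    : (B, C, H, W) feature map from some block of the embedding network.
    loss_fn : maps a feature map to a scalar meta-task loss.
    """
    # Treat the style statistics as leaf variables so they receive gradients.
    mu = feat.mean(dim=(2, 3), keepdim=True).detach().requires_grad_(True)
    sigma = feat.std(dim=(2, 3), keepdim=True).detach().requires_grad_(True)
    normalized = (feat - feat.mean(dim=(2, 3), keepdim=True)) / (
        feat.std(dim=(2, 3), keepdim=True) + 1e-6)
    loss = loss_fn(sigma * normalized + mu)        # styles sit in the forward chain
    loss.backward()
    # Move the style in the direction that increases the loss ("hard" styles).
    mu_adv = mu + epsilon * mu.grad.sign()
    sigma_adv = sigma + epsilon * sigma.grad.sign()
    return sigma_adv.detach() * normalized.detach() + mu_adv.detach()

# Toy usage: random features and a dummy loss standing in for the FSL objective.
feat = torch.randn(4, 64, 10, 10)
restyled = adversarial_style(feat, lambda f: f.pow(2).mean())
print(restyled.shape)   # torch.Size([4, 64, 10, 10])
```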
Jin_Long-Tailed_Visual_Recognition_via_Self-Heterogeneous_Integration_With_Knowledge_Excavation_CVPR_2023 | Abstract Deep neural networks have made huge progress in the last few decades. However, as real-world data often exhibits a long-tailed distribution, vanilla deep models tend to be heavily biased toward the majority classes. To address this problem, state-of-the-art methods usually adopt a mixture of experts (MoE) to focus on different parts of the long-tailed distribution. Experts in these methods have the same model depth, which neglects the fact that different classes may have different preferences to be fit by models with different depths. To this end, we propose a novel MoE-based method called Self-Heterogeneous Integration with Knowledge Excavation (SHIKE). We first propose Depth-wise Knowledge Fusion (DKF) to fuse features between different shallow parts and the deep part in one network for each expert, which makes experts more diverse in terms of representation. Based on DKF, we further propose Dynamic Knowledge Transfer (DKT) to reduce the influence of the hardest negative class that has a non-negligible impact on the tail classes in our MoE framework. As a result, the classification accuracy of long-tailed data can be significantly improved, especially for the tail classes. SHIKE achieves the state-of-the-art performance of 56.3%, 60.3%, 75.4% and 41.9% on CIFAR100-LT (IF100), ImageNet-LT, iNaturalist 2018, and Places-LT, respectively. The source code is available at https://github.com/jinyan-06/SHIKE. | 1. Introduction Deep learning has made incredible progress in visual recognition tasks during the past few years. With well-designed models, e.g., ResNet [18] and Transformer [57], deep learning techniques have outperformed humans in many visual applications, like image classification [32], semantic segmentation [17, 42], and object detection [50, 52]. *Corresponding author: Yang Lu, luyang@xmu.edu.cn. Figure 1. Comparison of test accuracy of a ResNet-32 model with two shallow branches and a deep branch (x-axis: class index; left y-axis: maximum accuracy; right y-axis: class frequency; branches: Shallow branch 1, Shallow branch 2, Deep branch). The model is jointly trained on CIFAR100-LT with an imbalance factor of 100. Only the highest accuracy among the three branches is shown for each class. One key factor in the success of deep learning is the availability of large-scale datasets [13,55,70], which are usually manually constructed and annotated with balanced training samples for each class. However, in real-world applications, data typically follows a long-tailed distribution, where a small fraction of classes possess massive samples, but the others have only a few samples [5,12,26,41,44]. Such an imbalanced data distribution leads to a significant accuracy drop for deep learning models trained by empirical risk minimization (ERM) [56], as the model tends to be biased towards the head classes and ignore the tail classes to a great extent. Thus, the model’s generalization ability on tail classes is severely degraded. The most straightforward approaches to long-tailed recognition focus on re-balancing the learning process from either a data processing [3, 27, 46] or cost-sensitive perspective [12, 28, 71]. Recently, methods proposed for
long-tailed data have drawn more attention to representation learning. For example, the decoupling strategy [27] was proposed to deal with the inferior representation caused by re-balancing methods. Contrastive learning [11, 26] specializes in learning better and well-distributed representations. Among them, the methods that achieve state-of-the-art performance are usually based on a mixture of experts (MoE), also known as multi-experts. Some MoE-based methods prompt different experts to learn different parts of the long-tailed distribution (head, medium, tail) [4, 10, 39, 61], while others were designed to reduce the overall model's prediction variance or uncertainty [33, 60]. Unlike traditional ensemble learning methods that adopt independent models for joint prediction, MoE-based methods for long-tailed learning often adopt a multi-branch model architecture with shared shallow layers and exclusive deep layers. Thus, the features generated by different experts actually come from the model at the same depth, although the methods force them to be diverse from various perspectives. Recently, self-distillation [64] was proposed to enable shallow networks to predict certain samples in the data distribution. This brings us to a new question: can we integrate the knowledge from shallow networks into some experts in MoE to fit the long-tailed data in a self-adaptive manner, regardless of the number of samples? With this question, we conduct a quick experiment to reveal the preference of the deep neural network on different classes in long-tailed data. A ResNet-32 model with branches directly from shared layers is adopted. Each branch contains an independent classifier after feature alignment, and all classifiers are re-trained with balanced softmax cross entropy [49]. Fig. 1 shows the highest accuracy among the three branches for each class. We can clearly observe that shallow parts of the deep model are able to perform better on certain tail classes. This implies that different parts of the long-tailed distribution might accommodate the network differently according to the depth. Thus the shallow part of the deep model can provide more useful information for learning the long-tailed distribution. Driven by the observation above, we propose a novel MoE-based method called Self-Heterogeneous Integration with Knowledge Excavation (SHIKE). SHIKE adopts an MoE-based model consisting of heterogeneous experts along with knowledge fusion and distillation. To fuse the knowledge diversely, we first introduce Depth-wise Knowledge Fusion (DKF) as a fundamental component to incorporate different intermediate features into the deep features of each expert. The proposed DKF architecture can not only provide more informative features for experts but also optimize the shallower layers of the networks more directly via mutual distillation. In addition, we design Dynamic Knowledge Transfer (DKT) to address the problem of the hardest negatives during knowledge distillation between experts. DKT elects the non-target logits with large values to re-form non-target predictions from all experts into one grand teacher, which can be used in distilling non-target predictions to suppress the hardest negative, especially for the tail classes.
DKT can fully utilize the structure of MoE and di-versity provided by DKF for better model optimization. In this paper, our contributions can be summarized as follow: • We propose Depth-wise Knowledge Fusion (DKF) to encourage feature diversity in knowledge distillation among experts, which releases the potential of the MoE in long-tailed representation learning. • We propose a novel knowledge distillation strategy DKT for MoE training to address the hardest negative problem for long-tailed data, which further exploits the diverse features fusing enabled by DKF. • We outperform other state-of-the-art methods on four benchmarks by achieving performance 56.3%, 60.3%, 75.4% and 41.9% accuracy for CIFAR100-LT (IF100), ImageNet-LT, iNaturalist 2018, and Places-LT, respec-tively. |
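As a rough illustration of the DKT idea described above — electing large non-target logits from every expert to form one "grand teacher" and distilling only the non-target distribution — a simplified PyTorch sketch follows. The top-k selection rule, the averaging, and the temperature are assumptions; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

NEG = -1e9  # large negative number used to mask out logits

def dkt_grand_teacher(expert_logits, target, k=3):
    """Aggregate non-target predictions of all experts into one teacher.

    expert_logits: list of (B, C) logit tensors, one per expert.
    For each expert the target logit is masked out and only its k largest
    non-target logits are kept; the per-expert softmaxes are then averaged.
    This is a simplified reading of DKT, not the exact rule from the paper.
    """
    tgt_mask = F.one_hot(target, expert_logits[0].shape[1]).bool()
    teacher = torch.zeros_like(expert_logits[0])
    for logits in expert_logits:
        nt = logits.masked_fill(tgt_mask, NEG)
        keep = torch.zeros_like(nt).scatter_(1, nt.topk(k, dim=1).indices, 1.0)
        teacher = teacher + nt.masked_fill(keep == 0, NEG).softmax(dim=1)
    return (teacher / len(expert_logits)).detach()

def non_target_distill(student_logits, teacher_probs, target, tau=2.0):
    """KL-distill the teacher's non-target distribution into one expert."""
    tgt_mask = F.one_hot(target, student_logits.shape[1]).bool()
    log_student = (student_logits.masked_fill(tgt_mask, NEG) / tau).log_softmax(dim=1)
    return F.kl_div(log_student, teacher_probs, reduction="batchmean") * tau ** 2
```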
Huang_CP3_Channel_Pruning_Plug-In_for_Point-Based_Networks_CVPR_2023 | Abstract Channel pruning can effectively reduce both compu-tational cost and memory footprint of the original net-work while keeping a comparable accuracy performance. Though great success has been achieved in channel pruning for 2D image-based convolutional networks (CNNs), exist-ing works seldom extend the channel pruning methods to 3D point-based neural networks (PNNs). Directly imple-menting the 2D CNN channel pruning methods to PNNs undermine the performance of PNNs because of the dif-ferent representations of 2D images and 3D point clouds as well as the network architecture disparity. In this pa-per, we proposed CP3, which is a Channel Pruning Plug-in for Point-based network. CP3is elaborately designed to leverage the characteristics of point clouds and PNNs in order to enable 2D channel pruning methods for PNNs. Specifically, it presents a coordinate-enhanced channel im-portance metric to reflect the correlation between dimen-sional information and individual channel features, and it recycles the discarded points in PNN’s sampling process and reconsiders their potentially-exclusive information to enhance the robustness of channel pruning. Experiments on various PNN architectures show that CP3constantly improves state-of-the-art 2D CNN pruning approaches on different point cloud tasks. For instance, our compressed PointNeXt-S on ScanObjectNN achieves an accuracy of 88.52% with a pruning rate of 57.8%, outperforming the baseline pruning methods with an accuracy gain of 1.94%. * Equal contributions. BCorresponding authors. This work is done during Yaomin Huang and Xinmei Liu’s internship at Midea Group.1. Introduction Convolutional Neural Networks (CNNs) often encounter the problems of overloaded computation and overweighted storage. The cumbersome instantiation of a CNN model leads to inefficient, uneconomic, or even impossible de-ployment in practice. Therefore, light-weight models that provide comparable results with much fewer computational costs are in great demand for nearly all applications. Chan-nel pruning is a promising solution to delivering efficient networks. In recent years, 2D CNN channel pruning, e.g., pruning classical VGGNets [37], ResNets [14], Mo-bileNets [16], and many other neural networks for process-ing 2D images [6, 7, 12, 24, 26, 29, 40], has been success-fully conducted. Most channel pruning approaches focus on identifying redundant convolution filters (i.e., channels) by evaluating their importance. The cornerstone of 2D chan-nel pruning methods is the diversified yet effective channel evaluation metrics. For instance, HRank [24] uses the rank of the feature map as the pruning metric and removes the low-rank filters that are considered to contain less informa-tion. CHIP [40] leverages channel independence to repre-sent the importance of each feature mapping and eliminates less important channels. With the widespread application of depth-sensing tech-nology, 3D vision tasks [9, 10, 36, 44] are a rapidly growing field starving for powerful methods. Apart from straight-forwardly applying 2D CNNs, models built with Point-based Neural Networks (PNNs), which directly process point clouds from the beginning without unnecessary ren-dering, show their merits and are widely deployed on edge devices for various applications such as robots [22, 49] and self-driving [2, 53]. 
Compressing PNNs is crucial due to the limited resources of edge devices and multiple models for different tasks are likely to run simultaneously [8, 30]. Given the huge success of 2D channel pruning and the great This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 5302 demand for efficient 3D PNNs, we intuitively raise one question: shall we directly implement the existing pruning methods to PNNs following the proposed channel impor-tance metrics in 2D CNNs pruning? With this question in mind, we investigate the fundamen-tal factors that potentially impair 2D pruning effectiveness on PNNs. Previous works [19, 48] have shown that point clouds record visual and semantic information in a signifi-cantly different way from 2D images. Specifically, a point cloud consists of a set of unordered points on objects’ and environments’ surfaces, and each point encodes its features, such as intensity along with the spatial coordinates (x,y,z). In contrast, 2D images organize visual features in a dense and regular pixel array. Such data representation differences between 3D point clouds and 2D images lead to a) different ways of exploiting information from data and b) contrasting network architectures of PNNs and 2D CNNs. It is credible that only the pruning methods considering the two aspects (definitely not existing 2D CNN pruners) may obtain supe-rior performance on PNNs. From the perspective of data representations, 3D point clouds provide more 3D feature representations than 2D im-ages, but the representations are more sensitive to network channels. To be more specific, for 2D images, all three RGB channels represent basic information in an isotropic and homogeneous way so that the latent representations ex-tracted by CNNs applied to the images. On the other hand, point clouds explicitly encode the spatial information in three coordinate channels, which are indispensable for ex-tracting visual and semantic information from other chan-nels. Moreover, PNNs employ the coordinate information in multiple layers as concatenated inputs for deeper feature extraction. Nevertheless, existing CNN pruning methods are designed only suitable for the plain arrangements of 2D data but fail to consider how the informative 3D information should be extracted from point clouds. Moreover, the network architectures of PNNs are de-signed substantially different from 2D CNNs. While using smaller kernels [37] is shown to benefit 2D CNNs [37], it does not apply to networks for 3D point clouds. On the contrary, PNNs leverage neighborhoods at multiple scales to obtain both robust and detailed features. The reason is that small neighborhoods (analogous to small kernels in 2D CNNs) in point clouds consist of few points for PNNs to capture robust features. Due to the necessary sampling steps, the knowledge insufficiency issue becomes more se-vere for deeper PNN layers. In addition, PNNs use the random input dropout procedure during training to adap-tively weight patterns detected at different scales and com-bine multi-scale features. This procedure randomly discards a large proportion of points and loses much exclusive infor-mation of the original data. Thus, the architecture disparity between 2D CNNs and PNNs affects the performance ofdirectly applying existing pruning methods to PNNs. 
In this paper, by explicitly dealing with the two charac-teristics of 3D task, namely the data representation and the PNN architecture design, we propose a Channel Pruning Plug-in for Point-based network named CP3, which can be applied to most 2D channel pruning methods for compress-ing PNN models. The proposed CP3refines the channel importance, the key factor of pruning methods, from two aspects. Firstly, considering the point coordinates ( x,y, and z) encode the spatial information and deeply affects fea-ture extraction procedures in PNN layers, we determine the channel importance by evaluating the correlation between the feature map and its corresponding point coordinates by introducing a coordinate-enhancement module. Secondly, calculating channel importance in channel pruning is data-driven and sensitive to the input, and the intrinsic sampling steps in PNN naturally makes pruning methods unstable. To settle this problem, we make full use of the discarded points in the sampling process via a knowledge recycling mod-ule to supplement the evaluation of channel importance. This reduces the data sampling bias impact on the chan-nel importance calculation and increases the robustness of the pruning results. Notably, both the coordinates and re-cycled points in CP3do not participate in network training (with back-propagation) but only assist channel importance calculation in the reasoning phase. Thus, CP3does not in-crease any computational cost of the pruned network. The contributions of this paper are as follows: • We systematically consider the characteristics of PNNs and propose a channel pruning plug-in named CP3to enhance 2D CNN channel pruning approaches on 3D PNNs. To the best of our knowledge, CP3is the first method to export existing 2D pruning methods to PNNs. • We propose a coordinate-enhanced channel importance score to guide point clouds network pruning, by evaluat-ing the correlation between feature maps and correspond-ing point coordinates. • We design a knowledge recycling pruning scheme that increases the robustness of the pruning procedure, using the discarded points to improve the channel importance evaluation. • We show that using CP3is consistently superior to di-rectly transplanting 2D pruning methods to PNNs by ex-tensive experiments on three 3D tasks and five datasets with different PNN models and pruning baselines. 2. Related Work 2.1. 2D Channel Pruning Channel pruning (a.k.a., filter pruning) methods reduce the redundant filters while maintaining the original struc-ture of CNNs and is friendly to prevailing inference accel-eration engines such as TensorFlow-Lite (TFLite) [11] and 5303 Mobile Neural Network (MNN) [18]. Mainstream chan | nel pruning methods [6, 7, 12, 29] usually first evaluate the im-portance of channels by certain metrics and then prune (i.e., remove) the less important channels. Early work [21] uses thel1norm of filters as importance score for channel prun-ing. Afterwards, learning parameters, such as the scaling factor γin the batch norm layer [26] and the reconstruction error in the final network layer [51], are considered as the importance scores for channel selection. The importance sampling distribution of channels [23] is also used for prun-ing. Recent works [15, 40] measure the correlation of mul-tiple feature maps to determine the importance score of the filter for pruning. HRank [24] proposes a method for prun-ing filters based on the theory that low-rank feature maps contain less information. 
[50] leverages the statistical dis-tribution of activation gradient and takes the smaller gradi-ent as low importance score for pruning. [46] calculates the average importance of both the input feature maps and their corresponding output feature maps to determine the overall importance. [13, 45] compress CNNs from multiple dimen-sions While most channel pruning methods are designed for and tested on 2D CNNs, our CP3can work in tandem with existing pruners for 3D point-based networks. 2.2. Point-based Networks for Point Cloud Data Point-based Neural Networks (PNNs) directly process point cloud data with a flexible range of receptive field, have no positioning information loss, and thus keep more accurate spatial information. As a pioneer work, Point-Net [32] learns the spatial encoding directly from the in-put point clouds and uses the characteristics of all points to obtain the global representations. PointNet++ [33] further proposes a multi-level feature extraction structure to extract local and global features more effectively. KPConv [42] proposes a new point convolution operation to learn lo-cal movements applied to kernel points. ASSANet [34] proposes a separable set abstraction module that decom-poses the normal SA module in PointNet++ into two sepa-rate learning phases for channel and space. PointMLP [28] uses residual point blocks to extract local features, trans-forms local points using geometric affine modules, and ex-tracts geometric features before and after the aggregation operation. PointNeXt [35] uses inverted residual bottle-neck and separable multilayer perceptrons to achieve more efficient model scaling. Besides classification, PNNs also serve as backbones for other 3D tasks. V oteNet [31] effec-tively improves the 3D object detection accuracy through the Hough voting mechanism [20]. PointTransformer [52] designs models improving prior work across domains and tasks. GroupFree3D [27] uses the attention mechanism to automatically learn the contribution of each point to the ob-ject. In this paper, we show that CP3can be widely applied to point-based networks on a variety of point cloud bench-marks and representative original networks. 3. Methodology Although point-based networks are similar to CNN in concrete realization, they have fundamental differences in data representation and network architecture design. To extend the success of CNN pruning to PNN, two mod-ules are proposed in CP3taking advantage from the di-mensional information and discarded points: 1) coordinate-enhancement (CE) module, which produces a coordinate-enhanced score to estimate the channel importance by com-bining dimensional and feature information, and 2) knowl-edge recycling module reusing the discarded points to im-prove the channel importance evaluation criteria and in-crease the robustness. 3.1. Formulations and Motivation Point-based networks PNN is a unified architecture that directly takes point clouds as input. It builds hierarchical groups of points and progressively abstracts larger local re-gions along the hierarchy. PNN is structurally composed by a number of set abstraction (SA) blocks. Each SA block consists of 1) a sampling layer iteratively samples the far-thest point to choose a subset of points from input points, 2) a group layer gathers neighbors of centroid points to a local region, 3) a set of shared Multi-Layer Perceptrons (MLPs) to extract features, and 4) a reduction layer to aggregate features in the neighbors. 
Formally speaking, an SA block takes an $n_{i-1} \times (d + c_{i-1})$ matrix as input, i.e., $n_{i-1}$ points with $d$-dim coordinates and $c_{i-1}$-dim point features. It outputs an $n_i \times (d + c_i)$ matrix of $n_i$ subsampled points with $d$-dimensional coordinates (i.e., $d = 3$) and new $c_i$-dimensional feature vectors summarizing local context. The SA block is formulated as $F_i^{l+1} = \mathcal{R}\{ h_\Theta([F_j^l;\, x_j^l - x_i^l]) \}$ (1), where $h_\Theta$ denotes the MLPs that extract grouped point features, $\mathcal{R}$ is the reduction layer (e.g., max-pooling) that aggregates features over the neighbors $\{j : (i,j) \in \mathcal{N}\}$, $F_j^l$ is the feature of neighbor $j$ in the $l$-th layer, and $x_i^l$ and $x_j^l$ are the coordinates of the input point and of neighbor $j$ in the $l$-th layer. Channel pruning Assume a pre-trained PNN model has a set of $K$ convolutional layers, and $A^l$ is the $l$-th convolution layer. The parameters in $A^l$ can be represented as a set of filters $W_{A^l} = \{w_1^l, w_2^l, \dots, w_{c_l}^l\} \in \mathbb{R}^{(d+c_l) \times (d+c_{l-1}) \times k_l \times k_l}$, where the $j$-th filter is $w_j^l \in \mathbb{R}^{(d+c_{l-1}) \times k_l \times k_l}$; $(d + c_l)$ is the number of filters in $A^l$ and $k_l$ denotes the kernel size. The outputs of the filters, i.e., the feature maps, are denoted as $F^l = \{f_1^l, f_2^l, \dots, f_{n_i}^l\} \in \mathbb{R}^{n_i \times (d+c_i)}$. Channel pruning aims to identify and remove the less important filters from the original networks. Figure 1. The framework of CP3. The figure shows the specific pruning process of one of the SA blocks. Whether a channel in a PNN is pruned is determined by three parts: 1) Original channel importance: obtained from the original CNN channel pruning method (e.g., HRank [24], CHIP [40]). 2) Discarded channel importance: obtained from the Knowledge-Recycling module by leveraging the discarded points in the network to supplement the channel importance evaluation of the corresponding points and improve the robustness of the channel selection. 3) CE (Coordinate-Enhanced) channel importance: obtained by calculating the correlation between the feature map and its corresponding point coordinates to guide point cloud network pruning. In general, channel pruning can be formulated as the following optimization problem: $\min_{\delta_j^i} \sum_{i=1}^{K} \sum_{j=1}^{n_i} \delta_j^i\, \mathcal{L}(w_j^i)$, s.t. $\sum_{j=1}^{n_i} \delta_j^i = k_l$ (2), where $\delta_j^i$ is an indicator that is 1 if $w_j^i$ is to be pruned and 0 if $w_j^i$ is to be kept, $\mathcal{L}(\cdot)$ measures the importance of a filter, and $k_l$ is the number of kept filters. Robust importance metric for channel pruning The metric for evaluating the importance of filters is critical. Existing CNN pruning methods design a variety of $\mathcal{L}(\cdot)$ on the filters. Since feature maps contain rich and important information about both the filter and the input data, approaches that use feature information have become popular and achieve state-of-the-art performance for channel pruning. However, feature maps vary with the input data; when the importance of a filter depends solely on the information represented by its own generated feature map, the measurement of importance may be unstable and sensitive to slight changes of the input data.
So we have taken into account the characteristics of point clouds data and point-based net-works architecture to improve the robustness of channel im-portance in point-based networks. On the one hand, we pro-pose a coordinate-enhancement module by evaluating the correlation between the feature map and its corresponding points coordinates to guide point clouds network pruning, which will be described in Sec. 3.2. On the other hand, we design a knowledge recycling pruning schema, using dis-carded points to improve the channel importance evaluation criteria and increase the robustness of the pruning module, which will be described in detail in Sec. 3.3.3.2. Coordinate-Enhanced Channel Importance Dimensional information is critical in PNNs. The di-mensional information (i.e., coordinates of the points) are usually adopted as input for feature extraction. Namely, the input and output of each SA block are concatenated with the coordinates of the points. Meanwhile, the intermediate feature maps reflect not only the information of the original input data but also the corresponding channel information. Therefore, the importance of the channel can be obtained from the feature maps, i.e., the importance of the corre-sponding channel. The dimensional information is crucial in point-based tasks and should be considered as part of im-portance metric. Thus the critical problem falls in design-ing a function that can well reflect the dimensional infor-mation richness of feature maps. The feature map, obtained by encoding points spatial x,y, and zcoordinates, should be closely related to the original corresponding points coordi-nates. Therefore, we use the correlation between the cur-rent feature map and the corresponding input points coordi-nates to determin |
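A minimal sketch of the coordinate-enhanced (CE) channel importance described in Sec. 3.2 is shown below: each channel of an SA block's output (Eq. (1)) is scored by how strongly its per-point activations correlate with the x, y, z coordinates, and that score is combined with the base 2D-pruner score. The use of absolute Pearson correlation and a weighted sum for combining scores are illustrative assumptions, not the paper's exact formulas.

```python
import torch

def coordinate_enhanced_importance(feat, xyz, eps=1e-8):
    """Coordinate-enhanced score for each channel of an SA block's output.

    feat: (N, C) per-point features produced by the SA block of Eq. (1).
    xyz:  (N, 3) coordinates of the same N points.
    Each channel is scored by the mean absolute Pearson correlation between
    its activations and the x, y, z coordinates (an illustrative statistic).
    """
    f = (feat - feat.mean(0)) / (feat.std(0) + eps)   # (N, C), standardized
    p = (xyz - xyz.mean(0)) / (xyz.std(0) + eps)      # (N, 3), standardized
    corr = f.t() @ p / feat.shape[0]                  # (C, 3) correlation matrix
    return corr.abs().mean(dim=1)                     # (C,) higher = more coordinate-related

def combined_importance(base_score, ce_score, kr_score, alpha=1.0, beta=1.0):
    # Fig. 1 combines the original 2D-pruner score with the coordinate-enhanced
    # and knowledge-recycling scores; this weighted sum is only a placeholder
    # for however CP3 actually fuses the three terms.
    return base_score + alpha * ce_score + beta * kr_score
```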
Brooks_InstructPix2Pix_Learning_To_Follow_Image_Editing_Instructions_CVPR_2023 | Abstract We propose a method for editing images from human in-structions: given an input image and a written instruction that tells the model what to do, our model follows these in-structions to edit the image. To obtain training data for this problem, we combine the knowledge of two large pre-trained models—a language model (GPT-3) and a text-to-image model (Stable Diffusion)—to generate a large dataset of image editing examples. Our conditional diffusion model, InstructPix2Pix, is trained on our generated data, and gen-eralizes to real images and user-written instructions at in-ference time. Since it performs edits in the forward pass and does not require per-example fine-tuning or inversion, our model edits images quickly, in a matter of seconds. We show compelling editing results for a diverse collection of input images and written instructions. *Denotes equal contribution More results on our project page: timothybrooks.com/instruct-pix2pix1. Introduction We present a method for teaching a generative model to follow human-written instructions for image editing. Since training data for this task is difficult to acquire at scale, we propose an approach for generating a paired dataset that combines multiple large models pretrained on different modalities: a large language model (GPT-3 [7]) and a text-to-image model (Stable Diffusion [51]). These two models capture complementary knowledge about language and im-ages that can be combined to create paired training data for a task spanning both modalities. Using our generated paired data, we train a conditional diffusion model that, given an input image and a text in-struction for how to edit it, generates the edited image. Our model directly performs the image edit in the forward pass, and does not require any additional example images, full de-scriptions of the input/output images, or per-example fine-tuning. Despite being trained entirely on synthetic exam-ples (i.e., both generated written instructions and generated This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 18392 imagery), our model achieves zero-shot generalization to both arbitrary real images and natural human-written in-structions. Our model enables intuitive image editing that can follow human instructions to perform a diverse collec-tion of edits: replacing objects, changing the style of an im-age, changing the setting, the artistic medium, among oth-ers. Selected examples can be found in Figure 1. 2. Prior work Composing large pretrained models Recent work has shown that large pretrained models can be combined to solve multimodal tasks that no one model can perform alone, such as image captioning and visual question an-swering (tasks that require the knowledge of both a large language model and a text-image model). Techniques for combining pretrained models include joint finetuning on a new task [4, 33, 40, 67], communication through prompt-ing [62, 69], composing probability distributions of energy-based models [11, 37], guiding one model with feedback from another [61], and iterative optimization [34]. 
Our method is similar to prior work in that it leverages the com-plementary abilities of two pretrained models—GPT-3 [7]) and Stable Diffusion [51]—but differs in that we use these models to generate paired multi-modal training data. Diffusion-based generative models Recent advances in diffusion models [59] have enabled state-of-the-art image synthesis [10, 18, 19, 53, 55, 60] as well as generative mod-els of other modalities such as video [21, 58], audio [30], text [35] and network parameters [45]. Recent text-to-image diffusion models [41, 48, 51, 54] have shown to gen-erate realistic images from arbitrary text captions. Generative models for image editing Image editing models traditionally targeted a single editing task such as style transfer [15, 16] or translation between image do-mains [22, 24, 36, 42, 71]. Numerous editing approaches invert [1–3, 12] or encode [8, 50, 63] images into a latent space (e.g., StyleGAN [25, 26]) where they can be edited by manipulating latent vectors. Recent models have lever-aged CLIP [47] embeddings to guide image editing using text [5, 9, 14, 28, 31, 41, 44, 70]. We compare with one of these methods, Text2Live [6], an editing method that opti-mizes for an additive image layer that maximizes a CLIP similarity objective. Recent works have used pretrained text-to-image diffu-sion models for image editing [5,17,27,38,48]. While some text-to-image models natively have the ability to edit im-ages (e.g., DALLE-2 can create variations of images, in-paint regions, and manipulate the CLIP embedding [48]), using these models for targeted editing is non-trivial, be-cause in most cases they offer no guarantees that similar text prompts will yield similar images. Recent work byHertz et al. [17] tackles this issue with Prompt-to-Prompt, a method for assimilating the generated images for simi-lar text prompts, such that isolated edits can be made to a generated image. We use this method in generating training data. To edit non-generated (i.e., real) imagery, SDEdit [38] uses a pretrained model to noise and denoise an input im-age with a new target prompt. We compare with SDEdit as a baseline. Other recent works perform local inpainting given a caption and user-drawn mask [5, 48], generate new images of a specific object or concept learned from a small collection of images [13, 52], or perform editing by invert-ing (and fine-tuning) a single image, and subsequently re-generating with a new text description [27]. In contrast to these approaches, our model takes only a single image and an instruction for how to edit that image (i.e., not a full de-scription of any image), and performs the edit directly in the forward pass without need for a user-drawn mask, addi-tional images, or per-example inversion or finetuning. Learning to follow instructions Our method differs from existing text-based image editing works [6,13,17,27,38,52] in that it enables editing from instructions that tell the model what action to perform, as opposed to text labels, captions or descriptions of input/output images. A key benefit of fol-lowing editing instructions is that the user can just tell the model exactly what to do in natural written text. There is no need for the user to provide extra information, such as example images or descriptions of visual content that re-mains constant between the input and output images. In-structions are expressive, precise, and intuitive to write, al-lowing the user to easily isolate specific objects or visual at-tributes to change. 
Our goal to follow written image editing instructions is inspired by recent work teaching large lan-guage models to better follow human instructions for lan-guage tasks [39, 43, 68]. Training data generation with generative models Deep models typically require large amounts of training data. Internet data collections are often suitable, but may not exist in the form necessary for supervision, e.g., paired data of particular modalities. As generative models con-tinue to improve, there is growing interest in their use as a source of cheap and plentiful training data for downstream tasks [32, 46, 49, 57, 64, 65]. In this paper, we use two different off-the-shelf generative models (language, text-to-image) to produce training data for our editing model. 3. Method We treat instruction-based image editing as a supervised learning problem: (1) first, we generate a paired training dataset of text editing instructions and images before/after the edit (Sec. 3.1, Fig. 2a-c), then (2) we train an image editing diffusion model on this generated dataset (Sec. 3.2, Fig 2d). Despite being trained with generated images and 18393 Stable Diffusion + Prompt2PromptInput Caption: “photograph of a girl riding a horse”Instruction: “have her ride a dragon” Edited Caption: “photograph of a girl riding a dragon”GPT-3 Input Caption: “photograph of a girl riding a horse” Edited Caption: “photograph of a girl riding a dragon”(b) Generate paired images:(a) Generate text edits: InstructPix2Pix“turn her into a snake lady”Training Data Generation Instruction -following Diffusion Model “have her ride a dragon” “Color the cars pink” “Make it lit by fireworks” “convert to brick” …(c) Generated training examples:(d) Inference on real images: Figure 2. Our method consists of two parts: generating an image editing dataset, and training a diffusion model on that dataset. (a) We first use a finetuned GPT-3 to generate instructions and edited captions. (b) We then u | se StableDiffusion [51] in combination with Prompt-to-Prompt [17] to generate pairs of images from pairs of captions. We use this procedure to create a dataset (c) of over 450,000 training examples. (d) Finally, our InstructPix2Pix diffusion model is trained on our generated data to edit images from instructions. At inference time, our model generalizes to edit real images from human-written instructions. editing instructions, our model is able to generalize to edit-ingrealimages using arbitrary human-written instructions. See Fig. 2 for an overview of our method. 3.1. Generating a Multi-modal Training Dataset We combine the abilities of two large-scale pretrained models that operate on different modalities—a large lan-guage model [7] and a text-to-image model [51]—to gen-erate a multi-modal training dataset containing text editing instructions and the corresponding images before and af-ter the edit. In the following two sections, we describe in detail the two steps of this process. In Section 3.1.1, we describe the process of fine-tuning GPT-3 [7] to generate a collection of text edits: given a prompt describing an im-age, produce a text instruction describing a change to be made and a prompt describing the image after that change (Figure 2a). Then, in Section 3.1.2, we describe the process of converting the two text prompts (i.e., before and after the edit) into a pair of corresponding images using a text-to-image model [51] (Figure 2b). 
3.1.1 Generating Instructions and Paired Captions We first operate entirely in the text domain, where we lever-age a large language model to take in image captions and produce editing instructions and the resulting text captions after the edit. For example, as shown in Figure 2a, provided the input caption “photograph of a girl riding a horse” , our language model can generate both a plausible edit instruc-tion“have her ride a dragon” and an appropriately modi-fied output caption “photograph of a girl riding a dragon” . Operating in the text domain enables us to generate a large and diverse collection of edits, while maintaining corre-spondence between the image changes and text instructions. Our model is trained by finetuning GPT-3 on a relativelysmall human-written dataset of editing triplets: (1) input captions, (2) edit instructions, (3) output captions. To pro-duce the fine-tuning dataset, we sampled 700 input captions from the LAION-Aesthetics V2 6.5+ [56] dataset and man-ually wrote instructions and output captions. See Table 1a for examples of our written instructions and output captions. Using this data, we fine-tuned the GPT-3 Davinci model for a single epoch using the default training parameters. Benefiting from GPT-3’s immense knowledge and abil-ity to generalize, our finetuned model is able to generate creative yet sensible instructions and captions. See Table 1b for example GPT-3 generated data. Our dataset is created by generating a large number of edits and output captions using this trained model, where the input captions are real image captions from LAION-Aesthetics (excluding samples with duplicate captions or duplicate image URLs). We chose the LAION dataset due to its large size, diversity of con-tent (including references to proper nouns and popular cul-ture), and variety of mediums (photographs, paintings, dig-ital artwork). A potential drawback of LAION is that it is quite noisy and contains a number of nonsensical or unde-scriptive captions—however, we found that dataset noise is mitigated through a combination of dataset filtering (Sec-tion 3.1.2) and classifier-free guidance (Section 3.2.1). Our final corpus of generated instructions and captions consists of454,445examples. 3.1.2 Generating Paired Images from Paired Captions Next, we use a pretrained text-to-image model to transform a pair of captions (referring to the image before and after the edit) into a pair of images. One challenge in turning a pair of captions into a pair of corresponding images is that text-to-image models provide no guarantees about image consis-tency, even under very minor changes of the conditioning 18394 Input LAION caption Edit instruction Edited caption Human-written (700 edits)Yefim Volkov, Misty Morning make it afternoon Yefim Volkov, Misty Afternoon girl with horse at sunset change the background to a city girl with horse at sunset in front of city painting-of-forest-and-pond Without the water. painting-of-forest ... ... ... GPT-3 generated (>450,000 edits)Alex Hill, Original oil painting on can-vas, Moonlight Bayinthestyle ofacoloringbook Alex Hill, Orig inalcoloringbook illustra-tion, Moon light Bay The great elf city of Rivendell, sitting atop a waterfall as cascades of water spill around itAddagiantreddragon Thegreat elfcityofRiven dell, sittingatop a waterfallascascades ofwaterspill around itwith agiantreddragon flyingoverhead Kate Hudson arriving at the Golden Globes 2015make herlook likeazombie ZombieKate HudsonarrivingattheGolden Globes 2015 ... ... ... Table 1. 
We label a small text dataset, finetune GPT-3, and use that finetuned model to generate a large dataset of text triplets. As the input caption for both the labeled and generated examples, we use real image captions from LAION. High lighted text is generated by GPT-3. (a) Without Prompt-to-Prompt. (b) With Prompt-to-Prompt. Figure 3. Pair of images generated using StableDiffusion [51] with and without Prompt-to-Prompt [17]. For both, the corresponding captions are “photograph of a girl riding a horse” and“photo-graph of a girl riding a dragon” . prompt. For example, two very similar prompts: “a picture of a cat” and“a picture of a black cat” may produce wildly different images of cats. This is unsuitable for our purposes, where we intend to use this paired data as supervision for training a model to edit images (and not produce a different random image). We therefore use Prompt-to-Prompt [17], a recent method aimed at encouraging multiple generations from a text-to-image diffusion model to be similar. This is done through borrowed cross attention weights in some number of denoising steps. Figure 3 shows a comparison of sampled images with and without Prompt-to-Prompt. While this greatly helps assimilate generated images, different edits may require different amounts of change in image-space. For instance, changes of larger magni-tude, such as those which change large-scale image struc-ture (e.g., moving objects around, replacing with objects of different shapes), may require less similarity in the gener-ated image pair. Fortunately, Prompt-to-Prompt has as a parameter that can control the similarity between the two images: the fraction of denoising steps pwith shared atten-tion weights. Unfortunately, identifying an optimal value ofpfrom only the captions and edit text is difficult. We therefore generate 100sample pairs of images per caption-pair, each with a random p∼ U(0.1,0.9), and filter thesesamples by using a CLIP-based metric: the directional sim-ilarity in CLIP space as introduced by Gal et al. [14]. This metric measures the consistency of the change between the two images (in CLIP space) with the change between the two image captions. Performing this filtering not only helps maximize the diversity and quality of our image pairs, but also makes our data generation more robust to failures of Prompt-to-Prompt and Stable Diffusion. 3.2. InstructPix2Pix We use our generated training data to train a conditional diffusion model that edits images from written instructions. We base our model on Stable Diffusion, a large-scale text-to-image latent diffusion model. Diffusion models [59] learn to generate data samples through a sequence of denoising autoencoders that estimate the score [23] of a data distribution (a direction pointing to-ward higher density data). Latent diffusion [51] improves the efficiency and quality of diffusion models by operat-ing in the latent space of a pretrained variational autoen-coder [29] with encoder Eand decoder D. For an image x, the diffusion process adds noise to the encoded latent z=E(x)producing a noisy latent ztwhere the noise level increases over timesteps t∈T. We learn a network ϵθthat predicts the noise added to the noisy latent ztgiven image conditioning cIand text instruction conditioning cT. We minimize the following latent diffusion objective: L=EE(x),E(cI),cT,ϵ∼N (0,1),th ∥ϵ−ϵθ(zt, t,E(cI), cT))∥2 2i (1) Wang et al. 
[66] show that fine-tuning a large image diffusion model outperforms training a model from scratch for image translation tasks, especially when paired training data is limited. We therefore initialize the weights of our model with a pretrained Stable Diffusion checkpoint, lever- [figure residue: sample outputs shown for text guidance scales sT = 3, 7.5, 15 and varying image guidance sI; text truncated here]
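The CLIP-based filtering of Prompt-to-Prompt samples described in Sec. 3.1.2 relies on the directional similarity of Gal et al. [14]: the cosine between the image-space edit direction and the text-space edit direction. A sketch is given below, assuming the openai `clip` package; the model variant and the way the best candidate is selected are illustrative choices.

```python
import clip
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-L/14", device=device)

@torch.no_grad()
def directional_similarity(img_before, img_after, cap_before, cap_after):
    """Cosine between the image-space and text-space edit directions.

    img_* are PIL images, cap_* are the corresponding captions.  High values
    mean the visual change is consistent with the caption change, so the
    generated sample pair is kept for training.
    """
    imgs = torch.stack([preprocess(img_before), preprocess(img_after)]).to(device)
    txts = clip.tokenize([cap_before, cap_after]).to(device)
    img_feat = model.encode_image(imgs).float()
    txt_feat = model.encode_text(txts).float()
    d_img = img_feat[1] - img_feat[0]
    d_txt = txt_feat[1] - txt_feat[0]
    return torch.nn.functional.cosine_similarity(d_img, d_txt, dim=0).item()

# e.g. keep the best of the ~100 Prompt-to-Prompt candidates per caption pair:
# best_pair = max(candidates, key=lambda p: directional_similarity(p[0], p[1], c0, c1))
```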
Jin_ReDirTrans_Latent-to-Latent_Translation_for_Gaze_and_Head_Redirection_CVPR_2023 | Abstract Learning-based gaze estimation methods require large amounts of training data with accurate gaze annotations.Facing such demanding requirements of gaze data collec-tion and annotation, several image synthesis methods were proposed, which successfully redirected gaze directions pre-cisely given the assigned conditions. However , these meth-ods focused on changing gaze directions of the images thatonly include eyes or restricted ranges of faces with low res-olution (less than 128×128 ) to largely reduce interference from other attributes such as hairs, which limits applica-tion scenarios. To cope with this limitation, we proposeda portable network, called ReDirTrans, achieving latent-to-latent translation for redirecting gaze directions andhead orientations in an interpretable manner . ReDirTransprojects input latent vectors into aimed-attribute embed-dings only and redirects these embeddings with assigned pitch and yaw values. Then both the initial and editedembeddings are projected back (deprojected) to the initiallatent space as residuals to modify the input latent vec-tors by subtraction and addition, representing old status re-moval and new status addition. The projection of aimed at-tributes only and subtraction-addition operations for statusreplacement essentially mitigate impacts on other attributesand the distribution of latent vectors. Thus, by combiningReDirTrans with a pretrained fixed e4e-StyleGAN pair , wecreated ReDirTrans-GAN, which enables accurately redi-recting gaze in full-face images with 1024×1024 resolution while preserving other attributes such as identity, expres-sion, and hairstyle. Furthermore, we presented improve-ments for the downstream learning-based gaze estimationtask, using redirected samples as dataset augmentation. | 1. Introduction Gaze is a crucial non-verbal cue that conveys attention and awareness in interactions. Its potential applications in-clude mental health assessment [ 5,18], social attitudes anal-ysis [ 19], human-computer interaction [ 12], automotive as-sistance [ 30], AR/VR [ 6,34]. However, developing a robustunified learning-based gaze estimation model requires large amounts of data from multiple subjects with precise gaze annotations [ 42,44]. Collecting and annotating such an ap-propriate dataset is complex and expensive. To overcomethis challenge, several methods have been proposed to redi-rect gaze directions [ 17,39,41,42,44] in real images with assigned directional values to obtain and augment train-ing data. Some works focused on generating eye images with new gaze directions by either 1) estimating warpingmaps [ 41,42] to interpolate pixel values or 2) using encoder-generator pairs to generate redirected eye images [ 17,39]. ST-ED [ 44] was the first work to extend high-accuracy gaze redirection from eye images to face images. By dis-entangling several attributes, including person-specific ap-pearance, it can explicitly control gaze directions and headorientations. However, due to the design of the encoder-decoder structure and limited ability to maintain appearance features by a 1×1024 projected appearance embedding, ST-ED generates low-resolution ( 128×128) images with restricted face range (no hair area), which narrows the ap-plication ranges and scenarios of gaze redirection. 
As for latent space manipulation for face editing tasks, large amounts of works [ 2–4,14,31,35] were proposed to modify latent vectors in predefined latent spaces ( W[21], W +[1] andS[40]). Latent vectors in these latent spaces can work with StyleGAN [ 21,22] to generate high-quality and high-fidelity face images with desired attribute editing. Among these methods, Wu et.al [40] proposed the latent spaceSworking with StyleGAN, which achieved only one degree-of-freedom gaze redirection by modifying a certainchannel of latent vectors in Sby an uninterpreted value in-stead of pitch and yaw values of gaze directions. Considering these, we proposed a new method, called ReDirTrans, to achieve latent-to-latent translation for redi-recting gaze directions and head orientations in high-resolution full-face images based on assigned directionalvalues. Specifically, we designed a framework to projectinput latent vectors from a latent space into the aimed-attribute-only embedding space for an interpretable redirec-tion process. This embedding space consists of estimatedpseudo conditions and embeddings of aimed attributes, This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 5547 where conditions describe deviations from the canonical status and embeddings are the ‘carriers’ of the conditions.In this embedding space, all transformations are imple-mented by rotation matrices multiplication built from pitchand yaw values, which can make the redirection processmore interpretable and consistent. After the redirection pro-cess, the original embeddings and redirected ones are bothdecoded back to the initial latent space as the residuals tomodify the input latent vectors by subtraction and additionoperations. These operations represent removing the oldstate and adding a new one, respectively. ReDirTrans onlyfocuses on transforming embeddings of aimed attributesand achieves status replacement by the residuals outputtedfrom weight-sharing deprojectors. ReDirTrans does notproject or deproject other attributes with information loss;and it does not affect the distribution of input latent vec-tors. Thus ReDirTrans can also work in a predefined featurespace with a fixed pretrained encoder-generator pair for theredirection task in desired-resolution images. In summary, our contributions are as follows: • A latent-to-latent framework, ReDirTrans , which projects latent vectors to an embedding space for aninterpretable redirection process on aimed attributesand maintains other attributes, including appearance,in initial latent space with no information loss causedby projection-deprojection processes. • A portable framework that can seamlessly integrate into a pretrained GAN inversion pipeline for high-accuracy redirection of gaze directions and head ori-entations, without the need for any parameter tuningof the encoder-generator pairs. • A layer-wise architecture with learnable parameters that works with the fixed pretrained StyleGAN andachieves redirection tasks in high-resolution full-face images through ReDirTrans-GAN . |
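A compact sketch of the redirection-by-residual idea described above: project the latent code to an aimed-attribute embedding, rotate it with a matrix built from the assigned pitch and yaw, deproject both the original and redirected embeddings with weight-sharing layers, and apply them to the latent code by subtraction and addition. The 3-dimensional embedding and the rotation convention are simplifications for illustration; ReDirTrans itself is layer-wise and uses richer embeddings with estimated pseudo conditions.

```python
import math
import torch
import torch.nn as nn

def pitch_yaw_to_rotation(pitch, yaw):
    """3x3 rotation matrix from pitch/yaw angles in radians.
    The axis order / handedness chosen here is an assumption."""
    cp, sp, cy, sy = math.cos(pitch), math.sin(pitch), math.cos(yaw), math.sin(yaw)
    rx = torch.tensor([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    ry = torch.tensor([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    return ry @ rx

class RedirectByResidual(nn.Module):
    """Minimal latent-to-latent redirection in the spirit of ReDirTrans."""
    def __init__(self, latent_dim=512, emb_dim=3):
        super().__init__()
        self.project = nn.Linear(latent_dim, emb_dim)    # latent -> aimed-attribute embedding
        self.deproject = nn.Linear(emb_dim, latent_dim)  # shared for old and new status

    def forward(self, w, pitch, yaw):
        e_old = self.project(w)                          # embedding carrying current gaze/head status
        R = pitch_yaw_to_rotation(pitch, yaw).to(w)
        e_new = e_old @ R.t()                            # redirected embedding (simplified: target rotation only)
        # remove the old status and add the new one; other attributes stay in w untouched
        return w - self.deproject(e_old) + self.deproject(e_new)
```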
Han_Noisy_Correspondence_Learning_With_Meta_Similarity_Correction_CVPR_2023 | Abstract Despite the success of multimodal learning in cross-modal retrieval task, the remarkable progress relies on the correct correspondence among multimedia data. How-ever, collecting such ideal data is expensive and time-consuming. In practice, most widely used datasets are har-vested from the Internet and inevitably contain mismatched pairs. Training on such noisy correspondence datasets causes performance degradation because the cross-modal retrieval methods can wrongly enforce the mismatched data to be similar. To tackle this problem, we propose a Meta Similarity Correction Network (MSCN) to provide reliable similarity scores. We view a binary classification task as the meta-process that encourages the MSCN to learn dis-crimination from positive and negative meta-data. To fur-ther alleviate the influence of noise, we design an effective data purification strategy using meta-data as prior knowl-edge to remove the noisy samples. Extensive experiments are conducted to demonstrate the strengths of our method in both synthetic and real-world noises, including Flickr30K, MS-COCO, and Conceptual Captions. Our code is publicly available.1 | 1. Introduction Recently, cross-modal retrieval has drawn much atten-tion with the rapid growth of multimedia data. Given a query sample of specific modality, cross-modal retrieval aims to retrieve relevant samples across different modali-ties. Existing cross-modal retrieval works [1, 6, 11] usu-ally learn a comparable common space to bridge different modalities, which achieved remarkable progress in many applications, including video-audio retrieval [1, 33], visual question answering [11, 26], and image-text matching [44]. Despite the promise, a core assumption in cross-modal retrieval is the correct correspondence among multiple *Minnan Luo is the corresponding author. 1https://github.com/hhc1997/MSCN Figure 1. Illustration of noisy correspondence in image-text re-trieval. A standard triplet loss is used to enforce the positive pairs to be closer than negatives in the common space. Noisy correspon-dence is the mismatched pairs but wrongly considered as positive ones, and thus results in model performance degradation. modalities. However, collecting such ideal data is expen-sive and time-consuming. In practice, most widely used datasets are harvested from the Internet and inevitably con-tain noisy correspondence [31]. As illustrated in Fig. 1, the cross-modal retrieval method will wrongly enforce the mis-matched data to be similar when learning with noisy corre-spondence, which may significantly affect the retrieval per-formance. To date, few effort has been made to address this. Huang [15] first researches this issue and proposes the NCR method to train from the noisy image-text pairs ro-bustly. Inspired by the prior success for noisy labels [20], NCR divides the data into clean and noisy partitions and rectifies the correspondence with an adaptive model. How-ever, NCR based on the memorization effect of DNNs [3], which leads to poor performance under high noise ratio. To tackle the challenge, we propose a Meta Similarity Correction Network (MSCN) which aims to provide reli-able similarity scores for the noisy features from main net. 
We view a binary classification task as the meta-process: given a multimodal sample, the MSCN will learn to deter-mine whether the modalities correspond to each other or not, where the prediction of MSCN can be naturally re-garded as the similarity score. Specifically, a small amount of clean pairs is used to construct positive and negative meta-data, both viewed as meta-knowledge that encourages MSCN to learn the discrimination. Meanwhile, the main net This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 7517 trained by the corrected similarity score from MSCN with a self-adaptive margin to achieve robust learning. This inter-dependency makes the main net and MSCN benefits each other against noise. However, due to the property of triplet loss, it remains to produce positive loss for noisy pairs even if we employ the ideal similarity scores. To this end, we further propose a meta-knowledge guided data purification strategy to remove samples with potentially wrong corre-spondence. Extensive experiments are conducted to demon-strate the strengths of our method in both synthetic and real-world noises. The main contributions of this work are summarized as follows: • We pioneer the exploration of meta-learning for noisy correspondence problem, where a meta correction net-work is proposed to provide reliable similarity scores against noise. • We present a novel meta-process that first considers both positive and negative data as meta-knowledge to encourage the MSCN to learn discrimination from them. • We design an effective data purification strategy us-ing meta-data as prior knowledge to remove the noisy samples. |
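Two pieces of the pipeline described above lend themselves to a short sketch: constructing positive/negative meta-data from a small clean set, and training the main net with a triplet loss whose margin is modulated by MSCN's corrected similarity score. Scaling the margin linearly by the corrected score is an assumption made for illustration; the paper's self-adaptive margin may be defined differently.

```python
import torch
import torch.nn.functional as F

def adaptive_margin_triplet(sim_pos, sim_neg, corrected, base_margin=0.2):
    """Triplet ranking loss with a similarity-corrected margin.

    sim_pos / sim_neg: main-net similarities of the (image, text) pair and its
    hardest negative.  `corrected` is MSCN's score in [0, 1] for the pair; a
    low score marks a likely noisy correspondence, so the margin (and the pull
    toward that pair) is shrunk.  The linear scaling is an assumption.
    """
    margin = base_margin * corrected
    return F.relu(margin + sim_neg - sim_pos).mean()

def make_meta_batch(clean_img_feats, clean_txt_feats):
    """Positive and negative meta-data from a small clean set: matched pairs
    are labeled 1, randomly re-paired ones 0 (a shuffle may rarely keep a
    match; a real implementation would exclude such fixed points)."""
    n = len(clean_img_feats)
    perm = torch.randperm(n)
    imgs = torch.cat([clean_img_feats, clean_img_feats])
    txts = torch.cat([clean_txt_feats, clean_txt_feats[perm]])
    labels = torch.cat([torch.ones(n), torch.zeros(n)])
    return imgs, txts, labels
```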
Gadre_CoWs_on_Pasture_Baselines_and_Benchmarks_for_Language-Driven_Zero-Shot_Object_CVPR_2023 | Abstract For robots to be generally useful, they must be able to find arbitrary objects described by people (i.e., be language-driven) even without expensive navigation train-ing on in-domain data (i.e., perform zero-shot inference). We explore these capabilities in a unified setting: language-driven zero-shot object navigation (L-ZSON). Inspired by the recent success of open-vocabulary models for image classification, we investigate a straightforward framework, CLIP on Wheels (CoW), to adapt open-vocabulary models to this task without fine-tuning. To better evaluate L-ZSON, we introduce the PASTURE benchmark, which considers finding uncommon objects, objects described by spatial and appearance attributes, and hidden objects described rel-ative to visible objects. We conduct an in-depth empiri-cal study by directly deploying 22 CoW baselines across HABITAT ,ROBOTHOR , and PASTURE . In total, we eval-uate over 90k navigation episodes and find that (1) CoW baselines often struggle to leverage language descriptions but are proficient at finding uncommon objects. (2) A sim-ple CoW, with CLIP-based object localization and classical exploration—and no additional training—matches the nav-igation efficiency of a state-of-the-art ZSON method trained for 500M steps on HABITAT MP3D data. This same CoW provides a 15.6 percentage point improvement in success over a state-of-the-art ROBOTHOR ZSON model.1 | 1. Introduction To be more widely applicable, robots should be language-driven: able to deduce goals based on arbitrary text input instead of being constrained to a fixed set of ob-ject categories. While existing image classification, seman-tic segmentation, and object navigation benchmarks like ImageNet-1k [ 65], ImageNet-21k [ 22], MS-COCO [ 45], LVIS [ 28], H ABITAT [67], and R OBOTHOR [ 18] include a vast array of everyday items, they do not capture all objects that matter to people. For instance, a lost “toy airplane” may ⇧Columbia University,†University of Washington. Correspondence to sy@cs.columbia.edu . 1For code, data, and videos, see cow.cs.columbia.edu/ (c) Finding hidden objects in the presence of distractors correct apple ✅ correct mug ✅ Top-down visualization Egocentric Observations(a) Finding uncommon objects(b) Finding objects based on attributes in the presence of distractorsSample tasks “…llama wicker basket…”“…tie-dye surfboard…” goal ✅ goal ✅ distractor apple ⛔ “…mug under the bed…”“…small, green apple…”“…apple on a coffee table near a laptop…” distractor mug ⛔Figure 1. The P ASTURE benchmark for L-ZSON. Text speci-fies navigation goal objects. Agents do not train on these tasks, making the evaluation protocol zero-shot. (a) Uncommon object goals like “llama wicker basket”, not found in existing navigation benchmarks. (b) Appearance, spatial descriptions , which are nec-essary to find the correct object. (c) Hidden object descriptions , which localize objects that are not visible. become relevant in a kindergarten classroom, but this object is not annotated in any of the aforementioned datasets. In this paper, we study Language-driven zero-shot object navigation (L-ZSON) , a more challenging but also more ap-plicable version of object navigation [ 5,18,67,79,89] and ZSON [ 38,46] tasks. In L-ZSON, an agent must find an ob-ject based on a description, which may contain different lev-els of granularity (e.g., “toy airplane”, “toy airplane under the bed”, or “wooden toy airplane”). 
L-ZSON encompasses ZSON, which specifies only the target category [ 38,46]. Since L-ZSON is “zero-shot”, we consider agents without access to navigation training on the target objects or do-mains. This reflects realistic application scenarios, where the environment and object set may not be known a priori. Performing L-ZSON in anyenvironment with unstruc-tured language input is challenging; however, recent ad-vances in open-vocabulary models for image classifica-tion [ 35,58,61], object detection [ 4,21,27,36,43,47,49, 62,88], and semantic segmentation [ 3,6,15,33,36,37,86] present a promising foundation. These models provide an interface where one specifies—in text—the arbitrary ob-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 23171 jects they wish to classify, detect, or segment. For example, CLIP [ 61] open-vocabulary classifiers compute similarity scores between an input image and a set of user-specified captions (e.g., “a photo of a toy airplane.”, ...), selecting the caption with the highest score to determine the image clas-sification label. Given the flexibility of these models, we would like to understand their capability to execute embod-ied tasks even without additional training. To this end, we present baselines and benchmarks for L-ZSON. More specifically: •A collection of baseline algorithms, CLIP on Wheels (CoW), which adapt open-vocabulary models to the task of L-ZSON . CoW takes inspiration from the semantic mapping line of work [ 11,41,53], and decomposes the navigation task into exploration when the language tar-get is not confidently localized, and target-driven plan-ning otherwise. CoW retains the textual user inter-face of the original open-vocabulary model and does not require any navigation training. We evaluate 22 CoWs, ablating over many open-vocabulary models, ex-ploration policies, backbones, prompting strategies, and post-processing strategies. •A new benchmark, PASTURE , to evaluate CoW and fu-ture methods on L-ZSON . We design P ASTURE , visual-ized in Fig. 1, to study capabilities that traditional object navigation agents, which are trained on a fixed set of cat-egories, do not possess. We consider the ability to find (1) uncommon objects (e.g., “tie-dye surfboard”), (2) ob-jects by their spatial and appearance attributes in the pres-ence of distractor objects (e.g., “green apple” vs. “red apple”), and (3) objects that cannot be visually observed (e.g., “mug under the bed”). Together the CoW baselines and P ASTURE benchmark allow us to conduct extensive studies on the capabilities of open-vocabulary models in the context of L-ZSON embod-ied tasks. Our experiments demonstrate CoW’s potential on uncommon objects and limitations in taking full advan-tage of language descriptions—thereby providing empirical motivation for future studies. To contextualize CoW rela-tive to prior zero-shot methods, we additionally evaluate on the H ABITAT MP3D dataset. We find that our best CoW achieves navigation efficiency (SPL) that matches a state-of-the-art ZSON method [ 46] that trains on MP3D train-ing data for 500M steps. On a R OBOTHOR object subset, considered in prior work, the same CoW beats the leading method [ 38] by 15.6 percentage points in task success. |
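The open-vocabulary interface CoW builds on — scoring an egocentric frame against a user-written goal description with CLIP and treating a high score as a confident localization — can be sketched as follows, assuming the openai `clip` package. Actual CoW localizers are finer-grained (e.g., gradient relevance or patch-level features), and the threshold in the usage comment is a placeholder.

```python
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

@torch.no_grad()
def goal_confidence(frame: Image.Image, goal: str) -> float:
    """Image-level CLIP similarity between an egocentric frame and the
    language goal; CoW variants localize the goal more precisely."""
    image = preprocess(frame).unsqueeze(0).to(device)
    text = clip.tokenize([f"a photo of a {goal}."]).to(device)
    img_feat = model.encode_image(image).float()
    txt_feat = model.encode_text(text).float()
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    return (img_feat @ txt_feat.t()).item()

# skeleton of the CoW loop (exploration and planning policies omitted):
# while not done:
#     if goal_confidence(agent.frame(), "tie-dye surfboard") > 0.26:  # placeholder threshold
#         plan_toward_localized_target()
#     else:
#         explore()
```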
Grigorev_HOOD_Hierarchical_Graphs_for_Generalized_Modelling_of_Clothing_Dynamics_CVPR_2023 | Abstract We propose a method that leverages graph neural networks, multi-level message passing, and unsupervised training to enable efficient prediction of realistic clothing dynamics. Whereas existing methods based on linear blend skinning must be trained for specific garments, our method, called HOOD, is agnostic to body shape and applies to tight-fitting garments as well as loose, free-flowing cloth-ing. Furthermore, HOOD handles changes in topology (e.g., garments with buttons or zippers) and material prop-erties at inference time. As one key contribution, we pro-pose a hierarchical message-passing scheme that efficiently propagates stiff stretching modes while preserving local de-tail. We empirically show that HOOD outperforms strong baselines quantitatively and that its results are perceived as more realistic than state-of-the-art methods.1. Introduction The ability to model realistic and compelling clothing behavior is crucial for telepresence, virtual try-on, video games and many other applications that rely on high-fidelity digital humans. A common approach to generate plausible dynamic motions is physics-based simulation [2]. While impressive results can be obtained, physical simulation is sensitive to initial conditions, requires animator exper-tise, and is computationally expensive; state-of-the-art ap-proaches [14, 22, 36] are not designed for the strict compu-tation budgets imposed by real-time applications. Deep learning-based methods have started to show promising results both in terms of efficiency and quality. However, there are several limitations that have so far pre-vented such approaches from unlocking their full poten-tial: First, existing methods rely on linear-blend skinning and compute clothing deformations primarily as a func-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 16965 tion of body pose [24, 43]. While compelling results can be obtained for tight-fitting garments such as shirts and sportswear, skinning-based methods struggle with dresses, skirts, and other types of loose-fitting clothing that do not closely follow body motion. Crucially, many state-of-the-art learning-based methods are garment-specific [20, 38, 43, 45, 53] and can only predict deformations for the particular outfit they were trained on. The need to retrain these meth-ods for each garment limits applicability. In this paper, we propose a novel approach for predict-ing dynamic garment deformations using graph neural net-works (GNNs). Our method learns to predict physically-realistic fabric behavior by reasoning about the map be-tween local deformations, forces, and accelerations. Thanks to its locality, our method is agnostic to both the global structure and shape of the garment and directly generalizes to arbitrary body shapes and motions. GNNs have shown promise in replacing physics-based simulation [40, 41], but a straightforward application of this concept to clothing simulation yields unsatisfying results. GNNs apply local transformations (implemented as MLPs) to feature vectors of vertices and their one-ring neighbor-hood in a given mesh. Each transformation results in a set of messages that are then used to update feature vec-tors. This process is repeated, allowing signals to propagate through the mesh. 
However, a fixed number of message-passing steps limits signal propagation to a finite radius. This is problematic for garment simulation, where elastic waves due to stretching travel rapidly through the material, leading to quasi-global and immediate long-range coupling between vertices. Using too few steps delays signal propa-gation and leads to disturbing over-stretching artifacts, mak-ing garments look unnatural and rubbery. Na ¨ıvely increas-ing the number of iterations comes at the expense of rapidly growing computation times. This problem is amplified by the fact that the maximum size and resolution of simulation meshes is not known a priori , which would allow setting a conservative, sufficiently large number of iterations. To address this problem, we propose a message-passing scheme over a hierarchical graph that interleaves propaga-tion steps at different levels of resolution. In this way, fast-travelling waves due to stiff stretching modes can be effi-ciently treated on coarse scales, while finer levels provide the resolution needed to model local detail such as folds and wrinkles. We show through experiments that our graph representation improves predictions both qualitatively and quantitatively for equal computation budgets. To extend the generalization capabilities of our ap-proach, we combine the concepts of graph-based neural networks and differentiable simulation by using an incre-mental potential for implicit time stepping as a loss func-tion [33, 43]. This formulation allows our network to be trained in a fully unsupervised way and to simultaneouslylearn multi-scale clothing dynamics, the influence of mate-rial parameters, as well as collision reaction and frictional contact with the underlying body, without the need for any ground-truth (GT) annotations. Additionally, the graph for-mulation enables us to model garments of varied and chang-ingtopology; e.g. the unbuttoning of a shirt in motion. In summary, we propose a method, called HOOD, that leverages graph neural networks, multi-level message pass-ing, and unsupervised training to enable real-time predic-tion of realistic clothing dynamics for arbitrary types of garments and body shapes. We empirically show that our method offers strategic advantages in terms of flexibil-ity and generality compared to state-of-the-art approaches. Specifically, we show that a single trained network: (i) ef-ficiently predicts physically-realistic dynamic motion for a large variety of garments; (ii) generalizes to new gar-ment types and shapes not seen during training; (iii) al-lows for run-time changes in material properties and gar-ment sizes, and (iv) supports dynamic topology changes such as opening zippers or unbuttoning shirts. Code and models are available for research purposes: https:// dolorousrtur.github.io/hood/ . 2. Related Work Physics-based Simulation. Modeling the behavior of 3D clothing is a longstanding problem in computer graphics [48]. Central research problems include mechanical model-ing [13, 19, 50], material behavior [7, 34, 51], time inte-gration [3, 49], collision handling [8, 21, 22, 47] and, more recently, differentiable simulation [25, 26]. While state-of-the-art methods can generate highly realistic results with impressive levels of detail, physics-based simulation meth-ods are often computationally expensive. Learned Deformation Models. To overcome the per-formance limitations of traditional physical simulators, prior work uses machine learning to accelerate computa-tion. 
One line of research uses neural networks, in combi-nation with linear blend skinning, to learn garment defor-mations from pose and shape parameters of human body models [20, 30, 39, 42]. While such methods can produce plausible results at impressive rates, their reliance on skin-ning limits their ability to realistically model loose, free-flowing garments such as long skirts and dresses. Several learning-based methods specifically tackle loose garments. For example, Santesteban et al. [45] introduce a diffused body model to extend blend shapes and skinning weights to any 3D point beyond the body mesh. Pan et al. [38] use virtual bones to drive the shape of garments as a function of body pose. A common limitation of such methods is their inability to generalize across multiple gar-ments: as they predict a specific number of vertex offsets, these networks need to be retrained even for small changes in garment geometry. 16966 Another set of methods learns geometric priors from a large dataset of synthetic garments [4]. Zakharkin et al. [52] learn a latent space of garment geometries, modeling gar-ments as point clouds. Su et al. [46] represent garments in texture space, allowing them to explicitly control garment shape and topology. DeePSD [6] learns a mapping from a garment template to its skinning weights. Although these methods are able to generate geometries for various gar-ments, including loose ones, they still rely on linear blend skinning and parametric body models, ultimately limiting their ability to generate dynamic garment motions. Traditional, mesh-based methods are typically restricted to fixed topologies and cannot deal with topological changes such as unzipping a jacket. To address this, sev-eral methods resort to implicit shape models [1, 10, 12, 15, 27, 31, 44]. While these can capture arbitrary topology, they require significant training data, are not compatible with ex-isting graphics pipelines, and are expensive to render. In contrast, our graphical formulation supports varied topol-ogy with an efficent mesh representation. Recent work learns the mapping from body parameters to garment deformations in an unsupervised way. PBNS [5] pioneered this idea by using the potential energy of a mass-spring system to train a neural network to predict gar-ment deformations in static equilibrium. SNUG [43] ex-tends this approach with a dynamic component, using a re-current neural network to predict sequences of garment de-formations. Their method outperforms other learning-based approaches without using any physically-simulated training data. SNUG, however, is limited to tight-fitting garments and cannot generalize to novel garments. Our method builds on the idea of a physics-based loss function for self-supervised learning. Unlike SNUG, how-ever, our method does not rely on skinning and is able to generate plausible dynamic motion for arbitrary types of garments, including dynamic, free-flowing clothing and changing topology. Graph-based Methods. As another promising line of research, graph neural networks have recently started to show their potential for learning-based modeling of dy-namic physical systems [16, 40, 41]. Most closely related to our work are MeshGraphNets [40], which use a message-passing network to learn the dynamics of physical systems such as fluids and cloth from mesh-based simulations. Due to their local nature, MeshGraphNets achieve outstanding generalization capabilities. 
However, using their default message passing scheme, signals tend to propagate slowly through the graph and it is difficult to set hyper-parameters (number of message passing steps) in advance such that sat-isfying behaviour is obtained. This problem is particularly relevant for clothing simulation, where an insufficient num-ber of steps can lead to excessive stretching and unnatural dynamics. We address this problem with a multi-level mes-sage passing architecture that uses a hierarchical graph to accelerate signal propagation. Recently, several graph pooling strategies were intro-duced to increase the radius of message propagation includ-ing learned p | ooling [17], pooling by rasterization [28] and spatial proximity [16]. Concurrent to ours, the work of Cao et al. [11] analyzes limitations of pooling strategies and sug-gests using pre-computed coarsened geometries with hand-crafted aggregation weights for inter-level transfer. We pro-pose a simple and efficient graph coarsening strategy that allows our network to implicitly learn transitions between graph levels, thus avoiding the need for any manually de-signed transfer operators. Graph-based methods have demonstrated their ability to model numerous types of physical systems, including fabric simulation. To the best of our knowledge, however, we are the first to propose a graph-based approach for modelling the dynamic garment motions of dressed virtual humans. 3. Method Our method, schematically summarized in Fig. 2, learns the parameters of a single network that is able to pre-dict plausible dynamic motion for a wide range of gar-ment types and shapes, that generalizes to new, unseen clothing, and allows for dynamic changes in material pa-rameters (Fig. 10) and garment topology (Fig. 4). These unique capabilities derive from a novel combination of a) graph neural networks that learn the local dynamics in a garment-independent way (Sec. 3.1), b) hierarchical message-passing for the efficient capture of long-range cou-pling (Sec. 3.2), and c) a physics-based loss function that enables self-supervised training (Sec. 3.3). 3.1. Background HOOD builds on MeshGraphNets [40], a type of graph neural network, to learn the local dynamics of deformable materials. Once trained, MeshGraphNets predict nodal ac-celerations from current positions and velocities, which are then used to step the garment mesh forward in time. Basic Structure. We model garment dynamics based on a graph consisting of the vertices and edges of the garment mesh, augmented with so-called body edges : for each gar-ment node we find the nearest vertex on the body model and add a new edge if the distance is below a threshold r. Vertices and edges are endowed with feature vectors viand eij, respectively, where iandjare node indices. Nodal feature vectors consist of a type value (garment or body), current state variables (velocity, normal vector), and physi-cal properties (mass). Edge feature vectors store the relative position between their two nodes w.r.t. both the current state and canonical geometry of the garment (Fig. 2, left). Message Passing. To evolve the system forward in time, we apply Nmessage-passing steps on the input graph. In 16967 Figure 2. Method overview : We model garment mesh interactions based on a graph that is derived from the garment mesh and augmented with additional body nodes (blue) and edges between the garment and the closest body nodes. 
The input graph is converted into a hierarchical graph structure to allow fast signal propagation and is processed by a message-passing network with a UNet-like architecture. The colours of garment nodes show which of the nodes are present on each of the hierarchical levels and correspond to the levels of the GNN. The GNN predicts accelerations (green) for each garment node. The model is trained via self-supervision with a set of physical objectives, thus removing the need for any offline training data. At inference time, the model autoregressively generates dynamic garment motions over long time horizons. A single network can generalize to unseen garments, material properties, and even topologies. each step, edge features are first updated as eij←fv→e(eij, vi, vj), (1) where fv→eis a multilayer perceptron (MLP). Nodes are then updated by processing the average of all incident edge features, denoted by fe→v: vi←fe→v(vi,X jebody ij,X jeij), (2) where fe→vis another multilayer perceptron (MLP), and ebody ij are body edges. While the same MLP is used for all nodes, each set of edges is processed by a separate MLP. After Nmessage-passing steps, the nodal features are passed into a decoder MLP to obtain per-vertex accelera-tions, which are then used to compute end-of-step velocities and positions. The full model consists of several encoder MLPs – one for nodes and one for each of the edge sets – a decoder MLP, andLmessage-passing layers. Each message-passing layer, in turn, contains one node MLP fe→vplus one edge MLP fv→efor each set of edges in the graph. For more details, we refer to the supplementary material and [40]. Extensions for Clothing. To model different types of fabric and multi-fabric garments, we extend node and edge feature vectors with local material parameters. These ma-terial parameters include: Young’s modulus and Poisson’s ratio (mapped to their corresponding Lam ´e parameters µ andλ) that model the stretch resistance and area preserva-tion of a given fabric, the bending coefficient kbending that penalizes folding and wrinkling, as well as the density ofthe fabric, defining its weight. Since our network supports heterogeneous material properties (for each edge and node) as input, the definition of individual material parameters for different parts to model multi-material garments is possible, even at inference time (See Fig. 10). 3.2. Hierarchical Message Passing Fabric materials are sufficiently stiff such that forces ap-plied in one location propagate rapidly across the garment. When using a fixed number of message-passing steps, how-ever, forces can only propagate within a finite radius for a given time step. Consequently, using too small a number of message-passing steps will make garments appear overly elastic and rubbery. We solve this problem by extending MeshGraphNets to accelerate signal propagation. To this end, we construct a hierarchical graph representation from the flat input graph and use this to accelerate signal propa-gation during message-passing. Hierarchical Graph Construction. Allowing messages to travel further within a single step requires long-range connections between nodes. To this end, we recursively coarsen a given input graph to obtain a hierarchical repre-sentation. Although there are many other options for gener-ating graph hierarchies, we take inspiration from concurrent work [11] and use a simple but effective recursive process. 
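Before the coarsening procedure is detailed next, the sketch below spells out the flat, single-level message-passing step of Eqs. (1)-(2) above in code. It uses one edge set only and illustrative module names and feature sizes; the full model's separate body-edge MLPs, residual updates, and encoder/decoder MLPs are omitted, so this is a simplified reading rather than the paper's implementation.

```python
# Single-level message-passing step, a sketch of Eqs. (1)-(2) above.
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=128):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))

class MessagePassingStep(nn.Module):
    def __init__(self, node_dim=16, edge_dim=16):
        super().__init__()
        self.f_v2e = mlp(edge_dim + 2 * node_dim, edge_dim)  # Eq. (1)
        self.f_e2v = mlp(node_dim + edge_dim, node_dim)      # Eq. (2), single edge set for brevity

    def forward(self, v, e, senders, receivers):
        # v: (N, node_dim) node features, e: (E, edge_dim) edge features,
        # senders/receivers: (E,) long tensors indexing the endpoints of each edge.
        e = self.f_v2e(torch.cat([e, v[senders], v[receivers]], dim=-1))
        agg = torch.zeros(v.shape[0], e.shape[1], device=v.device, dtype=v.dtype)
        agg.index_add_(0, receivers, e)                      # sum of incident edge features
        deg = torch.zeros(v.shape[0], device=v.device, dtype=v.dtype)
        deg.index_add_(0, receivers, torch.ones(e.shape[0], device=v.device, dtype=v.dtype))
        agg = agg / deg.clamp(min=1).unsqueeze(-1)           # average, as in Eq. (2)
        v = self.f_e2v(torch.cat([v, agg], dim=-1))
        return v, e
```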
We start by partitioning the nodes of the input graph into successively coarser sets such that the inter-vertex distance — the number of edges in the shortest path between two nodes — increases. We then create new sets of coarsened edges for each of the partitions. See Fig. 2 for an illustration and the supplemental material for details. 16968 Figure 3. Our hierarchical network architecture with 1 fine ( green ) and 2 coarse ( yellow, orange ) levels. We use 15 message-passing steps, simultaneously processing two levels at a time. By applying this algorithm recursively, we obtain a nested hierarchical graph in which the nodes of each coarser level form a proper subset of the next finer level nodes, i.e., Vl+1⊂Vl. This property is important for our multi-level message-passing scheme, which we describe next. Multi-level Message Passing. Our nested hierarchical graph representation enables accelerated message-passing through simultaneous processing on multiple levels. To this end, we endow each level lin the graph with its own set of edge feature vectors el ijwhile the node feature vectors viare shared across all levels. At the beginning of each message-passing step, we first update edge features on all levels using the finest-level node features: el ij←fl v→e(el ij, v0 i, v0 j), (3) where fl v→eis a level-specific MLP. Then, node features are updated: vi←fe→v(vi,X jebody ij,X je1 ij, ...,X jeL ij),(4) where Lis the number of levels processed in this step. Note that, for each message-passing step, we update the set of body edges ebody ij to keep only those that are connected to the currently processed garment nodes. This scheme has the important advantage of not requir-ing any explicit averaging or interpolation operators for inter-level transfer. Thanks to the nesting property of our hierarchical graph, nodes are shared across levels and all information transfer happens implicitly by processing the fe→vMLPs at the end of each message-passing step. Our multi-level message-passing scheme can operate on any number of levels simultaneously. We found that the UNet-like architecture shown Fig. 3, with a three-level hi-erarchy and simultaneous message-passing on two adjacent levels at a time, yields a favourable trade-off between infer-ence time and quality of results. To compute the propagation radius of our multi-level message-passing scheme, we sum the maximal distances a message can travel per message-passing step. For roughly the same amount of computation, our architecture yields aradius of 48 edges compared with 15 edges for a single-level scheme. See Sec. 4 and the supplementary material for more detailed analysis. 3.3. Garment Model To learn the dynamics of garments, we must model their mechanical behaviour; i.e., the relation between internal de-formations and elastic energy, as well as friction and contact with the underlying body. Our approach follows standard practice in cloth simulation: we model resistance to stretch-ing with an energy term Lstretching , using triangle finite el-ements [36] with a St. Venant-Kirchhoff material [35]. The Lbending term penalizes changes in discrete curvature as a function of the dihedral angle between edge-adjacent trian-gles [19]. To prevent interpenetration between cloth and body, we penalize the negative distance between garment nodes and their closest point on the body mesh with a cu-bic energy term Lcollision , once it falls below a threshold value [43]. 
Furthermore, Linertia is an energy term whose gradient with respect to xt+1yields inertial forces. Finally, we introduce a friction term Lfriction that penalizes the tan-gential motion of garment nodes over the body [9, 18]. To model the evolution of clothing through time, we follow [45] and use the optimization-based formulation of Martin et al. [33] to construct an incremental potential Ltotal=Lstretching (xt+1) +Lbending (xt+1)+ Lgravity (xt+1) +Lfriction (xt,xt+1)+ (5) Lcollision (xt,xt+1) +Linertia (xt−1,xt,xt+1), where xt−1,xt, and xt+1are nodal positions at the previ-ous, current, and next time steps, respectively. Minimizing Ltotal with respect to end-of-step positions is equivalent to solving the implicit Euler update equations, providing a ro-bust method for forward simulation. When used as a loss during training, this incremental potential allows the net-work to learn the dynamics of clothing without supervision. 3.4. Training We train our hierarchical graph network in a fully self-supervised way using the physics-based loss function (5). We briefly discuss some aspects specific to our setting be-low and provide more details in the supplemental material. Training Data. We use the same set of 52 body pose sequences from the AMASS dataset [32] used in [45]. For each training step, we randomly sample SMPL [29] shape parameters βfrom the uniform distribution U(−2,2). We randomly select a garment |
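To make the structure of Eq. (5) above concrete, the sketch below assembles the incremental potential as a training loss. Only the inertia and gravity terms are written out, in the standard implicit-Euler form and with an assumed up axis; the stretching, bending, friction, and collision energies are left as user-supplied callables, since their exact discretization is not reproduced here.

```python
# Structural sketch of the incremental-potential loss in Eq. (5); energy details are placeholders.
import torch

def inertia_term(x_prev, x_curr, x_next, mass, dt):
    # Standard incremental-potential inertia: (1 / (2*dt^2)) * sum_i m_i ||x_next - 2*x_curr + x_prev||^2
    # (assumed form; the paper's exact discretization may differ).
    d = x_next - 2.0 * x_curr + x_prev
    return 0.5 / dt**2 * (mass.unsqueeze(-1) * d * d).sum()

def gravity_term(x_next, mass, g=9.81):
    # Gravitational potential along the +z axis (axis convention assumed).
    return (mass * g * x_next[:, 2]).sum()

def total_loss(x_prev, x_curr, x_next, mass, dt, stretching, bending, friction, collision):
    """Sum of the energy terms in Eq. (5); `stretching`, `bending`, `friction`, `collision`
    stand in for the material, friction and contact energies described above."""
    return (stretching(x_next) + bending(x_next) + gravity_term(x_next, mass)
            + friction(x_curr, x_next) + collision(x_curr, x_next)
            + inertia_term(x_prev, x_curr, x_next, mass, dt))
```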
Attal_HyperReel_High-Fidelity_6-DoF_Video_With_Ray-Conditioned_Sampling_CVPR_2023 | Abstract Volumetric scene representations enable photorealistic view synthesis for static scenes and form the basis of several existing 6-DoF video techniques. However, the volume rendering procedures that drive these representations necessitate careful trade-offs in terms of quality, rendering speed, and memory efficiency. In particular, existing methods fail to simultaneously achieve real-time performance, small memory footprint, and high-quality rendering for challenging real-world scenes. To address these issues, we present HyperReel, a novel 6-DoF video representation. The two core components of HyperReel are: (1) a ray-conditioned sample prediction network that enables high-fidelity, high frame rate rendering at high resolutions and (2) a compact and memory-efficient dynamic volume representation. Our 6-DoF video pipeline achieves the best performance compared to prior and contemporary approaches in terms of visual quality with small memory requirements, while also rendering at up to 18 frames-per-second at megapixel resolution without any custom CUDA code. | 1. Introduction Six-Degrees-of-Freedom (6-DoF) videos allow for free exploration of an environment by giving the users the ability to change their head position (3 degrees of freedom) and orientation (3 degrees of freedom). As such, 6-DoF videos offer immersive experiences with many exciting applications in AR/VR. The underlying methodology that drives 6-DoF video is view synthesis: the process of rendering new, unobserved views of an environment, static or dynamic, from a set of posed images or videos. Volumetric scene representations such as neural radiance fields [31] and instant neural graphics primitives [32] have recently made great strides toward photorealistic view synthesis for static scenes. While several recent works build dynamic view synthesis pipelines on top of these volumetric representations [14, 23, 24, 35, 64], it remains a challenging task to create a 6-DoF video format that can achieve high quality, fast rendering, and a small memory footprint (even given many synchronized video streams from multi-view camera rigs [9, 37, 46]). Existing approaches that attempt to create memory-efficient 6-DoF video can take nearly a minute to render a single megapixel image [23]. Works that target rendering speed and represent dynamic volumes directly with 3D textures require gigabytes of storage even for short video clips [59]. While other volumetric methods achieve memory efficiency and speed by leveraging sparse or compressed volume storage for static scenes [11, 32], only contemporary work [22, 51] addresses the extension of these approaches to dynamic scenes. Moreover, all of the above representations struggle to capture highly view-dependent appearance, such as reflections and refractions caused by non-planar surfaces. In this paper, we present HyperReel, a novel 6-DoF video representation that achieves state-of-the-art quality while being memory efficient and real-time renderable at high resolution. The first ingredient of our approach is a novel ray-conditioned sample prediction network that predicts sparse point samples for volume rendering.
In contrast to existing static view synthesis methods that use sample networks [20, 33], our design is unique in that it both (1) accelerates volume rendering and at the same time (2) improves rendering quality for challenging view-dependent scenes. Second, we introduce a memory-efficient dynamic volume representation that achieves a high compression rate by exploiting the spatio-temporal redundancy of a dynamic scene. Specifically, we extend Tensorial Radiance Fields [11] to compactly represent a set of volumetric keyframes, and capture intermediate frames with trainable scene flow. The combination of these two techniques comprises our high-fidelity 6-DoF video representation, HyperReel. We validate the individual components of our approach and our representation as a whole with comparisons to state-of-the-art sampling network-based approaches for static scenes as well as 6-DoF video representations for dynamic scenes. Not only does HyperReel outperform these existing works, but it also provides high-quality renderings for scenes with challenging non-Lambertian appearances. Our system renders at up to 18 frames-per-second at megapixel resolution without using any custom CUDA code. The contributions of our work include the following: 1. A novel sample prediction network for volumetric view synthesis that accelerates volume rendering and accurately represents complex view-dependent effects. |
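The sparse per-ray samples predicted by such a network ultimately feed a standard volume rendering step: colors and densities are alpha-composited along each ray. The sketch below shows that generic emission-absorption compositing, not HyperReel's specific sample network or keyframe volume; tensor names are illustrative.

```python
# Generic volume rendering over per-ray samples (not HyperReel's specific pipeline).
import torch

def composite(rgb, sigma, deltas):
    """rgb: (R, S, 3) sample colors, sigma: (R, S) densities, deltas: (R, S) sample spacing."""
    alpha = 1.0 - torch.exp(-sigma * deltas)                                   # per-sample opacity
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=-1)
    trans = torch.cat([torch.ones_like(trans[:, :1]), trans[:, :-1]], dim=-1)  # transmittance before each sample
    weights = alpha * trans                                                    # (R, S)
    return (weights.unsqueeze(-1) * rgb).sum(dim=-2)                           # (R, 3) composited ray color
```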
Futschik_Controllable_Light_Diffusion_for_Portraits_CVPR_2023 | Abstract We introduce light diffusion, a novel method to im-prove lighting in portraits, softening harsh shadows and specular highlights while preserving overall scene illumi-nation. Inspired by professional photographers’ diffusers and scrims, our method softens lighting given only a single portrait photo. Previous portrait relighting approaches fo-cus on changing the entire lighting environment, removing shadows (ignoring strong specular highlights), or removing shading entirely. In contrast, we propose a learning based method that allows us to control the amount of light diffu-sion and apply it on in-the-wild portraits. Additionally, we design a method to synthetically generate plausible external shadows with sub-surface scattering effects while conform-ing to the shape of the subject’s face. Finally, we show how our approach can increase the robustness of higher level vi-sion applications, such as albedo estimation, geometry es-timation and semantic segmentation.1. Introduction High quality lighting of a subject is essential for cap-turing beautiful portraits. Professional photographers go to great lengths and cost to control lighting. Outside the stu-dio, natural lighting can be particularly harsh due to direct sunlight, resulting in strong shadows and pronounced spec-ular effects across a subject’s face. While the effect can be dramatic, it is usually not the desired look. Professional photographers often address this problem with a scrim or diffuser (Figure 2), mounted on a rig along the line of sight from the sun to soften the shadows and specular highlights, leading to much more pleasing portraits [9]. Casual photog-raphers, however, generally lack the equipment, expertise, or even the desire to spend time in the moment to perfect the lighting in this way. We take inspiration from professional photography and propose to diffuse the lighting on a sub-ject in an image, i.e., directly estimating the appearance of the person as though the lighting had been softer, enabling anyone to improve the lighting in their photos after the shot This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 8412 Figure 2. Using a bulky scrim (left), a photographer can reduce strong shadows and specularities. Our proposed approach operates directly on the original image to produce a similar softening effect. is taken. Deep learning approaches have led to great advances in relighting portraits [12,21, 23,26,27,29,33,35,37,38]. Our goal is different: we want to improve the existing lighting rather than replace it entirely. This goal has two advan-tages: the resulting portrait has improved lighting that is visually consistent with the existing background, and the task is ultimately simpler, leading to a more robust solution than completely relighting the subject under arbitrary illu-mination. Indeed, one could estimate the lighting [14, 15], diffuse (blur) it, and then relight the subject [12, 23,33], but lighting estimation and the full relighting task themselves are open research questions. We instead go directly from input image to diffused-lighting image without any illumi-nation estimation. Past works [11, 37] specifically focused on removing shadows from a subject via CNNs. 
However, these meth-ods do not address the unflattering specularities that remain which our work tackles. In the extreme, lighting can be diffused until it is com-pletely uniform. The problem of “delighting,” recovering the underlying texture (albedo) as though a subject has been uniformly lit by white light1, has also been studied (most re-cently in [30]). The resulting portrait is not suitable as an end result – too flat, not visually consistent with the back-ground – but the albedo map can be used as a step in portrait relighting systems [23]. Delighting, however, has proved to be a challenging task to do well, as the space of materials, particularly clothing, can be too large to handle effectively. In this paper, we propose light diffusion, a learning-based approach to controllably adjust the levels of diffuse light-ing on a portrait subject. The proposed method is able to soften specularities, self shadows, and external shadows while maintaining the color tones of the subject, leading to a result that naturally blends into the original scene (see Fig. 1). Our variable diffusion formulation allows us to go from subtle shading adjustment all the way to removing the shading on the subject entirely to obtain an albedo robust to shadows and clothing variation. Our overall contributions are the following: 1Technically, uniform lighting will leave ambient occlusion in the re-covered albedo, often desirable for downstream rendering tasks.A novel, learning-based formulation for the light dif-fusion problem, which enables controlling the strength of shadows and specular highlights in portraits. A synthetic external shadow generation approach that conforms to the shape of the subject and matches the diffuseness of the illumination. A robust albedo predictor, able to deal with color am-biguities in clothing with widely varying materials and colors. Extensive experiments and comparisons with state-of-art approaches, as well as results on downstream appli-cations showing how light diffusion can improve the performance of a variety of computer visions tasks. 2. Related Work Controlling the illumination in captured photos has been exhaustively studied in the context of portrait relighting [12, 21, 23, 26, 27, 29,33,35,37,38], which tries to address this problem for consumer photography using deep learning. Generative models and inverse rendering [1, 7,17,22,28] have also been proposed to enable face editing and synthe-sis of portraits under any desired illumination. The method of Sun et al. [27], was the first to propose a self-supervised architecture to infer the current lighting con-dition and replace it with any desired illumination to obtain newly relit images. This was the first deep learning method applied to this specific topic, overcoming issues of previous approaches such as [26]. However, this approach does not explicitly perform any image decomposition, relying on a full end-to-end method, which makes its explainability harder. More recent meth-ods [12, 21,23,29] decompose the relighting problem into multiple stages. These approaches usually rely on a ge-ometry network to predict surface normals of the subject, and an albedo predictor generates a de-lit image of the por-trait, that is close to the actual albedo (i.e. if the person was illuminated by a white diffuse light from any direction). A final neural renderer module combines geometry, albedo and target illumination to generate the relit image. Differ-ently from previous work, Pandey et al. 
[23] showed the importance of a per-pixel aligned lighting representation to better exploit U-Net architectures [25], showing state-of-art results for relighting and compositing. Other methods specifically focus on the problem of im-age decomposition [2, 3,13,18,19,24,30–32], attempt-ing to decompose the image into albedo, geometry and re-flectance components. Early methods rely on model fitting and parametric techniques [2–4, 18], which are limited in capturing high frequency details not captured by these mod-els, whereas more recent approaches employ learned based strategies [13, 19,24,31]. 8413 In particular, the method of Weir et al. [30] explicitly tackles the problem of portrait de-lighting. This approach relies on novel loss functions specifically targeting shad-ows and specular highlights. The final inferred result is an albedo image, which resembles the portrait as if it was lit from diffuse uniform illumination. Similarly, Yang et al. [32] propose an architecture to remove facial make-up, generating albedo images. These methods, however completely remove the light-ing from the current scene, whereas in photography appli-cations one may simply want to control the effect of the current illumination, perhaps softening shadows and spec-ular highlights. Along these lines the methods of Zhang et al. [37] and Inouei and Yamasaki [11] propose novel ap-proaches to generate synthetic shadows that can be applied to in-the-wild images. Given these synthetically generated datasets, they propose a CNN based architecture to learn to remove shadows. The final systems are capable of remov-ing harsh shadows while softening the overall look. Despite their impressive results, these approaches are designed to deal with shadow removal, and, although some softening effect can be obtained as byproduct of the method, their formulations ignore high order light transport effects such as specular highlights. In contrast, we propose a novel learning based formu-lation to control the amount of light diffusion in portraits, without changing the overall scene illumination while soft-ening or removing shadow and specular highlights. 3. A Framework for Light Diffusion In this section, we formulate the light diffusion problem, and then propose a learning based solution for in-the-wild portraits. Finally we show how our model can be applied to infer a more robust albedo from images, improving down-stream applications such as relighting, face part segmenta-tion, and normal estimation. 3.1. Problem Formulation We model formation of image Iof a subject Pin terms of illumination from a HDR environment map E(; ): I=R[P;E (; )] (1) whereR[]renders the subject under the given HDR envi-ronment map. We can then model light diffusion as ren-dering the subject under a smoothed version of the HDR environment map. Concretely, a light-diffused image Idis formed as: Id=R" P;E(; )cosn +() PH;W i;jcosn +(i;j)# (2) whererepresents spherical convolution, and the incident HDR environment map is smoothed with normalized ker-nelcosn +()max(0; cosn()), effectively pre-smoothing Figure 3. Illumination convolution. Shown are the original en-vironment and relit image, followed by convolution with cosn + with varying exponent n, and th | e resulting Gini coefficient Gfor each diffused environment. Note the gradual reduction in light harshness while still maintaining the overall lighting tone. the HDR environment map with the Phong specular term. The exponent ncontrols the amount of blur or diffusion of the lighting. 
Setting nto 1 leads to a diffusely lit image, and higher specular exponents result in sharper shadows and specular effects, as seen in Figure 3. Our goal then is to construct a function fthat takesI and the amount of diffusion controlled by exponent nand predicts the resulting light-diffused image Id=f(I;n). In practice, as described in section 3.2.1, we replace nwith a parameter tthat proved to be easier for the networks to learn; this new parameter is based on a novel application of the Gini coefficient [8] to measure environment map dif-fuseness. 3.2. Learning-based Light Diffusion We perform the light diffusion task in a deep learning framework. We can represent the mapping fas a deep net-workf : Id=f (I;t) (3) where represents the parameters of the network. To super-vise training, we capture subjects in a light stage and, using the OLAT images [6], synthetically render each subject un-der an HDRI environment E(; ) and a diffused version of that environment E(; )cosn +(), providing training pair IandId. In practice, we obtain better results with a sequence of two networks. The first network estimates a specular map S, which represents image brightening, and a shadow map D, which represents image darkening, both relative to a fully diffusely lit (n = 1) image. Concretely, we generate the fully diffused image Idi use as described in Equation 2 withn= 1 and then define the shadow Dand specular S maps as: S= max(min(1 Idi use=I;1);0) (4) D= max(min(1 I=Idi use;1);0) (5) 8414 Figure 4. Architecture for parametric diffusion. Taking a portrait image with an alpha matte, the first stage predicts specular and shadow maps. The second stage uses these maps and the source image to produce an image with light diffused according to an input diffusion parameter. The result is composited over the input image to replace the foreground subject with the newly lit version. Given the light stage data, it is easy to additionally synthe-sizeIdi use and compute SandDfor a given HDR envi-ronment map Eto supervise training of a shadow-specular networkg s: fS;Dg=g s(I) (6) The light diffusion network then maps the input image along with the specular and shadow maps to the final result: Id=h d(I;S;D;t ) (7) Note that, as we are not seeking to modify lighting of the background, we focus all the computation on the subject in the portrait. We thus estimate a matte for the foreground subject and feed it into the networks as well; Ithen is rep-resented as the union of the original image and its portrait matte. The overall framework is shown in Figure 4. In addition, we can extend our framework to infer a more robust albedo than prior work, through a process of repeated light diffusion. We now detail each of the individual compo-nents of the light diffusion and albedo estimation networks. 3.2.1 Network details Specular+Shadow Network The specular+shadow net-workg sis a single network that takes in the source im-ageIalong with a pre-computed alpha matte [23], as a 10247684dimensional tensor. We used a U-Net [25] with 7 encoder-decoder layers and skip connections. Each layer used a 33convolution followed by a Leaky ReLU activation. The number of filters for each layer is 24;32;64;64;64;92;128for the encoder, 128 for the bot-tleneck, and 128;92;64;64;64;32;24for the decoder. We used blur-pooling [36] layers for down-sampling and bilin-ear resizing followed by a 33convolution for upsampling. 
The final output -two single channel maps -is generated by a33convolution with two filters.Parametric Diffusion Net The diffusion network h d takes the source image, alpha matte, specular map, shadow map, and the diffusion parameter t(as a constant channel) into the Diffusion Net as a 10247687tensor. The Dif-fusion Net is a U-Net similar to the previous U-Net, with 48;92;128;256;256;384;512 encoder filters, 512 bottle-neck filters, and 384;384;256;256;128;92;48decoder fil-ters. The larger filter count accounts for the additional dif-ficulty of the diffusion task. Diffusion parameter choice The diffusion parameter t indicates the amount of diffusion. While one can naively rely on specular exponents as a control parameter, we ob-served that directly using them led to poor and inconsis-tent results, as the perceptual change for the same specular convolution can be very varied for different HDR environ-ments, for instance, a map with evenly distributed lighting will hardly change, whereas a map with a point light would change greatly. We hypothesize the non-linear nature of this operation is difficult for the model to learn, and so we quan-tified a different parameter based on a measure of ‘absolute’ diffuseness. To measure the absolute diffusivity of an image, we ob-served that the degree of diffusion is related to how evenly distributed the lighting environment is, which strongly de-pends on the specific scene; e.g., if all the lighting comes from a single, bright source, we will tend to have harsh shadows and strong specular effects, but if the environment has many large area lights, the image will have soft shadows and subdued specular effects. In other words, the diffusiv-ity is related to the inequality of the lighting environment. Thus, we propose to quantify the amount of diffusion by using the Gini coefficient [8] of the lighting environment, which is designed to measure inequality. Empirically, we found that the Gini coefficient gives a normalized measure-ment of the distribution of the light in an HDR map, as seen in Figure 5, and thus we use it to control the amount of dif-8415 Figure 5. Gini coefficients Gof some HDR maps and their relit images. Similar Gini coefficients approximately yield a similar quality of lighting, allowing a consistent measure of diffusion. fusion. Mathematically, for a finite multiset XR+, where jXj=k, the Gini coefficient, G2[0;1], is computed as G=P xi;xj2Xjxi xjj 2kP xi2Xxi: (8) For a discrete HDR environment map, we compute the Gini coefficient by setting each xi2Xto be the luminance from theithsample of the HDR environment map. For instance, on a discrete equirectangular projection E(; ) where (; )2[0;][0;2), theithsample’s light contri-bution is given by E(i; i) sin(i), where sin(i)compen-sates for higher sampling density at the poles. If we indicate theithsample ofEbyEi, the coefficient is then given by G=P iP jjEisin(i) Ejsin(j)j 2kP iEisin(i)(9) wherei;jrange over all samples of the equirectangular map andkis the total number of samples in the map. Finally, as an input parameter, we re-scaled this abso-lute measure based on each training example: t= (Gt Gd)=(Gs Gd), whereGtis the Gini coefficient for the tar-get image,Gdis the Gini coefficient for the fully diffused image (diffused with specular exponent 1), and Gsis the Gini coefficient for the source image. Parameter tranges from 0 to 1, where 0 corresponds to maximally diffuse (Gt=Gd) and 1 corresponds to no diffusion (G t=Gs). 
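The Gini-based diffusion parameter above is simple to compute. The sketch below evaluates Eq. (8) with NumPy using the standard sorted-sample identity, applies the sin(θ) weighting of Eq. (9) per pixel of an equirectangular map, and notes the rescaling to t; the pixel-center convention and the small epsilon are assumptions.

```python
# Gini coefficient of an HDR environment map, following Eqs. (8)-(9) above.
import numpy as np

def gini(samples, weights=None):
    """Gini coefficient of non-negative samples (Eq. 8), optionally pre-weighted."""
    x = np.asarray(samples, dtype=np.float64).ravel()
    if weights is not None:
        x = x * np.asarray(weights, dtype=np.float64).ravel()
    x = np.sort(x)
    n = x.size
    idx = np.arange(1, n + 1)
    # Sorted-sample identity: sum_ij |x_i - x_j| = 2 * sum_i (2i - n - 1) * x_(i)
    return float(((2 * idx - n - 1) * x).sum() / (n * x.sum() + 1e-12))

def gini_equirectangular(env_luma):
    """Eq. (9): weight each sample by sin(theta) to compensate for oversampling at the poles."""
    h, w = env_luma.shape
    theta = (np.arange(h) + 0.5) / h * np.pi      # assumed pixel-center convention
    sin_w = np.tile(np.sin(theta)[:, None], (1, w))
    return gini(env_luma, sin_w)

# Diffusion parameter: t = (G_t - G_d) / (G_s - G_d), clipped to [0, 1] in practice.
```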
3.2.2 Albedo Estimation We observed that the primary source of errors in albedo es-timation in state-of-the-art approaches like [23] arises from color and material ambiguities in clothing and is exacer-bated by shadows. The albedo estimation stage tends to be the quality bottleneck in image relighting, as errors are propagated forward in such multistage pipelines. Motivated by this observation, we propose to adapt our light diffusion approach to albedo estimation (Figure 6). While the fully diffuse image (diffused with n= 1) re-moves most shading effects, the approach can be pushedfurther to estimate an image only lit by the average color of the HDRI map, i.e., a tinted albedo. Since the diffuse con-volution operation preserves the average illumination of the HDR environment map and acts as a strong smoothing op-eration, repeated convolution converges to the average color of the HDR environment map. We found that iterating our diffusion network just three times (along with end-to-end training of the iteration based network) yielded good results. An alternative formulation of this problem is to pass the fully diffuse image into a separate network which estimates this tinted albedo, and we show a comparison between these two in the supplementary material. To remove the color tint, we crop the face – which resides in the more constrained space of skin tone priors – and train a CNN to estimate the RGB tint of the environment, again supervised by light stage data. We then divide out this tint to recover the untinted albedo for the foreground. 3.3. Data Generation and Training Details To train the proposed model, we require supervised pairs of input portraits Iand output diffused images Id. Follow-ing previous work [23], we rely on a light stage [10, 20] to capture the full reflectance field of multiple subjects as well as their geometry. Our data generation stage consists of generation of images with varying levels of diffusion as well as the tinted and true albedo maps, to use as ground truth to train our model. Importantly, we also propose a synthetic shadow aug-mentation strategy to add external shadow with subsurface scattering effects that are not easily modeled in relit images generated in the light stage. We extend the method pro-posed by [37] to follow the 3D geometry of the scene, by placing a virtual cylinder around the subject with a silhou-ette mapped to the surface. We then project the silhouette over the 3D surface of the subject – reconstructed from the light stage dataset – from the strongest light in the scene followed by blurring and opacity adjustment of the result-ing projected shadow map, guided by the Gini coefficient of the environment (smaller Ginis have more blur and lower shadow opacity). The resulting shadow map is used to blend between the original image and the image after removing the brightest light direction contribution. This shadow aug-mentation step is key to effective light diffusion |
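The synthetic shadow compositing described above can be sketched as a simple blend. In the snippet below, the projected silhouette is blurred and attenuated according to the environment's Gini coefficient (smaller Gini meaning softer, fainter shadows, as stated above) and then used to interpolate between the image with the brightest light removed and the original; the specific parameter mappings and constants are assumptions, not the paper's values.

```python
# Hypothetical sketch of the shadow-augmentation blend; parameter mappings are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def composite_synthetic_shadow(img, img_no_keylight, silhouette, gini,
                               max_blur=15.0, max_opacity=0.9):
    """img, img_no_keylight: (H, W, 3) float images; silhouette: (H, W) binary shadow projection."""
    blur_sigma = (1.0 - gini) * max_blur   # more diffuse environment -> softer shadow edge (assumed mapping)
    opacity = gini * max_opacity           # more diffuse environment -> weaker shadow (assumed mapping)
    shadow = gaussian_filter(silhouette.astype(np.float32), sigma=blur_sigma) * opacity
    shadow = shadow[..., None]             # broadcast over the RGB channels
    return shadow * img_no_keylight + (1.0 - shadow) * img
```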
Feng_MaskCon_Masked_Contrastive_Learning_for_Coarse-Labelled_Dataset_CVPR_2023 | Abstract Deep learning has achieved great success in recent years with the aid of advanced neural network structures and large-scale human-annotated datasets. However, it is often costly and difficult to accurately and efficiently annotate large-scale datasets, especially for some specialized domains where fine-grained labels are required. In this setting, coarse labels are much easier to acquire as they do not require expert knowledge. In this work, we propose a contrastive learning method, called masked contrastive learning (MaskCon), to address the under-explored problem setting where we learn with a coarse-labelled dataset in order to address a finer labelling problem. More specifically, within the contrastive learning framework, for each sample our method generates soft labels with the aid of coarse labels against other samples and another augmented view of the sample in question. In contrast to self-supervised contrastive learning, where only the sample's augmentations are considered hard positives, and to supervised contrastive learning, where only samples with the same coarse labels are considered hard positives, we propose soft labels based on sample distances that are masked by the coarse labels. This allows us to utilize both inter-sample relations and coarse labels. We demonstrate that our method can obtain as special cases many existing state-of-the-art works and that it provides tighter bounds on the generalization error. Experimentally, our method achieves significant improvement over the current state-of-the-art on various datasets, including the CIFAR10, CIFAR100, ImageNet-1K, Stanford Online Products and Stanford Cars196 datasets. Code and annotations are available at https://github.com/MrChenFeng/MaskCon_CVPR2023. | 1. Introduction Supervised learning with deep neural networks has achieved great success in various computer vision tasks such as image classification, action detection and object localization. However, the success of supervised learning relies on large-scale and high-quality human-annotated datasets, whose annotations are time-consuming and labour-intensive to produce. (Figure 1: contrastive learning sample relations using MaskCon (ours) and other learning paradigms when only coarse labels are available; MaskCon's relations are closer to the fine-grained ones.) To avoid such reliance, various learning frameworks have been proposed and investigated: self-supervised learning aims to learn meaningful representations with heuristic proxy visual tasks, such as rotation prediction [16] and the more prevalent instance discrimination task, the latter being widely applied in the self-supervised contrastive learning framework; semi-supervised learning usually considers a dataset for which only a small part is annotated, where pseudo-labelling methods [24] and consistency regularization techniques [1, 29] are typically used; and learning with more accessible but noisy data, such as web-crawled data, has also received increasing attention [13, 25]. In this work, we consider an under-explored problem setting aiming at reducing the annotation effort: learning fine-grained representations with a coarsely-labelled dataset.
Specifically, we learn with a dataset that is fully labelled, albeit at a coarser granularity than we are interested in (i.e., that of the test set). Compared to fine-grained labels, coarse labels are often significantly easier to obtain, especially in some of the more specialized domains, such as the recognition and classification of medical pathology images. As a simple example, for the task of differentiation between different pets, we need a knowledgeable cat lover to distinguish between 'British short' and 'Siamese', but even a child annotator may help to discriminate between 'cat' and 'non-cat' (Fig. 1). Unfortunately, learning with a coarse-labelled dataset has been less investigated than other weakly supervised learning paradigms. Recently, Bukchi et al. [2] investigate learning with coarse labels in the few-shot setting. More closely related to us, Grafit [31] proposes a multi-task framework with a weighted combination of a self-supervised contrastive learning cost and a supervised contrastive learning cost; similarly, CoIns [37] uses both a self-supervised contrastive learning cost and a supervised cross-entropy loss. Both works combine a fully supervised learning cost (cross-entropy or contrastive) with a self-supervised contrastive loss; these works are the main ones with which we compare. Unlike them, instead of using self-supervised contrastive learning as an auxiliary task, we propose a novel learning scheme, namely Masked Contrastive Learning (MaskCon). Our method aims to learn by considering the inter-sample relations of each sample with the other samples in the dataset. Specifically, we always consider the relation to oneself as confidently positive. To estimate the relations to other samples, we derive soft labels by contrasting an augmented view of the sample in question with other samples, and we further improve them by utilizing a mask generated from the coarse labels. Our approach generates soft inter-sample relations that estimate the fine-grained inter-sample relations more accurately than the baseline methods (Fig. 1). Efficiently and effectively, our method achieves significant improvements over the state-of-the-art on various datasets, including CIFARtoy, CIFAR100, ImageNet-1K and the more challenging fine-grained datasets Stanford Online Products and Stanford Cars196. |
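One plausible reading of the soft-label construction described above is sketched below: similarities between a second augmented view and other samples are temperature-scaled, masked to the anchor's coarse class, and normalized, while the sample's own augmentation is kept as a confident positive. This is an illustrative reading rather than the paper's exact normalization or loss; the temperature tau0 and tensor names are assumptions.

```python
# Illustrative construction of masked soft targets for contrastive learning.
import torch
import torch.nn.functional as F

def maskcon_soft_labels(view2, keys, coarse_anchor, coarse_keys, tau0=0.1):
    """view2: (B, D) embeddings of a second augmented view of each anchor,
    keys: (K, D) embeddings of other samples, coarse_*: integer coarse labels."""
    view2 = F.normalize(view2, dim=-1)
    keys = F.normalize(keys, dim=-1)
    sim = view2 @ keys.t() / tau0                               # (B, K) scaled similarities
    same_coarse = coarse_anchor[:, None] == coarse_keys[None, :]
    sim = sim.masked_fill(~same_coarse, float("-inf"))          # mask out other coarse classes
    soft = torch.softmax(sim, dim=-1)                           # rows with no coarse match need special handling
    # Prepend the self-view as a confident positive, then renormalize.
    targets = torch.cat([torch.ones(soft.shape[0], 1, device=soft.device), soft], dim=1)
    return targets / targets.sum(dim=1, keepdim=True)
```

These soft targets would then replace the one-hot positives in a standard contrastive cross-entropy over the concatenated [self-view, key] logits.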
Bolkart_Instant_Multi-View_Head_Capture_Through_Learnable_Registration_CVPR_2023 | Abstract Existing methods for capturing datasets of 3D heads in dense semantic correspondence are slow and commonly ad-dress the problem in two separate steps; multi-view stereo (MVS) reconstruction followed by non-rigid registration. To simplify this process, we introduce TEMPEH (Towards Es-timation of 3D Meshes from Performances of Expressive Heads) to directly infer 3D heads in dense correspondence from calibrated multi-view images. Registering datasets of 3D scans typically requires manual parameter tuning to find the right balance between accurately fitting the scans’ sur-faces and being robust to scanning noise and outliers. In-stead, we propose to jointly register a 3D head dataset while training TEMPEH. Specifically, during training, we mini-mize a geometric loss commonly used for surface registra-tion, effectively leveraging TEMPEH as a regularizer. Our multi-view head inference builds on a volumetric feature representation that samples and fuses features from each view using camera calibration information. To account for partial occlusions and a large capture volume that enables head movements, we use view-and surface-aware feature fusion, and a spatial transformer-based head localization module, respectively. We use raw MVS scans as supervision during training, but, once trained, TEMPEH directly pre-dicts 3D heads in dense correspondence without requiringscans. Predicting one head takes about 0.3seconds with a median reconstruction error of 0.26 mm, 64% lower than the current state-of-the-art. This enables the efficient cap-ture of large datasets containing multiple people and di-verse facial motions. Code, model, and data are publicly available at https://tempeh.is.tue.mpg.de . | 1. Introduction Capturing large datasets containing 3D heads of multi-ple people with varying facial expressions and head poses is a key enabler for modeling and synthesizing realistic head avatars. Typically, building such datasets is done in two steps: unstructured 3D scans are captured with a calibrated multi-view stereo (MVS) system, followed by a non-rigid registration step to unify the mesh topology [23]. This two-stage process has major drawbacks. MVS reconstruction requires cameras with strongly overlapping views and the resulting scans frequently contain holes and noise. Reg-istering a template mesh to these scans typically involves manual parameter tuning to balance the trade-off between accurately fitting the scan’s surface and being robust to scan artifacts. Both stages are computationally expensive, each taking several minutes per scan. For professional captures, both steps are augmented with manual clean-up to enhance This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 768 the quality of the output meshes [3, 67]. Such manual edit-ing is infeasible for large-scale captures ( ≫10K scans). Instead, we advocate for a more practical setting that di-rectly predicts 3D heads in dense correspondence from cal-ibrated multi-view images, effectively bypassing the MVS step. We achieve this with TEMPEH (Towards Estimation of 3D Meshes from Performances of Expressive Heads), which quickly ( ∼0.3seconds per head on a NVIDIA A100-SXM GPU) infers accurate 3D heads ( ∼0.26mm median error) in correspondence, without manual user input. 
While several methods exist that directly recover 3D faces in correspondence from calibrated multi-view im-ages, they have high computational cost and require care-ful selection of optimization parameters per capture subject [9, 14, 29, 62]. These remain major obstacles for large-scale data captures. A few learning-based methods directly regress parameters of a 3D morphable model (3DMM) [80] or iteratively refine 3DMM meshes from multi-view images [6]. As shown by Li et al. [50], this 3DMM dependency constrains the quality and expressiveness of these methods. The recent ToFu [50] method goes beyond these 3DMM-based approaches with a volumetric feature sampling framework to infer face meshes from calibrated multi-view images. While demonstrating high-quality predic-tions, ToFu has several limitations. (a) The training is fully-supervised with paired data of multi-view images and high-quality registered meshes; creating such data requires ex-tensive manual input. (b) Only the face region is predicted; ears, neck, and the back of the head are manually completed in an additional fitting step. (c) Self-occlusions in scanner setups designed to capture the entire head result in mediocre predictions due to the na ¨ıve feature aggregation strategy that ignores the surface visibility. (d) Only a small capture vol-ume is supported and increasing the size of the capture vol-ume to cover head movements reduces the accuracy. TEMPEH adapts ToFu’s volumetric feature sampling framework but goes beyond it in several ways: (a) The train-ing requires no manually curated data as we jointly optimize TEMPEH’s weights and register the raw scans. Obtaining the clean, registered meshes required by ToFu is a key prac-tical hurdle. TEMPEH learns from raw scans and is robust to their noise and missing data. This is done by directly minimizing the point-to-surface distance between scans and predicted meshes. (b) At run time the entire head is inferred from images alone and includes the ears, neck, and back of the head. (c) The feature aggregation accounts for surface visibility. (d) A spatial transformer module [42] localizes the head in the feature volume to only sample regions rele-vant for prediction, improving the accuracy. In summary, TEMPEH is the first framework to accu-rately capture the entire head from multi-view images at near interactive rates. During training, TEMPEH jointly learns to predict 3D heads from multi-view images, and reg-isters unstructured scans. Once trained, it only requires cal-ibrated camera input and it generalizes to diverse extreme expressions and head poses for subjects unseen during train-ing (see Fig. 1). TEMPEH is trained and evaluated on a dy-namic 3D head dataset of 95 subjects, each performing 28 facial motions, totalling about 600K 3D head meshes. The registered dataset meshes, raw images, camera calibrations, trained model, and training code are publicly available. |
Farina_Quantum_Multi-Model_Fitting_CVPR_2023 | Abstract Geometric model fitting is a challenging but fundamental computer vision problem. Recently, quantum optimization has been shown to enhance robust fitting for the case of a single model, while leaving the question of multi-model fitting open. In response to this challenge, this paper shows that the latter case can significantly benefit from quantum hardware and proposes the first quantum approach to multi-model fitting (MMF). We formulate MMF as a problem that can be efficiently sampled by modern adiabatic quantum computers without the relaxation of the objective function. We also propose an iterative and decomposed version of our method, which supports real-world-sized problems. The experimental evaluation demonstrates promising results on a variety of datasets. The source code is available at: https://github.com/FarinaMatteo/qmmf. | 1. Introduction Since the data volumes that AI technologies are required to process are continuously growing every year, the pressure to devise more powerful hardware solutions is also increasing. To face this challenge, a promising direction currently pursued both in academic research labs and in leading companies is to exploit the potential of quantum computing. Such a paradigm leverages quantum mechanical effects for computation and optimization, accelerating several important problems such as prime number factorization, database search [36] and combinatorial optimization [33]. The reason for such acceleration is that quantum computers leverage the quantum parallelism of qubits, i.e., the property that a quantum system can be in a superposition of multiple (exponentially many) states and perform calculations simultaneously on all of them. Among the two quantum computing models, i.e., gate-based and Adiabatic Quantum Computers (AQCs), the latter recently gained attention in the computer vision community thanks to advances in experimental hardware realizations [2, 6, 17, 34, 41, 51, 53]. Figure 1. Left: Differences between our method and HQC-RF [17]. While HQC-RF considers only a single model and is tested on quantum hardware with synthetic data, QUMF is also evaluated on real data on real quantum hardware. Although devised for multiple models, our method supports a single model likewise. Right: Qualitative results of QUMF on motion segmentation on the AdelaideRMF dataset [49]. At present, AQCs provide sufficient resources, in terms of the number of qubits, qubit connectivity and admissible problem sizes that they can tackle [7], to be applied to a wide range of problems in computer vision. The AQC model is based on the adiabatic theorem of quantum mechanics [8] and designed for combinatorial problems (including NP-hard ones) that are notoriously difficult to solve on classical hardware. Modern AQCs operate by optimizing objectives in the quadratic unconstrained binary optimization (QUBO) form (see Sec. 2). However, many relevant tasks in computer vision cannot be trivially expressed in this form. Hence, currently, two prominent research questions in the field are: 1) which problems in computer vision could benefit from an AQC?, and 2) how can these problems be mapped to a QUBO form in order to use an AQC? Several efforts have been recently undertaken to bring classical vision problems in this direction.
Notable examples are works on graph matching [40, 41], multi-image matching [6], point-set registration [20, 34], object detection [26], multi-object tracking [53], motion segmentation [2] and robust fitting [17]. Focusing on geometric model fitting, Doan et al. [17] proposed an iterative consensus maximization approach to robustly fit a single geometric model to noisy data. This work was the first to demonstrate the advantages of quantum hardware in robust single-model fitting with error bounds, which is an important and challenging problem with many applications in computer vision (e.g., template recognition in a point set). The authors proposed to solve a series of linear programs on an AQC and demonstrated promising results on synthetic data. They also showed experiments for fundamental matrix estimation and point triangulation using simulated annealing (SA) [25]. SA is a classical global optimization approach that, in contrast to AQCs, can optimize arbitrary objectives and is a frequent choice when evaluating quantum approaches (see Sec. 2). HQC-RF [17] takes advantage of the hypergraph formalism to robustly fit a single model. It is not straightforward to extend it to the scenario where multiple models are required to explain the data. Multi-model fitting (MMF) is a relevant problem in many applications, such as 3D reconstruction, where it is employed to fit multiple rigid moving objects to initialize multi-body Structure from Motion [3, 37], or to produce intermediate interpretations of reconstructed 3D point clouds by fitting geometric primitives [32]. Other scenarios include face clustering, body-pose estimation, augmented reality and image stitching, to name a few. This paper proposes QUMF, i.e., the first quantum MMF approach. We propose to leverage the advantages of AQCs in optimizing combinatorial QUBO objectives to explain the data with multiple and disjoint geometric models. Importantly, QUMF does not assume the number of disjoint models to be known in advance. Note that the potential benefit from AQCs for MMF is higher than in the single-model case: when considering multiple models, the search space scales exponentially with their number, making the combinatorial nature of the problem even more relevant. Furthermore, we show that QUMF can be easily applied to single-model fitting even though it is not explicitly designed for this task. We perform an extensive experimental evaluation on quantum hardware with many large-scale real datasets and obtain competitive results with respect to both classical and quantum methods. Figure 1 depicts a visual comparison between HQC-RF [17] and QUMF. Contributions. In summary, the primary technical contributions of this paper are the following: • We bring multi-model fitting, a fundamental computer vision problem with a combinatorial nature, into AQCs; • We introduce QUMF, demonstrating that it can be successfully used both for single and multiple models; • We propose DEQUMF, a decomposition policy allowing our method to scale to large-scale problems, overcoming the limitations of modern quantum hardware. The following section provides the background on AQCs and how to use them to solve QUBO problems. After introducing QUMF and DEQUMF in Sec. 3, we discuss related work in Sec. 4.
Experiments are given in Sec. 5. Limitations and Conclusion are reported in Sec. 6 and 7, respectively. 2. Background The QUBO formulation in a binary search space is an optimization problem of the following form: min_{y ∈ B^d} y^T Q y + s^T y, (1) where B^d denotes the set of binary vectors of length d, Q ∈ R^{d×d} is a real symmetric matrix and, in a QUBO with d variables, their linear coefficients are packed into s ∈ R^d. When the problem is subject to linear constraints of the form Ay = b, a common approach is to relax such constraints and reformulate the QUBO as: min_{y ∈ B^d} y^T Q y + s^T y + λ ||Ay − b||_2^2, (2) where λ must be tuned to balance the contribution of the constraint term. Unravelling such a term leads to: min_{y ∈ B^d} y^T Q̃ y + s̃^T y, (3) where the following simple substitutions are implied: Q̃ = Q + λ A^T A, s̃ = s − 2λ A^T b. (4) For the proof, the reader can refer to, e.g., Birdal et al. [6]. Adiabatic Quantum Computers (AQCs) are capable of solving QUBO problems. An AQC is organized as a fixed and architecture-dependent undirected graph, whose nodes correspond to physical qubits and whose edges correspond to couplers (defining the connectivity pattern among qubits) [14]. Such a graph structure, in principle, can be mapped to a QUBO problem (1) as follows: each physical qubit represents an element of the solution (i.e., of the binary vector y), while each coupler models an element of Q (i.e., the coupler between the physical qubits i and j maps to entry Q_ij). An additional weight term is assigned to each physical qubit, modelling the linear coefficients in s. Because AQC graphs are not fully connected, mapping an arbitrary QUBO to quantum hardware requires additional steps. This concept is easier to visualize if the notion of logical graph is introduced: the QUBO itself can also be represented with a graph, having d nodes (called logical qubits; hereafter we will sometimes omit the term "logical" or "physical" as it will be clear from the context which type of qubits are being involved), corresponding to the d entries of y, and edges corresponding to the non-zero entries of Q. The logical graph undergoes minor embedding, where an algorithm such as [9] maps the logical graph to the physical one. During this process, a single logical qubit can be mapped to a set of physical qubits, i.e., a chain. All the physical qubits in a chain must be constrained to share the same state during annealing [38]. The magnitude of such a constraint is called chain strength and the number of physical qubits in the chain is called chain length. An equivalent QUBO that can be directly embedded on the physical graph is obtained as the output of minor embedding. Then, optimization of the combinatorial objective takes place: we say that the AQC undergoes an annealing process. In this phase, the physical system transitions from a high-energy initial state to a low-energy final state, representing the solution of the mapped QUBO problem, according to the laws of quantum mechanics [18, 33]. At the end of the annealing, a binary assignment is produced for each physical qubit. A final postprocessing step is needed to go from physical qubits back to logical ones, thus obtaining a candidate solution for the original QUBO formulation (prior to minor embedding). Due to noise during the annealing process, the obtained solution may not correspond to the global optimum of the QUBO problem. In other terms, the annealing is inherently probabilistic and has to be repeated multiple times.
The final solution is obtained as the one that achieves the minimum value of the objective function (lowest energy). Performing multiple annealings of the same QUBO form is called sampling. It is common practice [2, 17, 41, 53] to test the QUBO objective on classical computers using simulated annealing (SA) [25], a probabilistic optimization technique able to operate in the binary space. |
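As a concrete illustration of the background above, the following minimal sketch builds the penalty-relaxed QUBO of Eqs. (2)-(4) and samples it with a plain simulated-annealing loop on a classical computer; this is not the authors' QUMF code, and the cooling schedule, penalty weight, and sweep count are illustrative assumptions.

import numpy as np

def relax_qubo(Q, s, A, b, lam=10.0):
    # Eq. (4): fold the linear constraint Ay = b into the objective.
    Q_t = Q + lam * A.T @ A
    s_t = s - 2.0 * lam * A.T @ b
    return Q_t, s_t

def energy(y, Q_t, s_t):
    return y @ Q_t @ y + s_t @ y          # Eq. (3) objective value

def simulated_annealing(Q_t, s_t, n_sweeps=2000, T0=5.0, seed=0):
    rng = np.random.default_rng(seed)
    d = len(s_t)
    y = rng.integers(0, 2, size=d).astype(float)
    best, best_e = y.copy(), energy(y, Q_t, s_t)
    for t in range(n_sweeps):
        T = T0 * (1.0 - t / n_sweeps) + 1e-3        # linear cooling
        i = rng.integers(d)
        y_new = y.copy(); y_new[i] = 1.0 - y_new[i]  # flip one bit
        dE = energy(y_new, Q_t, s_t) - energy(y, Q_t, s_t)
        if dE < 0 or rng.random() < np.exp(-dE / T):
            y = y_new
            e = energy(y, Q_t, s_t)
            if e < best_e:
                best, best_e = y.copy(), e
    return best, best_e

Sampling, as described above, would simply amount to calling simulated_annealing several times with different seeds and keeping the lowest-energy assignment.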
Jeong_WinCLIP_Zero-Few-Shot_Anomaly_Classification_and_Segmentation_CVPR_2023 | Abstract Visual anomaly classification and segmentation are vital for automating industrial quality inspection. The focus of prior research in the field has been on training custom models for each quality inspection task, which requires task-specific images and annotation. In this paper we move away from this regime, addressing zero-shot and few-normal-shot anomaly classification and segmentation. Recently CLIP, a vision-language model, has shown revolutionary generality with competitive zero-/few-shot performance in comparison to full supervision. But CLIP falls short on anomaly classification and segmentation tasks. Hence, we propose window-based CLIP (WinCLIP) with (1) a compositional ensemble on state words and prompt templates and (2) efficient extraction and aggregation of window/patch/image-level features aligned with text. We also propose its few-normal-shot extension WinCLIP+, which uses complementary information from normal images. On MVTec-AD (and VisA), without further tuning, WinCLIP achieves 91.8%/85.1% (78.1%/79.6%) AUROC in zero-shot anomaly classification and segmentation, while WinCLIP+ achieves 93.1%/95.2% (83.8%/96.4%) in 1-normal-shot, surpassing the state-of-the-art by large margins. | 1. Introduction Visual anomaly classification (AC) and segmentation (AS) classify and localize defects in industrial manufacturing, respectively, predicting an image or a pixel as normal or anomalous. Visual inspection is a long-tail problem. The objects and their defects vary widely in color, texture, and size across a wide range of industrial domains, including aerospace, automobile, pharmaceutical, and electronics. These result in two main challenges in the field. First, defects are rare, with a wide range of variations, leading to a lack of representative anomaly samples in the training data. Figure 1. Language guided zero-/one-shot anomaly segmentation from WinCLIP/WinCLIP+. Best viewed in color and zoomed in. Consequently, existing works have mainly focused on one-class or unsupervised anomaly detection [2, 7, 8, 20, 29, 31, 51, 57], which only requires normal images. These methods typically fit a model to the normal images and treat any deviations from it as anomalous. When hundreds or thousands of normal images are available, many methods achieve high accuracy on public benchmarks [3, 8, 31]. But in the few-normal-shot regime, there is still room to improve performance [14, 32, 39, 57], particularly in comparison with the fully-supervised upper bound. Second, prior work has focused on training a bespoke model for each visual inspection task, which is not scalable across the long tail of tasks. This motivates our interest in zero-shot anomaly classification and segmentation. But many defects are defined with respect to a normal image. For example, a missing component on a circuit board is most easily defined with respect to a normal circuit board with all components present. For such cases, at least a few normal images are needed. So in addition to the zero-shot case, we also consider the case of few-normal-shot anomaly classification and segmentation (few-shot and few-normal-shot are used interchangeably in our case). Since only a few normal images are available, there is no segmentation supervision for localizing anomalies, making this a challenging problem across the long tail of tasks.
Figure 2. Motivation of language guided visual inspection. (a) Language helps describe and clarify normality and anomaly; (b) aggregating multi-scale features helps identify local defects; (c) normal images provide rich referencing content to visually define normality. Vision-language models [1, 18, 27, 36] have shown promise in zero-shot classification tasks. Large-scale training with vision-language annotated pairs learns expressive representations that capture broad concepts. Without additional fine-tuning, text prompts can then be used to extract knowledge from such models for zero-/few-shot transfer to downstream tasks including image classification [27], object detection [11] and segmentation [45]. Since CLIP is one of the few open-source vision-language models, these works build on top of CLIP, benefiting from its generalization ability, and showing competitive low-shot performance on both seen and unseen objects compared to full supervision. In this paper, we focus on the zero-shot and few-normal-shot (1 to 4) regimes, which have received limited attention [14, 32, 39]. Our hypothesis is that language is perhaps even more important for zero-shot/few-normal-shot anomaly classification and segmentation. This hypothesis stems from multiple observations. First, "normal" and "anomalous" are states [17] of an object that are context-dependent, and language helps clarify these states. For example, "a hole in a cloth" may be desirable or undesirable depending upon whether distressed fashion or regular fashion clothes are being manufactured. Language can bring such context and specificity to the broad "normal" and "anomalous" states. Second, language can provide additional information to distinguish defects from acceptable deviations from normality. For example, in Figure 2(a), language provides information on the soldering defect, while minor scratches/stains on the background are acceptable. In spite of these advantages, we are not aware of prior work leveraging vision-language models for anomaly classification and segmentation. In this work, with the pre-trained CLIP as a base model, we show and verify our hypothesis that language aids zero-/few-shot anomaly classification/segmentation. Since CLIP is one of the few open-source vision-language models, we build on top of it. Previously, CLIP-based methods have been applied for zero-shot classification [27]. CLIP can be applied in the same way to anomaly classification, using text prompts for "normal" and "anomalous" as classes. However, we find naïve prompts are not effective (see Table 3). So we improve the naïve baseline with a state-level word ensemble to better describe normal and anomalous states. Another challenge is that CLIP is trained to enforce cross-modal alignment only on the global embeddings of image and text. However, for anomaly segmentation we seek pixel-level classification, and it is non-trivial to extract dense visual features aligned with language for zero-shot anomaly segmentation. Therefore, we propose a new Window-based CLIP (WinCLIP), which extracts and aggregates multi-scale features while ensuring vision-language alignment. The multiple scales used are illustrated in Figure 2(b).
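To make the prompt-ensemble idea above concrete, here is a minimal sketch of image-level zero-shot anomaly scoring with an ensemble of state words and templates. It assumes the OpenAI clip package interface (clip.load / clip.tokenize / encode_image / encode_text); the word lists are abbreviated examples, not the paper's full ensemble, and the scaling factor mirrors CLIP's usual logit scale.

import torch
import clip  # any CLIP-like encoder with encode_image/encode_text would do

# Abbreviated examples of state words and templates (assumed, not exhaustive).
NORMAL_STATES = ["flawless {}", "perfect {}", "{} without defect"]
ANOMALY_STATES = ["damaged {}", "{} with a defect", "{} with a flaw"]
TEMPLATES = ["a photo of a {}.", "a close-up photo of a {}."]

def ensemble_text_embedding(model, phrases, device):
    tokens = clip.tokenize([t.format(p) for p in phrases for t in TEMPLATES]).to(device)
    with torch.no_grad():
        emb = model.encode_text(tokens).float()
    emb = emb / emb.norm(dim=-1, keepdim=True)
    mean = emb.mean(dim=0)                       # average the ensemble ...
    return mean / mean.norm()                    # ... and renormalise

def anomaly_score(model, preprocess, image, obj="metal nut", device="cpu"):
    # Usage: model, preprocess = clip.load("ViT-B/16", device=device)
    x = preprocess(image).unsqueeze(0).to(device)
    with torch.no_grad():
        img = model.encode_image(x).float()
    img = img / img.norm(dim=-1, keepdim=True)
    t_norm = ensemble_text_embedding(model, [s.format(obj) for s in NORMAL_STATES], device)
    t_anom = ensemble_text_embedding(model, [s.format(obj) for s in ANOMALY_STATES], device)
    logits = 100.0 * img @ torch.stack([t_norm, t_anom]).T
    return logits.softmax(dim=-1)[0, 1].item()   # probability of "anomalous"

Extending the same scoring from the global image embedding to window/patch-level features is what turns this classification score into a segmentation map.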
To leverage normal images available in the few-normal-shot setting, we introduce WinCLIP+, which aggregates complementary information from the language-driven WinCLIP and visual cues from the normal reference images, such as the one shown in Figure 2(c). We emphasize that our zero-shot models do not require any tuning for individual cases, and the few-normal-only setup does not use any segmentation annotation, facilitating applicability across a broad range of visual inspection tasks. As a sample, Figure 1 illustrates WinCLIP and WinCLIP+ qualitative results for a few cases. To summarize, our main contributions are: • We introduce a compositional prompt ensemble, which improves zero-shot anomaly classification over the naïve CLIP-based zero-shot classification. • Using the pre-trained CLIP model, we propose WinCLIP, which efficiently extracts and aggregates multi-scale spatial features aligned with language for zero-shot anomaly segmentation. As far as we know, we are the first to explore language-guided zero-shot anomaly classification and segmentation. • We propose a simple reference association method, which is applied to multi-scale feature maps for image-based few-shot anomaly segmentation. WinCLIP+ combines the language-guided and vision-only methods for few-normal-shot anomaly recognition. • We show via extensive experiments on the MVTec-AD and VisA benchmarks that our proposed methods WinCLIP/WinCLIP+ outperform the state-of-the-art methods in zero-/few-shot anomaly classification and segmentation with large margins. |
Choi_Adversarial_Normalization_I_Can_Visualize_Everything_ICE_CVPR_2023 | Abstract Vision transformers use [CLS] tokens to predict image classes. Their explainability visualization has been studied using relevant information from [CLS] tokens or focusing on attention scores during self-attention. Such visualization, however, is challenging because of the dependence of the structure of a vision transformer on skip connections and attention operators, the instability of non-linearities in the learning process, and the limited reflection of self-attention scores on relevance. We argue that the output vectors for each input patch token in a vision transformer retain the image information of each patch location, which can facilitate the prediction of an image class. In this paper, we propose ICE (Adversarial Normalization: I Can visualize Everything), a novel method that enables a model to directly predict a class for each patch in an image, thus advancing the effective visualization of the explainability of a vision transformer. Our method distinguishes background from foreground regions by predicting background classes for patches that do not determine image classes. We used the DeiT-S model, the most representative model employed in studies on the explainability visualization of vision transformers. On the ImageNet-Segmentation dataset, ICE outperformed all explainability visualization methods for four cases depending on the model size. We also conducted quantitative and qualitative analyses on the tasks of weakly-supervised object localization and unsupervised object discovery. On the CUB-200-2011 and PASCALVOC07/12 datasets, ICE achieved comparable performance to the state-of-the-art methods. We incorporated ICE into the encoder of DeiT-S and improved efficiency by 44.01% on the ImageNet dataset over that achieved by the original DeiT-S model. We showed accuracy and efficiency comparable to those of EViT, the state-of-the-art pruning model, demonstrating the effectiveness of ICE. The code is available at https://github.com/Hanyang-HCC-Lab/ICE. | 1. Introduction The emergence of vision transformers in the field of computer vision has driven improvements in model performance [2, 5]. Unlike a CNN model, a vision transformer learns the association between image patches and classifies images using a [CLS] token. A CNN model and a vision transformer have structural differences that lead to differences in explainability visualization approaches. A representative approach is GradCAM, which demonstrates the explainability of CNN models by reflecting pixel-level importance using the feature maps and gradients of the models. However, it is somewhat difficult to effectively apply GradCAM to vision transformers because the structural characteristics of vision transformers pose several challenges, such as skip connections, dependency on attention operators, and unstable learning due to non-linearities. To overcome these challenges, previous research mainly used attention score information between [CLS] tokens and other patches to discriminate patches with a significant impact on learning and visualizing the explainability of vision transformers [2, 35].
Later studies have evaluated the degree to which each attention head contributes to performance [36] or integrated the relevance and attention scores in layers through the proposal of a relevance propagation rule [3]. Most recently, the optimization of relevance maps has improved the explainability of a vision transformer by assigning a lower relevance to the background region of an image, whereas high relevance is placed on the foreground region [4]. Despite the advantage of this optimization, challenges to explainability visualization for vision transformers remain, given their structural characteristics [3, 4]. We note that the output embedding vectors for each input patch token in a vision transformer retain the image information of each patch location, and these vectors can help predict image classes. Based on this motivation, in this paper, we propose ICE (I Can visualize Everything), a novel method that uses the output embedding vectors of a vision transformer for each patch token, except for [CLS] tokens, in visualizing explainability. ICE initially assumes that the class of all patches is a background and gradually learns the direction in which the class of each patch in an image is predicted. With this approach, we propose a loss function for adversarial normalization that combines background and classification losses for each patch token. ICE predicts a class for each patch in a foreground region of an image where the object of the class is likely to exist and classifies other regions as background. To evaluate the explainability visualization performance of ICE, we mainly used DeiT-S [33], pre-trained with ImageNet [25], the most representatively adopted model in previous studies. On the ImageNet-Segmentation [14] dataset, ICE (with DeiT-S) achieved improvements of 4.05% and 3.94% in pixel-wise accuracy and mean intersection over union (mean IoU), respectively, compared with state-of-the-art methods. To verify the scalability and robustness of ICE, we additionally considered ViT AugReg (AR) [31] and evaluated ICE for four cases depending on the model size (i.e., Small and Tiny). Through qualitative analyses, we showed that ICE was good at predicting not only the class of a single object but also the same class of multiple objects in an image. We found that other methods failed to segment objects, especially in multi-object conditions. We further evaluated the foreground and background separation performance of ICE on unsupervised semantic segmentation, weakly supervised object localization, and unsupervised object discovery tasks using the PascalVOC07/12 [11] validation sets and the CUB-200-2011 [37] dataset. As a result, ICE (with DeiT-S) achieved comparable or superior performance to the existing self-supervised learning-based methods (i.e., DINO [7], DINO-based LOST [8], and DINO-based TokenCut [9]). We found that ICE could distinguish between background and foreground regions despite the presence of multiple objects of different sizes and classes that were not learned in the images of PascalVOC07/12 (Figure 1). Regarding our inference-efficiency experiment, we incorporated ICE into the encoder of DeiT-S and achieved an improved efficiency of 44.01% on ImageNet [25] compared with the original DeiT-S, while maintaining comparable accuracy. ICE also achieved accuracy and efficiency comparable to that of EViT, the state-of-the-art pruning method [20]. Our contributions are as follows.
• We propose ICE, which can be applied to vision transformers based on the notions of patch-wise classification and adversarial normalization. DeiT-S models with ICE and ICE-f improve class-specific explainability visualization performance (Section 4.2). • We show that ICE significantly improves foreground and background separation over the original DeiT-S. Even without segmentation or object location labels, ICE achieves comparable or superior performance to existing self-supervised learning methods (Sections 4.3 and 4.4). • We demonstrate that ICE is effective in background patch selection: a DeiT-S model that incorporates the ICE capability into its encoder shows efficiency and accuracy comparable to EViT (Section 4.5). With the experimental results as grounding, we discuss the scalability of our methodology in terms of improving the efficiency of a vision transformer-based model. |
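A rough reading of the adversarial-normalization objective described above can be sketched as a weighted sum of two patch-wise cross-entropy terms; the weighting factor, the extra background class index, and the tensor shapes are hypothetical illustrations rather than the authors' implementation.

import torch
import torch.nn.functional as F

def ice_style_loss(patch_logits, image_label, bg_index, alpha=0.5):
    """patch_logits: (N, C+1) class scores for N patch tokens, where index
    bg_index is an added 'background' class; image_label: scalar image class."""
    n = patch_logits.size(0)
    bg_targets = torch.full((n,), bg_index, dtype=torch.long)
    cls_targets = torch.full((n,), image_label, dtype=torch.long)
    # Background loss: pushes every patch toward the background class ...
    loss_bg = F.cross_entropy(patch_logits, bg_targets)
    # ... while the classification loss pulls patches toward the image class;
    # balancing the two lets foreground patches settle on the image class and
    # the remaining patches settle on background.
    loss_cls = F.cross_entropy(patch_logits, cls_targets)
    return alpha * loss_bg + (1 - alpha) * loss_cls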
Han_Reinforcement_Learning-Based_Black-Box_Model_Inversion_Attacks_CVPR_2023 | Abstract Model inversion attacks are a type of privacy attack that reconstructs private data used to train a machine learning model, solely by accessing the model. Recently, white-box model inversion attacks leveraging Generative Adversarial Networks (GANs) to distill knowledge from public datasets have been receiving great attention because of their excel-lent attack performance. On the other hand, current black-box model inversion attacks that utilize GANs suffer from issues such as being unable to guarantee the completion of the attack process within a predetermined number of query accesses or achieve the same level of performance as white-box attacks. To overcome these limitations, we propose a reinforcement learning-based black-box model inversion at-tack. We formulate the latent space search as a Markov De-cision Process (MDP) problem and solve it with reinforce-ment learning. Our method utilizes the confidence scores of the generated images to provide rewards to an agent. Fi-nally, the private data can be reconstructed using the latent vectors found by the agent trained in the MDP . The exper-iment results on various datasets and models demonstrate that our attack successfully recovers the private informa-tion of the target model by achieving state-of-the-art attack performance. We emphasize the importance of studies on privacy-preserving machine learning by proposing a more advanced black-box model inversion attack. | 1. Introduction With the rapid development of artificial intelligence, deep learning applications are emerging in various fields such as computer vision, healthcare, autonomous driving, and natural language processing. As the number of cases requiring private data to train the deep learning models in-creases, the concern of private data leakage including sensi-tive personal information is rising. In particular, studies on privacy attacks [21] show that personal information can be extracted from the trained models by malicious users. One of the most representative privacy attacks on machine learn-ing models is a model inversion attack, which reconstructsthe training data of a target model with only access to the model. The model inversion attacks are divided into three categories, 1) white-box attacks, 2) black-box attacks, and 3) label-only attacks, depending on the amount of informa-tion of the target model. The white-box attacks can access all parameters of the model. The black-box attacks can ac-cess soft inference results consisting of confidence scores, and the label-only attacks only can access inference results in hard label forms. The white-box model inversion attacks [5, 25, 27] have succeeded in restoring high-quality private data including personal information by using Generative Adversarial Net-works (GANs) [10]. First, they train the GANs on sepa-rate public data to learn the general prior of private data. Then benefiting from the accessibility of the parameters of the trained white-box models, they search and find latent vectors that represent data of specific labels with gradient-based optimization methods. However, these methods can-not be applied to machine learning services such as Ama-zon Rekognition [1] where the parameters of the model are protected. To reconstruct private data from such ser-vices, studies on black-box and label-only model inversion attacks are required. 
Unlike the white-box attacks, these attacks require methods that can explore the latent space of the GANs in order to utilize them, as gradient-based optimizations are not possible. The recently proposed Model Inversion for Deep Learning Network (MIRROR) [2] uses a genetic algorithm to search the latent space with confidence scores obtained from a black-box target model. In addition, the Boundary-Repelling Model Inversion attack (BREP-MI) [14] has achieved success in the label-only setting by using a decision-based zeroth-order optimization algorithm for latent space search. Despite these attempts, each method has a significant issue. BREP-MI starts the process of latent space search from the first latent vector that generates an image classified as the target class. This does not guarantee how many query accesses will be required until the first latent vector is found by random sampling, and in the worst case, it may not be possible to start the search process for some target classes. In the case of MIRROR, it performs worse than the label-only attack BREP-MI, despite its use of confidence scores for the attack. Therefore, we propose a new approach, the Reinforcement Learning-based Black-box Model Inversion attack (RLB-MI), as a solution that is free from the aforementioned problems. We integrate reinforcement learning to obtain useful information for latent space exploration from the confidence scores. More specifically, we formulate the exploration of the latent space in the GAN as a problem in Markov Decision Processes (MDP). Then, we provide the agent with rewards based on the confidence scores of the generated images, and use update steps with the replay memory to enable the agent to approximate the environment, including the latent space. Actions selected by the agent based on this information can navigate latent vectors more effectively than existing methods. Finally, we can reconstruct private data through the GAN from the latent vectors. We experiment with our attack on various datasets and models. The attack performance is compared with various model inversion attacks in the three categories. The results demonstrate that the proposed attack can successfully recover meaningful information about private data by outperforming all other attacks. |
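A minimal sketch of the reward signal implied above, i.e., scoring a candidate latent vector by the target model's confidence for the attacked class; the generator/classifier interfaces, the log-confidence form of the reward, and the agent loop in the comments are illustrative assumptions, not the RLB-MI implementation.

import torch

@torch.no_grad()
def latent_reward(z, generator, target_model, target_class):
    """z: (1, latent_dim) latent vector proposed by the agent (its action)."""
    x = generator(z)                          # candidate reconstructed image
    probs = target_model(x).softmax(dim=-1)   # black-box soft inference result
    conf = probs[0, target_class]
    # Higher confidence for the target class -> higher reward for the agent.
    return torch.log(conf + 1e-12).item()

# One environment step for an off-policy agent (e.g., a DDPG/SAC-style setup):
#   a_t = agent.act(s_t); z = a_t; r_t = latent_reward(z, G, f, c)
#   replay_buffer.add(s_t, a_t, r_t, s_next); agent.update(replay_buffer)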
Chen_Learning_a_Deep_Color_Difference_Metric_for_Photographic_Images_CVPR_2023 | Abstract Most well-established and widely used color difference (CD) metrics are handcrafted and subject-calibrated against uniformly colored patches, which do not generalize well to photographic images characterized by natural scene complexities. Constructing CD formulae for photographic images is still an active research topic in the imaging/illumination, vision science, and color science communities. In this paper, we aim to learn a deep CD metric for photographic images with four desirable properties. First, it well aligns with the observations in vision science that color and form are linked inextricably in visual cortical processing. Second, it is a proper metric in the mathematical sense. Third, it computes accurate CDs between photographic images differing mainly in color appearances. Fourth, it is robust to mild geometric distortions (e.g., translation or due to parallax), which are often present in photographic images of the same scene captured by different digital cameras. We show that all four properties can be satisfied at once by learning a multi-scale autoregressive normalizing flow for feature transform, followed by the Euclidean distance, which is linearly proportional to the human perceptual CD. Quantitative and qualitative experiments on the large-scale SPCD dataset demonstrate the promise of the learned CD metric. Source code is available at https://github.com/haoychen3/CD-Flow. | 1. Introduction For a long time in the vision science community, the modular and segregated view of cortical color processing predominated [46]: the visual perception/processing of color-related quantities is separate from and in parallel with the perception/processing of form (i.e., object shape and structure), motion direction, and depth order in natural scenes. As a result, vision scientists preferred to investigate color perception under minimal conditions on form [20, 46], for example, using uniformly colored patches. The idea that color as a visual sensation can be analyzed separately had a profound impact on the development of computational formulae for color difference (CD) assessment. Till now, the most well-established and widely used CD metrics are primarily built upon the three-dimensional spatially-isotropic CIELAB coordinate system [32], recommended by the International Commission on Illumination (abbreviated as CIE from its French name Commission Internationale de l'Éclairage) in 1976. However, the uniformity of the CIELAB space (a system is perceptually uniform if a small perturbation to a component value is approximately equally perceptible across the range of that value [43]) is not as ideal as intended [28], even for uniformly colored patches. Thus, more complex and parametric formulae are proposed to rectify different aspects of perceptual non-uniformity. Representative methods include JPC79 [33], CMC(l:c) [9] (l and c are two multiplicative parameters in the model to be fitted), BFD(l:c) [31], CIE94 [34], and CIEDE2000 [29], in which the parameters are calibrated by fitting the human perceptual CD measurements of uniformly colored patches. A naïve application of these metrics to photographic images is to compute the mean of the CDs between co-located pixels, which has been empirically shown to correlate poorly to human perception of CDs [38].
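For reference, the naïve per-pixel baseline criticized above looks roughly like the sketch below: the spatial mean of per-pixel color differences between two aligned images already converted to CIELAB. The simpler CIE76 ΔE*ab formula is used here instead of CIEDE2000 purely to keep the sketch short.

import numpy as np

def mean_delta_e76(lab_a, lab_b):
    """lab_a, lab_b: (H, W, 3) arrays holding L*, a*, b* values of two
    aligned photographs. Returns the spatial mean of per-pixel CIE76 CDs."""
    diff = lab_a.astype(np.float64) - lab_b.astype(np.float64)
    delta_e = np.sqrt(np.sum(diff ** 2, axis=-1))  # Euclidean distance in CIELAB
    return float(delta_e.mean())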
Back to the vision science community, with more supporting evidence from psychophysical and perceptual studies [2, 6, 27, 47], vision scientists have gradually come to agree on an alternative and more persuasive view of color perception: color and form (and motion) are inextricably interdependent as a unitary process of perceptual organization [19, 46]. Even the primary visual cortex (i.e., V1) plays a significant role in color perception through two types of color-sensitive neurons: single-opponent and double-opponent cells. The single-opponent cells are sensitive to large areas of color, while the double-opponent cells respond to color patterns, textures, and boundaries [24, 46]. At later stages, color is transformed to more complex and abstract features, which represent the integral properties of objects and remain consistent against changes of the environmental illumination [17, 37]. Inspired by these scientific findings, researchers and engineers began to take spatial context (i.e., local surrounding regions) into account when designing CD formulae. Representative strategies include low-pass spatial filtering [57], histogramming [18], patch-based comparison [49], and texture-based segmentation [38]. Most recently, Wang et al. [51] established the largest photographic image dataset, SPCD, for perceptual CD assessment. They further trained a lightweight deep neural network (DNN) for CD assessment of photographic images in a data-driven fashion, as a generalization of several existing CD metrics built on the CIE colorimetry. Nonetheless, the learned formula may not be a proper metric, due to its reliance on a possibly surjective mapping for feature transform. In this work, we further pursue the data-driven approach. We aim to learn a deep CD metric for photographic images with four desirable properties. • It is conceptually inspired by color perception in the visual cortex. The design of our approach should respect the view that color and form interact inextricably through all stages of visual cortical processing. • It is a proper metric that satisfies non-negativity, symmetry, identity of indiscernibles, and the triangle inequality. Such a design has been proven useful for perceptual optimization of image processing systems [11]. • It is accurate in predicting the human perceptual CDs of photographic images, with good generalization to uniformly colored patches. • It is robust to mild geometric distortions (e.g., translation and dilation), which are often present in photographic images of the same scene captured with different camera settings or along different lines of sight. We show that all four desirable properties can be satisfied at once by learning a multi-scale autoregressive normalizing flow (a variant of RealNVP [13], to be specific) for feature transform, followed by a Euclidean distance measure in the transformed space. More specifically, we achieve the first property by the squeezing operation (also known as invertible downsampling) in the normalizing flow, which trades spatial size for channel dimension.
The second prop-erty is a direct consequence of the bijectivity of the normal-izing flow and the Euclidean distance measure. We achieve the third property by optimizing the model parameters to ex-plain the human perceptual CDs in SPCD [51]. We achieve the fourth property by enforcing the normalizing flow to be multi-scale and autoregressive, in which the features at a particular scale are conditioned on those at a higher ( i.e., coarser) scale. By doing so, our metric automatically learnsto preferentially rely on coarse-scale feature representations with more built-in tolerance to geometric distortions for CD assessment. We conduct extensive experiments on the large-scale SPCD dataset [51], and find that our proposed metric, termed as CD-Flow, outperforms 15CD formulae in as-sessing CDs of photographic images, produces competitive multi-scale local CD maps without any dense supervision, and is more robust to geometric distortions. Moreover, we empirically verify the perceptual uniformity of the learned color image representation from multiple aspects. |
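The overall recipe of such a flow-based CD metric reduces to "Euclidean distance in the transformed space". The sketch below illustrates this and the squeezing operation mentioned above; the flow here is a stand-in invertible transform rather than the CD-Flow architecture, and the proportionality constant k is an illustrative assumption.

import torch

def color_difference(img_a, img_b, flow, k=1.0):
    """img_a, img_b: (1, 3, H, W) image tensors; flow: an invertible network
    mapping an image to a feature tensor of the same total dimensionality."""
    z_a = flow(img_a).flatten(1)
    z_b = flow(img_b).flatten(1)
    # A Euclidean distance on bijectively transformed features inherits
    # non-negativity, symmetry, identity of indiscernibles, and the
    # triangle inequality, so the result is a proper metric.
    return k * torch.linalg.norm(z_a - z_b, dim=1)

def squeeze(x):
    """Invertible downsampling used by RealNVP-style flows: trades each 2x2
    spatial block for 4x more channels, so no information is discarded."""
    b, c, h, w = x.shape
    x = x.view(b, c, h // 2, 2, w // 2, 2)
    return x.permute(0, 1, 3, 5, 2, 4).reshape(b, c * 4, h // 2, w // 2)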
Guo_DINN360_Deformable_Invertible_Neural_Network_for_Latitude-Aware_360deg_Image_Rescaling_CVPR_2023 | Abstract With the rapid development of virtual reality, 360° images have gained increasing popularity. Their wide field of view necessitates high resolution to ensure image quality. This, however, makes it harder to acquire, store and even process such 360° images. To alleviate this issue, we propose the first attempt at 360° image rescaling, which refers to downscaling a 360° image to a visually valid low-resolution (LR) counterpart and then upscaling to a high-resolution (HR) 360° image given the LR variant. Specifically, we first analyze two 360° image datasets and observe several findings that characterize how 360° images typically change along their latitudes. Inspired by these findings, we propose a novel deformable invertible neural network (INN), named DINN360, for latitude-aware 360° image rescaling. In DINN360, a deformable INN is designed to downscale the HR image and project the high-frequency (HF) component to the latent space by adaptively handling various deformations occurring at different latitude regions. Given the downscaled LR image, the high-quality HR image is then reconstructed in a conditional latitude-aware manner by recovering the structure-related HF component from the latent space. Extensive experiments over four public datasets show that our DINN360 method performs considerably better than other state-of-the-art methods for 2×, 4× and 8× 360° image rescaling. | 1. Introduction With the rapid development of virtual reality, 360° images have gained increasing popularity. Different from 2D images, 360° images cover a scene with a wide range of 360°×180° views, requiring high resolution for ensuring the image quality. However, this also makes it considerably more costly to acquire, store and even process such high-resolution (HR) 360° images. To address these issues, it is necessary to conduct 360° image rescaling, which consists of image downscaling for generating low-resolution (LR) images with visually valid information and image upscaling for reconstructing HR 360° images. Figure 1. Motivation and pipeline of our DINN360 method. The non-uniform sampling density causes various deformations at different latitude regions, and this guides the design of our DINN360 model. Finally, the HR 360° image can be rescaled from the corresponding LR image and latent space. Different from image super-resolution (SR), which only upscales from LR images, image rescaling can directly utilize the texture information from the input HR 360° images, and therefore achieves better reconstruction results. Recently, 2-dimensional (2D) image rescaling has received increasing research interest [17, 21, 23, 36, 43, 44], due to its promising application potential. Specifically, Kim et al. [17] proposed a task-aware auto-encoder-based framework including a task-aware downscaling (TAD) model and a task-aware upscaling (TAU) model. In this work, the procedures of downscaling and upscaling are implemented by two individual deep neural networks (DNNs), and then they are jointly optimized. Xiao et al.
[44] proposed an image rescaling framework based on an invertible neural network (INN), in which downscaling and upscaling are regarded as invertible procedures. Different from 2D images, as shown in Fig. 1, 360° images contain various types of deformation at different latitude regions, due to the non-uniform sampling density of the sphere-to-plane projection. Therefore, it is inappropriate to directly apply the existing 2D rescaling methods on 360° images (see analysis in Section 3). Hence, it is necessary to develop a specialized framework for rescaling of the 360° image, by fully considering its spherical characteristics. This paper is the first attempt at 360° image rescaling. First, we conduct data analysis to find how the spherical characteristics of 360° images, such as texture complexity and high-frequency (HF) components, change along with latitude. Inspired by our findings, we propose a deformable invertible neural network (DINN360) for latitude-aware 360° image rescaling. Specifically, as shown in Fig. 1, deformable downscaling with a set of invertible deformable blocks is developed in DINN360 to learn adaptive receptive fields. As such, the 360° image can be downscaled in a deformation-adaptive manner. Subsequently, the bijective projection is conducted with the developed INN structure for the HF component extracted from the downscaling procedure, such that the texture details can be better recovered for the following upscaling. More importantly, a novel latitude-aware conditional mechanism is developed for the projection, in order to preserve the HF component of 360° images in a latitude-aware manner. Given the invertible structures of downscaling and HF projection, the 360° image can be reversely upscaled. Moreover, a new backflow training protocol is developed to reduce the information gap between the forward and reverse flows of the INN structure. The extensive experimental results show that our DINN360 outperforms state-of-the-art image rescaling and 360° SR methods for 2×, 4× and 8× rescaling over 4 public datasets. The codes are available at https://github.com/gyc9709/DINN360. The main contributions of this paper are three-fold. • We find how the low-level characteristics of 360° images change along with latitude, benefiting the design of our DINN360 method. • We propose a novel INN framework for 360° image rescaling, with the developed invertible deformable blocks to handle various 360° deformations. • We develop a latitude-aware conditional mechanism in our framework, to better preserve the HF component of 360° images in a latitude-aware manner. |
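As a rough illustration of the non-uniform sampling density that motivates the latitude-aware design above, the sketch below computes per-row weights for an equirectangular (ERP) image; the cosine-of-latitude weighting is the standard ERP area factor and is an illustrative choice here, not the paper's conditioning mechanism.

import numpy as np

def erp_latitude_weights(height):
    """Per-row weights for an equirectangular image of the given height.
    Rows near the poles cover far less spherical area per pixel, which is
    exactly where the strongest horizontal stretching (deformation) occurs."""
    # Latitude of each pixel row, from +90 deg (top) to -90 deg (bottom).
    lat = (0.5 - (np.arange(height) + 0.5) / height) * np.pi
    return np.cos(lat)                 # ~0 at the poles, 1 at the equator

# A latitude-aware reconstruction loss could then weight per-pixel errors row by row:
#   w = erp_latitude_weights(H)[None, None, :, None]
#   loss = (w * (pred - target) ** 2).mean()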
Arsomngern_Learning_Geometric-Aware_Properties_in_2D_Representation_Using_Lightweight_CAD_Models_CVPR_2023 | Abstract Cross-modal training using 2D-3D paired datasets, such as those containing multi-view images and 3D scene scans, presents an effective way to enhance 2D scene understanding by introducing geometric and view-invariance priors into 2D features. However, the need for large-scale scene datasets can impede scalability and further improvements. This paper explores an alternative learning method by leveraging a lightweight and publicly available type of 3D data in the form of CAD models. We construct a 3D space with geometric-aware alignment where the similarity in this space reflects the geometric similarity of CAD models based on the Chamfer distance. The acquired geometric-aware properties are then induced into 2D features, which boost performance on downstream tasks more effectively than existing RGB-CAD approaches. Our technique is not limited to paired RGB-CAD datasets. By training exclusively on pseudo pairs generated from CAD-based reconstruction methods, we enhance the performance of SOTA 2D pre-trained models that use ResNet-50 or ViT-B backbones on various 2D understanding tasks. We also achieve comparable results to SOTA methods trained on scene scans on four tasks in NYUv2, SUNRGB-D, indoor ADE20k, and indoor/outdoor COCO, despite using lightweight CAD models or pseudo data. Please visit our page: https://GeoAware2dRepUsingCAD.github.io/ | 1. Introduction Recent 2D visual representation learning approaches, such as contrastive learning [3, 6, 10, 17, 28] or masked autoencoders [20], are widely used to tackle various problems in computer vision due to their ability to encode rich visual features. While these methods have shown exceptional results on 2D image classification, they still have shortcomings in other 2D understanding tasks that involve instance-level reasoning. Prior research [25] also shows that models pre-trained using image augmentations [8, 21] or supervised labels [13] could not deliver satisfactory results when applied to downstream tasks such as semantic segmentation, instance segmentation, and object detection [12, 42]. Figure 1. Overview concept of our solution. We leverage CAD models to train a joint 2D-3D space such that images of objects with similar shapes, based on the Chamfer distance, are attracted to each other, while images with different shapes are separated. This results in a continuous geometric-aware space where the distance between two points reflects their geometric similarity, which could be utilized for downstream 2D object understanding tasks. To alleviate this, Hou et al. [25] proposed to learn 3D geometric priors, such as view-invariance, from 3D data and transfer the learned priors to 2D representations. In particular, their model is first pre-trained on ScanNet [12], a database of multi-view RGB-D scans, using contrastive learning and later used as initialization for fine-tuning networks on downstream tasks. Chen et al. [5] further extend this work by utilizing additional priors through learning to group nearby points that refer to the same object part from 3D scenes.
The key concept of their newly introduced paradigm is to share useful 3D priors from 3D data with 2D representations. However, previous studies have investigated mostly 3D priors related to viewpoint invariance from 3D scene datasets, which are often limited in size and scene variations due to the laborious data collecting and labeling process [24]. This scarcity of large-scale 3D data also limits the number of available 3D priors for learning the invariance, hence hindering further performance gains. This paper questions whether there are other effective 3D priors for 2D downstream tasks and whether they can be learned from other forms of 3D data that are lighter and easier to obtain. We begin our exploration by considering how to learn an embedding space that maps together images with the same or similar geometries. One solution is to learn from images that share the same 3D model, which inspires our interest in CAD models. Unlike 3D scenes, CAD models are lightweight, publicly available, and can be easily aligned with RGB images via web scraping or human annotation [2]. There exist studies that jointly learn 2D and 3D CAD representations for other CAD-related tasks, e.g., 3D classification [1, 27]. However, their 2D features, which are derived from augmentation-based CAD features, are insufficient to be directly applied to 2D object understanding tasks: they can group images with similar geometries but struggle to learn the distinctions between object categories. We argue that useful geometric-aware representations should account for both similarities and differences in object geometry. Our key idea is to acquire such features by imitating the Chamfer distance between 3D objects in our embedding space and inducing this derived geometric awareness in our learned 2D representation. In particular, we learn our space by attracting the encoded features of geometrically similar CAD models in the mini-batch based on the Chamfer distance and repelling those with lower similarities. In contrast to other methods trained on supervised discrete signals like object labels or through 3D augmentations, our method produces a continuous 3D space that better captures the similarity and difference in geometry (see Section 5.1). In addition, we employ augmentation-based contrastive learning [6] to learn other useful visual feature properties, such as translation and color invariances. This results in a 2D representation in a 2D-3D space that contains rich visual information and strong geometric-aware properties, as shown in Fig. 1, which can be leveraged to improve 2D object understanding tasks. To match our geometric-aware CAD features with corresponding 2D features, a paired RGB-CAD dataset, such as Pix3D [45], is required. However, by leveraging recent techniques [19, 31] that can reconstruct a CAD model from an input image, it is possible to generate pseudo CAD models for any image and use them to learn our method without a paired dataset. Adapting this pseudo-pair generation to other techniques that rely on scene scans is significantly harder, as synthesizing full 3D scenes with reasonable detail remains harder than reconstructing individual objects.
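Since the Chamfer distance (CD) is the quantity the embedding space is asked to imitate, a minimal reference implementation on sampled CAD point clouds may be helpful; the brute-force pairwise version below is an illustrative sketch and would be replaced by a KD-tree or batched GPU variant in practice.

import numpy as np

def chamfer_distance(p, q):
    """p: (N, 3), q: (M, 3) points sampled from two CAD model surfaces.
    Symmetric Chamfer distance: average nearest-neighbour distance in both
    directions, so (near-)identical shapes score close to zero."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # (N, M)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Mini-batch usage: CAD pairs with a low CD act as positives (their image
# features are attracted), while high-CD pairs act as negatives (repelled).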
Our features can group and differentiate objects based on their categories or subcategories, leading to improved performance on multiple 2D object understanding tasks using both ResNet-50 [23] and ViT-B [29] backbones. Our method trained on a pseudo-pair dataset also yields superior results over DINO [3] and MAE [20]. Remarkably, we also surpass a state-of-the-art method, Pri3D [25] without using any 3D scene scans in the following tasks: (i) semantic segmentation using NYUv2 [42] and indoor ADE20k [54]; (ii) object detection and instance segmentation using NYUv2 and in/outdoor COCO [35]; (iii) object retrieval using Pix3D [45]. To summarize, our contributions are as follows. • We present a simple yet effective approach to inducing geometric-aware properties in 2D representation using lightweight CAD models. These can be either ground truth from RGB-CAD datasets or generated pseudo CAD pairs based on 2D-only data. • We propose training objectives to learn a 2D-3D embedding space where feature similarity reflects geometric similarity based on the Chamfer distance. • We enhance the performance of SOTA 2D representation learning techniques on four 2D object understanding tasks and achieve competitive results to SOTA that require 3D scene scans across five datasets, in both settings that use real or pseudo-RGB-CAD datasets. |
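A minimal sketch of the geometry-aware objective described in the record above, assuming batched CAD point clouds paired with 2D image features. This is not the authors' implementation; the Chamfer-to-weight bandwidth `sigma`, the temperature `tau`, and the handling of augmented views are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def chamfer_distance(p, q):
    # p: (B, N, 3), q: (B, M, 3) point clouds sampled from CAD models
    d = torch.cdist(p, q)                                   # (B, N, M) pairwise point distances
    return d.min(dim=2).values.mean(dim=1) + d.min(dim=1).values.mean(dim=1)

def geometry_aware_loss(img_feat, cad_points, tau=0.1, sigma=0.5):
    """img_feat: (B, D) 2D features of the objects; cad_points: (B, N, 3) paired CAD clouds."""
    B = img_feat.size(0)
    z = F.normalize(img_feat, dim=1)
    sim = z @ z.t() / tau                                    # feature-similarity logits
    # pairwise Chamfer distances between every pair of CAD models in the mini-batch
    cd = torch.stack([chamfer_distance(cad_points, cad_points[i:i + 1].expand_as(cad_points))
                      for i in range(B)], dim=1)             # (B, B)
    target = torch.softmax(-cd / sigma, dim=1)               # geometrically close pairs get high weight
    log_prob = F.log_softmax(sim, dim=1)
    # soft InfoNCE: attract shapes with low Chamfer distance, repel the rest;
    # the diagonal / augmented-view handling of the full method is omitted here.
    return -(target * log_prob).sum(dim=1).mean()
```

The soft targets replace the hard positive/negative split of standard InfoNCE, which is one way to turn "attract geometrically similar CAD models, repel dissimilar ones" into a continuous training signal.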
Guo_Texts_as_Images_in_Prompt_Tuning_for_Multi-Label_Image_Recognition_CVPR_2023 | Abstract Prompt tuning has been employed as an efficient way to adapt large vision-language pre-trained models (e.g. CLIP) to various downstream tasks in data-limited or label-limited settings. Nonetheless, visual data (e.g., images) is by default a prerequisite for learning prompts in existing methods. In this work, we advocate that the effectiveness of image-text contrastive learning in aligning the two modalities (for training CLIP) further makes it feasible to treat texts as images for prompt tuning and introduce TaI prompting. In contrast to the visual data, text descriptions are easy to collect, and their class labels can be directly derived. Particularly, we apply TaI prompting to multi-label image recognition, where sentences in the wild serve as alternatives to images for prompt tuning. Moreover, with TaI, double-grained prompt tuning (TaI-DPT) is further presented to extract both coarse-grained and fine-grained embeddings for enhancing the multi-label recognition performance. Experimental results show that our proposed TaI-DPT outperforms zero-shot CLIP by a large margin on multiple benchmarks, e.g., MS-COCO, VOC2007, and NUS-WIDE, while it can be combined with existing methods of prompting from images to improve recognition performance further. The code is released at https://github.com/guozix/TaI-DPT. | 1. Introduction Recent years have witnessed rapid progress in large vision-language (VL) pre-trained models [1, 16, 19, 24, 33, 36] as well as their remarkable performance on downstream vision tasks. A VL pre-trained model generally involves data encoders, and it is becoming increasingly popular to exploit the image-text contrastive loss [24] to align the embedding of images and texts into a shared space. When adapting to downstream tasks in data-limited or label-limited settings, it is often ineffective to fine-tune the entire model, due to its high complexity. Then, prompt tuning as a representative parameter-efficient learning paradigm has emerged as an efficient way to adapt VL models to downstream tasks. Figure 1. A comparison between prompting from images and our text-as-image (TaI) prompting. (a) Prompting from images (e.g., [41]) uses labeled images of task categories to learn the text prompts. Instead, (b) our TaI prompting learns the prompts with easily-accessed text descriptions containing target categories. (c) After training, the learned prompts in (a) or (b) can be readily applied to test images. Albeit considerable achievements have been made, existing prompt tuning methods generally require visual data to learn prompts (as shown in Fig. 1(a)). For example, CoOp [41] learns from annotated images. CoCoOp [40] further introduces generalizable input-conditional prompts. DualCoOp [28] adapts CLIP to multi-label recognition tasks by training pairs of positive and negative prompts with partial-labeled images. Nonetheless, the performance of these prompting methods may be limited when it is infeasible to obtain sufficient image data or annotate the required images. In this paper, we advocate treating Texts as Images for prompt tuning, i.e., TaI prompting. It is considered feasible as the image encoder and text encoder in many pre-trained VL models [16, 24] encode images and texts into a shared space.
Given an image and its caption, the visual features produced by the image encoder will be close to the text feature of the caption produced by the text encoder. Therefore, in addition to extracting visual features from images, it is also feasible to extract text features as alternatives from, for example, descriptive sentences and captions, for prompt tuning (see Fig. 1(b)). TaI prompting has several interesting properties and merits. Taking a downstream image recognition task as an example, given a set of object categories, one can easily crawl a large set of text descriptions that contain object names from these categories. Text descriptions are easily accessible in this way, and class labels can be directly derived from text descriptions, which means, in contrast to prompting from images, TaI prompting may suffer less from the data-limited and label-limited issues. We use multi-label image recognition [8, 9, 11, 20, 35] to verify the effectiveness of our TaI prompting in this paper. To begin with, we crawl the captions from the public image caption datasets (e.g., MS-COCO [20]) and localized narratives from object detection datasets (e.g., Open Images [18]) to form the training set of text descriptions. For any specific multi-label recognition task, we adopt a noun filter to map the nouns in the text descriptions to the corresponding object categories, and then only keep the text descriptions that contain one or more classes of target objects. To better cope with multi-label classification, we introduce double-grained prompt tuning (i.e., TaI-DPT) which involves: (i) a set of global prompts to generate embeddings for classifying whole sentences or images, and (ii) a set of local prompts to extract embeddings for discriminating text tokens or image patches. Given a set of text descriptions, global and local prompts can be tuned by minimizing the ranking loss [14]. Note that, though these prompts are learned from text descriptions solely, they can be readily deployed to classify whole images as well as image patches during testing (see Fig. 1(c)). Experimental results show that, without using any labeled images, our TaI prompting surpasses zero-shot CLIP [24] by a large margin on multiple benchmarks, e.g., MS-COCO, VOC2007, and NUS-WIDE. Moreover, when images are also available during training, our TaI prompting can be combined with existing methods of prompting from images to improve its performance. In particular, given a few annotated images, our TaI-DPT can be integrated with CoOp as a prompt ensemble for improving classification accuracy. With partially labeled training data being provided, we may also combine TaI-DPT and DualCoOp [28] to improve multi-label recognition accuracy consistently. Extensive results verify the effectiveness of our TaI-DPT in comparison to state-of-the-art. To sum up, the contributions of this work include: • We propose Texts as Images in prompt tuning (i.e., TaI prompting) to adapt VL pre-trained models to multi-label image recognition. Text descriptions are easily accessible and, in contrast to images, their class labels can be directly derived, making our TaI prompting very compelling in practice. • We present double-grained prompt tuning (i.e.
TaI-DPT) to extract both coarse-grained and fine-grained embeddings for enhancing multi-label image recognition. Experiments on multiple benchmarks show that TaI-DPT achieves comparable multi-label recognition accuracy against state-of-the-arts. • The prompts learned by TaI-DPT can be easily combined with existing methods of prompting from images in an off-the-shelf manner, further improving multi-label recognition performance. |
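A hedged sketch of the text-as-image idea for the global prompts in the record above: caption features from a frozen text encoder stand in for image features, and a multi-label ranking loss is minimized. For brevity, the prompt-plus-classname text encoding is collapsed into learnable class prototypes; the margin, temperature, and noun-filter labeling are assumptions, not the released TaI-DPT code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def rank_loss(logits, labels, margin=1.0):
    """Multi-label ranking loss: every positive class should score higher than
    every negative class by at least `margin`.
    logits: (B, C) class similarities; labels: (B, C) multi-hot noun-filter labels."""
    pos = logits.unsqueeze(2)                                # (B, C, 1)
    neg = logits.unsqueeze(1)                                # (B, 1, C)
    valid = labels.unsqueeze(2) * (1.0 - labels).unsqueeze(1)
    hinge = F.relu(margin - pos + neg) * valid
    return hinge.sum() / valid.sum().clamp(min=1.0)

class TaIGlobalPrompts(nn.Module):
    """Learnable class embeddings tuned only on text: frozen caption features
    play the role that image features normally would."""
    def __init__(self, num_classes, dim, tau=0.02):
        super().__init__()
        self.class_proto = nn.Parameter(torch.randn(num_classes, dim) * 0.02)
        self.tau = tau

    def forward(self, caption_feat):                         # (B, D) frozen text features
        f = F.normalize(caption_feat, dim=-1)
        w = F.normalize(self.class_proto, dim=-1)
        return f @ w.t() / self.tau                          # (B, C) class logits

# training step sketch:
# logits = model(frozen_text_encoder(captions)); loss = rank_loss(logits, noun_labels)
```

At test time the same class embeddings can score image (or patch) features instead of caption features, which is what makes the text-only training transferable.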
Goyal_Finetune_Like_You_Pretrain_Improved_Finetuning_of_Zero-Shot_Vision_Models_CVPR_2023 | Abstract Finetuning image-text models such as CLIP achieves state-of-the-art accuracies on a variety of benchmarks. However, recent works (Kumar et al., 2022; Wortsman et al., 2021) have shown that even subtle differences in the finetuning process can lead to surprisingly large differences in the final performance, both for in-distribution (ID) and out-of-distribution (OOD) data. In this work, we show that a natural and simple approach of mimicking contrastive pretraining consistently outperforms alternative finetuning approaches. Specifically, we cast downstream class labels as text prompts and continue optimizing the contrastive loss between image embeddings and class-descriptive prompt embeddings (contrastive finetuning). Our method consistently outperforms baselines across 7 distribution shift, 6transfer learning, and 3few-shot learn-ing benchmarks. On WILDS-iWILDCam, our proposed ap-proach FLYP outperforms the top of the leaderboard by 2.3%ID and 2.7%OOD, giving the highest reported accu-racy. Averaged across 7OOD datasets (2 WILDS and 5 Im-ageNet associated shifts), FLYP gives gains of 4.2%OOD over standard finetuning and outperforms current state-of-the-art (LP-FT) by more than 1%both ID and OOD. Simi-larly, on 3few-shot learning benchmarks, FLYP gives gains up to 4.6%over standard finetuning and 4.4%over the state-of-the-art. Thus we establish our proposed method of contrastive finetuning as a simple and intuitive state-of-the-art for supervised finetuning of image-text models like CLIP . Code is available at https://github.com/ locuslab/FLYP . | 1. Introduction Recent large-scale models pretrained jointly on image and text data, such as CLIP (Radford et al., 2021) or ALIGN (Jia et al., 2021), have demonstrated exceptional perfor-mance on many zero-shot classification tasks. These mod-els are pretrained via a contrastive loss that finds a joint em-bedding over the paired image and text data. Then, for anew classification problem, one simply specifies a prompt for all classnames and predict the class whose text embed-ding has highest similarity with the image embedding. Such “zero-shot” classifiers achieve reasonable performance on downstream tasks and impressive robustness to many com-mon forms of distribution shift. However, in many cases, it is desirable to further improve performance via supervised finetuning: further training and updates to the pretrained pa-rameters on a (possibly small) number of labeled images. In practice, however, several studies have found that standard finetuning procedures, while improving in-distribution performance, come at a cost to robustness to distribution shifts. Subtle changes to the finetuning process could mitigate this decrease in robustness. For example, Kumar et al. (2022) demonstrated the role of initialization of the final linear head and proposed a two-stage process of linear probing, then finetuning. Wortsman et al. (2021) showed that ensembling the weights of the finetuned and zero-shot classifier can improve robustness. Understanding the role of these subtle changes is challenging, and there is no simple recipe for what is the “correct” modification. A common theme in all these previous methods is that they are small changes to the standard supervised training paradigm where we minimize a cross-entropy loss on an im-age classifier. 
Indeed, such a choice is natural precisely because we are finetuning the system to improve classification performance. However, directly applying the supervised learning methodology for finetuning pretrained models without considering the pretraining process can be sub-optimal. In this paper, we show that an alternative, straightforward approach reliably outperforms these previous methods. Specifically, we show that simply finetuning a classifier via the same pretraining (contrastive) loss leads to uniformly better performance of the resulting classifiers. That is, after constructing prompts from the class labels, we directly minimize the contrastive loss between these prompts and the image embeddings of our (labeled) finetuning set. We call this approach finetune like you pretrain (FLYP) and summarize it in Figure 1. Figure 1. Finetune Like You Pretrain (FLYP): Given a downstream classification dataset, standard finetuning approaches revolve around using the cross-entropy loss. We show that simply using the same loss as the pretraining, i.e. contrastive loss, with "task supervision" coming from the text description of labels, consistently outperforms state-of-the-art approaches like LP-FT (Kumar et al., 2022) and WiseFT (Wortsman et al., 2021). For example, on ImageNet, FLYP outperforms LP-FT + weight ensembling by 1.1% ID and 1.3% OOD, with an ID-OOD frontier curve (orange curve) dominating those of the baselines, i.e. it lies above and to the right of all the baselines. FLYP results in better ID and OOD performance than alternative approaches without any additional features such as multi-stage finetuning or ensembling. When ensembling, it further boosts gains over ensembling with previous methods. This contrastive finetuning is done entirely "naively": it ignores the fact that classes within a minibatch may overlap or that multiple prompts can correspond to the same class. We show that on a variety of different models and tasks, this simple FLYP approach consistently outperforms the existing state-of-the-art finetuning methods. On WILDS-iWILDCam, FLYP gives the highest ever reported accuracy, outperforming the top of the leaderboard (compute-expensive ModelSoups (Wortsman et al., 2022), which ensembles over 70+ finetuned models) by 2.3% ID and 2.7% OOD. On CLIP ViT-B/16, averaged across 7 out-of-distribution (OOD) datasets (2 WILDS and 5 ImageNet associated shifts), FLYP gives gains of 4.2% OOD over full finetuning and of more than 1% both ID and OOD over the current state-of-the-art, LP-FT. We also show that this advantage holds for few-shot finetuning, where only a very small number of examples from each class are present.
Arguably, these few-shot tasks represent the most likely use case for zero-shot finetuning, where one has both an initial prompt, a handful of examples of each class type, and wishes to build the best classifier possible. The empirical gains of our method are quite intriguing. We discuss in Section 5 how several natural explanations and intuitions from prior work fail to explain why the pretraining loss works so well as a finetuning objective. For example, one could hypothesize that the gains for FLYP come from using the structure in prompts or updating the language encoder parameters. However, using the same prompts and updating the image and language encoders, but via a cross-entropy loss instead, performs worse than FLYP. Furthermore, when we attempt to correct for the overlap in classes across a minibatch, we surprisingly find that this decreases performance. This highlights an apparent but poorly understood benefit to finetuning models on the same loss which they were trained upon, a connection that has been observed in other settings as well (Goyal et al., 2022). We emphasize heavily that the contribution of this work does not lie in the novelty of the FLYP finetuning procedure itself: as it uses the exact same contrastive loss as used for training, many other finetuning approaches have used slight variations of this approach (see Section 6 for full discussion of related work). Rather, the contribution of this paper lies precisely in showing that this extremely naive method, in fact, outperforms existing (and far more complex) finetuning methods that have been proposed in the literature. While the method is simple, the gains are extremely surprising, presenting an interesting avenue for investigating the finetuning process. In total, these results point towards a simple and effective approach that we believe should be adopted as the "standard" method for finetuning zero-shot classifiers rather than tuning via a traditional supervised loss. |
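A minimal sketch of one FLYP training step, under the stated assumptions that `clip_image_encoder`, `clip_text_encoder`, and `tokenizer` are the CLIP components being finetuned and that the prompt template is a placeholder.

```python
import torch
import torch.nn.functional as F

def flyp_step(clip_image_encoder, clip_text_encoder, tokenizer,
              images, labels, classnames, logit_scale):
    """One finetuning step that reuses the pretraining contrastive loss:
    class labels are cast to text prompts and contrasted with image embeddings.
    Class overlap inside the batch is deliberately left uncorrected, as reported."""
    prompts = [f"a photo of a {classnames[y]}" for y in labels.tolist()]
    txt = clip_text_encoder(tokenizer(prompts))             # (B, D)
    img = clip_image_encoder(images)                        # (B, D)
    img = F.normalize(img, dim=-1)
    txt = F.normalize(txt, dim=-1)
    logits = logit_scale * img @ txt.t()                    # (B, B) similarity matrix
    targets = torch.arange(images.size(0), device=images.device)
    # symmetric image-to-text and text-to-image cross-entropy, as in CLIP pretraining
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```

The only departure from standard cross-entropy finetuning is that supervision enters through the text prompts and the symmetric contrastive loss, i.e. exactly the pretraining objective.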
Huang_Weakly_Supervised_Temporal_Sentence_Grounding_With_Uncertainty-Guided_Self-Training_CVPR_2023 | Abstract The task of weakly supervised temporal sentence grounding aims at finding the corresponding temporal moments of a language description in the video, given video-language correspondence only at video-level. Most existing works select mismatched video-language pairs as negative samples and train the model to generate better positive proposals that are distinct from the negative ones. However, due to the complex temporal structure of videos, proposals distinct from the negative ones may correspond to several video segments but not necessarily the correct ground truth. To alleviate this problem, we propose an uncertainty-guided self-training technique to provide extra self-supervision signal to guide the weakly-supervised learning. The self-training process is based on teacher-student mutual learning with weak-strong augmentation, which enables the teacher network to generate relatively more reliable outputs compared to the student network, so that the student network can learn from the teacher's output. Since directly applying existing self-training methods in this task easily causes error accumulation, we specifically design two techniques in our self-training method: (1) we construct a Bayesian teacher network, leveraging its uncertainty as a weight to suppress the noisy teacher supervisory signals; (2) we leverage the cycle consistency brought by temporal data augmentation to perform mutual learning between the two networks. Experiments demonstrate our method's superiority on Charades-STA and ActivityNet Captions datasets. We also show in the experiment that our self-training method can be applied to improve the performance of multiple backbone methods. | 1. Introduction One of the most important directions in video understanding is to temporally localize the start and end timestamp of a given sentence description. Also known as temporal sentence grounding, this task has a wide range of potential applications ranging from video summarization [45, 66], video action segmentation [23, 28, 59], to Human-computer interaction systems [8, 22, 30, 52, 63]. Figure 1. (a) Existing methods [70, 71] find it hard to distinguish the two cases since they learn positive proposals purely based on negative proposals. (b) Our method provides extra supervision signals for learning positive proposals. (c) Performance of the backbone network [71], backbone network trained with existing self-training methods pseudo labeling [29], Mean Teacher (MT) [50], and backbone network trained with our method. Directly applying self-training methods for semi-supervised learning negatively influences the performance, while our self-training method can improve the backbone performance. While most existing works deal with this task in a supervised manner, manually annotating temporal labels of the starting and ending timestamps of each sentence is extremely laborious, which harms the scalability and viability of this task in real-world applications. To escalate practicability, recent research attention has been drawn towards weakly supervised temporal sentence grounding, where video-language correspondence is given as annotation only at video-level for model training. Previous weakly supervised temporal sentence grounding works [16, 21, 36, 38] mainly adopt the multiple instance learning (MIL) method.
They generate mismatched video-language pairs as negative samples and train the model to distinguish the positive/negative samples, in order to learn a cross-modal latent space for a language feature to highlight a certain time period of the video. Some methods find negative samples by selecting sentences that describe another video [38, 62], but these negative samples are often easy to distinguish and thus cannot provide strong supervision signals. Recent works [68, 70, 71] select negative samples by sampling video segments within the same video, allowing the model to distinguish more confusing video segments. One major limitation of these methods is that they learn the models completely depending on negative samples, since the objectives of these methods are to generate positive proposals that are distinct from the negative ones, where the distance is usually measured by a certain metric such as the ability to reconstruct the query using only the video segment inside the proposal [32, 60, 70, 71]. However, due to the complex temporal structure of videos that often contain multiple events, being distinct from the negative proposals does not always guarantee the quality of the positive proposals. For example in Figure 1(a), it is hard for existing methods like [70, 71] to distinguish the two cases since in both cases the positive proposals can better reconstruct the query sentence than the negative proposals. However, in the absence of strong supervision, it is not straightforward to positively guide the process of temporal sentence grounding. Our solution is to leverage self-training to produce extra supervision signals (Figure 1(b)). As for self-training, one may consider directly applying existing techniques originally designed for semi-supervised learning such as pseudo labeling [29] or Mean Teacher with weak-strong augmentation [50]. However, as shown in Figure 1(c), our preliminary experiment suggests that the teacher's supervision tends to be noisy and would degrade performance due to error accumulation. This is mainly because, unlike semi-supervised learning [72], no strong supervision is used for initializing the teacher network. Following previous works [12, 31, 34, 50], our method also applies the weak-strong augmentation technique, where the student network takes data with strong augmentation as input, while the teacher network gets as input weakly augmented data. Thus, compared to the student network, the teacher network can generate output less affected by heavy augmentation, providing supervisory guidance to the student network. To realize self-training in the weakly supervised temporal sentence grounding task, we specifically design the following two techniques: (1) As the teacher network itself is initially trained with only weak supervision and may generate erroneous supervision signals, we apply a Bayesian teacher network, enabling an uncertainty estimation of its output. The estimated uncertainty is used to weigh the teacher supervision signal, thus reducing the chance of error accumulation.
(2) To efficiently update both networks, we develop cyclic mutual learning, where the forward cycle forces the student network to output temporally consistent representations with the teacher, and the backward cycle encourages the teacher's output to be consistent with the average of multiple student outputs generated by inputs with different augmentations. This mutual-learning method allows the teacher to update more carefully than the student, preventing over-fitting to the low-quality supervision. On the other hand, a better teacher will provide reliable uncertainty measures for learning the student network. Our self-training technique can be applied to most existing methods and we observe performance improvement on multiple public datasets. Our contributions can be summarized as follows: (1) We propose a novel method for temporal sentence grounding based on self-training. To the best of our knowledge, this is the first attempt to apply self-training to the weakly supervised temporal sentence grounding task. (2) To realize self-training for this task, we design a Bayesian teacher network to alleviate the negative effect of low-quality teacher supervision, and we use a mutual-learning strategy based on the consistency of the data augmentation to better update the teacher and student networks. (3) Our experiments on two standard datasets Charades-STA and ActivityNet Captions demonstrate that our method can effectively improve the performance of existing weakly supervised methods. |
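A simplified sketch of the uncertainty-weighted teacher supervision described above, assuming the grounding head regresses normalized (start, end) boundaries and approximating the Bayesian teacher with Monte-Carlo dropout; the exponential weighting and `beta` are illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def teacher_prediction(teacher, weak_video, query, n_samples=5):
    """Keep dropout active at inference (MC dropout) and use the variance
    across stochastic forward passes as an uncertainty estimate."""
    teacher.train()                                   # enables dropout layers
    preds = torch.stack([teacher(weak_video, query) for _ in range(n_samples)])
    return preds.mean(0), preds.var(0).mean(-1)       # mean (B, 2) boundaries, (B,) uncertainty

def self_training_loss(student, teacher, strong_video, weak_video, query, beta=1.0):
    mean_t, uncert = teacher_prediction(teacher, weak_video, query)
    pred_s = student(strong_video, query)             # (B, 2) normalized (start, end)
    weight = torch.exp(-beta * uncert)                # down-weight noisy teacher signals
    per_sample = F.smooth_l1_loss(pred_s, mean_t, reduction="none").mean(-1)
    return (weight * per_sample).mean()
```

The weighting keeps confident teacher proposals influential while suppressing the erroneous ones that would otherwise cause the error accumulation discussed above.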
Hajimiri_A_Strong_Baseline_for_Generalized_Few-Shot_Semantic_Segmentation_CVPR_2023 | Abstract This paper introduces a generalized few-shot segmentation framework with a straightforward training process and an easy-to-optimize inference phase. In particular, we propose a simple yet effective model based on the well-known InfoMax principle, where the Mutual Information (MI) between the learned feature representations and their corresponding predictions is maximized. In addition, the terms derived from our MI-based formulation are coupled with a knowledge distillation term to retain the knowledge on base classes. With a simple training process, our inference model can be applied on top of any segmentation network trained on base classes. The proposed inference yields substantial improvements on the popular few-shot segmentation benchmarks, PASCAL-5i and COCO-20i. Particularly, for novel classes, the improvement gains range from 7% to 26% (PASCAL-5i) and from 3% to 12% (COCO-20i) in the 1-shot and 5-shot scenarios, respectively. Furthermore, we propose a more challenging setting, where performance gaps are further exacerbated. Our code is publicly available at https://github.com/sinahmr/DIaM . | 1. Introduction With the advent of deep learning methods, the automatic interpretation and semantic understanding of image content have drastically improved in recent years. These models are nowadays at the core of a broad span of visual recognition tasks and have enormous potential in strategic domains for our society, such as autonomous driving, healthcare, or security. Particularly, semantic segmentation, whose goal is to assign pixel-level categories, lies as one of the mainstays in visual interpretation. Nevertheless, the remarkable performance achieved by deep learning segmentation models is typically limited by the amount of available training data. Indeed, standard segmentation approaches are often trained on a fixed set of predefined semantic categories, commonly requiring hundreds of examples per class. This limits their scalability to novel classes, as obtaining annotations for new categories is a cumbersome and labor-intensive process. Few-shot semantic segmentation (FSS) has recently emerged as an appealing alternative to overcome this limitation [1, 30, 33]. Under this learning paradigm, models are trained with an abundant labeled dataset on base classes, and only a few instances of novel classes are seen during the adaptation stage. However, [29] identified two important limitations that hamper the application of these methods in real-life scenarios. First, existing literature on FSS assumes that the support samples contain the categories present in the query images, which may incur costly manual selection processes. Second, even though significant achievements have been made, all these methods focus on leveraging supports as much as possible to extract effective target information, but neglect to preserve the performance on known categories. Furthermore, while in many practical applications the number of novel classes is not limited, most FSS approaches are designed to work on a binary basis, which is suboptimal in the case of multiple novel categories. Inspired by these limitations, a novel Generalized Few-Shot Semantic Segmentation (GFSS) setting has been recently introduced in [29]. In particular, GFSS relaxes the strong assumption that the support and query categories are the same.
This means that, under this new learning paradigm, providing support images that contain the same target categories as the query images is not required. Furthermore, the evaluation in this setting involves not only novel classes but also base categories, which provides a more realistic scenario. Although the setting in [29] overcomes the limitations of few-shot semantic segmentation, we argue that a gap still remains between current experimental protocols and real-world applications. Hereafter, we highlight limiting points of the current literature and further discuss them in Sec. 3.2. Unrealistic prior knowledge. We found that existing works explicitly rely on prior knowledge of the novel classes (supposed to be seen at test-time only) during the training phase. This, for instance, allows to filter out images containing novel objects [14, 29] from the training set. Recent empirical evidence [28] found out that such assumptions indeed boost the results in a significant manner. Modularity. Another limitation is the tight entanglement between the training and testing phases of current approaches, which often limits their ability to handle arbitrary tasks at test time. Specifically, existing meta-learning-based approaches are designed to handle binary segmentation [14], and need to be consequently modified to handle multiple classes. While we technically address that by using multiple forward passes (one per class) followed by some heuristic aggregation of segmentation maps, this scales poorly and lacks principle. Contributions. Motivated by these limitations, we aim to address a more practical setting and develop a fully modular inference procedure. Our inference abstracts away the training stage, making no assumption about the type of training or the format of tasks met at test time. Specifically: • We present a new GFSS framework, DIaM (Distilled Information Maximization). Our method is inspired by the well-known InfoMax principle, which maximizes the Mutual Information between the learned feature representations and their corresponding predictions. To reduce performance degradation on the base categories, without requiring explicit supervision, we introduce a Kullback-Leibler term that enforces consistency between the old and new model's base class predictions. • Although disadvantaged by rectifications to improve the practicality of previous experimental protocols, we still demonstrate that DIaM outperforms current SOTA on existing GFSS benchmarks, particularly excelling in the segmentation of novel classes. • Based on our observations, we go beyond standard benchmarks and present a more challenging scenario, where the number of base and novel classes is the same. In this setting, the gap between our method and the current GFSS SOTA widens, highlighting the poor ability of modern GFSS SOTA to handle numerous novel classes and the need for more modular/scalable methods. |
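A hedged sketch of the kind of objective DIaM describes: maximize a mutual-information surrogate over query pixels while distilling the frozen model's base-class predictions. The exact terms, weights, and background handling in the published formulation may differ; `alpha` and `lam` are assumed hyperparameters.

```python
import torch
import torch.nn.functional as F

def diam_style_loss(new_logits, old_base_logits, num_base, alpha=1.0, lam=1.0):
    """new_logits: (P, C) logits of the adapted classifier over P query pixels,
    old_base_logits: (P, Kb) logits of the frozen base classifier, num_base = Kb."""
    p = F.softmax(new_logits, dim=1)                        # (P, C)
    cond_ent = -(p * torch.log(p + 1e-10)).sum(1).mean()    # confident per-pixel predictions
    marg = p.mean(0)
    marg_ent = -(marg * torch.log(marg + 1e-10)).sum()      # but diverse over the image
    mi = marg_ent - cond_ent                                # mutual-information surrogate

    # knowledge distillation: keep base-class behaviour close to the frozen model
    p_base_new = F.normalize(p[:, :num_base], p=1, dim=1)   # renormalized base posterior
    p_base_old = F.softmax(old_base_logits, dim=1)
    kd = F.kl_div(torch.log(p_base_new + 1e-10), p_base_old, reduction="batchmean")

    return -alpha * mi + lam * kd
```

Because every term is computed from the query predictions and the frozen base classifier, the inference stays modular: it can be bolted onto any segmentation network trained on base classes, as the record above argues.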
Guo_Learning_a_Practical_SDR-to-HDRTV_Up-Conversion_Using_New_Dataset_and_Degradation_CVPR_2023 | Abstract In the media industry, the demand for SDR-to-HDRTV up-conversion arises when users possess HDR-WCG (high dynamic range-wide color gamut) TVs while most off-the-shelf footage is still in SDR (standard dynamic range). The research community has started tackling this low-level vision task with learning-based approaches. When applied to real SDR, however, current methods tend to produce dim and desaturated results, making nearly no improvement on viewing experience. Different from other network-oriented methods, we attribute such deficiency to the training set (HDR-SDR pairs). Consequently, we propose a new HDRTV dataset (dubbed HDRTV4K) and new HDR-to-SDR degradation models. They are then used to train a luminance-segmented network (LSN) consisting of a global mapping trunk and two Transformer branches on the bright and dark luminance ranges. We also update the assessment criteria with tailored metrics and a subjective experiment. Finally, ablation studies are conducted to prove the effectiveness. Our work is available at: https://github.com/AndreGuo/HDRTVDM . | 1. Introduction The dynamic range of an image is defined as the ratio of the maximum recorded luminance to the minimum. A larger luminance container endows high dynamic range (HDR) with a better expressiveness of the scene. In the media and film industry, the superiority of HDR is further boosted by advanced electro-optical transfer functions (EOTF), e.g. PQ/HLG [2], and wide color-gamut (WCG) RGB primaries, e.g. BT.2020 [3]. While WCG-HDR displays are becoming more readily available in the consumer market, most commercial footage is still in SDR since the WCG-HDR version is still scarce due to the exorbitant production workflow. Hence, there arises the demand of converting vast existing SDR content for HDRTV service. Such SDR may carry irreproducible scenes, but more likely, imperfections brought by old imaging systems and transmission. This indicates that SDR-to-HDRTV up-conversion is an ill-posed low-level vision task, and the research community has therefore begun involving learning-based methods ([4-9] etc.). Yet, despite the versatile networks they use (§2.1), we find current methods' results dim and desaturated when fed real SDR images (Fig. 1), conflicting with the perceptual motive of SDR-to-HDRTV up-conversion. As reported by the CVPR22 1st Workshop on Vision Dataset Understanding [10], most methods are network-oriented and understate the impact of the training set. For restoration-like low-level vision, there are 2 ingredients of a training set: the quality of the label GT itself, and the GT-to-LQ degradation model (DM), i.e. what the network learns to restore. Such neglect is getting remedied in other low-level vision tasks [11-16], but is still pervasive in learning-based SDR-to-HDRTV up-conversion. Not serendipitously, we find the dataset to be the reason why current methods underperform. We exploit several HDRTV-tailored metrics (Tab. 4) to assess the current training set: (1) by measuring label HDR's extent of HDR/WCG etc.
(Tab. 5), we notice that its quality and diversity are inadequate to incentivize the network to produce appealing results, and (2) via the statistics of degraded SDR, we find current HDR-to-SDR DMs' tendency to exaggeratedly alter the saturation and brightness (see Tab. 6), so the network will learn an SDR-to-HDR deterioration. Hence, we propose the HDRTV4K dataset (§3.2) consisting of high-quality and diversified (Fig. 4) HDRTV frames as labels. We then exploit 3 new HDRTV-to-SDR DMs (§3.3) avoiding the above insufficiency while possessing appropriate degradation capability (Tab. 6), so the network can learn a reasonable restoration ability. Afterwards, we formulate the task as the combination of global mapping on the full luminance range and recovery of the low/high luminance ranges. Correspondingly, we propose the Luminance Segmented Network (LSN, §3.1), where a global trunk and two Transformer-style UNet [17] branches are assigned to respectively execute the divergent operations required in different segmented luminance ranges (areas). Lastly, as found by [18, 19], conventional distance-based metrics that perform well in solely-reconstruction tasks (e.g. denoising) fail for perceptually-motivated HDR reconstruction, so we update the assessment criteria with fine-grained metrics (§4.2), a subjective experiment (§4.3), etc. Our contributions are three-fold: (1) Emphasizing & verifying the impact of the dataset on the SDR-to-HDRTV task, which has long been understated. (2) Exploiting a novel HDRTV dataset and HDR-to-SDR degradation models for the network to learn. (3) Introducing a new problem formulation, and accordingly proposing a novel luminance segmented network. |
Deng_Learning_Detailed_Radiance_Manifolds_for_High-Fidelity_and_3D-Consistent_Portrait_Synthesis_CVPR_2023 | Abstract A key challenge for novel view synthesis of monocular portrait images is 3D consistency under continuous pose variations. Most existing methods rely on 2D generative models, which often leads to obvious 3D inconsistency artifacts. We present a 3D-consistent novel view synthesis approach for monocular portrait images based on a recently proposed 3D-aware GAN, namely Generative Radiance Manifolds (GRAM) [13], which has shown strong 3D consistency at multiview image generation of virtual subjects via the radiance manifolds representation. However, simply learning an encoder to map a real image into the latent space of GRAM can only reconstruct coarse radiance manifolds without faithful fine details, while improving the reconstruction fidelity via instance-specific optimization is time-consuming. We introduce a novel detail manifolds reconstructor to learn 3D-consistent fine details on the radiance manifolds from monocular images, and combine them with the coarse radiance manifolds for high-fidelity reconstruction. The 3D priors derived from the coarse radiance manifolds are used to regulate the learned details to ensure reasonable synthesized results at novel views. Trained on in-the-wild 2D images, our method achieves high-fidelity and 3D-consistent portrait synthesis, largely outperforming the prior art. Project page: https://yudeng.github.io/GRAMInverter/ | 1. Introduction Synthesizing photorealistic portrait images of a person from an arbitrary viewpoint is an important task that can benefit diverse downstream applications such as virtual avatar creation and immersive online communication. Thanks to the thriving of 2D Generative Adversarial Networks (GANs) [18, 25, 26], people can now generate high-quality portraits at desired views given only monocular images as input, via a simple invert-then-edit strategy by conducting GAN inversion [1, 38, 52] and latent space editing [12, 20, 41, 50]. However, existing 2D GAN-based methods still have deficiencies when applied to applications that require more strict 3D consistency (e.g. VR&AR). Due to the non-physical rendering process of the 2D CNN-based generators, their synthesized images under pose changes usually bear certain kinds of multiview inconsistency, such as geometry distortions [3, 5] and texture sticking or flickering [24, 55]. These artifacts may not be significant enough when inspecting each static image but can be easily captured by human eyes under continuous image variations. Recently, there are an emerging group of 3D-aware GANs [9, 10, 13, 19, 39, 40] targeting at image generation with 3D pose disentanglement. By incorporating Neural Radiance Field (NeRF) [29] and its variants into the adversarial learning process of GANs, they can produce realistic images with strong 3D-consistency across different views, given only a set of monocular images as training data. As a result, 3D-aware GANs have shown greater potential than 2D GANs for pose manipulations of portraits. However, even though 3D-aware GANs are capable of generating 3D-consistent portraits of virtual subjects, leveraging them for real image pose editing is still a challenging task.
To obtain faithful reconstructions of real images, most existing methods [9, 10, 13, 27, 46, 47, 67] turn to a time-consuming and instance-specific optimization to invert the given images into the latent space of a pre-trained 3D-aware GAN, which is hard to scale-up. And simply enforcing an encoder-based 3D-aware GAN inversion [7, 46] often fails to preserve fine details in the original image. In this paper, we propose a novel approach GRAMIn-verter , for high-fidelity and 3D-consistent novel view syn-thesis of monocular portraits via single forward pass. Our method is built upon the recent GRAM [13] that can syn-thesize high-quality virtual images with strong 3D consis-tency via the radiance manifolds representation [13]. Never-theless, GRAM suffers from the same lack-of-fidelity issue when combined with a general encoder-based GAN inver-sion approach [52]. The main reason is that the obtained semantically-meaningful low-dimensional latent code can-not well record detail information of the input, as also indi-cated by some recent 2D GAN inversion methods [52, 55]. To tackle this problem, our motivation is to further learn 3D-space high-frequency details and combine them with the coarse radiance manifolds obtained from the general encoder-based inversion of GRAM, to achieve faithful re-construction and 3D-consistent view synthesis. A straight-forward way to achieve this is to extract a high-resolution 3D voxel from the input image and combine it with the coarse radiance manifolds. However, this is prohibited by modern GPUs due to the high memory cost of the 3D voxel. To tackle this problem, we turn to learn a high resolu-tion detail manifolds, taking the advantage of the radiance manifolds representation of GRAM, instead of learning the memory-consuming 3D voxel. We introduce a novel detail manifolds reconstructor to extract detail manifolds from the input images. It leverages manifold super-resolution [60] to predict high-resolution detail manifolds from a low resolu-tion feature voxel. This can be effectively achieved by a set of memory-efficient 2D convolution blocks. The obtainedhigh resolution detail manifolds can still maintain strict 3D consistency due to lying in the 3D space. We also propose dedicated losses to regulate the detail manifolds via 3D pri-ors derived from the coarse radiance manifolds, to ensure reasonable novel view results. Another contribution of our method is an improvement upon the memory and time-consuming GRAM, without which it is difficult to be integrated into our GAN inversion framework. We replace the original MLP-based radiance generator [13] in GRAM with a StyleGAN2 [26]-based tri-plane generator proposed by [9]. The efficient GRAM re-quires only 1/4memory cost with 7×speed up, without sacrificing the image generation quality and 3D consistency. We train our method on FFHQ dataset [25] and conduct multiple experiments to demonstrate its advantages on pose control of portrait images. Once trained, GRAMInverter takes a monocular image as input and predicts its radiance manifolds representation for novel view synthesis at 3FPS on a single GPU. The generated novel views well preserve fine details in the original image with strong 3D consistency, outperforming prior art by a large margin. We believe our method takes a solid step towards efficient 3D-aware con-tent creation for real applications. |
Do_Quantitative_Manipulation_of_Custom_Attributes_on_3D-Aware_Image_Synthesis_CVPR_2023 | Abstract While 3D-based GAN techniques have been successfully applied to render photo-realistic 3D images with a variety of attributes while preserving view consistency, there has been little research on how to fine-control 3D images without being limited to a specific category of objects or their properties. To fill such a research gap, we propose a novel image manipulation model of 3D-based GAN representations for a fine-grained control of specific custom attributes. By extending the latest 3D-based GAN models (e.g., EG3D), our user-friendly quantitative manipulation model enables a fine yet normalized control of 3D manipulation of multi-attribute quantities while achieving view consistency. We validate the effectiveness of our proposed technique both qualitatively and quantitatively through various experiments. | 1. Introduction Recent advances in neural rendering [23, 34, 38] are making it easy to reproduce virtual 3D objects from real-world objects. The neural rendering approach is not fully scalable in practice since it heavily relies on input images and thereby cannot fully represent every possible form, style, and state variation of all real and unreal objects. 3D generative adversarial network (GAN) models, on the other hand, are more generalizable and extensible since these can not only reproduce 3D objects at scale but also allow easier configurations based on the user's intention [5, 32, 37], making them more suitable for various 3D image synthesis tasks. Image manipulation on the latent space of 2D GANs has been extensively studied in recent years [2, 28, 29, 33, 39]. StyleGAN2 [15] has been the dominant technique used due to its flexibility to represent different styles and disentangled latent spaces. More recently, 3D-based GANs [24, 26, 32] for multi-view image synthesis using neural rendering [23, 25] have gained popularity. Figure 1. An example of quantitative image manipulation for the face tiredness attribute. Attributes expressed as complex facial features, such as tiredness, are not easy to define explicitly. Our method assigns user-defined attributes based on a small number of image samples, allowing quantitative manipulation of 3D objects according to the user's desired state changes. For instance, those models [5, 10, 27] equipped with StyleGAN2 modules can generate photo-realistic 3D images with a variety of attributes while preserving view consistency. Nevertheless, the existing works do not well explore a fine-grained manipulation of custom attributes (e.g., capturing tiredness in a face, consisting of multiple and complex facial expressions, as shown in Figure 1) of 3D objects that are synthesized using 3D-based GAN models, and therefore it deserves more thorough research. While there have been some attempts [29, 33, 39] to use the latent spaces generated by GAN models to manipulate generated and real images, these approaches mainly focus on 2D objects and they are not user-friendly because users need to individually determine the appropriate manipulation scale for every use according to every specific intention. Achieving view consistency during 3D image manipulation in the latent spaces is crucial to achieve the quantitative manipulation of custom attributes.
Previously, each attribute in a multi-view image for an object was inconsistently estimated across viewpoints [8, 16, 17]. We alleviate such a multi-view inconsistency problem by treating each attribute in each multi-view image as the same. That is, our 3D manipulation model is based on a 3D-based GAN model like EG3D [5], also equipped with two operators: 1) an attribute quantifier that estimates the quantity of the attribute to be edited, and 2) a navigator that explores across the latent space to generate a manipulated image. Since the attribute quantifier guides the navigator, the manipulation quality of the navigator depends on the performance of the quantifier. As the quantifier, an off-the-shelf pre-trained regression model [1, 22] for a specific attribute is often used. It is not always easy to construct the pre-trained quantifier, especially for uncommon custom attributes. Hence, to better deal with custom attributes, our navigator manipulates the image by only assigning the target quantity without exploring the direction and scale of changes of the latent features. The attribute quantifier is first trained on a small number of custom image samples and then evaluates a user-defined attribute as a normalized quantity in the range [0, 1]. Using the quantifier, the navigator is then trained to generate and manipulate images corresponding to target custom attributes. We evaluate our approach on various attributes of 3D and 2D objects, including human faces, confirming that our method is qualitatively and quantitatively effective. |
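A minimal sketch of the quantifier/navigator pair described above, assuming a generic image backbone, an EG3D-like generator callable as `generator(w, pose)`, and multi-view rendering for view consistency; the loss weights are illustrative rather than the paper's settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttributeQuantifier(nn.Module):
    """Maps a rendered image to a normalized attribute quantity in [0, 1];
    trained on a handful of user-labeled samples of the custom attribute."""
    def __init__(self, backbone, feat_dim):
        super().__init__()
        self.backbone, self.head = backbone, nn.Linear(feat_dim, 1)

    def forward(self, img):
        return torch.sigmoid(self.head(self.backbone(img)))

class Navigator(nn.Module):
    """Edits a latent code toward a target quantity without hand-picking an
    editing direction or scale."""
    def __init__(self, latent_dim, hidden=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim + 1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, latent_dim))

    def forward(self, w, target_q):                          # w: (B, L), target_q: (B, 1)
        return w + self.net(torch.cat([w, target_q], dim=1))

def navigator_loss(generator, quantifier, navigator, w, cam_poses, target_q):
    w_edit = navigator(w, target_q)
    imgs = torch.stack([generator(w_edit, pose) for pose in cam_poses])   # multi-view renders
    q_pred = torch.stack([quantifier(im) for im in imgs])                 # (V, B, 1)
    attr = F.mse_loss(q_pred, target_q.expand_as(q_pred))    # hit the requested quantity in every view
    reg = (w_edit - w).pow(2).mean()                         # stay close to the original code
    return attr + 0.1 * reg
```

Penalizing the quantity error across several rendered viewpoints is one straightforward way to encode the paper's requirement that the edited attribute be consistent across views.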
Hu_NeRF-RPN_A_General_Framework_for_Object_Detection_in_NeRFs_CVPR_2023 | Abstract This paper presents the first significant object detection framework, NeRF-RPN, which directly operates on NeRF. Given a pre-trained NeRF model, NeRF-RPN aims to detect all bounding boxes of objects in a scene. By exploiting a novel voxel representation that incorporates multi-scale 3D neural volumetric features, we demonstrate it is possible to regress the 3D bounding boxes of objects in NeRF directly without rendering the NeRF at any viewpoint. NeRF-RPN is a general framework and can be applied to detect objects without class labels. We experimented NeRF-RPN with various backbone architectures, RPN head designs and loss functions. All of them can be trained in an end-to-end manner to estimate high quality 3D bounding boxes. To facilitate future research in object detection for NeRF, we built a new benchmark dataset which consists of both synthetic and real-world data with careful labeling and clean up. Code and dataset are available at https://github.com/lyclyc52/NeRF_RPN . | 1. Introduction 3D object detection is fundamental to important applications such as robotics and autonomous driving, which require scene understanding in 3D. Most existing relevant methods require 3D point cloud input or at least RGB-D images acquired from 3D sensors. Nevertheless, recent advances in Neural Radiance Fields (NeRF) [34] provide an effective alternative approach to extract highly semantic features of the underlying 3D scenes from 2D multi-view images. Inspired by Region Proposal Network (RPN) for 2D object detection, in this paper, we present the first 3D NeRF-RPN, which directly operates on the NeRF representation of a given 3D scene learned entirely from RGB images and camera poses. Specifically, given the radiance field and the density extracted from a NeRF model, our method produces bounding box proposals, which can be deployed in downstream tasks. Recently, NeRF has provided very impressive results in novel view synthesis, while 3D object detection has become increasingly important in many real-world applications such as autonomous driving and augmented reality. Figure 1. Region proposal results on a NeRF. Top 12 proposals in eight orientations with highest confidence are visualized. The NeRF is trained from the Living Room scene from INRIA [38]. Compared to 2D object detection, detection in 3D is more challenging due to the increased difficulty in data collection where various noises in 3D can be captured as well. Despite some good works, there is a lot of room for exploration in the field of 3D object detection. Image-based 3D object detectors either use a single image (e.g., [1, 4, 62]) or utilize multi-view consensus of multiple images (e.g., [29, 51, 63]). Although the latter use multi-view projective geometry to combine information in the 3D space, they still use 2D features to guide the pertinent 3D prediction. Some other 3D detectors based on point cloud representation (e.g., [31, 33, 41, 73]) heavily rely on accurate data captured by sensors. To our knowledge, there is still no representative work on direct 3D object detection in NeRF. Thus, we propose NeRF-RPN to propose 3D ROIs in a given NeRF representation.
Specifically, the network takes as input the 3D volumetric information extracted from NeRF, and directly outputs 3D bounding boxes of ROIs. NeRF-RPN will thus be a powerful tool for 3D object detection in NeRF by adopting the "3D-to-3D learning" paradigm, taking full advantage of the 3D information inherent in NeRF and predicting 3D region proposals directly in 3D space. As the first significant attempt to perform 3D object detection directly in NeRFs trained from multi-view images, this paper's contributions consist of: • First significant attempt on introducing RPN to NeRF for 3D object detection and related tasks. • A large-scale public indoor NeRF dataset for 3D object detection, based on the existing synthetic indoor datasets Hypersim [46] and 3D-FRONT [11], and real indoor datasets ScanNet [6] and SceneNN [19], carefully curated for NeRF training. • Implementation and comparisons of NeRF-RPNs on various backbone networks, detection heads and loss functions. Our model can be trained in 4 hrs using 2 NVIDIA RTX3090 GPUs. At runtime, it can process a given NeRF scene in 115 ms (excluding postprocessing) while achieving a 99% recall on our 3D-FRONT NeRF dataset. • Demonstration of 3D object detection over NeRF and related applications based on our NeRF-RPN. |
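A hedged sketch of how the volumetric RPN input could be built from a trained NeRF by querying it on a regular grid, so that no view has to be rendered; `nerf_fn`, the grid resolution, and the density-to-opacity mapping are assumptions, since the paper's exact feature extraction may differ.

```python
import torch

@torch.no_grad()
def nerf_to_voxel(nerf_fn, bbox_min, bbox_max, res=160, chunk=65536):
    """Builds the volumetric input of an RPN by sampling a trained NeRF on a
    regular 3D grid; nerf_fn maps (N, 3) points to (N, 4) [RGB, raw density]."""
    axes = [torch.linspace(lo, hi, res) for lo, hi in zip(bbox_min, bbox_max)]
    grid = torch.stack(torch.meshgrid(*axes, indexing="ij"), dim=-1).reshape(-1, 3)
    outs = []
    for i in range(0, grid.shape[0], chunk):                 # chunked queries to bound memory
        outs.append(nerf_fn(grid[i:i + chunk]))
    vol = torch.cat(outs, dim=0).reshape(res, res, res, 4)
    rgb, sigma = vol[..., :3], vol[..., 3:]
    alpha = 1.0 - torch.exp(-torch.relu(sigma))              # crude density-to-opacity mapping
    return torch.cat([rgb, alpha], dim=-1).permute(3, 0, 1, 2)   # (C=4, X, Y, Z) voxel features
```

The resulting grid can then be fed to a 3D backbone and anchor-based or anchor-free RPN heads, mirroring how 2D RPNs consume image feature maps.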
Chen_PiMAE_Point_Cloud_and_Image_Interactive_Masked_Autoencoders_for_3D_CVPR_2023 | Abstract Masked Autoencoders learn strong visual representations and achieve state-of-the-art results in several independent modalities, yet very few works have addressed their capabilities in multi-modality settings. In this work, we focus on point cloud and RGB image data, two modalities that are often presented together in the real world, and explore their meaningful interactions. To improve upon the cross-modal synergy in existing works, we propose PiMAE, a self-supervised pre-training framework that promotes 3D and 2D interaction through three aspects. Specifically, we first notice the importance of masking strategies between the two sources and utilize a projection module to complementarily align the mask and visible tokens of the two modalities. Then, we utilize a well-crafted two-branch MAE pipeline with a novel shared decoder to promote cross-modality interaction in the mask tokens. Finally, we design a unique cross-modal reconstruction module to enhance representation learning for both modalities. Through extensive experiments performed on large-scale RGB-D scene understanding benchmarks (SUN RGB-D and ScannetV2), we discover it is nontrivial to interactively learn point-image features, where we greatly improve multiple 3D detectors, 2D detectors, and few-shot classifiers by 2.9%, 6.7%, and 2.4%, respectively. Code is available at https://github.com/BLVLab/PiMAE. | 1. Introduction The advancements in deep learning-based technology have enabled many significant real-world applications, such as robotics and autonomous driving. Figure 1. With our proposed design, PiMAE learns cross-modal representations by interactively dealing with multi-modal data and performing reconstruction. In these scenarios, 3D and 2D data in the form of point clouds and RGB images from a specific view are readily available. Therefore, many existing methods perform multi-modal visual learning, a popular approach leveraging both 3D and 2D information for better representational abilities. Intuitively, the paired 2D pixels and 3D points present different perspectives of the same scene. They encode different degrees of information that, when combined, may become a source of performance improvement. Designing a model that interacts with both modalities, such as geometry and RGB, is a difficult task because directly feeding them to a model results in marginal, if not degraded, performance, as demonstrated by [34]. In this paper, we aim to answer the question: how to design a more interactive unsupervised multi-modal learning framework for better representation learning? To this end, we investigate the Masked Autoencoders (MAE) proposed by He et al. [20], which demonstrate a straightforward yet powerful pre-training framework for Vision Transformers [10] (ViTs) and show promising results for independent modalities of both 2D and 3D vision [2, 14, 17, 63, 64].
However, these existing MAE pre-training objectives are limited to only a single modality. While much literature has impressively demonstrated MAE approaches’ superiority in multiple modalities, exist-ing methods have yet to show promising results in bridging 3D and 2D data. For 2D scene understanding among multi-ple modalities, MultiMAE [2] generates pseudo-modalities to promote synergy for extrapolating features. Unfortu-nately, these methods rely on an adjunct model for gener-ating pseudo-modalities, which is sub-optimal and makes it hard to investigate cross-modality interaction. On the other hand, contrastive methods for self-supervised 3D and 2D representation learning, such as [1, 6, 7, 34, 58], suffer from sampling bias when generating negative samples and aug-mentation, making them impractical in real-world scenar-ios [8, 22, 73]. To address the fusion of multi-modal point cloud and im-age data, we propose PiMAE, a simple yet effective pipeline that learns strong 3D and 2D features by increasing their interaction. Specifically, we pre-train pairs of points and images as inputs, employing a two-branched MAE learn-ing framework to individually learn embeddings for the two modalities. To further promote feature alignment, we de-sign three main features. First, we tokenize the image and point inputs, and to cor-relate the tokens from different modalities, we project point tokens to image patches, explicitly aligning the masking re-lationship between them. We believe a specialized masking strategy may help point cloud tokens embed information from the image, and vice versa. Next, we utilize a novel symmetrical autoencoder scheme that promotes strong fea-ture fusion. The encoder draws inspiration from [31], con-sisting of both separate branches of modal-specific encoders and a shared-encoder. However, we notice that since MAE’s mask tokens only pass through the decoder [20], a shared-decoder design is critical in our scheme for mask tokens to learn mutual information before performing reconstructions in separate modal-specific decoders. Finally, for learning stronger features inspired by [15,67], PiMAE’s multi-modal reconstruction module tasks point cloud features to explic-itly encode image-level understanding through enhanced learning from image features. To evaluate the effectiveness of our pre-training scheme, we systematically evaluate PiMAE with different fine-tuning architectures and tasks, including 3D and 2D ob-ject detection and few-shot image classification, performed on the RGB-D scene dataset SUN RGB-D [51] and Scan-netV2 [9] as well as multiple 2D detection and classification datasets. We find PiMAE to bring improvements over state-of-the-art methods in all evaluated downstream tasks. Our main contributions are summarized as: • To the best of our knowledge, we are the first to pro-pose pre-training MAE with point cloud and RGB modalities interactively with three novel schemes. • To promote more interactive multi-modal learning, we novelly introduce a complementary cross-modal mask-ing strategy, a shared-decoder, and cross-modal recon-struction to PiMAE. • Shown by extensive experiments, our pre-trained mod-els boost performance of 2D & 3D detectors by a large margin, demonstrating PiMAE’s effectiveness. |
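One way to picture the complementary cross-modal masking described above is the sketch below: visible point-token centers are projected with the camera intrinsics onto the image patch grid, and the patches they cover are masked first, so that each modality hides what the other keeps. The pinhole-projection setup, 16x16 patch size, and exact masking rule are assumptions made for illustration; PiMAE's released code should be consulted for the actual procedure.

```python
# Minimal sketch of a complementary cross-modal masking rule: image patches hit
# by *visible* point tokens are preferentially masked.
import numpy as np

def project_to_patches(points_cam, K, patch=16, grid_hw=(14, 14)):
    """points_cam: [N, 3] token centers in the camera frame -> flat patch ids."""
    uvw = (K @ points_cam.T).T                  # [N, 3]
    uv = uvw[:, :2] / uvw[:, 2:3]               # perspective division
    cols = np.clip((uv[:, 0] // patch).astype(int), 0, grid_hw[1] - 1)
    rows = np.clip((uv[:, 1] // patch).astype(int), 0, grid_hw[0] - 1)
    return rows * grid_hw[1] + cols

def complementary_image_mask(visible_pt_centers, K, mask_ratio=0.75,
                             grid_hw=(14, 14), rng=np.random):
    n_patches = grid_hw[0] * grid_hw[1]
    hit = np.zeros(n_patches, dtype=bool)
    hit[project_to_patches(visible_pt_centers, K, grid_hw=grid_hw)] = True
    n_mask = int(mask_ratio * n_patches)
    # mask patches covered by visible point tokens first, fill the rest randomly
    order = np.concatenate([np.flatnonzero(hit),
                            rng.permutation(np.flatnonzero(~hit))])
    mask = np.zeros(n_patches, dtype=bool)
    mask[order[:n_mask]] = True                 # True = masked image patch
    return mask
```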
Huang_Anchor3DLane_Learning_To_Regress_3D_Anchors_for_Monocular_3D_Lane_CVPR_2023 | Abstract Monocular 3D lane detection is a challenging task due to its lack of depth information. A popular solution is to first transform the front-viewed (FV) images or features into the bird-eye-view (BEV) space with inverse perspective map-ping (IPM) and detect lanes from BEV features. However, the reliance of IPM on flat ground assumption and loss of context information make it inaccurate to restore 3D in-formation from BEV representations. An attempt has been made to get rid of BEV and predict 3D lanes from FV repre-sentations directly, while it still underperforms other BEV-based methods given its lack of structured representation for 3D lanes. In this paper, we define 3D lane anchors in the 3D space and propose a BEV-free method named An-chor3DLane to predict 3D lanes directly from FV represen-tations. 3D lane anchors are projected to the FV features to extract their features which contain both good structural and context information to make accurate predictions. In addition, we also develop a global optimization method that makes use of the equal-width property between lanes to reduce the lateral error of predictions. Extensive experi-ments on three popular 3D lane detection benchmarks show that our Anchor3DLane outperforms previous BEV-based methods and achieves state-of-the-art performances. The code is available at: https://github.com/tusen-ai/Anchor3DLane . | 1. Introduction Monocular 3D lane detection, which aims at estimating the 3D coordinates of lane lines from a frontal-viewed im-age, is one of the essential modules in autonomous driv-*Work done while at TuSimple Front-ViewedImage/FeatureIPMBEV LaneDetection2D Lane DetectionDepth EstimationProjecting Dense Depth MapSegmentation Mask Front-ViewedImage/Feature Bird-Eye-ViewedImage/Feature Lane Heights Front-ViewedImage/Feature Anchor Projection3D AnchorsFeatureSamplingIterative RegressionBEV Lanes3D Lanes 3D Lanes 3D Lanes ["$%&] ["$%&] ["$%&] 3DLane Detection(a) (b) (c)Figure 1. (a) BEV-based methods, which perform lane detection in the warped BEV images or features. (b) Non-BEV method, which projects 2D lane predictions back to 3D space with esti-mated depth. (c) Our Anchor3DLane projects 3D anchors into FV features to sample features for 3D prediction directly. ing systems. Accurate and robust perception of 3D lanes is not only critical for stable lane keeping, but also serves as an important component for downstream tasks like high-definition map construction [21, 25], and trajectory plan-ning [1, 40]. However, due to the lack of depth informa-tion, estimating lanes in 3D space directly from 2D image domain still remains very challenging. A straightforward way to tackle the above challenges is to detect lanes from the bird-eye-viewed (BEV) space. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 17451 As illustrated in Figure 1(a), a common practice of BEV-based methods [5, 7, 8, 20] is to warp images or features from frontal-viewed (FV) space to BEV with inverse per-spective mapping (IPM), thereby transforming the 3D lane detection task into 2D lane detection task in BEV . 
To project the detected BEV lanes back into 3D space, coordinates of the lane points are then combined with their corresponding height values which are estimated by a height estimation head. Though proven effective, their limitations are still ob-vious: (1) IPM relies on a strict assumption of flat ground, which does not hold true for uphill or downhill cases. (2) Since IPM warps the images on the basis of ground, some useful height information as well as the context information above the road surface are lost inevitably. For example, ob-jects like vehicles on the road are severely distorted after warping. Therefore, information lost brought by IPM hin-ders the accurate restoration of 3D information from BEV representations. Given the above limitations of BEV , some works tried to predict 3D lanes from FV directly. As illustrated in Fig-ure 1(b), SALAD [36] decomposes 3D lane detection task into 2D lane segmentation and dense depth estimation. The segmented 2D lanes are projected into 3D space with cam-era intrinsic parameters and the estimated depth informa-tion. Even though getting rid of the flat ground assumption, SALAD lacks structured representations of 3D lanes. As a result, it is unnatural to extend it to more complex 3D lane settings like multi-view or multi-frame. Moreover, their performance is still far behind the state-of-the-art methods due to the unstructured representation. In this paper, we propose a novel BEV-free method named Anchor3DLane to predict 3D lanes directly from FV concisely and effectively. As shown in Figure 1(c), our Anchor3DLane defines lane anchors as rays in the 3D space with given pitches and yaws. Afterward, we first project them to corresponding 2D points in FV space us-ing camera parameters, and then obtain their features by bilinear sampling. A simple classification head and a re-gression head are adopted to generate classification proba-bilities and 3D offsets from anchors respectively to make final predictions. Unlike the information loss in IPM, sam-pling from original FV features retains richer context infor-mation around lanes, which helps estimate 3D information more accurately. Moreover, our 3D lane anchors can be it-eratively refined to sample more accurate features to better capture complex variations of 3D lanes. Furthermore, An-chor3DLane can be easily extended to the multi-frame set-ting by projecting 3D anchors to adjacent frames with the assistance of camera poses between frames, which further improves performances over single-frame prediction. In addition, we also utilize global constraints to refine the challenging distant parts due to low resolution. The moti-vation is based on an intuitive insight that lanes in the sameimage appear to be parallel in most cases except for the fork lanes, i.e., distances between different point pairs on each lane pair are nearly consistent. By applying a global equal-width optimization to non-fork lane pairs, we adjust 3D lane predictions to make the width of lane pairs consistent from close to far. The lateral error of distant parts of lane lines can be further reduced through the above adjustment. Our contributions are summarized as follows: • We propose a novel Anchor3DLane framework that di-rectly defines anchors in 3D space and regresses 3D lanes directly from FV without introducing BEV . An extension to the multi-frame setting of Anchor3DLane is also proposed to leverage the well-aligned temporal information for further performance improvement. 
• We develop a global optimization method that utilizes the equal-width property of lanes for refinement. • Without bells and whistles, our Anchor3DLane outperforms previous BEV-based methods and achieves state-of-the-art performance on three popular 3D lane detection benchmarks. |
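The core sampling step of Anchor3DLane, projecting 3D anchor points into the front-view feature map with the camera matrix and bilinearly sampling per-point features, can be sketched as follows. The tensor shapes, the assumption that anchor points are already expressed in the camera frame, and the helper name are illustrative rather than the authors' implementation.

```python
# Minimal sketch: project 3D anchor points to the image plane and sample
# front-view features with bilinear interpolation (grid_sample).
import torch
import torch.nn.functional as F

def sample_anchor_features(fv_feat, anchors_3d, K, img_hw):
    """
    fv_feat:    [B, C, Hf, Wf] front-view feature map
    anchors_3d: [B, N, L, 3]   N anchors, L points each, (x, y, z) in camera frame
    K:          [B, 3, 3]      camera intrinsics
    img_hw:     (H, W)         image resolution the intrinsics refer to
    """
    B, N, L, _ = anchors_3d.shape
    pts = anchors_3d.reshape(B, N * L, 3)
    uvw = torch.einsum('bij,bnj->bni', K, pts)           # [B, N*L, 3]
    uv = uvw[..., :2] / uvw[..., 2:3].clamp(min=1e-5)    # pixel coordinates
    H, W = img_hw
    grid = torch.stack([uv[..., 0] / (W - 1) * 2 - 1,    # normalize to [-1, 1]
                        uv[..., 1] / (H - 1) * 2 - 1], dim=-1)
    feats = F.grid_sample(fv_feat, grid.unsqueeze(2), align_corners=True)
    return feats.squeeze(-1).reshape(B, -1, N, L)        # [B, C, N, L]
```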
Aimar_Balanced_Product_of_Calibrated_Experts_for_Long-Tailed_Recognition_CVPR_2023 | Abstract Many real-world recognition problems are characterized by long-tailed label distributions. These distributions make representation learning highly challenging due to limited generalization over the tail classes. If the test distribution differs from the training distribution, e.g. uniform versus long-tailed, the problem of the distribution shift needs to be addressed. A recent line of work proposes learning multiple diverse experts to tackle this issue. Ensemble di-versity is encouraged by various techniques, e.g. by spe-cializing different experts in the head and the tail classes. In this work, we take an analytical approach and extend the notion of logit adjustment to ensembles to form a Bal-anced Product of Experts (BalPoE). BalPoE combines a family of experts with different test-time target distributions, generalizing several previous approaches. We show how to properly define these distributions and combine the ex-perts in order to achieve unbiased predictions, by proving that the ensemble is Fisher-consistent for minimizing the balanced error. Our theoretical analysis shows that our balanced ensemble requires calibrated experts, which we achieve in practice using mixup. We conduct extensive exper-iments and our method obtains new state-of-the-art results on three long-tailed datasets: CIFAR-100-LT, ImageNet-LT, and iNaturalist-2018. Our code is available at https: //github.com/emasa/BalPoE-CalibratedLT . | 1. Introduction Recent developments within the field of deep learning, enabled by large-scale datasets and vast computational re-sources, have significantly contributed to the progress in many computer vision tasks [33]. However, there is a discrep-ancy between common evaluation protocols in benchmark datasets and the desired outcome for real-world problems. Many benchmark datasets assume a balanced label distribu-tion with a sufficient number of samples for each class. In *Affiliation: Husqvarna Group, Huskvarna, Sweden. †Co-affiliation: University of KwaZulu-Natal, Durban, South Africa. 1 2 3 4 5 6 7 Number of experts505152535455565758Accuracy [%] BalPoE (ours) NCL BCL SADE PACO 0.0 0.2 0.4 0.6 0.8 1.0 Confidence0.00.20.40.60.81.0AccuracyECE = 23.1% MCE = 34.7%(a) SOTA comparison (b) BS 0.0 0.2 0.4 0.6 0.8 1.0 Confidence0.00.20.40.60.81.0AccuracyECE = 17.3% MCE = 28.0% 0.0 0.2 0.4 0.6 0.8 1.0 Confidence0.00.20.40.60.81.0AccuracyECE = 4.1% MCE = 9.2% (c) SADE (d) Our approach Figure 1. (a) SOTA comparison. Many SOTA approaches employ Balanced Softmax (BS) [51] and its extensions to mitigate the long-tailed bias [12, 38, 70, 72]. However, we observe that single-expert bias-adjusted models, as well as multi-expert extensions, are still poorly calibrated by default. Reliability plots for (b) BS, (c) SADE [70] and (d) Calibrated BalPoE (our approach). this setting, empirical risk minimization (ERM) has been widely adopted to solve multi-class classification and is the key to many state-of-the-art (SOTA) methods, see Figure 1. Unfortunately, ERM is not well-suited for imbalanced or long-tailed (LT) datasets, in which head classes have many more samples than tail classes , and where an unbiased pre-dictor is desired at test time. This is a common scenario for real-world problems, such as object detection [42], med-ical diagnosis [18] and fraud detection [50]. On the one hand, extreme class imbalance biases the classifier towards head classes [31, 57]. 
On the other hand, the paucity of data prevents learning good representations for less-represented classes, especially in few-shot data regimes [67]. Addressing class imbalance is also relevant from the perspective of algorithmic fairness, since incorporating unfair biases into the models can have life-changing consequences in real-world decision-making systems [46]. Previous work has approached the problem of class imbalance by different means, including data re-sampling [4, 8], cost-sensitive learning [1, 65], and margin modifications [6, 56]. While intuitive, these methods are not without limitations. In particular, over-sampling can lead to overfitting of rare classes [13], under-sampling common classes might hinder feature learning due to the omission of valuable information [31], and loss re-weighting can result in optimization instability [6]. Complementary to these approaches, ensemble learning has empirically shown benefits over single-expert models in terms of generalization [34] and predictive uncertainty [36] on balanced datasets. Recently, expert-based approaches [5, 61] also show promising performance gains in the long-tailed setting. However, the theoretical reasons for how expert diversity leads to better generalization over different label distributions remain largely unexplored. In this work, we seek to formulate an ensemble where each expert targets different classes in the label distribution, and the ensemble as a whole is provably unbiased. We accomplish this by extending the theoretical background for logit adjustment to the case of learning diverse expert ensembles. We derive a constraint for the target distributions, defined in terms of expert-specific biases, and prove that fulfilling this constraint yields Fisher consistency with the balanced error. In our formulation, we assume that the experts are calibrated, which we find not to be the case by default in practice. Thus, we need to ensure that the assumption is met, which we realize using mixup [69]. Our contributions can be summarized as follows: • We extend the notion of logit adjustment based on label frequencies to balanced ensembles. We show that our approach is theoretically sound by proving that it is Fisher-consistent for minimizing the balanced error. • Proper calibration is a necessary requirement to apply the previous theoretical result. We find that mixup is vital for expert calibration, which is not fulfilled by default in practice. Meeting the calibration assumption ensures Fisher consistency, and performance gains follow. • Our method reaches new state-of-the-art results on three long-tailed benchmark datasets. |
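A minimal sketch of the flavor of this idea is given below: each expert is trained with a cross-entropy loss whose logits are shifted by a differently scaled log-prior (logit adjustment), and the experts' log-probabilities are averaged at test time. The specific tau values and the simple averaging rule are assumptions; the paper derives the exact constraint the expert biases must satisfy for Fisher consistency.

```python
# Sketch: logit-adjusted experts with different bias strengths, averaged into a
# balanced ensemble. Tau values and the averaging rule are illustrative.
import torch
import torch.nn.functional as F

def expert_losses(expert_logits, targets, class_counts, taus=(0.0, 1.0, 2.0)):
    """expert_logits: list of [B, C]; class_counts: [C] training label counts."""
    log_prior = torch.log(class_counts.float() / class_counts.sum())
    losses = []
    for logits, tau in zip(expert_logits, taus):
        # adding tau * log_prior during training removes a tau-scaled head-class bias
        losses.append(F.cross_entropy(logits + tau * log_prior, targets))
    return sum(losses) / len(losses)

def balanced_ensemble(expert_logits):
    # average expert log-probabilities (a product of experts in probability space)
    return torch.stack([F.log_softmax(l, dim=-1) for l in expert_logits]).mean(0)
```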
Chen_Enhanced_Multimodal_Representation_Learning_With_Cross-Modal_KD_CVPR_2023 | Abstract This paper explores the tasks of leveraging auxiliary modalities which are only available at training to enhance multimodal representation learning through cross-modal Knowledge Distillation (KD). The widely adopted mutual information maximization-based objective leads to a short-cut solution of the weak teacher, i.e., achieving the max-imum mutual information by simply making the teacher model as weak as the student model. To prevent such a weak solution, we introduce an additional objective term, i.e., the mutual information between the teacher and the auxiliary modality model. Besides, to narrow down the in-formation gap between the student and teacher, we further propose to minimize the conditional entropy of the teacher given the student. Novel training schemes based on con-trastive learning and adversarial learning are designed to optimize the mutual information and the conditional en-tropy, respectively. Experimental results on three popular multimodal benchmark datasets have shown that the pro-posed method outperforms a range of state-of-the-art ap-proaches for video recognition, video retrieval and emotion classification. | 1. Introduction Multimodal learning has shown much promise in a wide spectrum of applications, such as video understand-ing [20, 39, 44], sentiment analysis [1, 19, 47] and medi-cal image segmentation [7, 49]. It is generally recognized that including more modalities helps the prediction accu-racy. For many real-world applications, while one has to make predictions based on limited modalities due to effi-ciency or cost concerns, it is usually possible to collect ad-ditional modalities at training. To enhance the representa-tion learned, several studies have thus considered to lever-age such modalities as a form of auxiliary information at training [11, 23, 29, 50]. Several attempts have explored to fill in the auxiliary modalities at test time, by employ-ing approaches such as Generative Adversarial Network Figure 1. Illustration of the difference between the MI-based method and the proposed AMID method. (a) MI-based methods maximize I(M;S). (b) AMID maximizes I(M;S)andI(M;A), while minimizing H(M|S). (GAN) [4, 22]. However, such data generation-based ap-proaches introduce extra computation costs at inference. As an alternative, cross-modal knowledge distillation is proposed, where a teacher model trained with both the tar-get modalities and the auxiliary modalities (denoted as full modalities hereafter), is employed to guide the training of a student model with target modalities only [10,11,16,23,45, 46]. In such a way, the student model directly learns to em-bed information about auxiliary modalities and involves no additional computational cost at inference. A typical prac-tice is to pre-train the teacher model from the full modality data, and fix it during the training. While such a setting enables the teacher to provide information-rich representa-tion [23, 46], it may not be favorable for the alignment of the teacher and student, due to the large information gap between them, especially at the early stage of training [32]. To better facilitate the knowledge transfer, online KD is proposed to jointly train the teacher and the student. One common solution is to maximize the mutual informa-tion (MI) between the teacher and the student in a shared This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. 
Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 11766 space [42]. However, such an objective only focuses on maximizing the shared knowledge, in terms of percentage, from the teacher to the student, which is necessary but not sufficient to guarantee a complete knowledge transfer. As il-lustrated in Fig. 1(a), a short-cut solution is to simply make the teacher model capture less information, i.e., shrinking the entropy of the teacher. Under the framework online KD, this paper explores to simultaneously achieve: 1) maintaining the representation capacity of the full modality teacher as much as possible, and 2) narrowing down the information gap between the teacher and the student. As Fig. 1(b) shows, in addition to maximizing the MI between the teacher and the student, maximizing the MI between the teacher and an additional auxiliary model may contribute to the former goal, while minimizing the conditional entropy of the teacher given the student may benefit the latter goal. We thus propose a new objective function consisting of all the above three terms. To achieve the optimization of the MI, we further derive a new form of its lower bound by taking into account the supervision of the label, and propose a contrastive learning-based approach to implement it. To minimize the condi-tional entropy, an adversarial learning-based approach is in-troduced, where additional discriminators are used to distin-guish representations between the teacher and the student. We name the proposed method Adversarial Mutual Infor-mation Distillation (AMID) hereafter. To validate the effectiveness of AMID, extensive exper-iments on three popular multimodal tasks including video recognition, video retrieval and emotion classification tasks are conducted. The performance of these tasks are con-ducted on UCF51 [40], ActivityNet [20] and IEMOCAP [6] benchmark datasets, respectively. The results show that AMID outperforms a range of state-of-the-art approaches on all three tasks. To summarize, the main contribution of this paper is threefold: • We propose a novel cross-modal KD method, Adver-sarial Mutual Information Distillation (AMID), to pre-vent the teacher from losing information by maximiz-ing the MI between it and an auxiliary modality model. • AMID maximally transfers information from the teacher to the student by optimizing the MI between them and minimizing the conditional entropy of the teacher given the student. • To implement the joint optimization of the three objec-tive terms, a novel training approach that leverages the combined advantage of both contrastive learning and adversarial learning is proposed. |
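For the mutual-information terms, a standard contrastive (InfoNCE) estimator over paired teacher and student embeddings gives a concrete picture. The sketch below is the generic unsupervised form with an assumed temperature and projection dimension; the paper derives a label-aware variant of this bound.

```python
# Sketch: InfoNCE lower bound on teacher-student mutual information. Paired
# embeddings of the same sample are positives; other batch pairs are negatives.
import torch
import torch.nn.functional as F

def info_nce(teacher_z, student_z, temperature=0.1):
    """teacher_z, student_z: [B, D] projected embeddings for the same samples."""
    t = F.normalize(teacher_z, dim=-1)
    s = F.normalize(student_z, dim=-1)
    logits = s @ t.t() / temperature                 # [B, B] similarity matrix
    labels = torch.arange(s.size(0), device=s.device)
    # minimizing this loss maximizes the contrastive MI bound
    return F.cross_entropy(logits, labels)
```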
Dexheimer_Learning_a_Depth_Covariance_Function_CVPR_2023 | Abstract We propose learning a depth covariance function with applications to geometric vision tasks. Given RGB images as input, the covariance function can be flexibly used to define priors over depth functions, predictive distributions given observations, and methods for active point selection. We leverage these techniques for a selection of downstream tasks: depth completion, bundle adjustment, and monocular dense visual odometry. | 1. Introduction Inferring the 3D structure of the world from 2D images is an essential computer vision task. In recent years, there has been significant interest in combining principled multi-ple view geometry with data-driven priors. Learning-based methods that predict geometry provide a prior directly over the latent variables, which avoids the ill-posed configura-tions of traditional methods. However, direct, overconfident priors may prevent realization of the true 3D structure when multi-view geometry is well-defined. For example, data-driven methods have shown tremendous promise in monoc-ular depth estimation, but often lack consistency when fus-ing information into 3D space. Thus far, designing a unified framework that combines the best of learning and optimization methods has proven challenging. Recent data-driven methods have attempted to relax rigid geometric constraints by also predicting low-dimensional subspaces or per-pixel uncertainties. Fixing capacity during training is often wasteful or inflexible at test time, while per-pixel residual distributions typically explain away the limitations of the model instead of the relationship between errors. In reality, the 3D world is anywhere from simple to complex, and an ideal system should explicitly adapt its capacity and correlation based on the scene. In this paper, rather than directly predicting geometry from images, we propose learning a depth covariance func-tion. Given an RGB image, our method predicts how the depths of any two pixels relate. To achieve this, a neural net-work transforms color information to a feature space, and Figure 1. Example monocular reconstruction using the depth co-variance for bundle adjustment and dense depth prediction from three seconds (100 frames) of TUM data. Three representative images and the mesh from TSDF fusion of the depth predictions are shown. Each frame leverages the learned covariance function to model geometric correlation between pairs of scene points. a Gaussian process (GP) models a prior distribution given these features and a base kernel function. The distinction between image processing and the prior enables promoting locality and granting flexible capacity at test time. Locality avoids over-correlating pixels on distinct structures, while adaptive capacity permits tuning the complexity of our sub-space to the content of the viewed scene. Learning this flexible, high-level prior allows for bal-ancing data-driven methods with test-time optimization that can be applied to a variety of geometric vision tasks. Fur-thermore, the covariance function is agnostic to the 3D rep-resentation as it does not directly learn a geometric output. Depth maps may be requested by conditioning on observa-tions, but the prior may also be leveraged for inferring the desired latent 3D representation. In Figure 1, we illustrate depth covariance along with an example of bundle adjust-ment, dense depth prediction, and multi-view fusion. 
In summary, our key contributions are: • A framework for learning the covariance by selecting a depth representation, a base kernel function, and a scalable optimization objective • Application of the prior to depth completion, bundle adjustment, and monocular dense visual odometry |
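As a toy illustration of how such a covariance function can be used, the sketch below performs standard Gaussian process conditioning: given learned per-pixel features, an assumed RBF base kernel, and a handful of sparse depth observations, it returns a dense predictive mean depth. The kernel choice, noise level, and depth parameterization are placeholders, not the paper's learned components.

```python
# Sketch: GP regression mean for dense depth, conditioned on sparse observations,
# with an RBF kernel over learned per-pixel features.
import numpy as np

def rbf_kernel(fa, fb, lengthscale=1.0):
    d2 = ((fa[:, None, :] - fb[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_depth_mean(feat_all, feat_obs, depth_obs, noise=1e-2, prior_mean=0.0):
    """
    feat_all:  [N, D] features at every query pixel
    feat_obs:  [M, D] features at the sparsely observed pixels
    depth_obs: [M]    observed (e.g. log-)depths at those pixels
    """
    K_oo = rbf_kernel(feat_obs, feat_obs) + noise * np.eye(len(feat_obs))
    K_qo = rbf_kernel(feat_all, feat_obs)
    alpha = np.linalg.solve(K_oo, depth_obs - prior_mean)
    return prior_mean + K_qo @ alpha          # [N] predictive mean depth
```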
Hou_Evading_DeepFake_Detectors_via_Adversarial_Statistical_Consistency_CVPR_2023 | Abstract In recent years, as various realistic face forgery tech-niques known as DeepFake improves by leaps and bounds, more and more DeepFake detection techniques have been proposed. These methods typically rely on detecting statis-tical differences between natural (i.e., real) and DeepFake-generated images in both spatial and frequency domains. In this work, we propose to explicitly minimize the statistical differences to evade state-of-the-art DeepFake detectors. To this end, we propose a statistical consistency attack (StatAt-tack) against DeepFake detectors, which contains two main parts. First, we select several statistical-sensitive natural degradations (i.e., exposure, blur, and noise) and add them to the fake images in an adversarial way. Second, we find that the statistical differences between natural and DeepFake im-ages are positively associated with the distribution shifting between the two kinds of images, and we propose to use a distribution-aware loss to guide the optimization of different degradations. As a result, the feature distributions of gen-erated adversarial examples is close to the natural images. Furthermore, we extend the StatAttack to a more power-ful version, MStatAttack, where we extend the single-layer degradation to multi-layer degradations sequentially and use the loss to tune the combination weights jointly. Compre-hensive experimental results on four spatial-based detectors and two frequency-based detectors with four datasets demon-strate the effectiveness of our proposed attack method in both white-box and black-box settings. | 1. Introduction Recent advances in facial generation and manipulation using deep generative approaches ( i.e., DeepFake [30]) have attracted considerable media and public attentions. Fake images can be easily created using a variety of free and open-source tools. However, the misuse of DeepFake raises *Corresponding author: tsingqguo@ieee.org Figure 1. Principle of our method. The light blue region and the light red region represent the embedding space of natural/real im-ages and fake images, respectively. The dark blue region represents the embedding spaces of real images shared by different detectors. The first row shows that a typical attack can map the fake samples (i.e., the orange points) to the ‘real’ samples that can fool detector A but fail to mislead detector B. The second row shows that our method is to map the fake samples to the common regions of differ-ent detectors, which can fool both detectors. security and privacy concerns, particularly in areas such as politics and pornography [6, 46]. The majority of back-end technologies for DeepFake rely on generative adversarial net-works (GANs). As GANs continue to advance, state-of-the-art (SOTA) DeepFake has achieved a level of sophistication that is virtually indistinguishable to human eyes. Although these realistic fake images can spoof human eyes, SOTA DeepFake detectors can still effectively detect subtle ‘fake features’ by leveraging the powerful feature ex-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 12271 traction capabilities of deep neural networks (DNNs). 
How-ever, recent studies [2, 11] have shown that these detectors are vulnerable to adversarial attacks that can bypass detectors by injecting perturbations into fake images. Additionally, adversarial examples pose a practical threat to DeepFake de-tection if they can be transferred between different detectors. Adversarial attacks are often used to verify the robustness of DeepFake detectors [25]. Therefore, in order to further research and develop robust DeepFake detectors, it is crucial to develop effective and transferable adversarial attacks. Several previous studies have explored the transferabil-ity of adversarial examples [50, 51], which refers to the ability of adversarial examples designed for a specific vic-tim model to attack other models trained for the same task. However, achieving transferable attacks against DeepFake detectors is particularly challenging due to variations in net-work architectures and training examples caused by different data augmentation and pre-processing methods. These dif-ferences often result in poor transferability of adversarial examples crafted from typical attack methods when faced with different DeepFake detectors. Current detection methods typically rely on detecting sta-tistical differences in spatial and frequency domains between natural and DeepFake-generated images, (as explained in Section 3), and various detectors share some common sta-tistical properties of natural images [8, 36, 53]. These prior knowledge and discoveries inspire us to design an attack method with strong transferability that can minimize statis-tical differences explicitly. Toward the transferable attack, we propose a novel attack method, StatAttack. Specifically, we select three types of statistical-sensitive natural degra-dations, including exposure, blur, and noise, and add them to fake images in an adversarial manner. In addition, our analysis indicates that the statistical differences between the real and fake image sets are positively associated with their distribution shifting. Hence, we propose to mitigate these differences by minimizing the distance between the fea-ture distributions of fake images and that of natural images. To achieve this, we introduce a novel distribution-aware loss function that effectively minimizes the statistical dif-ferences. Figure 1 illustrates the principle of our proposed attack method. Moreover, We expand our attack method to a more powerful version, MStatAttack. This improved approach performs multi-layer degradations and can dynam-ically adjust the weights of each degradation in different layers during each attack step. With the MStatAttack, we can develop more effective attack strategies and generate adversarial examples that appear more natural. Our contributions can be summarized as the following: •We propose a novel natural degradation-based attack method, StatAttack. StatAttack can fill the feature dis-tributions difference between real and fake images by minimizing a distribution-aware loss.•To enhance the StatAttack, we further propose a multi-layer counterpart, MStatAttack, which can select a more effective combination of perturbations and generate more natural-looking adversarial examples. •We conduct comprehensive experiments on four spatial-based and two frequency-based DeepFake detectors using four datasets. The experimental results demon-strate the effectiveness of our attack in both white-box and black-box settings. |
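The distribution-aware loss is described only at a high level here, so the snippet below shows one plausible instantiation: an RBF-kernel maximum mean discrepancy (MMD) between features of degraded fake images and of natural images, minimized with respect to the differentiable degradation parameters. The choice of MMD and the bandwidth are assumptions rather than the authors' exact objective; the point is that the attack shrinks a set-level distribution gap instead of a per-image score alone.

```python
# Sketch: RBF-MMD between fake-image and natural-image feature batches, usable
# as a distribution-aware objective for optimizing degradation parameters.
import torch

def rbf_mmd2(x, y, sigma=1.0):
    """x: [N, D] fake-image features, y: [M, D] natural-image features."""
    def k(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# If the fake features are produced through differentiable degradations
# (exposure, blur, noise), gradient descent on rbf_mmd2(fake, real) adjusts the
# degradation parameters toward statistically "natural" outputs.
```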
Basu_RMLVQA_A_Margin_Loss_Approach_for_Visual_Question_Answering_With_CVPR_2023 | Abstract Visual Question Answering models have been shown to suffer from language biases, where the model learns a cor-relation between the question and the answer, ignoring the image. While early works attempted to use question-only models or data augmentations to reduce this bias, we pro-pose an adaptive margin loss approach having two com-ponents. The first component considers the frequency of answers within a question type in the training data, which addresses the concern of the class-imbalance causing the language biases. However, it does not take into account the answering difficulty of the samples, which impacts their learning. We address this through the second component, where instance-specific margins are learnt, allowing the model to distinguish between samples of varying complex-ity. We introduce a bias-injecting component to our model, and compute the instance-specific margins from the confi-dence of this component. We combine these with the esti-mated margins to consider both answer-frequency and task-complexity in the training loss. We show that, while the mar-gin loss is effective for out-of-distribution (ood) data, the bias-injecting component is essential for generalising to in-distribution (id) data. Our proposed approach, Robust Mar-gin Loss for Visual Question Answering (RMLVQA)1im-proves upon the existing state-of-the-art results when com-pared to augmentation-free methods on benchmark VQA datasets suffering from language biases, while maintaining competitive performance on id data, making our method the most robust one among all comparable methods. | 1. Introduction Visual question answering (VQA) lies at the intersection of computer vision and natural language processing. It is 1Code available at https://github.com/val-iisc/RMLVQAthe task of answering a question based on a given image. VQA networks need to combine knowledge from both vi-sual scene and the question to predict the answer. These systems have numerous applications such as aiding the vi-sually impaired in understanding their surroundings, image retrieval systems in e-commerce, and robotics. With the success of deep learning, research in VQA has made great strides in recent years [3, 5, 16]. However, stud-ies have shown that deep networks may learn correlations between the question and answer alone, ignoring the image modality [3, 21, 36]. They fail to do multimodal reasoning, specifically, if there is a class imbalance in the answer distri-butions of the training and test sets. For example, if most of the questions starting with “What color..?” are paired with the answer “red” in the training data, the model memorizes this trend to answer “red” for all color based questions in the test set, irrespective of the image. One solution to this problem is to perform data augmen-tations in various ways [1,4,11,15,27,36,42,43,46]. These methods outperform most other debiasing techniques in the literature. However, augmentation strategies are dependent on the dataset in question and the type of biases observed, which makes the process manual and tedious. Another ex-tensively explored solution is to learn the bias in the dataset separately using a question-only branch [9, 12, 18] and ex-plicitly removing the learnt bias from the base model. Margin losses have been widely used for a number of tasks. Cao et al. 
[10] address the problem of long tailed recognition through an adaptive margin loss, ensuring higher cosine margin penalty for the tail classes and lower penalty for other classes. A similar adaptive cosine mar-gin penalty has been applied to mitigate the language bias problem in VQA by a previous work [17]. Margin losses have been widely used in deep face recognition as well, where it is desired that the features of two images of the same person should be as similar as possible, whereas those of two different people should be far apart. Margin losses This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 11671 are used in this regard for facial feature discrimination to maximize the decision margin. Rather than the Euclidean space, both these margin losses project their feature spaces onto a fixed radius hypersphere. While the well known CosFace loss [41] uses cosine margin penalties like Cao et al. [10], ArcFace [14], uses angular margins to maximize the decision margins among different classes in this feature space. Angles correspond to geodesic distance on the pro-jected hypersphere which stabilizes the training process and is shown to improve the discriminative power of face recog-nition models as compared to cosine margin based methods. We believe that the language bias problem of VQA can be addressed by learning discriminative features for the bi-ased and unbiased samples in the dataset. To this end, we implement an adaptive angular margin loss, inspired by Ar-cFace, as margins allow models to distinguish between dif-ferent kinds of samples. However, the key question here would be how to set the adaptive margins to cater to the specific problem of language biases. Traditional models over-trust the questions over the images due to a class im-balance in the training and test set answer distributions. Hence, one way to set the margins would be to ensure that the training samples with frequent answers are given a smaller penalty due to the abundance of those answers in the dataset, whereas those with rare answers are given a higher margin value. We refer to the resultant margin val-ues as the frequency-based margins. However, one factor that is ignored in this aspect is the answering difficulty of each sample. We argue that more margin should be given to hard samples compared to the easier ones even if the cor-responding answers are frequent. In this regard, we pro-pose the learning of instance-specific margins during train-ing, so as to allow the model to distinguish between hard and easy samples alongside frequent and rare samples. To the best of our knowledge, this is one of the first attempts in learning margins automatically and parallely during train-ing. We combine these learnable margins with the fre-quency based margins used in prior works [17], so as to allow both frequency and complexity of data samples to control the training dynamics. To compute these learnable margins, we introduce a bias injecting module to our model that is trained using Cross-Entropy ( CE) loss. We show that theCEloss additionally clusters the training samples based on the dataset bias itself, thereby aiding the margin loss fur-ther. We further introduce a supervised contrastive loss [23] to pull the features of samples having the same answer to-gether, while pushing others apart. 
Adaptive margin losses, with margins calculated by us-ing frequency of answers in the dataset, perform well in the oodsetting. However, we show that they cause a drop in the in-domain performance, which raises questions on their robustness. As pointed by [40], it is crucial for VQA mod-els to perform well on both in-domain and ooddata sincethe test set distibution is unknown apriori. We mitigate this issue by ensembling the outputs of the bias injecting compo-nent and the proposed learnable margin-loss trained classi-fication head during inference. This makes the model robust to the difference in answer distributions of the training and test sets. Our overall method is called RMLVQA -Robust Margin Loss for Visual Question Answering with language biases. The key contributions of this work can be summa-rized as follows: • We propose to mitigate the well known problem of lan-guage bias in VQA models by introducing an instance-specific adaptive margin loss, to allow the use of dif-ferent margins for the learning of samples with vary-ing complexities, in addition to the use of frequency-based margins. To achieve this, we introduce a bias-injecting component and allow the margins to be com-puted based on prediction probabilities of this branch. We show that this clusters samples in the feature space based on the bias present in the dataset. • We propose to overcome the id-ood trade-off in margin-based losses, by ensembling the outputs of the bias-injecting component and the main model. • We further introduce a supervised contrastive loss that pulls features of training samples having the same ground truth answers together, while pushing apart others. This aids the margin loss further. • Through extensive experiments and ablations, we show how the proposed approach achieves state-of-the art results when compared to augmentation-free meth-ods on the oodVQA-CP v2 dataset, while maintaining competitive performance on the idVQA v2 dataset. This makes our model the most robust one among all non-augmentation based methods. The code, hyperparameter analyses, and results on multiple datasets are shared as supplementary material. |
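The instance-specific angular margin can be sketched directly from the ArcFace formulation referenced above: the target-class cosine is replaced by cos(theta + m_i) with a per-sample margin m_i (for example, a frequency-based term plus a learned, confidence-derived term) before scaling and cross-entropy. The scale value and clamping below are conventional assumptions, not the exact RMLVQA settings.

```python
# Sketch: ArcFace-style additive angular margin with per-instance margins.
import torch
import torch.nn.functional as F

def adaptive_arc_margin_loss(features, weight, targets, margins, s=30.0):
    """
    features: [B, D]   weight: [C, D]   targets: [B]
    margins:  [B] per-sample angular margins
    """
    cos = F.normalize(features) @ F.normalize(weight).t()        # [B, C]
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    target_logit = torch.cos(theta.gather(1, targets[:, None]) + margins[:, None])
    logits = cos.scatter(1, targets[:, None], target_logit)
    return F.cross_entropy(s * logits, targets)
```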
Gu_ViP3D_End-to-End_Visual_Trajectory_Prediction_via_3D_Agent_Queries_CVPR_2023 | Abstract Perception and prediction are two separate modules in the existing autonomous driving systems. They interact with each other via hand-picked features such as agent bounding boxes and trajectories. Due to this separation, prediction, as a downstream module, only receives limited information from the perception module. To make matters worse, er-rors from the perception modules can propagate and accu-mulate, adversely affecting the prediction results. In this work, we propose ViP3D, a query-based visual trajectory prediction pipeline that exploits rich information from raw videos to directly predict future trajectories of agents in a scene. ViP3D employs sparse agent queries to detect, track, ∗Equal contribution. †Corresponding to: hangzhao@mail.tsinghua.edu.cnand predict throughout the pipeline, making it the first fully differentiable vision-based trajectory prediction approach. Instead of using historical feature maps and trajectories, useful information from previous timestamps is encoded in agent queries, which makes ViP3D a concise streaming pre-diction method. Furthermore, extensive experimental re-sults on the nuScenes dataset show the strong vision-based prediction performance of ViP3D over traditional pipelines and previous end-to-end models.1 | 1. Introduction An autonomous driving system should be able to per-ceive agents in the current environment and predict their future behaviors so that the vehicle can navigate the world 1Code and demos are available on the project page: https:// tsinghua-mars-lab.github.io/ViP3D This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 5496 safely. Perception and prediction are two separate modules in the existing autonomous driving software pipeline, where the interface between them is often defined as hand-picked geometric and semantic features, such as historical agent trajectories, agent types, agent sizes, etc. Such an interface leads to the loss of useful perceptual information that can be used in trajectory prediction. For example, tail lights and brake lights indicate a vehicle’s intention, and pedestrians’ head pose and body pose tell about their attention. This information, if not explicitly modeled, is ignored in the ex-isting pipelines. In addition, with the separation of percep-tion and prediction, errors are accumulated and cannot be mitigated in later stages. Specifically, historical trajectories used by trajectory predictors come from an upstream per-ception module, which inevitably contains errors, leading to a drop in the prediction performance. Designing a trajec-tory predictor that is robust to upstream output errors is a non-trivial task [60]. Recent works such as IntentNet [3], FaF [34], PnPNet [30] propose end-to-end models for LiDAR-based trajectory prediction. They suffer from a couple of limita-tions: (1) They are not able to leverage the abundant fine-grained visual information from cameras; (2) these models use convolutional feature maps as their intermediate rep-resentations within and across frames, thus suffering from non-differentiable operations such as non-maximum sup-pression in object decoding and object association in multi-object tracking. 
To address all these challenges, we propose a novel pipeline that leverages a query-centric model design to pre-dict future trajectories, dubbed ViP3D (Visual trajectory Prediction via 3Dagent queries). ViP3D consumes multi-view videos from surrounding cameras and high-definition maps, and makes agent-level future trajectory prediction in an end-to-end and concise streaming manner, as shown in Figure 1. Specifically, ViP3D leverages 3D agent queries as the interface throughout the pipeline, where each query can map to (at most) an agent in the environment. At each time step, the queries aggregate visual features from multi-view images, learn agent temporal dynamics, model the relation-ship between agents, and finally produce possible future tra-jectories for each agent. Across time, the 3D agent queries are maintained in a memory bank, which can be initialized, updated and discarded to track agents in the environment. Additionally, unlike previous prediction methods that uti-lize historical agent trajectories and feature maps from mul-tiple historical frames, ViP3D only uses 3D agent queries from one previous timestamp and sensor features from the current timestamp, making it a concise streaming approach. In summary, the contribution of this paper is three-fold: 1. ViP3D is the first fully differentiable vision-based approach to predict future trajectories of agents for au-tonomous driving. Instead of using hand-picked fea-tures like historical trajectories and agent sizes, ViP3D leverages the rich and fine-grained visual features from raw images which are useful for the trajectory predic-tion task. |
Du_Adaptive_Sparse_Convolutional_Networks_With_Global_Context_Enhancement_for_Faster_CVPR_2023 | Abstract Object detection on drone images with low-latency is an important but challenging task on the resource-constrained unmanned aerial vehicle (UAV) platform. This paper inves-tigates optimizing the detection head based on the sparse convolution, which proves effective in balancing the accu-racy and efficiency. Nevertheless, it suffers from inadequate integration of contextual information of tiny objects as well as clumsy control of the mask ratio in the presence of fore-ground with varying scales. To address the issues above, we propose a novel global context-enhanced adaptive sparse convolutional network (CEASC). It first develops a context-enhanced group normalization (CE-GN) layer, by replacing the statistics based on sparsely sampled features with the global contextual ones, and then designs an adaptive multi-layer masking strategy to generate optimal mask ratios at distinct scales for compact foreground coverage, promoting both the accuracy and efficiency. Extensive experimental re-sults on two major benchmarks, i.e. VisDrone and UAVDT, demonstrate that CEASC remarkably reduces the GFLOPs and accelerates the inference procedure when plugging into the typical state-of-the-art detection frameworks (e.g. Reti-naNet and GFL V1) with competitive performance. Code is available at https://github.com/Cuogeihong/CEASC. | 1. Introduction Recent progress of deep neural networks ( e.g.CNNs and Transformers) has significantly boosted the performance of object detection on public benchmarks such as COCO [23]. By contrast, building detectors for unmanned aerial vehicle (UA V) platforms currently remains a challenging task. On the one hand, existing studies are keen on designing com-plicated models to reach high accuracies of tiny objects on †indicates equal contribution. ∗refers to the corresponding author. VisDroneCOCO (a) VisDrone UAVDT (b) Figure 1. (a) Comparison of foreground proportions on the COCO and drone imagery databases; and (b) visualization of foregrounds (highlighted in yellow) on samples from VisDrone and UA VDT. high-resolution drone imagery, which are computationally consuming. On the other hand, the hardware equipped with UA Vs is often resource-constrained, raising an urgent de-mand in lightweight deployed models for fast inference and low latency. To deal with the dilemma of balancing the accuracy and efficiency, a number of efforts are made, mainly on general object detection, which basically concentrate on reducing This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 13435 the complexity of the backbone networks [2, 13, 47]. De-spite some potential, these methods leave much room for improvement since they fail to take into account the heavy detection heads which are widely used by the state-of-the-art detectors [14, 21, 22, 46]. For instance, RetinaNet [22] taking ResNet18 [11] as backbone with 512 input channels adopts a detection head that occupies 82.3% of the overall GFLOPs. Recently, several methods have been presented to solve this problem, including network pruning [24, 45] and structure redesigning [1, 7], and prove effective in ac-celerating inference. 
However, the former is criticized by the sharp performance drop when computations are greatly decreased, evidenced by the attempt on detection for UA Vs [45], and the latter is primarily optimized for low-resolution input ( e.g.640×640), making it not straightforward to adapt to high-resolution aerial images. Sparse convolutions [6, 41] show another promising al-ternative, which limit computations by only operating con-volutions on sparsely sampled regions or channels via learn-able masks. While theoretically attractive, their results are highly dependent on the selection of meaningful areas, be-cause the focal region of the learned mask in sparse con-volutions is prone to locate within foreground. Regard-ing drone images, the vast majority of objects are of small scales (as shown in Fig. 1 (a)) and the scale of foreground areas varies along with flying altitudes and observing view-points (as shown in Fig. 1 (b)), and this issue becomes even more prominent. An inadequate mask ratio enlarges the fo-cal part and more unnecessary computations are consumed on background, which tends to simultaneously deteriorate efficiency and accuracy. On the contrary, an exaggerated one shrinks the focal part and incurs the difficulty in fully covering foreground and crucial context, thus leading to performance degradation. DynamicHead [31] and Query-Det [42] indeed apply sparse convolutions to the detection head; unfortunately, their primary goal is to offset the in-creased computational cost when additional feature maps are jointly used for performance gain on general object de-tection. They both follow the traditional way in original sparse convolutions that set fixed mask ratios or focus on foreground only and are thus far from reaching the trade-off between accuracy and efficiency required by UA V detec-tors. Therefore, it is still an open question to leverage sparse convolutions to facilitate lightweight detection for UA Vs. In this paper, we propose a novel plug-and-play detec-tion head optimization approach to efficient object detection on drone images, namely global context-enhanced adaptive sparse convolution (CEASC). Concretely, we first develop a context-enhanced sparse convolution (CESC) to capture global information and enhance focal features, which con-sists of a residual structure with a context-enhanced group normalization (CE-GN) layer. Since CE-GN specifically preserves a set of holistic features and applies their statis-tics for normalization, it compensates the loss of context caused by sparse convolutions and stabilizes the distribu-tion of foreground areas, thus bypassing the sharp drop on accuracy. We then propose an adaptive multi-layer mask-ing (AMM) scheme, and it separately estimates an optimal mask ratio by minimizing an elaborately designed loss at distinct levels of feature pyramid networks (FPN), balanc-ing the detection accuracy and efficiency. It is worth noting that CESC and AMM can be easily extended to various de-tectors, indicating that CEASC is generally applicable to existing state-of-the-art object detectors for acceleration on drone imagery. The contribution of our work lies in three-fold: 1) We propose a novel detection head optimization ap-proach based on sparse convolutions, i.e.CEASC, to effi-cient object detection for UA Vs. 2) We introduce a context-enhanced sparse convolution layer and an adaptive multi-layer masking scheme to opti-mize the mask ratio, delivering an optimal balance between the detection accuracy and efficiency. 
3) We extensively evaluate the proposed approach on two major public benchmarks of drone imagery by integrating CEASC into various state-of-the-art detectors (e.g., RetinaNet and GFL V1), significantly reducing their computational costs while maintaining competitive accuracy. |
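A rough sketch of the CE-GN idea follows: the sparse branch is normalized with group statistics computed from a global context feature map rather than from the sparsely sampled activations themselves. The group count, tensor layout, and the simple mask-based fusion at the end are illustrative assumptions, not the exact CEASC layer.

```python
# Sketch: group normalization of sparse (masked) activations using statistics
# taken from a global context feature map.
import torch

def context_enhanced_gn(sparse_feat, global_feat, mask, num_groups=8, eps=1e-5):
    """
    sparse_feat: [B, C, H, W] sparse-convolution output (zeros off-mask)
    global_feat: [B, C, H, W] lightweight dense/global context features
    mask:        [B, 1, H, W] binary focal mask
    """
    B, C, H, W = sparse_feat.shape
    g = global_feat.reshape(B, num_groups, C // num_groups, H * W)
    mean = g.mean(dim=(2, 3), keepdim=True)                      # global statistics
    var = g.var(dim=(2, 3), keepdim=True, unbiased=False)
    x = sparse_feat.reshape(B, num_groups, C // num_groups, H * W)
    x = (x - mean) / torch.sqrt(var + eps)
    out = x.reshape(B, C, H, W)
    return out * mask + global_feat * (1 - mask)                 # illustrative fusion
```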
Chu_Command-Driven_Articulated_Object_Understanding_and_Manipulation_CVPR_2023 | Abstract We present Cart, a new approach towards articulated-object manipulations by human commands. Beyond the existing work that focuses on inferring articulation struc-tures, we further support manipulating articulated shapes to align them subject to simple command templates. The key of Cart is to utilize the prediction of object structures to connect visual observations with user commands for effective manipulations. It is achieved by encoding com-mand messages for motion prediction and a test-time adap-tation to adjust the amount of movement from only com-mand supervision. For a rich variety of object categories, Cart can accurately manipulate object shapes and outper-form the state-of-the-art approaches in understanding the inherent articulation structures. Also, it can well general-ize to unseen object categories and real-world objects. We hope Cart could open new directions for instructing ma-chines to operate articulated objects. Code is available at https://github.com/dvlab-research/Cart. | 1. Introduction Articulated objects such as doors, cabinets, and laptops are ubiquitous in our daily life. Enabling machines to manip-ulate them under human instructions will pave the way for many applications, such as robotic work, human-computer interaction, and augmented reality. Recent research majorly focuses on understanding the structural properties of articu-lated objects, e.g., discovering the movable parts [51,61] and estimating the articulations between connected parts [19,28]. Despite their success, the way from understanding objects to user-controllable manipulation remains challenging. First, the model should be able to interpret the user in-struction and determine the corresponding object part for performing the manipulation action. To do so requires un-derstanding not only the object’s articulated structure but also the operational properties (see Fig. 1). If a part is movable, the action type and current state should be rec-*Corresponding Author User CommandMotion ParameterUnderstandingMotion ParametersCheck Target States openhalf-openclosedMovable Parts (A-D) and Joints Motion ParameterManipulationSingle Point Cloud Manipulation Result Figure 1. Given a single point cloud of an articulated object and a user command, Cart accordingly manipulates the object shape, based on understanding object structure and connecting the com-mand to predict the motion parameters. The command gives the desired state and we use it to check if the manipulation succeeds. ognized ahead. Second, to achieve successful manipulation, the model should know the target configuration or state of the object so as to determine the associated manipulation parameters, such as the rotation orientation and angle, for achieving the user instruction (see Fig. 1). In this work, we study the above challenges by formu-lating a new task called command-driven articulated object manipulation . Given a visual observation of an articulated object and a simple user command that specifies how to manipulate the object, our objective is to manipulate the associated object part to reach the target articulation state given by the command. Specifically, this task combines the needs of understanding the object structure (which part to move), recognizing the associated articulation property (how it moves), and predicting the amount of motion subject to the user command (how much to move). 
To this end, we propose a novel learning-based framework called Command-driven articulation modeling, or Cart for short; see Fig. 1. In particular, Cart is powerful in both structure understanding and shape manipulation for articulated objects. The former acts as a critical building block, where we jointly segment individual object parts, predict their motion joints, and model the current articulation state, given only a single visual observation. Based on the structural prediction, we further connect the user commands with the inherent object geometry for effective object manipulation.
In our approach, we choose structured templates to create user-friendly commands. This design allows users to select a single point on the object's point cloud to identify the part to operate and to specify its desired target state. To associate the user-selected point with an intact object part, we leverage the model's segmentation prediction to locate the matched one. Subsequently, we transform the target state into a learnable embedding and fuse it with the visual features in a part-aware manner. Since the fused features encapsulate both kinds of information, they can be further applied to estimate the command-related motion parameters.
After the manipulation, to ensure the target object state satisfies the user command, we formulate the novel Test-time State Adaptation (TTSA) algorithm for state regularization. It automatically generates the state label from the command input and uses it to optimize the movement parameters, thereby adjusting the final object status. Essentially, it is self-supervised and works without requiring ground-truth supervision. With little additional computation cost, TTSA robustly improves the quality of the object manipulation, especially for unseen object categories.
We extensively evaluate Cart on two widely-used datasets [51, 57] of articulated objects. Compared to the recent work [19, 28, 61], Cart yields superior predictive performance on both static object structure and dynamic motion parameters used for manipulation. We specifically show that each previous method can only partially achieve our task, and that their simple augmentations to fit our setting underperform as well. Then, we apply our method to real-world scenarios. We provide examples of manipulating real articulated objects' shapes subject to the user command. Further, we spawn our predictions in a virtual environment to simulate robotic manipulations. Our overall contributions are listed as follows.
• We propose a command-driven shape manipulation task for articulated objects. It enables a simple interaction way for users to control the manipulation procedure.
• We present an integrated framework, Cart, towards this systematic task. It succeeds by understanding the articulated object structure and further connecting user commands to predict the movement to the desired state.
• Cart works on both articulation understanding and object manipulation on synthetic and real-world data. |
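The test-time state adaptation idea above can be sketched as a small self-supervised optimization loop: the state label comes from the command alone, and the movement amount is refined until a differentiable state estimate matches it. The `state_estimator`, the 1-D motion "amount", and the optimizer settings below are illustrative assumptions, not Cart's released implementation.

```python
# Minimal sketch of test-time refinement in the spirit of TTSA (assumptions noted above).
import torch


def ttsa_refine(amount_init, target_state, state_estimator, steps=200, lr=1e-2):
    amount = amount_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([amount], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (state_estimator(amount) - target_state).pow(2).mean()  # label derived from the command; no GT needed
        loss.backward()
        opt.step()
    return amount.detach()


# Toy usage with a linear surrogate state estimator: state = 0.8 * amount.
refined = ttsa_refine(torch.tensor([0.1]), torch.tensor([1.2]), lambda a: 0.8 * a)
print(refined)  # moves toward 1.5, where 0.8 * amount matches the commanded state 1.2
```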
He_D2Former_Jointly_Learning_Hierarchical_Detectors_and_Contextual_Descriptors_via_Agent-Based_CVPR_2023 | Abstract Establishing pixel-level matches between image pairs is vital for a variety of computer vision applications. How-ever, achieving robust image matching remains challenging because CNN extracted descriptors usually lack discrim-inative ability in texture-less regions and keypoint detec-tors are only good at identifying keypoints with a specific level of structure. To deal with these issues, a novel im-age matching method is proposed by Jointly Learning Hier-archical Detectors and Contextual Descriptors via Agent-based Transformers (D2Former), including a contextual feature descriptor learning (CFDL) module and a hierar-chical keypoint detector learning (HKDL) module. The proposed D2Former enjoys several merits. First, the pro-posed CFDL module can model long-range contexts effi-ciently and effectively with the aid of designed descriptor agents. Second, the HKDL module can generate keypoint detectors in a hierarchical way, which is helpful for detect-ing keypoints with diverse levels of structures. Extensive experimental results on four challenging benchmarks show that our proposed method significantly outperforms state-of-the-art image matching methods. | 1. Introduction Finding pixel-level matches accurately between images depicting the same scene is a fundamental task with a wide range of 3D vision applications, such as 3D reconstruc-tion [35, 53, 55], simultaneous localization and mapping (SLAM) [15, 25, 39], pose estimation [13, 29], and visual localization [35,43]. Owing to its broad real-world applica-tions, the image matching task has received increasing at-tention in the past decades [9,16,31,33,34]. However, real-izing robust image matching remains difficult due to various challenges such as illumination changes, viewpoint trans-formations, poor textures and scale variations. To conquer the above challenges, tremendous image matching approaches have been proposed [7,9,12,16,31,34, 42], among which some dense matching methods [7,16,42] *Equal Contribution †Corresponding Author (a) Activation areas for different attention mechanism Full attention Agent -based attention High -level structures (object parts) (b) Diverse levels of structures in the image Low -level structures (corners/edges) Figure 1. Illustration of our motivation. (a) shows the compari-son between our proposed agent-based attention and full attention, where full attention would aggregate features from irrelevant ar-eas. (b) shows the diverse structures contained in the image. are proposed to consider all possible matches adequately and have achieved great success. However, because of the large matching space, these dense matching methods are ex-pensive in computation cost and memory consumption. To achieve high efficiency, we notice that the detector-based matching methods [4, 9, 20, 31] can effectively reduce the matching space by designing keypoint detectors to extract a relatively small keypoint set for matching, thus having high research value. Generally, existing detector-based match-ing methods can be categorized into two main groups in-cluding detect-then-describe approaches [18, 37, 40, 41, 54] and detect-and-describe approaches [12, 20, 31]. Detect-then-describe approaches refer to first detect repeatable key-points [3, 5, 18], and then keypoint features [19, 23, 28] are represented by describing image patches extracted around these keypoints. 
In this way, matches can be established by nearest-neighbor search according to the Euclidean distance between keypoint features. However, since the keypoint detector and descriptor are usually designed separately in detect-then-describe approaches, the keypoint features may not be suitable for the detected keypoints, resulting in poor performance under extreme appearance changes. In contrast, detect-and-describe approaches [12, 31] are proposed to tightly couple the keypoint detector learning with the descriptor learning. For example, both D2-Net [12] and R2D2 [31] use a single convolutional neural network (CNN) for joint detection and description. These methods have achieved great performance, mainly benefiting from the superiority of joint learning. However, the receptive field of features extracted by a CNN is limited, and keypoint detectors are usually learned at a single feature scale, which restricts further progress.
Based on the above discussions, we find that both descriptor and detector learning are crucial for detector-based matching methods. To make image matching more robust to real-world challenges, the following two issues should be taken into consideration carefully. (1) How to learn feature descriptors with long-range dependencies. Current detector-based matching methods [9, 12, 31] usually use a CNN to extract image features. Due to the limited receptive field of the CNN, the extracted features would lack discriminative ability in texture-less regions. Although several works [11, 48] leverage full attention to capture long-range dependencies, as shown in Figure 1 (a), full attention may aggregate irrelevant noise, which is harmful to learning discriminative features. Besides, the computation cost of full attention is rather expensive. Therefore, an effective and efficient attention mechanism urgently needs to be proposed to capture the long-range contexts of features. (2) How to learn keypoint detectors suitable for various structures. As shown in Figure 1 (b), there are diverse levels of structures in an image, from simple corner points (low-level structures) to complex object parts (high-level structures). However, existing keypoint detectors are usually good at identifying keypoints with a specific level of structure, such as corners (or edges) [14, 49], and blobs [18, 21]. Thus, it is necessary to learn hierarchical keypoint detectors to detect keypoints with different structures.
Motivated by the above observations, we propose a novel model by Jointly Learning Hierarchical Detectors and Contextual Descriptors via Agent-based Transformers (D2Former) for image matching, which mainly consists of a contextual feature descriptor learning (CFDL) module and a hierarchical keypoint detector learning (HKDL) module. In the contextual feature descriptor learning module, we propose to capture reliable long-range contexts efficiently. Specifically, original image features are first extracted by a standard CNN. Then, we design a set of descriptor agents to aggregate contextual information by interacting with the image features via attention mechanisms. Finally, contextual features are obtained by fusing the updated descriptor agents into the original features.
In the hierarchical keypoint detector learning module , it is proposed to detect key-points with different structures, which can achieve robust keypoint detection. Specifically, we design a set of detec-tor agents, which can interact with contextual features via attention mechanisms to obtain low-level keypoint detec-tors. Then, we aggregate these low-level keypoint detectors to form high-level keypoint detectors in a hierarchical way. Finally, the hierarchical keypoint detectors are obtained bygathering keypoint detectors from different levels. The main contributions of this work can be summarized as follows. (1) A novel image matching method is proposed by jointly learning hierarchical detectors and contextual de-scriptors via agent-based Transformers, which can extract discriminative feature description and realize robust key-point detection under some extremely challenging scenar-ios. (2) The proposed CFDL module can model long-range dependencies effectively and efficiently with the aid of de-signed descriptor agents. And the HKDL module can gen-erate keypoint detectors in a hierarchically aggregated man-ner, so that keypoints with diverse levels of structures can be detected. (3) Extensive experimental results on four chal-lenging benchmarks show that our proposed method per-forms favorably against state-of-the-art detector-based im-age matching methods. |
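The descriptor-agent idea described above can be read as a small set of learnable tokens that exchange information with the dense feature map through cross-attention, keeping the cost linear in the number of pixels rather than quadratic as in full attention. The PyTorch sketch below shows that pattern only; dimensions, module names, and the fusion step are assumptions, not D2Former's actual code.

```python
# Rough sketch of agent-based attention: agents gather context from pixels, then
# pixels read the context back from the (few) agents. Names/shapes are illustrative.
import torch
import torch.nn as nn


class AgentAttention(nn.Module):
    def __init__(self, dim=256, num_agents=64, heads=8):
        super().__init__()
        self.agents = nn.Parameter(torch.randn(num_agents, dim) * 0.02)
        self.agents_from_pixels = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.pixels_from_agents = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feats):                      # feats: (B, H*W, dim) flattened CNN features
        b = feats.shape[0]
        agents = self.agents.unsqueeze(0).expand(b, -1, -1)
        agents, _ = self.agents_from_pixels(agents, feats, feats)   # agents aggregate long-range context
        ctx, _ = self.pixels_from_agents(feats, agents, agents)     # pixels read context back from agents
        return feats + ctx                          # fuse contextual features into the original ones


x = torch.randn(2, 60 * 80, 256)
print(AgentAttention()(x).shape)  # torch.Size([2, 4800, 256])
```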
Fan_Joint_Appearance_and_Motion_Learning_for_Efficient_Rolling_Shutter_Correction_CVPR_2023 | Abstract Rolling shutter correction (RSC) is becoming increas-ingly popular for RS cameras that are widely used in com-mercial and industrial applications. Despite the promis-ing performance, existing RSC methods typically employatwo-stage network structure that ignores intrinsic infor-mation interactions and hinders fast inference. In this pa-per , we propose a single-stage encoder-decoder-based net-work, named JAMNet, for efficient RSC. It first extractspyramid features from consecutive RS inputs, and then si-multaneously refines the two complementary information(i.e., global shutter appearance and undistortion motionfield) to achieve mutual promotion in a joint learning de-coder . To inject sufficient motion cues for guiding jointlearning, we introduce a transformer-based motion embed-ding module and propose to pass hidden states across pyra-mid levels. Moreover , we present a new data augmenta-tion strategy “vertical flip + inverse order” to release thepotential of the RSC datasets. Experiments on variousbenchmarks show that our approach surpasses the state-of-the-art methods by a large margin, especially with a 4.7dB PSNR leap on real-world RSC. Code is available athttps://github.com/GitCVfb/JAMNet . | 1. Introduction As commonly used image sensors in the automotive sec-tor and motion picture industry, CMOS sensors offer par-ticular benefits, including low cost and simplicity in design [15,20,42,48]. The row-wise readout mechanism from top to bottom of electronic CMOS sensors, however, results inundesirable image distortions called the rolling shutter (RS)effect (also known as the jelly effect, e.g., wobble, skew) when a moving camera or object is in progress. Often, even a small camera motion causes visible geometric distortionsin the captured RS image or video. Because of this, the RSeffect inevitably becomes a hindrance to scene understand-ing and a nuisance in photography. As such, RS correction(RSC), as a way to make up for such deficiencies, is gradu-ally gaining more and more attention [ 5,11,27,29,58]. ⋆Equal contribution. †Corresponding author (daiyuchao@gmail.com).Figure 1. Performance vs. Speed. Each circle represents the per-formance of a model in terms of FPS and PSNR on the Fastec-RS testing set [ 29] with 640 ×480 images using a 3090 GPU. The radius of each circle denotes the model’s number of parame-ters. Our method achieves state-of-the-art performance with real-time inferences and smaller parameters compared with prior RSCmethods, including DeepUnrollNet [ 29], SUNet [ 10], JCD [ 56], AdaRSC [ 5], and Cascaded method ( i.e., SUNet + DAIN [ 4]). The RSC task aims to recover a latent distortion-free global shutter (GS) image corresponding to a specific ex-posure time between consecutive RS frames. The re-sulting RSC methods can be divided into traditional anddeep learning-based ones. The traditional RSC methods[1,13,26,38,40,48,50] usually rely on hand-designed prior assumptions, geometric constraints, and complex op-timization frameworks. Consequently, such processes aretypically time-consuming and require complex parameter-tuning strategies for different scenarios, which restricts theirreal-world applications. In contrast, convolutional neuralnetworks have also been used to remove RS artifacts in re-cent years due to the considerable success in many com-puter vision tasks, such as [ 5,9,14,29,51,60]. 
Particularly, RSC methods based on multiple consecutive RS images have been heavily investigated [5, 10, 29, 56].
In general, these multi-image-based RSC approaches often consist of a two-stage network design with two key elements: a motion estimation module and a GS frame synthesis module, as illustrated in Fig. 2(a). The former is dedicated to estimating a pixel-wise undistortion field, which is utilized to warp the RS appearance content to the corresponding GS instance. The latter aims to fuse the contextual information in a coarse-to-fine manner, ultimately decoding the desired GS image. Although this two-stage idea sounds relatively straightforward, it suffers from several drawbacks. First, the two-stage RSC faces a classic "chicken-and-egg" problem: motion estimation and GS frame synthesis are inextricably linked; a high-quality undistortion field improves GS frame synthesis, and vice versa. Therefore, this step-by-step combination is not conducive to information interaction and joint optimization, resulting in a bottleneck for high-quality RSC. Second, the two modules are implemented by two independent encoder-decoders, ignoring the mutual promotion of these two key elements for RSC. Third, the two-stage network design inevitably increases the model size and inference time, which greatly limits efficient deployment in practice.
To address these issues, we propose a novel single-stage solution for RS correction through Joint Appearance and Motion Learning (JAMNet). Our approach is a single encoder-decoder structure with coarse-to-fine refinements, as depicted in Fig. 2(b), allowing the simultaneous learning of complementary GS appearance and undistortion motion information. After extracting hierarchical pyramid features, we design an efficient decoder for simultaneous occlusion inference and context aggregation. It leverages a warping branch to estimate the undistortion field to compensate for RS geometric distortions, while a synthesis branch is used to progressively refine the GS appearance, which forms a mutual promotion of complementary information. In addition, a hidden state is maintained to transmit additional cues across pyramid levels. Further, we propose to inject sufficient motion priors into the network at the coarsest level via a transformer-based motion embedding module. Moreover, inspired by the imaging principle of RS data, we also develop a new data augmentation strategy, i.e., vertical flip + inverse order, in the training process to enhance the robustness of RSC models. Extensive experimental results demonstrate that our JAMNet significantly outperforms state-of-the-art (SOTA) RSC methods while achieving a real-time inference speed, as shown in Fig. 1. It is worth mentioning that our pipeline achieves a 4.7 dB PSNR improvement on real-world RSC applications.
In a nutshell, our main contributions are summarized as follows:
1) We propose a tractable single-stage architecture to jointly perform GS appearance refinement and undistortion motion estimation for efficient RS correction.
2) We develop a general data augmentation strategy, i.e., vertical flip and inverse order, to maximize the exploration of the RS correction datasets.
3) Experiments show that our approach not only achieves SOTA RSC accuracy, but also enjoys a fast inference speed and a flexible and compact network structure.
Figure 2. Different RSC paradigms. (a) The currently popular two-stage structure first estimates the undistortion field, and then completes GS recovery accordingly. (b) We propose a single-stage RSC framework with a joint learning mechanism to estimate the undistortion field and the latent GS frame at the same time. |
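The "vertical flip + inverse order" augmentation named in contribution 2) can be sketched as follows, under the reading that flipping RS frames upside-down reverses the apparent scanline direction, so the temporal order of the input pair must also be reversed to keep a valid rolling-shutter sequence. The tensor layout and the paired ground-truth handling are assumptions, not the paper's exact recipe.

```python
# Hedged sketch of "vertical flip + inverse order" for consecutive RS inputs.
import torch


def vflip_inverse_order(rs_frames: torch.Tensor, gs_target: torch.Tensor):
    """rs_frames: consecutive RS inputs (T, C, H, W); gs_target: GS ground truth (C, H, W)."""
    rs_aug = torch.flip(rs_frames, dims=[-2])   # flip every frame along the image height axis
    rs_aug = torch.flip(rs_aug, dims=[0])       # reverse the temporal order of the frames
    gs_aug = torch.flip(gs_target, dims=[-2])   # flip the target the same way
    return rs_aug, gs_aug


rs = torch.rand(2, 3, 480, 640)   # two consecutive RS frames
gs = torch.rand(3, 480, 640)
rs_aug, gs_aug = vflip_inverse_order(rs, gs)
print(rs_aug.shape, gs_aug.shape)
```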
Dong_Federated_Incremental_Semantic_Segmentation_CVPR_2023 | Abstract Federated learning-based semantic segmentation (FSS) has drawn widespread attention via decentralized training on local clients. However, most FSS models assume cate-gories are fixed in advance, thus heavily undergoing forget-ting on old categories in practical applications where local clients receive new categories incrementally while have no memory storage to access old classes. Moreover, new clients collecting novel classes may join in the global training of FSS, which further exacerbates catastrophic forgetting. To surmount the above challenges, we propose a Forgetting-Balanced Learning ( FBL ) model to address heterogeneous forgetting on old classes from both intra-client and inter-client aspects. Specifically, under the guidance of pseudo labels generated via adaptive class-balanced pseudo label-ing, we develop a forgetting-balanced semantic compensa-tion loss and a forgetting-balanced relation consistency loss to rectify intra-client heterogeneous forgetting of old cate-gories with background shift. It performs balanced gradient propagation and relation consistency distillation within lo-cal clients. Moreover, to tackle heterogeneous forgetting from inter-client aspect, we propose a task transition mon-itor. It can identify new classes under privacy protection and store the latest old global model for relation distillation. Qualitative experiments reveal large improvement of our model against comparison methods. The code is available athttps://github.com/JiahuaDong/FISS . | 1. Introduction Federated learning (FL) [13,20,22,44] is a remarkable de-centralized training paradigm to learn a global model across *Equal contributions.†The corresponding author is Prof. Yang Cong. ‡This work was supported in part by the National Nature Science Foun-dation of China under Grant 62127807, 62225310 and 62133005. … Task t Task t -1 Task t+1 Consecutive Segmentation Tasks of M edical L esions Local Model The 1 st Local Hospital Task t Task t -1 Task t+1 Consecutive Segmentation Tasks of M edical L esions Local Model The L th Local Hospital Task t Task t -1 Task t+1 Consecutive Segmentation Tasks of M edical L esions Local Model The Newly -Joined Local Hospital Global Server Upload Distribute Distribute UploadFigure 1. Exemplary FISS setting for medical diagnosis. Hundreds of hospitals including newly-joined ones receive new classes incre-mentally according to their own preference. FISS aims to segment new diseases consecutively via collaboratively learning a global segmentation model on private medical data of different hospitals. distributed local clients without accessing their private data. Under privacy preservation, it has achieved rapid develop-ment in semantic segmentation [4, 8, 30] by training on mul-tiple decentralized local clients to alleviate the constraint of data island that requires enormous finely-labeled pixel annotations [25]. As a result, federated learning-based se-mantic segmentation (FSS) [28,29] significantly economizes annotation costs in data-scarce scenarios via training a global segmentation model on private data of different clients [29]. However, existing FSS methods [16, 25, 28, 29] unrealisti-cally assume that the learned foreground classes are static and fixed over time, which is impractical in real-world dy-namic applications where local clients receive streaming data of new categories consecutively. 
To tackle this issue, existing FSS methods [28, 29, 35] typically enforce local clients to store all samples of previously-learned old classes, and then learn a global model to segment new categories continually via FL. Nevertheless, this requires large computation and memory overhead as new classes arrive continuously, limiting the applicability of FSS methods [16, 28]. If local clients have no memory to store old classes, existing FSS methods [16, 29] significantly degrade segmentation behavior on old categories (i.e., catastrophic forgetting [40, 47, 48]) when learning new classes incrementally. In addition, the pixels labeled as background in the current learning task may belong to old classes from old tasks or new foreground classes from future tasks. This phenomenon, also known as background shift [11, 36], heavily aggravates the heterogeneous forgetting speeds on old categories. More importantly, in practical scenarios, new local clients receiving new categories incrementally may join global FL training irregularly, thus further exacerbating catastrophic forgetting.
To address these real-world scenarios, we propose a novel practical problem called Federated Incremental Semantic Segmentation (FISS), where local clients collect new categories consecutively according to their preferences, and new local clients collecting unseen novel classes participate in global FL training irregularly. In the FISS setting, the class distributions are non-independent and identically distributed (Non-IID) across different clients, and training data of old classes is unavailable to all local clients. FISS aims to train a global incremental segmentation model via collaborative FL training on local clients while addressing catastrophic forgetting. In this paper, we use medical lesion segmentation [25, 29] as an example to better illustrate FISS, as shown in Figure 1. Hundreds of hospitals, as well as newly joined ones, collect unseen/new medical lesions continuously in clinical diagnosis. Considering privacy preservation, it is desirable for these hospitals to learn a global segmentation model via FL without accessing each other's data [44, 56].
A naive solution to the FISS problem is to directly integrate incremental semantic segmentation [1, 11, 53] and FL [19, 50] together. Nevertheless, such a trivial solution requires the global server to have strong human priors about which and when local clients can collect new categories, so that the global model learned in the latest old task can be stored by local clients to address forgetting on old classes via knowledge distillation [18, 43]. Considering privacy preservation in FISS, this privacy-sensitive prior knowledge cannot be shared between local clients and the global server. As a result, this naive solution severely suffers from intra-client heterogeneous forgetting on different old classes caused by background shift [1, 11, 36, 53], and inter-client heterogeneous forgetting across different clients brought by Non-IID class distributions.
To overcome the above-mentioned challenges, we develop a novel Forgetting-Balanced Learning (FBL) model, which alleviates heterogeneous forgetting on old classes from intra-client and inter-client perspectives.
Specifically, to tackle intra-client heterogeneous forgetting caused by background shift, we propose an adaptive class-balanced pseudo labeling to adaptively generate confident pseudo la-bels for old classes. Under the guidance of pseudo labels, we propose a forgetting-balanced semantic compensation loss to rectify different forgetting of old classes with background shift via considering balanced gradient propagation of local clients. In addition, a forgetting-balanced relation consis-tency loss is designed to distill underlying category-relation consistency between old and new classes for intra-client het-erogeneous forgetting compensation. Moreover, considering addressing heterogeneous forgetting from inter-client aspect, we develop a task transition monitor to automatically identify new classes without any human prior, and store the latest old model from global perspective for relation consistency dis-tillation. Experiments on segmentation datasets reveal large improvement of our model over comparison methods. We summarize the main contributions of this work as follows: •We propose a novel practical problem called Federated Incremental Semantic Segmentation (FISS), where the major challenges are intra-client and inter-client heteroge-neous forgetting on old categories caused by intra-client background shift and inter-client Non-IID distributions. •We propose a Forgetting-Balanced Learning (FBL) model to address the FISS problem via surmounting heteroge-neous forgetting from both intra-client and inter-client aspects. As we all know, in the FL field, this is a pioneer attempt to explore a global continual segmentation model. •We develop a forgetting-balanced semantic compensation loss and a forgetting-balanced relation consistency loss to tackle intra-client heterogeneous forgetting across old classes, under the guidance of confident pseudo labels generated via adaptive class-balanced pseudo labeling. •We design a task transition monitor to surmount inter-client heterogeneous forgetting by accurately recognizing new classes under privacy protection and storing the latest old model from global aspect for relation distillation. |
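The adaptive class-balanced pseudo labeling mentioned above can be approximated by giving each old class its own confidence threshold rather than one global cut-off, so rare classes are not drowned out by frequent ones. The per-class quantile rule below is an illustrative stand-in, not FBL's exact formulation.

```python
# Hedged sketch of class-balanced pseudo labeling from an old model's predictions.
import torch


def class_balanced_pseudo_labels(probs: torch.Tensor, quantile=0.6, ignore_index=255):
    """probs: softmax output of the previous (old) model, shape (B, K, H, W)."""
    conf, labels = probs.max(dim=1)                       # per-pixel confidence and argmax class
    pseudo = torch.full_like(labels, ignore_index)
    for k in range(probs.shape[1]):
        mask = labels == k
        if mask.any():
            thr = torch.quantile(conf[mask], quantile)    # class-specific threshold
            pseudo[mask & (conf >= thr)] = k              # keep only confident pixels of class k
    return pseudo                                         # unreliable pixels stay at ignore_index


probs = torch.softmax(torch.randn(2, 5, 64, 64), dim=1)
print(class_balanced_pseudo_labels(probs).unique())
```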
Du_Avatars_Grow_Legs_Generating_Smooth_Human_Motion_From_Sparse_Tracking_CVPR_2023 | Abstract
With the recent surge in popularity of AR/VR applications, realistic and accurate control of 3D full-body avatars has become a highly demanded feature. A particular challenge is that only a sparse tracking signal is available from standalone HMDs (Head Mounted Devices), often limited to tracking the user's head and wrists. While this signal is resourceful for reconstructing the upper body motion, the lower body is not tracked and must be synthesized from the limited information provided by the upper body joints. In this paper, we present AGRoL, a novel conditional diffusion model specifically designed to track full bodies given sparse upper-body tracking signals. Our model is based on a simple multi-layer perceptron (MLP) architecture and a novel conditioning scheme for motion data. It can predict accurate and smooth full-body motion, particularly the challenging lower body movement. Unlike common diffusion architectures, our compact architecture can run in real-time, making it suitable for online body-tracking applications. We train and evaluate our model on the AMASS motion capture dataset, and demonstrate that our approach outperforms state-of-the-art methods in generated motion accuracy and smoothness. We further justify our design choices through extensive experiments and ablation studies.
*Work done during an internship at Meta AI. Code is available at github.com/facebookresearch/AGRoL.
1. Introduction
Humans are the primary actors in AR/VR applications. As such, being able to track full-body movement is in high demand for these applications. Common approaches are able to accurately track upper bodies only [25, 56]. Moving to full-body tracking unlocks engaging experiences where users can interact with the virtual environment with an increased sense of presence. However, in the typical AR/VR setting there is no strong tracking signal for the entire human body – only the head and hands are usually tracked by means of Inertial Measurement Unit (IMU) sensors embedded in Head Mounted Displays (HMD) and hand controllers. Some works suggest adding additional IMUs to track the lower body joints [22, 25], but those additions come at higher costs and at the expense of the user's comfort [24, 27]. In an ideal setting, we want to enable high-fidelity full-body tracking using the standard three inputs (head and hands) provided by most HMDs.
Given the position and orientation information of the head and both hands, predicting the full-body pose, especially the lower body, is inherently an underconstrained problem. To address this challenge, different methods rely on generative models such as normalizing flows [44] and Variational Autoencoders (VAE) [11] to synthesize lower body motions. In the realm of generative models, diffusion models have recently shown impressive results in image and video generation [21, 39, 46], especially for conditional generation. This inspires us to employ the diffusion model to generate the full-body poses conditioned on the sparse tracking signals. To the best of our knowledge, there is no existing work leveraging the diffusion model solely for motion reconstruction from sparse tracking information.
However, it is not trivial to employ the diffusion model in this task. Existing approaches for conditional generation with diffusion models are widely used for cross-modal con-ditional generation. Unfortunately, these methods can not be directly applied to the task of motion synthesis, given the disparity in data representations, e.g. human body joints feature vs. images. In this paper, we propose a novel diffusion architecture – Avatars Grow Legs (AGRoL), which is specifically tailored for the task of conditional motion synthesis. Inspired by re-cent work in future motion prediction [18], which uses an MLP-based architecture, we find that a carefully designed MLP network can achieve comparable performance to the state-of-the-art methods. However, we discovered that the predicted motions of MLP networks may contain jittering artifacts. To address this issue and generate smooth realis-tic full body motion from sparse tracking signals, we design a novel lightweight diffusion model powered by our MLP architecture. Diffusion models require time step embed-ding [21, 38] to be injected in the network during training and inference; however, we found that our MLP architecture is not sensitive to the positional embedding in the input. To tackle this problem, we propose a novel strategy to effec-tively inject the time step embedding during the diffusion process. With the proposed strategy, we can significantly mitigate the jittering issues and further improve the model’s performance and robustness against the loss of tracking sig-nal. Our model accurately predicts full-body motions, out-performing state-of-the-art methods as demonstrated by the experiments on AMASS [35], large motion capture dataset. We summarize our contributions as follows: • We propose AGRoL, a conditional diffusion model specifically designed for full-body motion synthesis based on sparse IMU tracking signals. AGRoL is a simple and yet efficient MLP-based diffusion model with a lightweight architecture. To enable gradual de-noising and produce smooth motion sequences we pro-pose a block-wise injection scheme that adds diffusion timestep embedding before every intermediate block of the neural network. With this timestep embedding strategy, AGRoL achieves state-of-the-art performance on the full-body motion synthesis task without any ex-tra losses that are commonly used in other motion pre-diction methods. • We show that our lightweight diffusion-based model AGRoL can generate realistic smooth motions while achieving real-time inference speed, making it suitable for online applications. Moreover, it is more robust against tracking signals loss then existing approaches. Figure 2. The architecture of our MLP-based network. FC, LN, and SiLU denote the fully connected layer, the layer normal-ization, and the SiLU activation layer respectively. 1×1 Conv denotes the 1D convolution layer with kernel size 1. Note that 1× 1 Conv here is equivalent to a fully connected layer operating on the first dimension of the input tensor RN×D, while the FClayers operate on the last dimension. Ndenotes the temporal dimension andDdenotes the dimension of the latent space. The middle block is repeated Mtimes. The first FClayer projects input data to a la-tent space RN×Dand the last one converts from latent space to the output space of full-body poses RN×S. 2. Related Work 2.1. 
Motion Tracking from Sparse Tracking Inputs
The generation of full-body poses from sparse tracking signals of body joints has become an area of considerable interest within the research community. For instance, recent works such as [22] have demonstrated the ability to track full bodies using only 6 IMU inputs, employing a bi-directional LSTM to predict SMPL body joints. Additionally, in [56], a similar approach is used to track with 4 IMU inputs, specifically the head, wrists, and pelvis. However, in the practical HMD setting, only 3 tracking signals are typically available: the head and 2 wrists. In this context, AvatarPoser [24] provides a solution to the 3-point problem through the use of a transformer-based architecture. Other methods attempt to solve sparse-input body tracking as a synthesis problem. To that extent, Aliakbarian et al. [4] proposed a flow-based architecture derived from [10], while Dittadi et al. [11] opted for a Variational Autoencoder (VAE) method. While more complex methods have been developed that involve Reinforcement Learning, as seen in [55, 57], these approaches may struggle to simultaneously maintain accurate upper-body tracking while generating physically realistic motions.
In summary, all methods presented in this section either require more than three joint inputs or face difficulties in accurately predicting the full body pose, particularly in the lower body region. Our proposed method, on the other hand, utilizes a custom diffusion model and employs a straightforward MLP-based architecture to predict the full body pose with a high degree of accuracy, while utilizing only three IMU inputs.
2.2. Diffusion Models and Motion Synthesis
Diffusion models [21, 39, 46] are a class of likelihood-based generative models based on learning progressive noising and denoising of data. Diffusion models have recently garnered significant attention in the field of image generation [9] due to their ability to significantly outperform popular GAN architectures [7, 26] and their better suitability for handling large amounts of data. Furthermore, diffusion models can support conditional generation, as evidenced by the classifier guidance approach presented in [9] and the CLIP-based text-conditional synthesis for diffusion models proposed in [38].
More recently, concurrent works have also extended diffusion models to motion synthesis, with a particular focus on the text-to-motion task [28, 49, 59]. However, these models are both complex in architecture and require multiple iterations at inference time. This renders them unsuitable for real-time applications like VR body tracking. We circumvent this problem by designing a custom and efficient diffusion model. To the best of our knowledge, we present the first diffusion model solely purposed for solving motion reconstruction from sparse inputs. Our model leverages a simple MLP architecture, runs in real-time, and provides accurate pose predictions, particularly for lower bodies.
2.3. Human Motion Synthesis
Early works in human motion synthesis arose under the task of future motion prediction. Works around this task saw various modeling approaches, ranging from sequence-to-sequence models [14] to graph modeling of each body part [23]. These supervised models were later replaced by generative methods [17, 31] based on Generative Adversarial Networks (GANs) [16].
Despite their leap forward, these approaches tend to diverge from realistic motion and require access to all body joint positions, making them impractical for avatar animation in VR [19]. A second family of motion synthesis methods revolves around character control. In this setting, character motion must be generated according to user inputs and environmen-tal constraints, such as the virtual environment properties. This research direction has practical applications in the field of computer gaming, where controller input is used to guide character motion. Taking inspiration from these constraints, Wang et al. [54] formulated motion synthesis as a control problem by using a GAN architecture that takes direction and speed input into account. Similar efforts are found in [48], where the method learns fast and dynamic char-acter interactions that involve contacts between the body and other objects, given user input from a controller. These methods are impractical in a VR setting, where users want to drive motion using their real body pose instead of a con-troller.3. Method 3.1. Problem Formulation Our goal is to predict the whole body motion given sparse tracking signals, i.e. the orientation and translation of the headset and two hand controllers. To achieve this, we use a sequence of Nobserved joint features p1:N= {pi}N i=1∈RN×Cand | aim to predict the corresponding whole-body poses y1:N={yi}N i=1∈RN×Sfor each frame. The dimensions of the input/output joint features are represented by CandS, respectively. We utilize the SMPL [33] model in this paper to represent human poses and follow the approach outlined in [11, 24] to consider the first 22 joints of the SMPL model and disregard the joints on the hands and face. Thus, y1:Nreflects the global ori-entation of the pelvis and the relative rotation of each joint. Following [24], during inference, we initially pose the hu-man model using the predicted rotations. Next, we calcu-late the character’s global translation by accounting for the known head translation and subtracting the offset between the root joint and the head joint. In the following section, we first introduce a simple MLP-based network for full-body motion synthesis based on sparse tracking signals. Then, we show how we further improve the performance by leveraging the proposed MLP-based architecture to power the conditional generative dif-fusion model, termed AGRoL. 3.2. MLP-based Network Our network architecture comprises only four types of components commonly employed in the realm of deep learning: fully connected layers (FC), SiLU activation lay-ers [41], 1D convolutional layers [30] with kernel size 1 and an equal number of input and output channels, as well as layer normalization (LN) [5]. It is worth noting that the 1D convolutional layer with a kernel size of 1 can also be interpreted as a fully connected layer operating along a dif-ferent dimension. The details of our network architecture are demonstrated in Figure 2. Each block of the MLP net-work contains one convolutional and one fully connected layer, which is responsible for temporal and spatial infor-mation merging respectively. We use skip-connections as in ResNets [20] with Layer Norm [6] as pre-normalization of the layers. First, we project the input data p1:Nto a higher dimensional latent space using a linear layer. And the last layer of the network projects from the latent space to the output space of full-body poses y1:N. 3.3. 
Diffusion Model
The diffusion model [21, 46] is a type of generative model which learns to reverse random Gaussian noise added by a Markov chain, in order to recover desired data samples from the noise. In the forward diffusion process, given a sample motion sequence $x^{1:N}_0 \sim q(x^{1:N}_0)$ from the data distribution, the Markovian noising process can be written as:
$$q(x^{1:N}_t \mid x^{1:N}_{t-1}) := \mathcal{N}\big(x^{1:N}_t;\ \sqrt{\alpha_t}\, x^{1:N}_{t-1},\ (1-\alpha_t) I\big), \quad (1)$$
where $\alpha_t \in (0,1)$ is a constant hyper-parameter and $I$ is the identity matrix. $x^{1:N}_T$ tends to an isotropic Gaussian distribution when $T \to \infty$. Then, in the reverse diffusion process, a model $p_\theta$ with parameters $\theta$ is trained to generate samples from input Gaussian noise $x_T \sim \mathcal{N}(0, I)$, with variance $\sigma^2_t$ that follows a fixed schedule. Formally,
$$p_\theta(x^{1:N}_{t-1} \mid x^{1:N}_t) := \mathcal{N}\big(x^{1:N}_{t-1};\ \mu_\theta(x_t, t),\ \sigma^2_t I\big), \quad (2)$$
where $\mu_\theta$ could be reformulated [21] as
$$\mu_\theta(x_t, t) = \frac{1}{\sqrt{\alpha_t}}\Big(x_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\, \epsilon_\theta(x_t, t)\Big), \quad (3)$$
where $\bar{\alpha}_t = \alpha_1 \cdot \alpha_2 \cdots \alpha_t$. So the model has to learn to predict the noise $\epsilon_\theta(x_t, t)$ from $x_t$ and the timestep $t$.
In our case, we want to use the diffusion model to generate sequences of full-body poses conditioned on the sparse tracking of joint features $p^{1:N}$. Thus, the reverse diffusion process becomes conditional: $p_\theta(x^{1:N}_{t-1} \mid x^{1:N}_t, p^{1:N})$. Moreover, we follow [42] to directly predict the clean body poses $x^{1:N}_0$ instead of predicting the residual noise $\epsilon_\theta(x_t, t)$. The objective function is then formulated as
$$\mathcal{L}_{dm} = \mathbb{E}_{x^{1:N}_0 \sim q(x^{1:N}_0),\, t}\big[\, \lVert x^{1:N}_0 - \hat{x}^{1:N}_0 \rVert_2^2 \,\big], \quad (4)$$
where $\hat{x}^{1:N}_0 = f_\theta(x^{1:N}_t, p^{1:N}, t)$ denotes the output of our model $f_\theta$.
We use the MLP architecture proposed in Sect. 3.2 as the backbone for the model $f_\theta$ that predicts the full-body poses. At time step $t$, the motion features $x^{1:N}_t$ and the observed joint features $p^{1:N}$ are first passed separately through a fully connected layer to obtain the latent features $\bar{x}^{1:N}_t$ and $\bar{p}^{1:N}$:
$$\bar{x}^{1:N}_t = FC_0(x^{1:N}_t), \quad (5)$$
$$\bar{p}^{1:N} = FC_1(p^{1:N}). \quad (6)$$
Then these features are concatenated together and fed to the MLP backbone: $\hat{x}^{1:N}_0 = \mathrm{MLP}(\mathrm{Concat}(\bar{x}^{1:N}_t, \bar{p}^{1:N}), t)$.
Block-wise Timestep Embedding. When utilizing diffusion models, the embedding of the timestep $t$ is often included as an additional input to the network. To achieve this, a common approach is to concatenate the timestep embedding with the input, similar to the positional embedding used in transformer-based methods [12, 53]. However, since our network is mainly composed of FC layers that mix the input features indiscriminately [50], the timestep embedding information can easily be lost after several layers, which hinders learning the denoising process and results in predicted motions with severe jittering artifacts, as shown in Section 4.4.2. In order to address the issue of losing timestep embedding information in our network, we introduce a novel strategy that repetitively injects the timestep embedding into every block of the MLP network. This process involves projecting the timestep embedding to match the input feature dimensions through a fully connected layer and a SiLU activation layer. The details of our pipeline are shown in Figure 3.
Figure 3. The architecture of our MLP-based diffusion model. $t$ is the noising step. $x^{1:N}_t$ denotes the motion sequence of length $N$ at step $t$, which is pure Gaussian noise when $t = T$. $p^{1:N}$ denotes the sparse upper-body signals of length $N$. $\hat{x}^{1:N}_t$ denotes the denoised motion sequence at step $t$.
Unlike previous work, such as [21], which predicts a scale and shift factor for each block from the timestep embedding, our proposed approach directly adds the timestep embedding projections to the input activations of each block.
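A compact sketch of one such block with block-wise timestep injection is given below: the timestep embedding goes through FC + SiLU and is added to the block input, a kernel-1 Conv1d mixes along the frame axis N, and an FC layer mixes along the feature axis D, with pre-LayerNorm and a residual connection. The exact layer ordering inside the block is an assumption based on the Figure 2/3 descriptions, not the released code.

```python
# Hedged sketch of an AGRoL-style MLP block with block-wise timestep injection.
import torch
import torch.nn as nn


class AGRoLBlock(nn.Module):
    def __init__(self, n_frames=196, dim=512, emb_dim=512):
        super().__init__()
        self.time_proj = nn.Sequential(nn.Linear(emb_dim, dim), nn.SiLU())
        self.norm = nn.LayerNorm(dim)
        self.temporal = nn.Conv1d(n_frames, n_frames, kernel_size=1)  # mixes along the N (frame) axis
        self.act = nn.SiLU()
        self.spatial = nn.Linear(dim, dim)                            # mixes along the D (feature) axis

    def forward(self, x, t_emb):                      # x: (B, N, D), t_emb: (B, emb_dim)
        x = x + self.time_proj(t_emb).unsqueeze(1)    # inject the timestep embedding in every block
        h = self.norm(x)
        h = self.temporal(h)                          # Conv1d with kernel 1 acts across frames per feature
        h = self.spatial(self.act(h))
        return x + h                                  # residual connection


blk = AGRoLBlock()
out = blk(torch.randn(2, 196, 512), torch.randn(2, 512))
print(out.shape)  # torch.Size([2, 196, 512])
```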
Our experiments in Sect. 4 validate that this approach significantly reduces jittering issues and enables the synthesis of smooth motions.
4. Experiments
Our models are trained and evaluated on the AMASS dataset [35]. To compare with previous methods, we use two different settings for training and testing. In the first setting, we follow the approach of [24], which utilizes three subsets of AMASS: CMU [8], BMLr [51], and HDM05 [37]. In the second setting, we adopt the data split employed in several recent works, including [4, 11, 43]. This approach employs a larger set of training data, including CMU [8], MPI Limits [3], Total Capture [52], Eyes Japan [13], KIT [36], BioMotionLab [51], BMLMovi [15], EKUT [36], ACCAD [1], MPI Mosh [32], SFU [2], and HDM05 [37] as training data, while HumanEva [45] and Transition [35] serve as testing data. In both settings, we adopt the SMPL [33] human model for the human pose representation and train our model to predict the global orientation of the root joint and the relative rotations of the other joints.
Method / MPJRE / MPJPE / MPJVE / Hand PE / Upper PE / Lower PE / Root PE / Jitter / Upper Jitter / Lower Jitter
Final IK / 16.77 / 18.09 / 59.24 / - / - / - / - / - / - / -
LoBSTr / 10.69 / 9.02 / 44.97 / - / - / - / - / - / - / -
VAE-HMD / 4.11 / 6.83 / 37.99 / - / - / - / - / - / - / -
AvatarPoser* / 3.08 / 4.18 / 27.70 / 2.12 / 1.81 / 7.59 / 3.34 / 14.49 / 7.36 / 24.81
MLP (Ours) / 2.69 / 3.93 / 22.85 / 2.62 / 1.89 / 6.88 / 3.35 / 13.01 / 9.13 / 18.61
AGRoL (Ours) / 2.66 / 3.71 / 18.59 / 1.31 / 1.55 / 6.84 / 3.36 / 7.26 / 5.88 / 9.27
GT / 0 / 0 / 0 / 0 / 0 / 0 / 0 / 4.00 / 3.65 / 4.52
Table 1. Comparison of our approach with state-of-the-art methods on a subset of the AMASS dataset following [24]. We report the MPJPE [cm], MPJRE [deg], MPJVE [cm/s], and Jitter [10^2 m/s^3] metrics. AGRoL achieves the best performance on MPJPE, MPJRE, and MPJVE, and outperforms other models especially on the Lower PE (lower-body position error) and Jitter metrics, which shows that our model generates accurate lower-body movement and smooth motions.
Method / MPJRE / MPJPE / MPJVE / Jitter
VAE-HMD† [11] / - / 7.45 / - / -
HUMOR† [43] / - / 5.50 / - / -
FLAG† [4] / - / 4.96 / - / -
AvatarPoser* / 4.70 / 6.38 / 34.05 / 10.21
MLP (Ours) / 4.33 / 6.66 / 33.58 / 21.74
AGRoL (Ours) / 4.30 / 6.17 / 24.40 / 8.32
GT / 0 / 0 / 0 / 2.93
Table 2. Comparison of our approach with state-of-the-art methods on the AMASS dataset following the protocol of [4, 11, 43]. We report the MPJRE [deg], MPJPE [cm], MPJVE [cm/s], and Jitter [10^2 m/s^3] metrics. The * denotes that we retrained AvatarPoser using public code. † denotes methods that use the pelvis location and rotation during inference, which are not directly comparable to our method, as we assume that the pelvis information is not available during training and testing. The best results are in bold, and the second-best results are underlined.
4.1. Implementation Details
We represent the joint rotations by the 6D reparametrization [60] due to its simplicity and continuity. Thus, for the sequences of body poses $y^{1:N} \in \mathbb{R}^{N \times S}$, $S = 22 \times 6$. The observed joint features $p^{1:N} \in \mathbb{R}^{N \times C}$ consist of the orientation, translation, orientation velocity, and translation velocity of the head and hands in the global coordinate system. Additionally, we adopt the 6D reparametrization for the orientation and orientation velocity, thus $C = 18 \times 3$. Unless otherwise stated, we set the frame number $N$ to 196.
MLP Network. We build our MLP network using 12 blocks (M = 12). All latent features in the MLP network have the same shape of $N \times 512$. The network is trained with batch size 256 and the Adam optimizer [29]. The learning rate is set to 3e-4 at the beginning and drops to 1e-5 after 200000 iterations.
The weight decay is set to 1e-4 for the entire training. During inference, we apply our model in an auto-regressive manner for longer sequences.
MLP-based Diffusion Model (AGRoL). We keep the MLP network architecture unchanged in the diffusion model. To inject the timestep embedding used in the diffusion process into the network, in each MLP block we pass the timestep embedding through a fully connected layer and a SiLU activation layer [41] and sum it with the input feature. The network is trained with exactly the same hyperparameters as the MLP network, with the exception of using AdamW [34] as the optimizer. During training, we set the number of sampling steps to 1000 and employ a cosine noise schedule [39]. However, to expedite the inference speed, we leverage the DDIM [47] technique, which allows us to use only 5 sampling steps instead of 1000 during inference. All experiments were carried out on a single NVIDIA V100 graphics card, using the PyTorch framework [40].
4.2. Evaluation Metrics
In line with previous works [11, 24, 43, 58], we adopt nine evaluation metrics that we group into three categories.
Rotation-related metric: Mean Per Joint Rotation Error [degrees] (MPJRE) measures the average relative rotation error for all joints.
Velocity-related metrics: These include Mean Per Joint Velocity Error [cm/s] (MPJVE) and Jitter. MPJVE measures the average velocity error for all joints, while Jitter [58] evaluates the mean jerk (change in acceleration over time) of all body joints in global space, expressed in 10^2 m/s^3. Jitter is an indicator of motion smoothness.
Position-related metrics: Mean Per Joint Position Error [cm] (MPJPE) quantifies the average position error across all joints. Root PE assesses the position error of the root joint, whereas Hand PE calculates the average position error for both hands. Upper PE and Lower PE estimate the average position error for joints in the upper and lower body, respectively.
4.3. Evaluation Results
We evaluate our method on the AMASS dataset with two different protocols. As shown in Table 1 and Table 2,
Figure 4. Qualitative comparison between AGRoL (top) and AvatarPoser [24] (bottom) on test sequences from the AMASS dataset. We visualize the predicted skeletons and render human body meshes. Top: AvatarPoser predictions in red. Bottom: AGRoL predictions in green. In both rows, the blue skeletons denote the |
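The 6D rotation reparametrization used in the implementation details above maps two 3-vectors to a rotation matrix by Gram-Schmidt orthogonalization (Zhou et al. [60]). This is standard math rather than AGRoL-specific code; a minimal NumPy version for a single joint looks like:

```python
# Standard 6D-to-rotation-matrix conversion (Gram-Schmidt), shown for one joint.
import numpy as np


def rot6d_to_matrix(x6: np.ndarray) -> np.ndarray:
    """x6: 6 numbers = two 3-vectors; returns a 3x3 rotation matrix."""
    a1, a2 = x6[:3], x6[3:]
    b1 = a1 / np.linalg.norm(a1)
    a2 = a2 - np.dot(b1, a2) * b1          # remove the component of a2 along b1
    b2 = a2 / np.linalg.norm(a2)
    b3 = np.cross(b1, b2)                  # third axis completes the orthonormal frame
    return np.stack([b1, b2, b3], axis=-1)


R = rot6d_to_matrix(np.array([1.0, 0.1, 0.0, 0.0, 1.0, 0.2]))
print(np.round(R.T @ R, 3))               # ~identity, confirming R is a valid rotation
```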
Chen_MobileNeRF_Exploiting_the_Polygon_Rasterization_Pipeline_for_Efficient_Neural_Field_CVPR_2023 | Abstract Neural Radiance Fields (NeRFs) have demonstrated amazing ability to synthesize images of 3D scenes from novel views. However, they rely upon specialized volumet-ric rendering algorithms based on ray marching that are mismatched to the capabilities of widely deployed graph-ics hardware. This paper introduces a new NeRF repre-sentation based on textured polygons that can synthesize novel images efficiently with standard rendering pipelines. The NeRF is represented as a set of polygons with textures representing binary opacities and feature vectors. Tradi-tional rendering of the polygons with a z-buffer yields an image with features at every pixel, which are interpreted by a small, view-dependent MLP running in a fragment shader to produce a final pixel color. This approach enables NeRFs to be rendered with the traditional polygon rasteri-zation pipeline, which provides massive pixel-level paral-lelism, achieving interactive frame rates on a wide range of compute platforms, including mobile phones. Project page: https://mobile-nerf.github.io | 1. Introduction Neural Radiance Fields (NeRF) [33] have become a pop-ular representation for novel view synthesis of 3D scenes. They represent a scene using a multilayer perceptron (MLP) that evaluates a 5D implicit function estimating the density and radiance emanating from any position in any direction, which can be used in a volumetric rendering framework to produce novel images. NeRF representations optimized to minimize multi-view color consistency losses for a set of posed photographs have demonstrated remarkable ability to reproduce fine image details for novel views. One of the main impediments to wide-spread adoption of NeRF is that it requires specialized rendering algorithms that are poor match for commonly available hardware. Tra-ditional NeRF implementations use a volumetric rendering 3Work done while at Google. Figure 1. Teaser – We present a NeRF that can run on a variety of common devices at interactive frame rates. algorithm that evaluates a large MLP at hundreds of sample positions along the ray for each pixel in order to estimate and integrate density and radiance. This rendering process is far too slow for interactive visualization. Recent work has addressed this issue by “baking” NeRFs into a sparse 3D voxel grid [21, 51]. For example, Hed-man et al. introduced Sparse Neural Radiance Grids (SNeRG) [21], where each active voxel contains an opac-ity, diffuse color, and learned feature vector. Rendering an image from SNeRG is split into two phases: the first uses ray marching to accumulate the precomputed diffuse col-ors and feature vectors along each ray, and the second uses a light-weight MLP operating on the accumulated feature vector to produce a view-dependent residual that is added to the accumulated diffuse color. This precomputation and deferred rendering approach increase the rendering speed of NeRF by three orders of magnitude. However, it still relies upon ray marching through a sparse voxel grid to produce the features for each pixel, and thus it cannot fully utilize the parallelism available in commodity graphics processing units (GPUs). In addition, SNeRG requires a significant amount of GPU memory to store the volumetric textures, which prohibits it from running on common mobile devices. 
In this paper, we introduce MobileNeRF, a NeRF that can run on a variety of common mobile devices at interactive frame rates. The NeRF is represented by a set of textured polygons, where the polygons roughly follow the surface of the scene, and the texture atlas stores opacity and feature vectors. To render an image, we utilize the classic polygon rasterization pipeline with Z-buffering to produce a feature vector for each pixel and pass it to a lightweight MLP running in a GLSL fragment shader to produce the output color. This rendering pipeline does not sample rays or sort polygons in depth order, and thus can model only binary opacities. However, it takes full advantage of the parallelism provided by z-buffers and fragment shaders in modern graphics hardware, and thus is 10× faster than SNeRG with the same output quality on standard test scenes. Moreover, it requires only a standard polygon rendering pipeline, which is implemented and accelerated on virtually every computing platform, and thus it runs on mobile phones and other devices previously unable to support NeRF visualization at interactive rates.
Contributions. In summary, MobileNeRF:
• Is 10× faster than the state-of-the-art (SNeRG), with the same output quality;
• Consumes less memory by storing surface textures instead of volumetric textures, enabling our method to run on integrated GPUs with limited memory and power;
• Runs on a web browser and is compatible with all devices we have tested, as our viewer is an HTML webpage;
• Allows real-time manipulation of the reconstructed objects/scenes, as they are simple triangle meshes. |
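To make the deferred step concrete, here is a toy NumPy illustration of turning a rasterized per-pixel feature buffer plus a view direction into a color with a tiny MLP. In MobileNeRF itself this runs as a GLSL fragment shader; the layer sizes and weights below are placeholders, not the trained model.

```python
# Toy deferred shading: per-pixel (feature, view direction) -> RGB via a small MLP.
import numpy as np

H, W, F = 4, 4, 8                                    # tiny image, 8-D rasterized features
feat = np.random.rand(H, W, F).astype(np.float32)    # stand-in for the rasterized feature buffer
view = np.tile(np.array([0.0, 0.0, 1.0], np.float32), (H, W, 1))   # per-pixel view direction

W1 = np.random.randn(F + 3, 16).astype(np.float32) * 0.1   # placeholder weights
W2 = np.random.randn(16, 3).astype(np.float32) * 0.1


def deferred_shade(feat, view):
    x = np.concatenate([feat, view], axis=-1)        # per-pixel MLP input
    h = np.maximum(x @ W1, 0.0)                      # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2)))           # sigmoid -> RGB in [0, 1]


print(deferred_shade(feat, view).shape)              # (4, 4, 3)
```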
Basak_Pseudo-Label_Guided_Contrastive_Learning_for_Semi-Supervised_Medical_Image_Segmentation_CVPR_2023 | Abstract Although recent works in semi-supervised learning (SemiSL) have accomplished significant success in nat-ural image segmentation, the task of learning discrimi-native representations from limited annotations has been an open problem in medical images. Contrastive Learn-ing (CL) frameworks use the notion of similarity measure which is useful for classification problems, however, they fail to transfer these quality representations for accurate pixel-level segmentation. To this end, we propose a novel semi-supervised patch-based CL framework for medical im-age segmentation without using any explicit pretext task. We harness the power of both CL and SemiSL, where the pseudo-labels generated from SemiSL aid CL by providing additional guidance, whereas discriminative class informa-tion learned in CL leads to accurate multi-class segmen-tation. Additionally, we formulate a novel loss that syn-ergistically encourages inter-class separability and intra-class compactness among the learned representations. A new inter-patch semantic disparity mapping using aver-age patch entropy is employed for a guided sampling of positives andnegatives in the proposed CL framework. Experimental analysis on three publicly available datasets of multiple modalities reveals the superiority of our pro-posed method as compared to the state-of-the-art methods. Code is available at: GitHub. | 1. Introduction Accurate segmentation of medical images provides salient and insightful information to clinicians for appro-priate diagnosis, disease progression, and proper treatment planning. With the recent emergence of neural networks, supervised deep learning approaches have achieved state-of-the-art performance in multiple medical image segmen-tation tasks [11, 36, 41]. This can be attributed to the avail-ability of large annotated datasets. But, obtaining pixel-wise annotations in a large scale is often time-consuming,requires expertise, and incurs a huge cost, thus methods al-leviating these requirements are highly expedient. Semi-supervised learning (SemiSL) based methods are promising directions to this end, requiring a very small amount of annotations, and producing pseudo-labels for a large portion of unlabeled data, which are further utilized to train the segmentation network [32, 33]. In recent years, these methods have been widely recognized for their supe-rior performance in downstream tasks (like segmentation, object detection, etc.), not only in natural scene images but also in biomedical image analysis [3, 4, 64]. Traditional SemiSL methods employ regression, pixel-wise cross en-tropy (CE), or mean squared error (MSE) loss terms or their variants. But, none of these losses imposes intra-class compactness and inter-class separability, restricting their full learning potential. Recent SemiSL methods in medi-cal vision employing self-ensembling strategy [14,44] have received attention because of their state-of-the-art perfor-mance in segmentation tasks. However, they are designed for a single dataset, failing to generalize across domains. Unsupervised domain adaptation (UDA) [18, 61] can be utilized to address this problem, e.g., Xie et al. [60] pro-posed an efficient UDA method with self-training strat-egy to unleash the learning potential. However, most of these methods heavily rely upon abundant source labels, hence producing substandard performance with limited la-bels in clinical deployment [71]. 
Representational learn-ing is another promising way to learn from limited anno-tations, where models trained for pretext tasks on large source domains can be transferred for downstream tasks in the target domain. Current advancements in represen-tational learning have been ascribed as the upturn of con-trastive learning (CL) [23], that aims to distinguish simi-lar samples ( positive ) from dissimilar ones ( negative ) re-garding a specified anchor point in a projected embedding space. This idea has resulted in substantial advancements in self-supervision paradigms by learning useful represen-tations from large-scale unlabeled data [9, 43, 57]. The fun-damental idea of CL is to pull the semantically similar sam-ples together and push the dissimilar ones apart in the em-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 19786 bedding space. This is accomplished by suitably designing an objective function, also known as the Contrastive Loss function, which optimizes the mutual information amongst different data points. The learned information from the pre-text task can thereafter be transferred for downstream tasks such as classification [62], segmentation [53, 66], etc. Despite their great success in recent years, CL frame-works are not devoid of problems, which broadly include: (a)sampling bias and aggravated class collision are re-ported in [15] because semantically similar instances are forcefully contrasted due to unguided selection of negative samples [9], causing substandard performance; (b)as sug-gested in [21], it is a common and desirable practice in CL to adapt a model trained for some pretext task on an exist-ing large-scale dataset of source domain (e.g., ImageNet) to a specific downstream task of the target domain. How-ever, significant domain shifts in heterogeneous datasets may often hurt the overall performance [73], especially in medical images; and (c)designing a suitable pretext task can be challenging, and often cannot be generalized across datasets [37]. The first of these problems can be addressed by having access to labeled samples. For instance, [27] shows that including labels significantly improves the clas-sification performance, but this is in a fully supervised set-ting. There have been recent attempts to partially address the last two problems, which are highlighted in section 2. Our Proposal and Contribution Taking motivation from these unsolved problems, we aim to leverage the potential of CL in the realm of SemiSL through several novel contributions: • We propose a novel end-to-end segmentation paradigm by harnessing the power of both CL and SemiSL. In our case, the pseudo-labels generated in SemiSL aids CL by providing an additional guidance to the metric learning strategy, whereas the important class discrim-inative feature learning in CL boosts the multi-class segmentation performance of SemiSL. Thus SemiSL aids CL and vice-versa in medical image segmenta-tion tasks. • We introduce a novel Pseudo-label Guided Contrastive Loss ( PLGCL ) which can mine class-discriminative features without any explicit training on pretext tasks , thereby demonstrating generalizability across multiple domains . • We employ a patch-based CL framework, where the positive andnegative patches are sampled from an entropy-based metric guided by the pseudo-labels ob-tained in the SemiSL setting. 
This prevents class collision, i.e., the forceful and unguided contrasting of semantically similar instances in CL. • Evaluation on three datasets from different domains shows that our method is effective, adding to its generalizability and robustness. |
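As an illustration of the patch-based, pseudo-label-guided contrastive objective outlined in the contributions above, the following is a hedged PyTorch sketch: patches are ranked by average prediction entropy, only the most reliable fraction is kept, and a supervised-contrastive-style loss treats patches sharing a pseudo-label as positives. The patch size, temperature, keep ratio, and the nearest-neighbour label reduction are assumptions; the paper's actual PLGCL formulation may differ in detail.

```python
import torch
import torch.nn.functional as F

def patch_entropy(probs, patch):
    """Average pixel-wise prediction entropy within each patch.
    probs: (B, C, H, W) softmax output of the segmentation network."""
    ent = -(probs * torch.log(probs.clamp_min(1e-8))).sum(dim=1, keepdim=True)  # (B, 1, H, W)
    return F.avg_pool2d(ent, kernel_size=patch)                                  # (B, 1, H/p, W/p)

def pseudo_label_guided_contrastive(emb, pseudo, probs, patch=16, tau=0.1, keep=0.5):
    """Supervised-contrastive-style loss over patch embeddings.
    Positives share the (down-sampled) pseudo-label; patches are ranked by
    average entropy and only the most reliable fraction `keep` is used.

    emb:    (B, D, H/patch, W/patch) patch embeddings from a projection head
    pseudo: (B, H, W) hard pseudo-labels from the semi-supervised branch
    probs:  (B, C, H, W) softmax predictions used for the entropy ranking
    """
    B, D, Hp, Wp = emb.shape
    assert probs.shape[-1] // patch == Wp, "patch size must match the embedding grid"
    z = F.normalize(emb.permute(0, 2, 3, 1).reshape(-1, D), dim=1)               # (B*Hp*Wp, D)
    # One label per patch; nearest-neighbour down-sampling is a simplification of
    # taking the per-patch majority label.
    lbl = F.interpolate(pseudo.float().unsqueeze(1), size=(Hp, Wp), mode="nearest")
    lbl = lbl.reshape(-1).long()
    ent = patch_entropy(probs, patch).reshape(-1)
    keep_idx = ent.argsort()[: max(2, int(keep * ent.numel()))]                  # lowest entropy first
    z, lbl = z[keep_idx], lbl[keep_idx]
    sim = z @ z.t() / tau
    eye = torch.eye(len(z), device=z.device, dtype=torch.bool)
    pos = (lbl[:, None] == lbl[None, :]) & ~eye                                  # same pseudo-label
    log_prob = sim - torch.logsumexp(sim.masked_fill(eye, -1e9), dim=1, keepdim=True)
    loss = -(pos.float() * log_prob).sum(dim=1) / pos.float().sum(dim=1).clamp_min(1.0)
    return loss.mean()
```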
Ao_BUFFER_Balancing_Accuracy_Efficiency_and_Generalizability_in_Point_Cloud_Registration_CVPR_2023 | Abstract An ideal point cloud registration framework should have superior accuracy, acceptable efficiency, and strong generalizability. However, this is highly challenging since existing registration techniques are either not accurate enough, far from efficient, or generalized poorly. It remains an open question how to achieve a satisfying balance among these three key elements. In this paper, we propose BUFFER, a point cloud registration method for balancing accuracy, efficiency, and generalizability. The key to our approach is to take advantage of both point-wise and patch-wise techniques, while overcoming their inherent drawbacks simultaneously. Different from a simple combination of existing methods, each component of our network has been carefully crafted to tackle specific issues. Specifically, a Point-wise Learner is first introduced to enhance computational efficiency by predicting keypoints and to improve the representation capacity of features by estimating point orientations; a Patch-wise Embedder, which leverages a lightweight local feature learner, is then deployed to extract efficient and general patch features. Additionally, an Inliers Generator, which combines simple neural layers and general features, is presented to search inlier correspondences. Extensive experiments on real-world scenarios demonstrate that our method achieves the best of both worlds in accuracy, efficiency, and generalization. In particular, our method not only reaches the highest success rate on unseen domains, but also is almost 30 times faster than the strong baselines specializing in generalization. Code is available at https://github.com/aosheng1996/BUFFER . | 1. Introduction Point cloud registration plays a critical role in LiDAR SLAM [23, 25], 3D reconstruction [44], and robotic navigation [22, 36]. An ideal registration framework not only requires aligning geometries accurately and efficiently, but also can be generalized to unseen scenarios acquired by different sensors. *Corresponding author: guoyulan@sysu.edu.cn. Figure 1. Comparisons of the registration accuracy on the indoor 3DMatch [66] dataset, efficiency, and generalizability on the outdoor ETH [49] dataset of different approaches. Note that all methods are trained only on the 3DMatch dataset. Our method not only achieves the highest recall on 3DMatch, but also has the best generalization ability and efficiency across the unseen ETH dataset. However, due to uneven data quality (e.g., noise distribution, non-uniform density, varying viewing angles, domain gaps across different sensors), it remains challenging to simultaneously achieve a satisfactory balance between efficiency, accuracy, and generalization. Existing registration techniques can be mainly categorized into correspondences-based [30, 63, 66] and correspondences-free methods [3, 60, 61].
By establishing a series of reliable correspondences, the correspondences-based methods usually have better registration performance compared with correspondences-free methods, especially in large-scale scenarios. However, these correspondence-based methods are still not ready for large-scale real-world applications as they are either not accurate enough, far from efficient, or generalized poorly. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 1255 Overall, the limitations of existing correspondence-based methods lie in two aspects. First , there is currently no unified, efficient, and general feature learning framework. A number of patch-wise methods [21, 66] usually employ complex networks coupled with sophisticated steps to en-code the fine-grained geometry of local 3D patches. Ben-efiting from local characteristics that are inherently robust to occlusion and easy to be discriminated, patch-wise meth-ods usually have good generalization ability whilst low effi-ciency. To improve computational efficiency, several point-wise methods [5, 26] resort to adopting a hierarchical ar-chitecture [51] to consecutively sample raw point clouds. However, the hierarchical architecture tends to capture the global context rather than local geometry, which makes the learned point-wise features easy to homogenize and hard to be matched correctly especially for unseen contexts [1]. Second , there is no efficient and general correspondence search mechanism. Most correspondences-based registra-tion frameworks [1, 52] leverage the RANSAC [18] or a coarse-to-fine matching strategy [64] to search reliable cor-respondences. Considering the efficiency of the RANSAC algorithm is related to the inliers, this mechanism would be time-consuming when inlier rate is very low. Additionally, the coarse-to-fine strategy failed to generalize to unseen do-mains due to the reliance on global context matching. A handful of recent works also attempt to leverage un-supervised domain adaptation techniques [24] or simplify the network architecture [1] to achieve a better trade-off be-tween generalization and efficiency. However, they either need an extra target dataset for training or sacrifice the rep-resentation capacity of the learned models. Overall, effi-ciency and generalization seem to contradict each other as existing techniques inherently specialize in one field and do not complement each other. In this paper, we achieve the best of both worlds on ef-ficiency and generalizability by combining the point-wise and patch-wise methods. An efficient and general search mechanism is also proposed to increase the inlier rate of correspondences. The proposed registration framework, termed BUFFER, mainly consists of a Point-wise Learner , aPatch-wise Embedder , and an Inliers Generator . The in-put point clouds are first fed into the Point-wise Learner , where a novel equivariant fully convolutional architecture is used to predict point-wise saliencies and orientations, fur-ther reducing computational cost and enhancing the repre-sentation ability of features. With the selected keypoints and learned orientations, the Patch-wise Embedder uti-lizes a lightweight patch-based feature learner, i.e.,Mini-SpinNet [1], to extract efficient and general local features and cylindrical feature maps. 
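The role of the Point-wise Learner and of the orientation-normalized patches fed to the Patch-wise Embedder can be illustrated with the sketch below, which selects keypoints by predicted saliency and expresses each keypoint's neighborhood in a local frame aligned with its predicted orientation. This is a simplified stand-in: the equivariant network, Mini-SpinNet, and the cylindrical feature maps themselves are not reproduced, and the function names and radius are assumptions.

```python
import numpy as np

def select_keypoints(points, saliency, k=256):
    """Keep the k most salient points (saliency as predicted by the point-wise learner)."""
    idx = np.argsort(-saliency)[:k]
    return points[idx], idx

def extract_oriented_patch(points, center, z_axis, radius=0.3):
    """Gather neighbors of `center` and express them in a local frame whose z-axis is
    the orientation predicted for that keypoint, so the patch descriptor does not
    have to re-learn rotation invariance."""
    nbrs = points[np.linalg.norm(points - center, axis=1) < radius] - center
    z = z_axis / (np.linalg.norm(z_axis) + 1e-8)
    # Build an arbitrary orthonormal basis (x, y, z); any remaining in-plane rotation
    # is handled downstream, e.g. by a cylindrical feature map.
    tmp = np.array([1.0, 0.0, 0.0]) if abs(z[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    x = np.cross(tmp, z); x /= np.linalg.norm(x)
    y = np.cross(z, x)
    R = np.stack([x, y, z], axis=0)          # world -> local rotation
    return nbrs @ R.T
```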
By matching local features, a set of initial correspondences coupled with correspond-ing cylindrical feature maps can be obtained. These general cylindrical feature maps are then fed into the Inlier Gener-ator, which predicts a rigid transformation for each corre-spondence using a lightweight 3D cylindrical convolutional network [1] and generates the final reliable set of correspon-dences by seeking an optimal transformation, followed by RANSAC [18] to estimate a finer transformation. Actually, it is non-trivial to achieve a satisfactory balance between accuracy, efficiency, and generalizability if sim-ply combining existing methods. For example, the point-wise method Predator [26] is vulnerable to unseen scenar-ios while the patch-wise method SpinNet [1] is highly time-consuming. When combining them together directly, the whole framework is neither efficient nor general as verified in Fig. 1. In contrast, each component of our BUFFER has been carefully crafted to tackle specific issues, and thus a superior balance is more likely to be realized. As shown in Fig. 1, being trained only on the 3DMatch dataset, our BUFFER not only achieves the highest regis-tration recall of 92.9% on the 3DMatch dataset, but also reaches the best success rate of 99.30% on the unseen out-door ETH dataset (significantly surpassing the best point-wise baseline GeoTrans [52] by nearly 10% ). Meanwhile, our BUFFER is almost an order of magnitude faster than patch-wise methods [1, 48, 57]. Extensive experiments jus-tify the superior performance and compelling efficiency of our method. Overall, our contributions are three-fold: • We propose a new point cloud registration framework by skillfully combining the point-wise and patch-wise paradigms, achieving the best of both worlds in accuracy, efficiency, and generalizability. • We introduce an equivariant fully convolutional architec-ture to predict point-wise orientations and saliencies. • A new correspondence search strategy is introduced to enhance the inlier ratio of initial correspondences. |
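For completeness, here is a generic sketch of the correspondence search stage that precedes and follows such an inlier-generation module: mutual nearest-neighbor matching of patch descriptors, then a plain RANSAC loop with a Kabsch solver. This is the textbook baseline rather than BUFFER's transformation-prediction mechanism; thresholds and iteration counts are assumptions.

```python
import numpy as np

def mutual_nn_matches(feat_a, feat_b):
    """Mutual nearest-neighbour matching of L2-normalised descriptors."""
    sim = feat_a @ feat_b.T
    ab, ba = sim.argmax(1), sim.argmax(0)
    return np.array([(i, j) for i, j in enumerate(ab) if ba[j] == i])

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) aligning src to dst."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def ransac_registration(kp_a, kp_b, matches, iters=1000, thresh=0.05, seed=0):
    """Estimate a rigid transform from putative keypoint matches."""
    rng = np.random.default_rng(seed)
    best_R, best_t, best_inliers = np.eye(3), np.zeros(3), 0
    for _ in range(iters):
        sample = matches[rng.choice(len(matches), 3, replace=False)]
        R, t = kabsch(kp_a[sample[:, 0]], kp_b[sample[:, 1]])
        resid = np.linalg.norm((kp_a[matches[:, 0]] @ R.T + t) - kp_b[matches[:, 1]], axis=1)
        inliers = int((resid < thresh).sum())
        if inliers > best_inliers:
            best_R, best_t, best_inliers = R, t, inliers
    return best_R, best_t, best_inliers
```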
De_Luigi_DrapeNet_Garment_Generation_and_Self-Supervised_Draping_CVPR_2023 | Abstract Recent approaches to drape garments quickly over arbi-trary human bodies leverage self-supervision to eliminate the need for large training sets. However, they are designed to train one network per clothing item, which severely lim-its their generalization abilities. In our work, we rely on self-supervision to train a single network to drape multiple garments. This is achieved by predicting a 3D deformation field conditioned on the latent codes of a generative net-work, which models garments as unsigned distance fields. Our pipeline can generate and drape previously unseen garments of any topology, whose shape can be edited by ma-nipulating their latent codes. Being fully differentiable, our formulation makes it possible to recover accurate 3D mod-els of garments from partial observations – images or 3D scans – via gradient descent. Our code is publicly available athttps://github.com/liren2515/DrapeNet . Equal contributions1. Introduction Draping digital garments over differently-shaped bod-ies in random poses has been extensively studied due to its many applications such as fashion design, moviemak-ing, video gaming, virtual try-on and, nowadays, virtual and augmented reality. Physics-based simulation (PBS) [3, 12, 23,32,33,38,39,47–49,51,60] can produce outstanding re-sults, but at a high computational cost. Recent years have witnessed the emergence of deep neu-ral networks aiming to achieve the quality of PBS draping while being much faster, easily differentiable, and offer-ing new speed vs. accuracy tradeoffs [15, 16, 25, 36, 44, 46, 50, 53, 55]. These networks are often trained to pro-duce garments that resemble ground-truth ones. While ef-fective, this requires building training datasets, consisting of ground-truth meshes obtained either from computation-ally expensive simulations [31] or using complex 3D scan-ning setups [37]. Moreover, to generalize to unseen gar-ments and poses, these supervised approaches require train-ing databases encompassing a great variety of samples de-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 1451 picting many combinations of garments, bodies and poses. The recent PBNS and SNUG approaches [5, 42] address this by casting the physical models adopted in PBS into con-straints used for self-supervision of deep learning models. This makes it possible to train the network on a multitude of body shapes and poses without ground-truth draped gar-ments. Instead, the predicted garments are constrained to obey physics-based rules. However, both PBNS and SNUG, require training a separate network for each garment. They rely on mesh templates for garment representation and fea-ture one output per mesh vertex. Thus, they cannot handle meshes with different topologies, even for the same gar-ment. This makes them very specialized and limits their applicability to large garment collections as a new network must be trained for each new clothing item. In this work, we introduce DrapeNet , an approach that also relies on physics-based constraints to provide self-supervision but can handle generic garments by condition-ing a single draping network with a latent code describing the garment to be draped. 
We achieve this by coupling the draping network with a garment generative network, com-posed of an encoder and a decoder. The encoder is trained to compress input garments into compact latent codes that are used as input condition for the draping network. The decoder, instead, reconstructs a 3D garment model from its latent code, thus allowing us to sample and edit new gar-ments from the learned latent space. Specifically, we model the output of the garment de-coder as an unsigned distance function (UDF), which were demonstrated [14] to yield better accuracy and fewer inter-penetrations than the inflated signed distance functions of-ten used for this purpose [10, 19]. Moreover, UDFs can be triangulated in a differentiable way [14] to produce ex-plicit surfaces that can easily be post-processed, making our pipeline fully differentiable. Hence, DrapeNet can not only drape garments over given body shapes but can also perform gradient-based optimization to fit garments, along with body shapes and poses, to partial observations of clothed people, such as images or 3D scans. Our contributions are as follows: • We introduce a single garment draping network con-ditioned on a latent code to handle generic garments from a large collection (e.g. toporbottom garments); • By exploiting physics-based self-supervision, our pipeline only requires a few hundred garment meshes in a canonical pose for training; • Our framework enables the fast draping of new gar-ments with high fidelity, as well as the sampling and editing of new garments from the learned latent space; • Being fully differentiable, our method can be used to recover accurate 3D models of clothed people from im-ages and 3D scans.2. Related Work Implicit Neural Representations for 3D Surfaces. Im-plicit neural representations have emerged a few years ago as an effective tool to represent surfaces whose topology is not known a priori. They can be implemented using (clipped) signed distance functions (SDF) [35] or occupan-cies[28]. When an explicit representation is required, it can be obtained using Marching Cubes [18] and this can be done while preserving differentiability [1,27,41]. However, they can only represent watertight surfaces. Thus, to represent open surfaces, such as clothes, it is possible to use inflated SDFs surrounding them. How-ever, this entails a loss in accuracy and there has been a recent push to replace SDFs by unsigned distance func-tions (UDFs) [9, 52, 61]. One difficulty in so doing was that Marching Cubes was not designed with UDFs in mind, and obtaining explicit surfaces from these UDFs was there-fore non-trivial. This has been addressed in [14] by modi-fying the Marching Cubes algorithm to operate with UDFs. We model garment with UDFs and use [14] to mesh them. Other works augment signed distance fields with covariant fields to encode open surface garments [8, 43]. Draping Garments over 3D Bodies. Two main classes of methods coexist, physics-based algorithms [3, 21, 22, 30, 31,48] that produce high-quality drapings but at a high com-putational cost, and data-driven approaches that are faster but often at the cost of realism. Among the latter, template-based approaches [5,7,17,34, 36, 42, 45, 50] are dominant. Each garment is modeled by a specific triangulated mesh and a draping function is learned for each one. In other words, they do not generalize. There are however a number of exceptions. In [6, 15] the mesh is replaced by 3D point clouds that can represent generic garments. 
This enables deforming garments with arbitrary topology and geometric complexity, by estimating the de-formation separately for each point. [59] goes further and allows differentiable changes in garment topology by sam-pling a fixed number of points from the body mesh. Un-fortunately, this point cloud representation severely limits possible downstream applications. In recent approaches [10, 19], a space of garments is learned with clothing items modeled as inflated SDFs and one single shared network to predict their deformations as a 3D displacement field. This makes deployment in real-world scenarios easier and allows the reconstruction of garments from images and 3D scans. However, the inflated SDF scheme reduces realism and precludes post-processing using standard physics-based simulators or other cloth-specific downstream applications. Furthermore, both models are fully supervised and require a dataset of draped garments whose collection is extremely time-consuming. Alleviating the need for costly ground-truth draped gar-ments is tackled in [5, 42], by introducing physics-based 1452 IntersectionSolverGarment generative networkGarment draping network Garmentlatent codeℒ!"#$+ℒ%&'!3D queries𝑥∈ℝ(×*Body shape and pose (𝛽, 𝜃)𝑧𝑧$+,GarmentEncoder𝑧!"#++ UDFDecoderMeshUDF GarmentUDF 𝑧$"!𝑧-+$ GarmentDecoderSUPERVISEDSELF-SUPERVISED 𝛥𝑥,𝛥𝑥!"#Δ𝑥$%Skinning-𝒲ℒ%!&'()+ℒ$*)+(),+ℒ,&'-(!.+ℒ/"00(%(") ℒ%!&'()+ℒ$*)+(),+ℒ,&'-(!.+ℒ/"00(%(")+ℒ#()+ℒ0'.*&TopDisplacementsBottomDisplacements𝛥𝑥,𝛥𝑥!"#Figure 2. Overview of our framework. Left: Garment generative network, trained to embed garments into compact latent codes and predict their unsigned distance field (UDF) from such vectors. UDFs are then meshed using [14]. Right: Garment draping network, conditioned on the latent codes of the generative network. It is trained in a self-supervised way to predict the displacements xandxref to be applied to the vertices of given garments, before skinning them according to body shape and pose ( ,) with the predicted blending weights W. It includes an Intersection Solver module to prevent intersection between top and bottom garments. losses to train draping networks in a self-supervised man-ner. The approach of [42] relies on a mass spring model to enforce the physical consistency of static garments de-formed by different body poses. The method of [5] also accounts for variable body shapes and dynamic effects; fur-thermore, it incorporates a more realistic and expressive material model. Both methods, however, require training one network per garment, a limitation we remove. 3. Method We aim to realistically deform and drape generic gar-ments over human bodies of various shapes and poses. To this end, we introduce the DrapeNet framework, pre-sented in Fig. 2. It comprises a generative network shown on the left and a draping network shown on the right. Only the first is trained in a supervised manner, but using only static unposed garments meshes. This is key to avoiding having to run physics-based simulations to generate ground-truth data. Furthermore, we condition the draping network on latent vectors representing the input garments, which al-lows us to use the same network for very different garments, something that competing methods [5, 42] cannot do. The generative network is a decoder trained using an en-coder that turns a garment into a latent code zthat can then be decoded to an Unsigned Distance Function (UDF), from which a triangulated mesh can be extracted in a differen-tiable manner [14]. 
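A minimal sketch of the encode-then-decode interface just described: surface samples of a garment are mapped to a latent code, and the conditional decoder returns an unsigned distance for arbitrary query points, from which a mesh would then be extracted with the UDF-aware meshing of [14]. The toy encoder below replaces DGCNN with a mean-pooled linear map and the decoder uses simple concatenation instead of Conditional Batch Normalization, so it only illustrates the data flow, not the actual architecture; all sizes are assumptions.

```python
import torch
import torch.nn as nn

class GarmentUDFDecoder(nn.Module):
    """Toy stand-in for the conditional UDF decoder D_G: (x, z) -> unsigned distance."""
    def __init__(self, latent_dim=32, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),   # unsigned distances are non-negative
        )

    def forward(self, xyz, z):
        z = z.unsqueeze(1).expand(-1, xyz.shape[1], -1)     # broadcast the code to all queries
        return self.net(torch.cat([xyz, z], dim=-1)).squeeze(-1)

# Usage: encode sampled surface points into a code, then query the UDF on random locations.
encoder = nn.Sequential(nn.Linear(3, 32))                   # placeholder for DGCNN
points = torch.rand(1, 2048, 3) * 2 - 1                     # points sampled on the garment surface
z = encoder(points).mean(dim=1)                             # (1, 32) latent code
decoder = GarmentUDFDecoder()
grid = torch.rand(1, 4096, 3) * 2 - 1                       # query locations in [-1, 1]^3
udf = decoder(grid, z)                                      # (1, 4096) unsigned distances
# A mesh would then be extracted from `udf` with the differentiable UDF meshing of [14].
```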
The UDF representation allows us to accurately represent open surfaces and the many openings that garments typically feature. Since the top and bottom garments – shirts and trousers – have different patterns, we train one generative model for each. Both networks have the same architecture but different weights.The resulting garment generative network is only trained to output garments in a canonical shape, pose, and size that fit a neutral SMPL [24] body. Draping the resulting gar-ments to bodies in non-canonical poses is then entrusted to adraping network , again one for the top and one for the bot-tom. As in [5,19,42], this network predicts vertex displace-ments w.r.t. the neutral position. The deformed garment is then skinned onto the articulated body model. To enable generalization to different tops and bottoms, we condition the draping process on the garment latent codes of the gen-erative network, shown as ztopandzbotin Fig. 2. We use a small database of static unposed garments loosely aligned with bodies in the canonical position to train the two garment generating networks. This being done, we exploit physics-based constraints to train in a fully se | lf-supervised manner the top and bottom draping networks for realism, without interpenetrations with the body and be-tween the garments themselves. 3.1. Garment Generative Network To encode garments into latent codes that can then be decoded into UDFs, we rely on a point cloud encoder that embeds points sampled from the unposed garment surface into a compact vector. This lets us obtain latent codes for previously unseen garments in a single inference pass from points sampled from its surface. This can be done given any arbitrary surface triangulation. Hence, it gives us the flexibility to operate on any given garment mesh. We use DGCNN [56] as the encoder. It first propagates the features of points within the same local region at mul-tiple scales and then aggregates them into a single global embedding by max pooling. We pair it with a decoder that takes as input a latent vector, along with a point in 3D space, 1453 and returns its (unsigned) distance to the garment. The de-coder is a multi-layer perceptron (MLP) that relies on Con-ditional Batch Normalization [54] for conditioning on the input latent vector. We train the encoder and the decoder by encouraging them to jointly predict distances that are small near the training garments’ surface and large elsewhere. Because the algorithm we use to compute triangulated meshes from the predicted distances [14] relies on the gradient vectors of the UDF field, we also want these gradients to be as accurate as possible [2, 61]. We therefore minimize the loss Lgarm =Ldist+gLgrad; (1) whereLdistencodes our distance requirements, Lgrad the gradient ones, and gis a weight balancing their influence. More formally, at training time and given a mini-batch comprisingBgarments, we sample a fixed number Pof points from the surface of each one. For each resulting point cloud pi(1iB), we use the garment encoder EGto compute the latent code zi=EG(pi) (2) and use it as input to the decoder DG. It predicts an UDF field supervised with Eq. (1), whose terms we define below. Distance Loss. Having experimented with many differ-ent formulations of this loss, we found the following one both simple and effective. 
Given $N$ points $\{x_{ij}\}_{j \leq N}$ sampled from the space surrounding the $i$-th garment, we pick a distance threshold $\delta$, clip all the ground-truth distance values $\{y_{ij}\}$ to it, and linearly normalize the clipped values to the range $[0,1]$. This yields normalized ground-truth values $\bar{y}_{ij} = \min(y_{ij}, \delta)/\delta$. Similarly, we pass the output of the final layer of $D_G$ through a sigmoid function $\sigma(\cdot)$ to produce a prediction in the same range for point $x_{ij}$, $\tilde{y}_{ij} = \sigma(D_G(x_{ij}, z_i))$ (3). Finally, we take the loss to be $L_{dist} = \mathrm{BCE}\left[(\bar{y}_{ij})_{i \leq B,\, j \leq N},\, (\tilde{y}_{ij})_{i \leq B,\, j \leq N}\right]$ (4), where $\mathrm{BCE}[\cdot,\cdot]$ stands for binary cross-entropy. As observed in [13], the sampling strategy used for points $x_{ij}$ strongly impacts training effectiveness. We describe ours in the supplementary. In our experiments, we set $\delta = 0.1$, the top and bottom garments being normalized respectively into the upper and lower halves of the $[-1,1]^3$ cube. Gradient Loss. Given the same sample points as before, we take the gradient loss to be $L_{grad} = \frac{1}{BN}\sum_{i,j} \lVert g_{ij} - \hat{g}_{ij} \rVert_2^2$ (5), where $g_{ij} = \nabla_x y_{ij} \in \mathbb{R}^3$ is the ground-truth gradient of the $i$-th garment's UDF at $x_{ij}$ and $\hat{g}_{ij} = \nabla_x D_G(x_{ij}, z_i)$ the one of the predicted UDF, computed by backpropagation. 3.2. Garment Draping Network We describe our approach to draping generic garments as opposed to specific ones and our self-supervised scheme. We assume that all garments are made of a single common fabric material, and we drape them in a quasi-static manner. 3.2.1 Draping Generic Garments We rely on SMPL [24] to parameterize the body in terms of shape ($\beta$) and pose ($\theta$) parameters. It uses Linear Blend Skinning to deform a body template. Since garments generally follow the pose of the underlying body, we extend the SMPL skinning procedure to the 3D volume around the body for garment draping. Given a point $x \in \mathbb{R}^3$ in the garment space, its position $D(x; \beta, \theta, z)$ after draping becomes $D(x; \beta, \theta, z) = W(x_{(\beta,\theta,z)}, \beta, \theta, \mathcal{W}(x))$ (6), with $x_{(\beta,\theta,z)} = x + \Delta x(x, \beta) + \Delta x_{ref}(x, \beta, \theta, z)$ and $\Delta x_{ref}(x, \beta, \theta, z) = B(\beta, \theta)\, M(x, z)$, where $W(\cdot)$ is the SMPL skinning function, applied with blending weights $\mathcal{W}(x)$, over the point displaced by $\Delta x(x, \beta)$ and $\Delta x_{ref}(x, \beta, \theta, z)$. $\mathcal{W}(x)$ and $\Delta x(x, \beta)$ are computed as in [19, 45]. However, they only give an initial deformation for garments that roughly fits the underlying body. To refine it, we introduce a new term, $\Delta x_{ref}(x, \beta, \theta, z)$. It is a deformation field conditioned on body parameters $\beta$ and $\theta$, and on the garment latent code $z$ from the generative network. Following the linear decomposition of displacements in SMPL, it is the composition of an embedding $B(\beta, \theta) \in \mathbb{R}^{N_B}$ of body parameters and a displacement matrix $M(x, z) \in \mathbb{R}^{N_B \times 3}$ conditioned on $z$. Being conditioned on the latent code $z$, $\Delta x_{ref}$ can deform different garments differently, unlike the methods of [5, 42]. The number of vertices does not need to be fixed, since displacements are predicted separately for each vertex. Since we have distinct encodings for the top and bottom garments, for each one we train two MLPs ($B$, $M$) to predict $\Delta x_{ref}$. The other MLPs for $\mathcal{W}(\cdot)$ and $\Delta x(\cdot)$ are shared. 3.2.2 Self-Supervised Training We first learn the weights of $\mathcal{W}(\cdot)$ and $\Delta x(\cdot)$ as in [19, 45], which does not require any annotation or simulation data but only the blending weights and shape displacements of SMPL. We then train our deformation fields $\Delta x_{ref}$ in a fully self-supervised fashion by minimizing the physics-based losses introduced below. In this way, we completely eliminate the huge cost that extensive simulations would entail. Top Garments. For upper body garments – shirts, t-shirts, vests, tank tops, etc.
– the deformation field is trained using the loss from [42], expressed as Ltop=Lstrain +Lbend+Lgravity +Lcol; (7) 1454 whereLstrain is the membrane strain energy of the de-formed garment,Lbend the bending energy caused by the folding of adjacent faces, Lgravity the gravitational poten-tial energy, andLcola penalty for collisions between body and garment. Unlike in [42], we only consider the quasi-static state after draping, that is, without acceleration. Bottom Garments. Due to gravity, bottom garments, such as trousers, would drop onto the floors if we used only the loss terms of Eq. (7). We thus introduce an extra loss term to constrain the deformation of vertices around the waist and hips. The loss becomes Lbottom =Lstrain +Lbend+Lgravity +Lcol+Lpin; Lpin=X v2Vjxyj2+(jxxj2+jxzj2); (8) whereVis the set of garment vertices whose closest body vertices are located in the region of the waist and hips. See supplementary material for details. The terms xx,xy andxzare the deformations along the X, Y and Z axes, respectively. is a positive value smaller than 1 that penal-izes deformations along the vertical direction (Y axis) and produces natural deformations along the other directions. Top-Bottom Intersection. To ensure that the top and bottom garments do not intersect with each other when we drape them on the same body, we define a loss LISthat ensures that when the top and the bottom garments overlap, the bottom garment vertices are closer to the body mesh than the top ones, which prevents them from intersecting – this is arbitrary, and the following could be formulated the other way around. To this end, we introduce an Intersection Solver (IS) network. It predicts a displacement correction xIS, added only when draping bottom garments as ~x(ztop;zbot)=x(zbot)+ xIS(x;ztop;zbot); (9) where we omit the dependency of ~x,xandxISon the body parameters ( ;)for simplicity. ztopandzbotare the latent codes of the top and bottom garments, and x(zbot)is the input point displaced according to Eq. (6). The skinning function of Eq. (6) is then applied to ~x(ztop;zbot)for draping. xIS()is implemented as a simple MLP and trained with LIS=Lbottom +Llayer; (10) whereLlayer is a loss whose minimization requires the top and bottom garments to be separated from each other. We formulate it as Llayer =X vB2Cmax(dbot(vB) dtop(vB);0);(11) whereCis the set of body vertices covered by both the top and bottom garments, dtop()anddbot()the distance to the top and the bottom garments respectively, and a positive value smaller than 1 (more details in the supplementary). Sampling and draping new garments Draping generic garmentsFitting garments to observationsBottom garmentslatent spaceTop garmentslatent space𝑧!"##"$𝑧#"% 𝑧#"%𝑧!"##"$(𝛽,𝜃)GarmentEncoderDrapingNetworkDrapingNetworkGarmentDecoderDrapingNetworkGarmentDecoder backpropagation Figure 3. Overview of DrapeNet applications. Top: New gar-ments can be sampled from the latent spaces of the generative net-works, and deformed by the draping networks to fit to a given body. Center: The garment encoders and the draping networks form a general purpose framework to drape any garment with a single forward pass. Bottom: Being a differentiable parametric model, our framework can reconstruct 3D garments by fitting ob-servations such as images or 3D scans. The red boxes indicate the parameters optimized in this process. 4. Experiments We first describe our experimental setup and test DrapeNet for the different purposes depicted by Fig. 3. 
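The two DrapeNet-specific penalties introduced above, the pin term of Eq. (8) and the layering term of Eq. (11), admit a compact implementation. The sketch below assumes precomputed per-vertex displacements and body-to-garment distances, and treats the epsilon and delta weights as free hyper-parameters (the text states only that both are positive values smaller than 1); the strain, bending, gravity, and collision terms are omitted.

```python
import torch

def pin_loss(delta_x, pinned_idx, eps=0.1):
    """Eq. (8) sketch: penalise vertical displacement of waist/hip vertices strongly
    and horizontal displacement lightly. delta_x: (V, 3) predicted displacements."""
    d = delta_x[pinned_idx]
    return (d[:, 1] ** 2 + eps * (d[:, 0] ** 2 + d[:, 2] ** 2)).sum()

def layering_loss(d_bottom, d_top, covered_idx, delta=0.5):
    """Eq. (11) sketch: on body vertices covered by both garments, the bottom garment
    should stay closer to the body than (delta times) the distance to the top garment.
    d_bottom, d_top: (Vb,) distances from body vertices to each draped garment."""
    gap = d_bottom[covered_idx] - delta * d_top[covered_idx]
    return torch.clamp(gap, min=0.0).sum()
```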
They include reconstructing different kinds of garments and editing them by manipulating their latent codes. We then gauge the draping network both qualitatively and quantita-tively. Finally, we use DrapeNet to reconstruct garments from images and 3D scans. 4.1. Settings, Datasets and Metrics Datasets. Both our generative and draping networks are trained with garments from CLOTH3D [4], a synthetic dataset that contains over 7K sequences of animated 3D hu-mans parametrized used the SMPL model and wearing dif-ferent garments. Each sequence comprises up to 300 frames and features garments coming from different templates. For training, we randomly selected 600 top garments (t-shirts, shirts, tank tops, etc.) and 300 bottom garments (both long and short trousers). Neither for the generative nor for the draping networks did we use the simulated deformations of the selected garments. Instead, we trained the networks us-ing only garment meshes on average body shapes in T-pose. 1455 GTPredicted TARGETRESULTFigure 4. Generative network: reconstruction of unseen gar-ments in neutral pose/shape. The latent codes are obtained with the garment encoder, then decoded into open surface meshes. By contrast, for testing purposes, we selected random cloth-ing items – 30 for top garments and 30 bottom ones – and considered whole simulated sequences. Training. We train two different models for top and bottom garments, both for the generative and for the drap-ing parts of our framework. First, the generative models are trained on the 600/300 neutral garments Then, with the generative networks weights frozen, we train the draping networks by following [42]: body poses are sampled ran-domly from the AMASS [26] dataset, and shapes uni-formly from [ 3;3]10at each step. The other hyperparam-eters are given in the supplementary material. Metrics. We report the Euclidean distance (ED), in-terpenetration ratio between body and garment (B2G), and intersection between top and bottom garments (G2G). ED is computed between corresponding vertices of the consid-ered meshes. B2G is the area ratio between the garment faces inside the body and the whole surface as in [19]. Since CLOTH3D exclusively features pairs of top/bottom garments with the bottom one closer to the body, G2G is computed by detecting faces of the bottom garment that are outside of the top one, and taking the area ratio betwee |
Chen_FFF_Fragment-Guided_Flexible_Fitting_for_Building_Complete_Protein_Structures_CVPR_2023 | Abstract Cryo-electron microscopy (cryo-EM) is a technique for reconstructing the 3-dimensional (3D) structure of biomolecules (especially large protein complexes and molecular assemblies). As the resolution increases to the near-atomic scale, building protein structures de novo from cryo-EM maps becomes possible. Recently, recognition-based de novo building methods have shown the potential to streamline this process. However, it cannot build a complete structure due to the low signal-to-noise ratio (SNR) prob-lem. At the same time, AlphaFold has led to a great break-through in predicting protein structures. This has inspired us to combine fragment recognition and structure predic-tion methods to build a complete structure. In this paper, we propose a new method named FFF that bridges protein structure prediction and protein structure recognition with flexible fitting. First, a multi-level recognition network is used to capture various structural features from the input 3D cryo-EM map. Next, protein structural fragments are generated using pseudo peptide vectors and a protein se-quence alignment method based on these extracted features. Finally, a complete structural model is constructed using the predicted protein fragments via flexible fitting. Based on our benchmark tests, FFF outperforms the baseline meth-ods for building complete protein structures. | 1. Introduction With the advances in hardware and image processing al-gorithms, cryo-EM has become a major experimental tech-nique for determining the structures of biological macro-molecules, especially proteins. Cryo-EM data process-ing consists of two major steps: 3D reconstruction and structure building. In the 3D reconstruction step, a set of 2-dimensional (2D) micrographs (projection images) are collected using transmission electron microscopy for bio-logical samples embedded in a thin layer of amorphous *Corresponding authorice. Each micrograph contains many 2D projections of the molecules in unknown orientations. Software tools such as RELION [17], cryoSPARC [12] and cryoDRGN [28] can be used to recover the underlying 3D molecular density map. In the second step, we build the atomic structure of the underlying protein by trying to determine the positions of all the atoms using the 3D density map from the 3D re-construction step. This is done through an iterative process that contains the following steps: (1) rigid-body docking of an initial staring structure into the cryo-EM map; (2) man-ual or automated/semi-automated flexible refinement of the docked structure to match the map [15,24,25]. The manual process is extremely time consuming and requires extensive expert-level domain knowledge. Existing automated/semi-automated flexible fitting may mismatch the initial structure and density map regions. Figure 1. A protein backbone along the chain with increasing residue index from left to right. Arrow marks the peptide bond linking two consecutive amino acids from C atom of one amino acid and N atom of another. In the past decades, the resolution of cryo-EM has been drastically improved from medium-resolution (5–10 ˚A) to near-atomic resolution (1.2–5 ˚A) [20]. At the same time, deep learning-based de novo protein structure building methods have shown great progress. As a result, building a reasonable atomic structural model de novo is already feasi-ble for maps whose resolutions are better than 3 ˚A. 
Even so, due to the flexible nature of some local structures, de novo modeling methods often cannot model a complete protein This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 19776 structure and still require significant manual effort and do-main expertise. With the advent of AlphaFold [8], high-accuracy struc-ture predictions have greatly helped biologists in struc-ture modeling from cryo-EM maps. AlphaFold can pre-dict single-chain structures that closely match the cryo-EM density maps in many cases [1]. However, its ability is still limited to predicting structures for multimeric protein complexes, a protein with alternative conformations, and protein-ligand complexes. Cryo-EM studies are required in solving structures in these complex scenarios. One possible solution is to effectively combine exper-imental information and machine learning-based predic-tion. In this work, we propose a new method called FFF (“Fragment-guided Flexible Fitting”) that enables more re-liable and complete cryo-EM structure building by bridging protein structure prediction and protein structure recogni-tion with flexible fitting. First, we use a multi-level recog-nition network to capture different-level structural features from the input 3D volume (i.e. cryo-EM map). Next, the extracted features are used for fragment recognition to con-struct the recognized structure. Finally, we run a flexible fitting to building a complete structure based on the recog-nized structure. Our main contributions are as follows: • We propose a more straightforward method for protein backbone tracing using pseudo peptide vectors. • We propose a more sensitive and accurate protein se-quence alignment algorithm to identify the amino acid types and residue index of detected residues. This is essential for aligning the recognized fragments and the machine-learning predicted structure. • We combine deep-learning-based 3D map recogni-tion with molecular dynamics that surpasses previ-ous methods in cryo-EM structure building using Al-phaFold. |
Gong_Continuous_Pseudo-Label_Rectified_Domain_Adaptive_Semantic_Segmentation_With_Implicit_Neural_CVPR_2023 | Abstract Unsupervised domain adaptation (UDA) for semantic segmentation aims at improving the model performance on the unlabeled target domain by leveraging a labeled source domain. Existing approaches have achieved im-pressive progress by utilizing pseudo-labels on the unla-beled target-domain images. Yet the low-quality pseudo-labels, arising from the domain discrepancy, inevitably hin-der the adaptation. This calls for effective and accurate ap-proaches to estimating the reliability of the pseudo-labels, in order to rectify them. In this paper, we propose to esti-mate the rectification values of the predicted pseudo-labels with implicit neural representations. We view the rectifi-cation value as a signal defined over the continuous spa-tial domain. Taking an image coordinate and the nearby deep features as inputs, the rectification value at a given coordinate is predicted as an output. This allows us to achieve high-resolution and detailed rectification values es-timation, important for accurate pseudo-label generation at mask boundaries in particular. The rectified pseudo-labels are then leveraged in our rectification-aware mixture model (RMM) to be learned end-to-end and help the adap-tation. We demonstrate the effectiveness of our approach on different UDA benchmarks, including synthetic-to-real and day-to-night. Our approach achieves superior results com-pared to state-of-the-art. The implementation is available athttps://github.com/ETHRuiGong/IR2F . | 1. Introduction Semantic segmentation, aiming at assigning the seman-tic label to each pixel in an image, is a fundamental prob-lem in computer vision. Driven by the availability of large-scale datasets and the advancements in deep neural networks (DNNs), the state-of-the-art boundary has been pushed rapidly in the last decade [ 9,35,38,51,59,70,78]. However, the DNNs trained on a source domain, e.g. day images, generalize poorly to a different target domain, e.g.night images, due to the distribution shift between the do-mains. One straightforward idea to circumvent the issue is to annotate the images from the target domain, and then retrain the model. However, annotations for semantic seg-mentation are particularly costly and labor-intensive to pro-duce, since each pixel has to be labeled. To this end, some recent works [ 18,21,61,63,77] resort to unsupervised domain adaptation (UDA), where the model is trained on the labeled source domain and an unlabeled target domain dataset, reducing the annotation burden. Different from the predominant UDA methods that ex-plicitly align the source and target distributions on the image-level [ 18,21,33,73] or the feature-level [ 61–63], pseudo-labeling or self-training [ 23,24,60,76,82,83] has re-cently emerged as a simple yet effective approach for UDA. Pseudo-labeling approaches typically first generate pseudo-labels on the unlabeled target domain using the current model. The model is then fine-tuned with target pseudo-labels in an iterative manner. However, some pseudo-labels are inevitably incorrect because of the domain shift. There-fore, pseudo-label correction, or rectification, is critical for the adaptation process. This is typically implemented in the literature by removing [ 82,83] or assigning a smaller weight [ 24,67,76,79] to pixels with low-quality and poten-tially incorrect pseudo-labels. 
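The removal/down-weighting strategy just described amounts to a rectification-weighted cross-entropy on target pseudo-labels. The sketch below uses hard confidence thresholding to produce the weights, i.e., exactly the kind of heuristic rectification the paper goes on to question, so it represents the baseline formulation rather than the proposed RMM; the threshold value is an assumption.

```python
import torch
import torch.nn.functional as F

def rectified_pseudo_label_loss(logits_target, teacher_probs, conf_thresh=0.968):
    """Baseline rectification: weight each pixel's pseudo-label CE term by a
    confidence-derived weight (here: hard thresholding of the max softmax score).

    logits_target: (B, C, H, W) student predictions on target-domain images
    teacher_probs: (B, C, H, W) softmax predictions used to build the pseudo-labels
    """
    conf, pseudo = teacher_probs.max(dim=1)                     # (B, H, W)
    weight = (conf > conf_thresh).float()                       # 0/1 rectification values
    ce = F.cross_entropy(logits_target, pseudo, reduction="none")
    return (weight * ce).sum() / weight.sum().clamp_min(1.0)
```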
The key problem is thus to formulate a rectification function that estimates the pseudo-label quality. We identify two important issues with current approaches. First, most existing methods use hard-coded heuristics as the rectification function, e.g., hard thresholding of the softmax confidence [82, 83], prediction variances of different learners [79], or distance to prototypes [67, 76]. These heuristic rectification functions assume strong correlations between the function and the pseudo-label quality, which may not be the case. For example, the rectification function that uses the variance of multiple learners [79] to suppress disagreement on the pseudo-labels can be sensitive to small objects in the adaptation [24]. The second issue is that the existing works [24] typically model the rectification function in a discrete spatial grid
The primary idea of IR2F is to learn pixel-wise rec-tification values as latent codes, which are decoded at arbi-trary continuous spatial coordinates. Given a queried coor-dinate, our IR2F inputs latent codes around the given coor-dinate from the different learners ( e.g. high-/low-resolution decoder in [ 24] and primary/auxiliary classifier in [ 79]) along with their spatial coordinates. IR2F then predicts the rectification value at the queried coordinate. Our principled formulation is a general plug-in module, compatible with different rectification-aware UDA architectures. We thoroughly analyze our continuous RMM on differ-ent UDA benchmarks, including synthetic-to-real andday-to-night settings. Extensive experimental results demon-strate the effectiveness of continuous RMM, outperform-ing the previous state-of-the-art (SOTA) methods by a large margin, including on SYNTHIA !Cityscapes ( +1.9% mIoU), Cityscapes !Dark Zurich ( +3.0% mIoU) and ACDC-Night ( +3.4% mIoU). Overall, continuous RMM reveals the significant potential of modeling pseudo-labels rectification for UDA in the learnable and continuous man-ner, inspiring further research in this field. |
Ge_Hyperbolic_Contrastive_Learning_for_Visual_Representations_Beyond_Objects_CVPR_2023 | Abstract Although self-/un-supervised methods have led to rapid progress in visual representation learning, these methods generally treat objects and scenes using the same lens. In this paper, we focus on learning representations for objects and scenes that preserve the structure among them. Motivated by the observation that visually similar objects are close in the representation space, we argue that the scenes and objects should instead follow a hierarchical structure based on their compositionality. To exploit such a structure, we propose a contrastive learning framework where a Euclidean loss is used to learn object representations and a hyperbolic loss is used to encourage representations of scenes to lie close to representations of their constituent objects in a hyperbolic space. This novel hyperbolic objective encourages the scene-object hypernymy among the representations by optimizing the magnitude of their norms. We show that when pretrain-ing on the COCO and OpenImages datasets, the hyperbolic loss improves downstream performance of several baselines across multiple datasets and tasks, including image classifi-cation, object detection, and semantic segmentation. We also show that the properties of the learned representations allow us to solve various vision tasks that involve the interaction between scenes and objects in a zero-shot fashion. | 1. Introduction Our visual world is diverse and structured. Imagine taking a close-up of a box of cereal in the morning. If we zoom out slightly, we may see different nearby objects such as a pitcher of milk, a cup of hot coffee, today’s newspaper, or reading glasses. Zooming out further, we will probably recognize that these items are placed on a dining table with the kitchen as background rather than inside a bathroom. Such scene-object structure is diverse, yet not completely random. In this paper, we aim at learning visual representations of both the cereal box (objects) and the entire dining table (scenes) in ⇤Equal Contribution. The order is decided randomly. 𝑂Figure 1. Illustration of the representation space learned by our models. Object images of the same class tend to gather near the center around similar directions, while the scene images are far away in these directions with larger norms. the same space while preserving such hierarchical structures. Un-/self-supervised learning has become a standard method to learn visual representations [ 7,12,24,26,27,51]. Although these methods attain superior performance over supervised pretraining on object-centric datasets such as Im-ageNet [ 6], inferior results are observed on images depicting multiple objects such as OpenImages or COCO [ 68]. Several methods have been proposed to mitigate this issue, but all fo-cus either on learning improved object representations [ 1,68] or dense pixel representations [ 39,64,69], instead of explic-itly modeling representations for scene images. The object representations learned by these methods present a natural topology [ 67]. That is, the objects from visually similar This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 6840 classes lie close to each other in the representation space. However, it is not clear how the representations of scene images should fit into that topology. 
Directly applying exist-ing contrastive learning results in a sub-optimal topology of scenes and objects as well as unsatisfactory performance, as we will show in the experiments. To this end, we argue that a hierarchical structure can be naturally adopted. Consider-ing that the same class of objects can be placed in different scenes, we construct a hierarchical structure to describe such relationships, where the root nodes are the visually similar objects, and the scene images consisting of them are placed as the descendants. We call this structure the object-centric scene hierarchy. The intermediate modeling difficulty induced by this structure is the combinatorial explosion. A finite number of objects leads to exponentially many different possible scenes. Consequently, Euclidean space may require an arbitrarily large number of dimensions to faithfully embed these scenes, whereas it is known that any infinite trees can be embedded without distortion in a 2D hyperbolic space [ 25]. Therefore, we propose to employ a hyperbolic objective to regularize the scene representations. To learn representations of scenes, in the general setting of contrastive learning, we sample co-occurring scene-object pairs as positive pairs, and objects that are not part of that scene as negative samples, and use these pairs to compute an auxiliary hyperbolic contrastive objective. Our model is trained to reduce the distance be-tween positive pairs and push away the negative pairs in a hyperbolic space. Contrastive learning usually has objectives defined on a hypersphere [ 12,27]. By discarding the norm information, these models circumvent the shortcut of minimizing losses through tuning the norms and obtain better downstream per-formance. However, the norm of the representation can also be used to encode useful representational structure. In hy-perbolic space, the magnitude of a vector often plays the role of modeling the hypernymy of the hierarchical struc-ture [ 45,53,59]. When projecting the representations to the hyperbolic space, the norm information is preserved and used to determine the Riemannian distance, which eventually affects the loss. Since hyperbolic space is diffeomorphic and conformal to Euclidean space, our hyperbolic contrastive loss is differentiable and complementary to the original con-trastive objective. When training simultaneously with the original con-trastive objective for objects and our proposed hyperbolic contrastive objective for scenes, the resulting representation space exhibits a desired hierarchical structure while leaving the object clustering topology intact as shown in Figure 1. We demonstrate the effectiveness of the hyperbolic objective under several frameworks on multiple downstream tasks. We also show that the properties of the representations allow us to perform various vision tasks in a zero-shot way, from labeluncertainty quantification to out-of-context object detection. Our contributions are summarized below: 1.We propose a hyperbolic contrastive loss that regular-izes scene representations so that they follow an object-centric hierarchy, with positive and negative pairs sam-pled from the hierarchy. |
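A hedged sketch of what such a hyperbolic contrastive objective can look like: scene and object embeddings are mapped onto the Poincaré ball with the exponential map at the origin, and an InfoNCE-style loss is driven by negative geodesic distances, so that scenes are pulled toward their constituent objects and pushed away from objects of other scenes. The curvature, temperature, and the convention that row i of each batch forms the positive scene-object pair are illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def expmap0(v, c=1.0, eps=1e-6):
    """Exponential map at the origin of the Poincare ball with curvature -c."""
    norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.tanh(c ** 0.5 * norm) * v / (c ** 0.5 * norm)

def poincare_distance(x, y, c=1.0, eps=1e-6):
    """Geodesic distance between points on the Poincare ball (broadcasts over batch dims)."""
    diff2 = ((x - y) ** 2).sum(-1)
    denom = (1 - c * (x ** 2).sum(-1)) * (1 - c * (y ** 2).sum(-1))
    arg = 1 + 2 * c * diff2 / denom.clamp_min(eps)
    return torch.acosh(arg.clamp_min(1 + eps)) / c ** 0.5

def hyperbolic_contrastive_loss(scene_emb, object_emb, tau=0.2, c=1.0):
    """Row i of scene_emb and object_emb is a co-occurring (positive) scene-object pair;
    all other objects in the batch act as negatives for that scene."""
    s = expmap0(scene_emb, c)                                   # (B, D) scenes on the ball
    o = expmap0(object_emb, c)                                  # (B, D) objects on the ball
    d = poincare_distance(s.unsqueeze(1), o.unsqueeze(0), c)    # (B, B) pairwise distances
    logits = -d / tau                                           # smaller distance = higher score
    labels = torch.arange(len(s), device=s.device)
    return F.cross_entropy(logits, labels)
```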
Blattmann_Align_Your_Latents_High-Resolution_Video_Synthesis_With_Latent_Diffusion_Models_CVPR_2023 | Abstract Latent Diffusion Models (LDMs) enable high-quality im-age synthesis while avoiding excessive compute demands by training a diffusion model in a compressed lower-dimensional latent space. Here, we apply the LDM paradigm to high-resolution video generation, a particu-larly resource-intensive task. We first pre-train an LDM on images only; then, we turn the image generator into a video generator by introducing a temporal dimension to the latent space diffusion model and fine-tuning on encoded im-age sequences, i.e., videos. Similarly, we temporally align diffusion model upsamplers, turning them into temporally consistent video super resolution models. We focus on two relevant real-world applications: Simulation of in-the-wild driving data and creative content creation with text-to-video modeling. In particular, we validate our Video LDM onreal driving videos of resolution 512×1024 , achieving state-of-the-art performance. Furthermore, our approach can easily leverage off-the-shelf pre-trained image LDMs, as we only need to train a temporal alignment model in that case. Doing so, we turn the publicly available, state-of-the-art text-to-image LDM Stable Diffusion into an ef-ficient and expressive text-to-video model with resolution up to 1280×2048 . We show that the temporal layers trained in this way generalize to different fine-tuned text-to-image LDMs. Utilizing this property, we show the first results for personalized text-to-video generation, opening exciting directions for future content creation. Project page: https://nv-tlabs.github.io/VideoLDM/ *Equal contribution. †Andreas, Robin and Tim did the work during internships at NVIDIA. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 22563 Figure 2. Temporal Video Fine-Tuning. We turn pre-trained image diffusion mod-els into temporally consistent video gener-ators. Initially, different samples of a batch synthesized by the model are independent. After temporal video fine-tuning, the sam-ples are temporally aligned and form co-herent videos. The stochastic generation process before and after fine-tuning is visu-alised for a diffusion model of a one-dim. toy distribution. For clarity, the figure cor-responds to alignment in pixel space. In practice, we perform alignment in LDM’s latent space and obtain videos after ap-plying LDM’s decoder (see Fig. 3). We also video fine-tune diffusion model up-samplers in pixel or latent space (Sec. 3.4). | 1. Introduction Generative models of images have received unprece-dented attention, owing to recent breakthroughs in the un-derlying modeling methodology. The most powerful mod-els today are built on generative adversarial networks [21, 38–40, 75], autoregressive transformers [15, 63, 105], and most recently diffusion models [10, 28, 29, 57, 58, 62, 65, 68, 79, 82]. Diffusion models (DMs) in particular have de-sirable advantages; they offer a robust and scalable train-ing objective and are typically less parameter intensive than their transformer-based counterparts. 
However, while the image domain has seen great progress, video modeling has lagged behind—mainly due to the significant computa-tional cost associated with training on video data, and the lack of large-scale, general, and publicly available video datasets. While there is a rich literature on video synthe-sis [1, 6, 8, 9, 17, 19, 22, 23, 32, 32, 37, 42, 44, 47, 51, 55, 59, 71, 78, 85, 91, 94, 97–99, 103, 106], most works, including previous video DMs [24, 31, 33, 93, 104], only generate rel-atively low-resolution, often short, videos. Here, we ap-ply video models to real-world problems and generate high-resolution, long videos. Specifically, we focus on two rel-evant real-world video generation problems: (i) video syn-thesis of high-resolution real-word driving data, which has great potential as a simulation engine in the context of au-tonomous driving, and (ii) text-guided video synthesis for creative content generation; see Fig. 1. To this end, we build on latent diffusion models (LDMs), which can reduce the heavy computational burden when training on high-resolution images [65]. We propose Video LDMs and extend LDMs to high-resolution video genera-tion, a particularly compute-intensive task. In contrast to previous work on DMs for video generation [24, 31, 33, 93, 104], we first pre-train our Video LDMs on images only (or use available pre-trained image LDMs), thereby allowing us to leverage large-scale image datasets. We then trans-form the LDM image generator into a video generator byintroducing a temporal dimension into the latent space DM and training only these temporal layers on encoded image sequences, i.e., videos (Fig. 2), while fixing the pre-trained spatial layers. We similarly fine-tune LDM’s decoder to achieve temporal consistency in pixel space (Fig. 3). To further enhance the spatial resolution, we also temporally align pixel-space and latent DM upsamplers [29], which are widely used for image super resolution [43, 65, 68, 69], turning them into temporally consistent video super resolu-tion models. Building on LDMs, our method can generate globally coherent and long videos in a computationally and memory efficient manner. For synthesis at very high reso-lutions, the video upsampler only needs to operate locally, keeping training and computational requirements low. We ablate our method and test on 512×1024 real driving scene videos, achieving state-of-the-art video quality, and synthe-size videos of several minutes length. We also video fine-tune a powerful, publicly available text-to-image LDM, Sta-ble Diffusion [65], and turn it into an efficient and powerful text-to-video generator with resolution up to 1280×2048 . Since we only need to train the temporal alignment layers in that case, we can use a relatively small training set of cap-tioned videos. By transferring the trained temporal layers to differently fine-tuned text-to-image LDMs, we demonstrate personalized text-to-video generation for the first time. We hope our work opens new avenues for efficient digital con-tent creation and autonomous driving simulation. Contributions. (i)We present an efficient approach for training high-resolution, long-term consistent video genera-tion models based on LDMs. Our key insight is to leverage pre-trained image DMs and turn them into video generators by inserting temporal layers that learn to align images in a temporally consistent manner (Figs. 2 and 3). (ii)We fur-ther temporally fine-tune super resolution DMs, which are ubiquitous in the literature. 
(iii) We achieve state-of-the-art high-resolution video synthesis performance on real driving scene videos, and we can generate multiple minute long videos. (iv) We transform the publicly available Stable Diffusion text-to-image LDM into a powerful and expressive text-to-video LDM, and (v) show that the learned temporal layers can be combined with different image model checkpoints (e.g., DreamBooth [66]). Figure 3. Top: During temporal decoder fine-tuning, we process video sequences with a frozen encoder, which processes frames independently, and enforce temporally coherent reconstructions across frames. We additionally employ a video-aware discriminator. Bottom: in LDMs, a diffusion model is trained in latent space. It synthesizes latent features, which are then transformed through the decoder into images. Note that the bottom visualization is for individual frames; see Fig. 2 for the video fine-tuning framework that generates temporally consistent frame sequences. |
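To make the temporal-alignment idea concrete, here is a hedged sketch of how a frozen, per-frame (spatial) block of a pretrained image diffusion model could be interleaved with a trainable temporal attention layer and a learned mixing factor. This is our own simplified reading of the approach, not the released implementation; the module names, tensor shapes, and the sigmoid-mixed alpha are assumptions, and the spatial block is treated as any module mapping (N, C, H, W) to (N, C, H, W).

```python
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Attention over the time axis only; spatial positions are treated as batch."""
    def __init__(self, dim, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                                        # x: (B, T, C, H, W)
        b, t, c, h, w = x.shape
        z = x.permute(0, 3, 4, 1, 2).reshape(b * h * w, t, c)    # tokens = frames
        z = z + self.attn(self.norm(z), self.norm(z), self.norm(z))[0]
        return z.reshape(b, h, w, t, c).permute(0, 3, 4, 1, 2)

class VideoBlock(nn.Module):
    """Frozen image (spatial) block + trainable temporal layer, mixed by alpha."""
    def __init__(self, spatial_block, dim):
        super().__init__()
        self.spatial = spatial_block.requires_grad_(False)       # keep the image prior fixed
        self.temporal = TemporalAttention(dim)
        self.alpha = nn.Parameter(torch.tensor(3.0))             # sigmoid(3) ~ 0.95: start close to image model

    def forward(self, x):                                        # x: (B, T, C, H, W)
        b, t = x.shape[:2]
        s = self.spatial(x.flatten(0, 1)).unflatten(0, (b, t))   # per-frame processing
        v = self.temporal(s)                                     # cross-frame alignment
        a = torch.sigmoid(self.alpha)
        return a * s + (1 - a) * v
```

Only the temporal layer and alpha receive gradients, which mirrors the idea of training a temporal alignment model on top of a fixed image generator.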
Jiang_AligNeRF_High-Fidelity_Neural_Radiance_Fields_via_Alignment-Aware_Training_CVPR_2023 | Abstract Neural Radiance Fields (NeRFs) are a powerful repre-sentation for modeling a 3D scene as a continuous func-tion. Though NeRF is able to render complex 3D scenes with view-dependent effects, few efforts have been devoted to exploring its limits in a high-resolution setting. Specif-ically, existing NeRF-based methods face several limita-tions when reconstructing high-resolution real scenes, in-cluding a very large number of parameters, misaligned input data, and overly smooth details. In this work, we conduct the first pilot study on training NeRF with high-resolution data and propose the corresponding solutions: 1) marrying the multilayer perceptron (MLP) with convolu-tional layers which can encode more neighborhood infor-mation while reducing the total number of parameters; 2) a novel training strategy to address misalignment caused by moving objects or small camera calibration errors; and 3) a high-frequency aware loss. Our approach is nearly free without introducing obvious training/testing costs, while ex-periments on different datasets demonstrate that it can re-cover more high-frequency details compared with the cur-rent state-of-the-art NeRF models. Project page: https: * This work was performed while Yifan Jiang interned at Google. †This work was performed while Tianfan Xue worked at Google.//yifanjiang19.github.io/alignerf . | 1. Introduction Neural Radiance Field (NeRF [24]) and its variants [1– 3, 7, 19, 20], have recently demonstrated impressive perfor-mance for learning geometric 3D representations from im-ages. The resulting high-quality scene representation en-ables an immersive novel view synthesis experience with complex geometry and view-dependent appearance. Since the origin of NeRF, an enormous amount of work has been made to improve its quality and efficiency, enabling recon-struction from data captured “in-the-wild” [15,20] or a lim-ited number of inputs [7, 11, 26, 32, 44] and generalization across multiple scenes [4, 41, 45]. However, relatively little attention has been paid to high-resolution reconstruction. mip-NeRF [1] addresses exces-sively blurred or aliased images when rendering at differ-ent resolutions, modelling ray samples with 3D conical frustums instead of infinitesimally small 3D points. mip-NeRF 360 [2] further extends this approach to unbounded scenes that contain more complex appearance and geome-try. Nevertheless, the highest resolution data used in these two works is only 1280×840pixels, which is still far away from the resolution of a standard HD monitor ( 1920×1080 ), not to mention a modern smartphone camera ( 4032×3024 ). This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 46 mip-NeRF 360 ++ Ground Truth Optical Flow mip-NeRF 360 ++Ground Truth Optical FlowTraining Set Testing Set Figure 2. Analysis of misalignment between rendered and ground truth images .mip-NeRF 360++ : Images rendered by a stronger mip-NeRF 360 [2] model ( 16×larger MLPs than the original). Ground Truth : The captured images used for training and testing. Optical Flow : Optical flow between the mip-NeRF 360++ and ground truth images, estimated by PWC-Net [38]. Significant misalignment is present in both training and test view renderings. 
In this paper, we conduct the first pilot study of train-ing neural radiance fields in the high-fidelity setting, using higher-resolution images as input. This introduces several hurdles. First , the major challenge of using high-resolution training images is that encoding all the high-frequency de-tails requires significantly more parameters, which leads to a much longer training time and higher memory cost, some-times even making the problem intractable [2, 20, 30]. Second , to learn high-frequency details, NeRF requires accurate camera poses and motionless scenes during cap-ture. However, in practice, camera poses recovered by Structure-from-Motion (SfM) algorithms inevitably contain pixel-level inaccuracies [18]. These inaccuracies are not noticeable when training on downsampled low-resolution images, but cause blurry results when training NeRF with higher-resolution inputs. Moreover, the captured scene may also contain unavoidable motion, like moving clouds and plants. This not only breaks the static-scene assumption but also decreases the accuracy of estimated camera poses. Due to both inaccurate camera poses and scene motion, NeRF’s rendered output is often slightly misaligned from the ground truth image, as illustrated in Fig. 2. We investigate this phe-nomenon in Sec. 4.3, demonstrating that image quality can be significantly improved by iteratively training NeRF and re-aligning the input images with NeRF’s estimated geom-etry. The analysis shows that misalignment results in NeRF learning distorted textures, as it is trained to minimize the difference between rendered frames and ground truth im-ages. Previous work mitigates this issue by jointly optimiz-ing NeRF and camera poses [5, 17, 23, 43], but these meth-ods cannot handle subtle object motion and often introduce non-trivial training overheads, as demonstrated in Sec 4.6. To tackle these issues, we present AligNeRF, analignment-aware training strategy that can better preserve high-frequency details. Our solution is two-fold: an ap-proach to efficiently increase the representational power of NeRF, and an effective method to correct for misalign-ment. To efficiently train NeRF with high-resolution inputs, we marry convolutions with NeRF’s MLPs, by sampling a chunk of rays in a local patch and applying ConvNets for post-processing. Although a related idea is discussed in NeRF-SR [40], their setting is based on rendering test im-ages at a higher resolution than the training set. Another line of work combines volumetric rendering with genera-tive modeling [27, 34], where ConvNets are mainly used for efficient upsampling and generative texture synthesis, rather than solving the inverse problem from many input images. In contrast, our approach shows that the inductive prior from a small ConvNet improves NeRF’s performance on high-resolution training data, without introducing signif-icant computational costs. In this new pipeline, we render image patches during training. This allows us to further tackle misalignments between the rendered patch and ground truth that may have been caused by minor pose errors or moving objects. First, we analyze how misalignment affects image qual-ity by leveraging the estimated optical flow between ren-dered frames and their corresponding ground truth images. We discuss the limitations of previous misalignment-aware losses [22, 48], and propose a novel alignment strategy tai-lored for our task. 
Moreover, our patch-based rendering strategy also enables patch-wise loss functions, beyond a simple mean squared error. That motivates us to design a new frequency-aware loss, which further improves the rendering quality with no overheads. As a result, AligNeRF largely outperforms the current best method for high-resolution 3D reconstruction tasks with few extra costs. To sum up, our contributions are as follows: • An analysis demonstrating the performance degradation caused by misalignment in high-resolution training data. • A novel convolution-assisted architecture that improves the quality of rendered images with minor additional costs. • A novel patch alignment loss that makes NeRF more robust to camera pose error and subtle object motion, together with a patch-based loss to improve high-frequency details. |
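One way to read the patch alignment idea is as a small search over local offsets of the ground-truth patch, keeping the best-matching candidate. The sketch below implements that reading in PyTorch; it is not the exact loss of the paper — the search radius, patch size, and L1 photometric term are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def aligned_patch_loss(rendered, gt_image, top_left, patch=32, search=3):
    """rendered: (B, 3, patch, patch) patches rendered by the model
       gt_image: (B, 3, H, W) full ground-truth images
       top_left: (B, 2) integer (y, x) location of each patch in its image
       Returns the mean L1 loss against the best-aligned ground-truth patch
       within a (2*search+1)^2 window of candidate offsets."""
    losses = []
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            crops = []
            for b in range(rendered.shape[0]):
                y = int(top_left[b, 0]) + dy
                x = int(top_left[b, 1]) + dx
                y = max(0, min(y, gt_image.shape[2] - patch))
                x = max(0, min(x, gt_image.shape[3] - patch))
                crops.append(gt_image[b, :, y:y + patch, x:x + patch])
            gt = torch.stack(crops)
            losses.append(F.l1_loss(rendered, gt, reduction='none').mean(dim=(1, 2, 3)))
    # per-patch minimum over candidate offsets, then average over the batch
    return torch.stack(losses, dim=0).min(dim=0).values.mean()
```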
Cho_Implicit_3D_Human_Mesh_Recovery_Using_Consistency_With_Pose_and_CVPR_2023 | Abstract From an image of a person, we can easily infer the nat-ural 3D pose and shape of the person even if ambiguity ex-ists. This is because we have a mental model that allows us to imagine a person’s appearance at different viewing direc-tions from a given image and utilize the consistency between them for inference. However, existing human mesh recovery methods only consider the direction in which the image was taken due to their structural limitations. Hence, we propose “Implicit 3D Human MeshRecovery ( ImpHMR )” that can implicitly imagine a person in 3D space at the feature-level via Neural Feature Fields. In ImpHMR, feature fields are generated by CNN-based image encoder for a given image. Then, the 2D feature map is volume-rendered from the fea-ture field for a given viewing direction, and the pose and shape parameters are regressed from the feature. To uti-lize consistency with pose and shape from unseen-view, if there are 3D labels, the model predicts results including the silhouette from an arbitrary direction and makes it equal to the rotated ground-truth. In the case of only 2D labels, we perform self-supervised learning through the constraint that the pose and shape parameters inferred from different directions should be the same. Extensive evaluations show the efficacy of the proposed method. | 1. Introduction Human Mesh Recovery (HMR) is a task that regresses the parameters of a three-dimensional (3D) human body model ( e.g., SMPL [34], SMPL-X [42], and GHUM [57]) from RGB images. Along with 3D joint-based methods [7, 32, 46], HMR has many downstream tasks such as AR/VR, and computer graphics as a fundamental topic in computer vision. In recent years, there has been rapid progress in HMR, particularly in regression-based approaches [6,19,22, 25–27,30,49,55,62]. However, despite these achievements, the existing algorithms still have a gap with the way humans do, so most of them do not show robust performance against the inherent ambiguity of the task. Figure 1. Mental model of human that infers pose and shape from a single image. From an image of a person, we infer pose and shape robustly by imagining the person’s appearance not only from the direction in which the image was taken, but also from other viewing directions ( e.g., left and right sides). Consider the image of a baseball player running, as shown in Fig. 1. For the given single image, we can easily infer that the person’s right elbow and left leg are extended backward in a 3D space, despite the presence of inherent ambiguity ( e.g., depth and occlusion). This is because we have a mental model that allows us to imagine a person’s ap-pearance at different viewing directions from a given image and utilize the consistency between them for inference. Re-cently, many state-of-the-art studies have successfully uti-lized knowledge similar to that used by humans such as hu-man dynamics [20] and temporal information [8,23,35,55]. However, to the best of our knowledge, there have been no studies proposed methods that consider 3D space for HMR similar to the way we infer pose and shape through appear-ance check between different views in 3D space. To overcome this issue, we propose “Implicit 3D Human Mesh Recovery (ImpHMR)” that can implicitly imagine a human placed in a 3D space via Neural Feature Fields [40]. 
Our assumption is that if the model is trained to infer a human's pose and shape at arbitrary viewing directions in a 3D space from a single image, then the model learns better spatial prior knowledge about human appearance; consequently, the performance in the canonical viewing direction in which the image was taken is improved. To achieve this, we incorporate Neural Feature Fields into regression-based HMR methods. ImpHMR generates feature fields using a CNN-based image encoder for a given image to construct a person in 3D space, as shown in Fig. 2. A feature field represented by a Multi-Layer Perceptron (MLP) is a continuous function that maps the position of a point in 3D space and a ray direction to a feature vector and volume density. In a feature field, which is an implicit representation, all continuous points in a space can have a respective feature and volume density. Hence, the feature field is more expressive than an explicit representation [59] and more suitable for representing human appearance from different viewing directions in 3D space. To infer the pose and shape parameters from the feature field, the 2D feature map is generated by volume rendering for a given viewing direction, and the parameters are regressed from the rendered feature. Unlike previous methods, our model can look at a person from an arbitrary viewing direction by controlling the viewing direction determined by the camera extrinsics (i.e., camera pose). Therefore, to utilize consistency with pose and shape from unseen views, if there are 3D labels, ImpHMR predicts results, including the silhouette used as geometric guidance, from an arbitrary direction and makes them equal to the rotated ground truth. In addition, in the case of only 2D labels, we perform self-supervised learning through the constraint that SMPL parameters inferred from different directions should be the same. These constraints help feature fields represent a better 3D space by disentangling human appearance and viewing direction; as a result, SMPL regression from the canonical viewing direction in which the image was taken is improved. To verify the efficacy of our method, we conduct experiments on 3DPW, LSP, COCO, and 3DPW-OCC. The contributions of our work can be summarized as follows: • We propose a novel HMR model called "ImpHMR" that can implicitly imagine a human in 3D space from a given 2D observation via Neural Feature Fields. • To utilize consistency with pose and shape from unseen views, we propose an arbitrary view imagination loss and an appearance consistency loss. • We propose the geometric guidance branch so that the model can learn better geometric information. • ImpHMR has 2–3 times higher fps than current SOTAs thanks to the efficient spatial representation in feature fields. • We confirm that having the model imagine a person in 3D space and checking consistency between human appearance from different viewing directions improves the HMR performance in the canonical viewing direction in which the image was taken. |
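The appearance consistency constraint for 2D-labelled samples can be sketched as follows. Here `render_and_regress` is a hypothetical stand-in for the volume-rendering and SMPL-regression branch, and the random yaw rotations, L1 penalty, and stop-gradient on the canonical prediction are our assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def appearance_consistency_loss(render_and_regress, feature_field, n_views=2):
    """render_and_regress(feature_field, view_dir) -> (pose, shape) is assumed to
       volume-render a 2D feature map for the given viewing direction and regress
       SMPL pose/shape parameters from it."""
    canonical_dir = torch.tensor([0.0, 0.0, 1.0])            # direction the image was taken from
    pose_ref, shape_ref = render_and_regress(feature_field, canonical_dir)

    loss = 0.0
    for _ in range(n_views):
        yaw = torch.rand(()) * 2 * torch.pi                  # random rotation about the up-axis
        view_dir = torch.stack([yaw.sin(), torch.zeros(()), yaw.cos()])
        pose, shape = render_and_regress(feature_field, view_dir)
        # parameters from an unseen view should match the canonical prediction;
        # the stop-gradient keeps the canonical branch as the reference
        loss = loss + F.l1_loss(pose, pose_ref.detach()) + F.l1_loss(shape, shape_ref.detach())
    return loss / n_views
```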
Doveh_Teaching_Structured_Vision__Language_Concepts_to_Vision__Language_CVPR_2023 | Abstract Vision and Language (VL) models have demonstrated re-markable zero-shot performance in a variety of tasks. How-ever, some aspects of complex language understanding still remain a challenge. We introduce the collective notion of Structured Vision & Language Concepts (SVLC) which includes object attributes, relations, and states which are present in the text and visible in the image. Recent stud-ies have shown that even the best VL models struggle with SVLC. A possible way of fixing this issue is by collecting dedicated datasets for teaching each SVLC type, yet this might be expensive and time-consuming. Instead, we pro-pose a more elegant data-driven approach for enhancing VL models’ understanding of SVLCs that makes more ef-fective use of existing VL pre-training datasets and does not require any additional data. While automatic under-standing of image structure still remains largely unsolved, language structure is much better modeled and understood, allowing for its effective utilization in teaching VL models. In this paper, we propose various techniques based on lan-guage structure understanding that can be used to manipu-late the textual part of off-the-shelf paired VL datasets. VL models trained with the updated data exhibit a significant improvement of up to 15% in their SVLC understanding with only a mild degradation in their zero-shot capabilities both when training from scratch or fine-tuning a pre-trained model. Our code and pretrained models are available at: https://github.com/SivanDoveh/TSVLC | 1. Introduction Recent Vision & Language (VL) models [19, 31, 43, 44, 47, 57] achieve excellent zero-shot performance with re-spect to various computer-vision tasks such as detection, classification, segmentation, etc. However, recent stud-ies [68, 82] have demonstrated that even the strongest VL models struggle with the compositional understanding of some basic Structured VL Concepts (SVLC) such as ob-*Equal contribution Typical contrastive text-to-image loss (e.g. CLIP)Solvable by ”bag of objects”! Our additional language structure augmented lossesForce language structure topology on the image embedding!A gray cat sits on top of awoodenchair near a plantArubycat sits on top of a plastic chair near a plantNear a plant a gray cat sits on a plastic chairLLM PosLLM NegRB NegA gray cat sits on top of a plastic chair near a plantA woman stands next to the counterTwo happy squirrelsSeveral school kids dancing(a) (b)Negative(repel)Positive(attract)Figure 1. Teaching language structure to VL models. (a) Stan-dard contrastive text-to-image loss (e.g. CLIP [57]) tends to under-emphasize SVLC content of the text, likely due to the random na-ture of the training batches; (b) We generate modified versions of corresponding texts and use them to add losses to explicitly teach language structure (SVLC) to VL models. ject attributes, inter-object relations, transitive actions, ob-ject states and more. Collecting specialized large scale data toteach VL models these missing ‘skills’ is impractical, as finding specialized text-image pairs for each kind and pos-sible value of the different attributes, relations, or states, is both difficult and expensive. Another important challenge in training VL models with new concepts is catastrophic forgetting, which is a com-mon property to all neural models [7, 34, 37, 49, 58] and has been explored for VL models in a recent concurrent work [14]. 
Large VL models such as CLIP [57] and CyCLIP [19] have exhibited excellent zero-shot learning abilities in many tasks. Therefore, even given a large dataset with new concepts, it is important not to lose these abilities when performing the adaptation to the new data.
For each text in the train-ing batch, we automatically generate alternative negative or positive text by manipulating its content to be opposite or equivalent to the original text. Using the newly gener-ated texts, we explicitly teach SVLC to the model via ad-ditional losses (see Fig. 1) that enforce differentiating be-tween different (original and generated) SVLC texts and are no longer satisfiable by the ’bag of objects’ representation. Towards this end, we propose several techniques for im-plementing this approach, including (i) rule-based priors based on classical NLP parsing |
Chen_UV_Volumes_for_Real-Time_Rendering_of_Editable_Free-View_Human_Performance_CVPR_2023 | Abstract Neural volume rendering enables photo-realistic renderings of a human performer in free-view, a critical task in immersive VR/AR applications. But the practice is severely limited by high computational costs in the rendering process. To solve this problem, we propose the UV Volumes, a new approach that can render an editable free-view video of a human performer in real-time. It separates the high-frequency (i.e., non-smooth) human appearance from the 3D volume, and encodes them into 2D neural texture stacks (NTS). The smooth UV volumes allow much smaller and shallower neural networks to obtain densities and texture coordinates in 3D while capturing detailed appearance in 2D NTS. For editability, the mapping between the parameterized human model and the smooth texture coordinates allows us a better generalization on novel poses and shapes. Furthermore, the use of NTS enables interesting applications, e.g., retexturing. Extensive experiments on CMU Panoptic, ZJU Mocap, and H36M datasets show that our model can render 960×540 images in 30 FPS on average with comparable photo-realism to state-of-the-art methods. The project and supplementary materials are available at https://fanegg.github.io/UV-Volumes. | 1. Introduction Synthesizing a free-view video of a human performer in motion is a long-standing problem in computer vision.
For rendering efficiency, we use a shallow MLP to decode the density and integrate the feature into the image plane by volume rendering. Each feature in the image plane is then individually converted to the UV coordinates. Accordingly, we utilize the yielded UV coordinates to query the RGB value from a pose-dependent neural texture stack (NTS). This process greatly reduces the number of queries against MLPs and enables real-time rendering. It is worth noting that the 3D V olumes in the proposed framework only need to approximate relatively “smooth” signals. As shown in Figure 2, the magnitude spectrum of the RGB image and the corresponding UV image indi-cates that UV is much smoother than RGB. That is, we only model the low-frequency density and UV coordinate in the 3D volumes, and then detail the appearance in the 2D NTS, which is also spatially aligned across different poses. The disentanglement also enhances the generalization ability of such modules and supports various editing operations. We perform extensive experiments on three widely-used datasets: CMU Panoptic, ZJU Mocap, and H36M datasets. The results show that the proposed approach can effec-tively generate an editable free-view video from both dense and sparse views. The produced free-view video can be rendered in real-time with comparable photorealism to the state-of-the-art methods that have much higher computa-tional costs. In summary, our major contributions are: UV magnitude spectrum UV image RGB magnitude spectrum RGB image 0.0 0.2 0.4 0.6 0.8 1.0 Figure 2. Discrete Fourier Transform (DFT) for RGB and UV image. In the magnitude spectrum, the distance from each point to the midpoint describes the frequency, the direction from each point to the midpoint describes the direction of the plane wave, and the value of the point describes its amplitude. The distribution of the UV magnitude spectrum is more concentrated in the center, which indicates that the frequency of the UV image is lower. • A novel system for rendering editable human perfor-mance video in free-view and real-time. • UV V olumes, a method that can accelerate the render-ing process while preserving high-frequency details. • Extended editing applications enabled by this frame-work, such as reposing, retexturing, and reshaping. |
Brachmann_Accelerated_Coordinate_Encoding_Learning_to_Relocalize_in_Minutes_Using_RGB_CVPR_2023 | Abstract Learning-based visual relocalizers exhibit leading pose accuracy, but require hours or days of training. Since train-ing needs to happen on each new scene again, long train-ing times make learning-based relocalization impractical for most applications, despite its promise of high accu-racy. In this paper we show how such a system can ac-tually achieve the same accuracy in less than 5 minutes. We start from the obvious: a relocalization network can be split in a scene-agnostic feature backbone, and a scene-specific prediction head. Less obvious: using an MLP prediction head allows us to optimize across thousands of view points simultaneously in each single training itera-tion. This leads to stable and extremely fast convergence. Furthermore, we substitute effective but slow end-to-end training using a robust pose solver with a curriculum over a reprojection loss. Our approach does not require priv-ileged knowledge, such a depth maps or a 3D model, for speedy training. Overall, our approach is up to 300x faster in mapping than state-of-the-art scene coordinate regres-sion, while keeping accuracy on par. Code is available: https://nianticlabs.github.io/ace1. Introduction Time is really the only capital that any human being has, and the only thing he can’t afford to lose. Thomas Edison Time is relative. Time spent waiting can stretch to infin-ity. Imagine waiting for a visual relocalizer to finally work in a new environment. It can take hours – and feel like days – until the relocalizer has finished its pre-processing of the scene. Only then can it estimate the camera’s position and orientation to support real-time applications like navigation or augmented reality (AR). Relocalizers need that extensive pre-processing to build a map of the environment that defines the coordinate space we want to relocalize in. Visual relocalizers typically build maps from sets of images of the environment, for each of which the camera pose is known. There are two prevalent families of structure-based relocalizers that meet the high accuracy requirements of applications like AR. Sparse feature-matching approaches [12, 25, 40, 44, 48, 49,67] need to build an explicit 3D reconstruction of a scene using structure-from-motion (SfM) software [51, 55, 63]. Even when poses of mapping images are known, the run-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 5044 time of SfM for scene triangulation varies a lot, and can lie anywhere between 10 minutes and 10 hours depending on how many mapping frames are used. When mapping suc-ceeds, feature-based relocalizers are fast at query time and accurate [44,49]. Less refined maps can be built in real time using SLAM, if one is willing to accept the detrimental ef-fect on accuracy [4]. In either case, the underlying maps can consume vast amounts of storage, and can reveal private in-formation that was present in the mapping images [16, 56]. On the other hand, scene coordinate regression [5, 7, 10, 20, 31, 53, 64] learns an implicit representation of the scene via gradient descent. The resulting maps can be as small as 4MB [10], and privacy preserving [67]. 
But, while scene coordinate regression is on-par with feature-matching in terms of accuracy and relocalization time [4], the fact that they map an environment via hours-long training of a network makes them unattractive for most applications. The state-of-the-art scene coordinate regression pipeline, DSAC* [10], requires 15 hours to reach top accuracy on a premium GPU, see Fig. 1. We can stop training any time, and see which accuracy we get but, after 5 minutes mapping time, DSAC* has a relocalization rate in the single digits. In fact, the corresponding data point for the plot in Fig. 1 can be found at the bottom of the previous page. The aim of this work is summarized quickly: we take a scene coordinate regression-based relocalizer, the slow-est approach in terms of mapping time, and make it one of the fastest. In particular, we present Accelerated Coordi-nate Encoding (ACE), a schema to train scene coordinate regression in 5 minutes to state-of-the-art accuracy. Speeding up training time normally causes moderate in-terest in our community, at best. This is somewhat justified in train-once-deploy-often settings. Still, learning-based visual relocalization does not fall within that category, as training needs to happen on each new scene, again. There-fore, fast training has a range of important implications: •Mapping delay. We reduce the time between collect-ing mapping data, and having a top-performing relo-calizer for that environment. •Cost. Computation time is expensive. Our approach maps a scene within minutes on a budget GPU. •Energy consumption. Extensive computation is an environmental burden. We significantly reduce the re-source footprint of learning-based relocalization. •Reproducibility. Using ACE to map all scenes of the datasets used in this paper can be done almost five times over on a budget GPU, in the time it takes DSAC* to map a single scene on a premium GPU. We show that a thoughtful split of a standard scene coor-dinate regression network allows for more efficient train-ing. In particular, we regard scene coordinate regression as a mapping from a high-dimensional feature vector to a 3Dpoint in scene space. We show that a multi-layer perceptron (MLP) can represent that mapping well, as opposed to con-volutional networks normally deployed [7,10,20]. Training a scene-specific MLP allows us to optimize over many (of-tentimes all available) mapping views at once in each single training iteration. This leads to very stable gradients that allow us to operate in very aggressive, high-learning rate regimes. We couple this with a curriculum over a repro-jection loss that lets the network burn in on reliable scene structures at later stages of training. This mimics end-to-end training schemes that involve differentiating through robust pose estimation during training [10], but are much slower than our approach. We summarize our contributions : •Accelerated Coordinate Encoding (ACE), a scene co-ordinate regression system that maps a new scene in 5 minutes. Previous state-of-the-art scene coordinate re-gression systems require hours of mapping to achieve comparable relocalization accuracy. • ACE compiles a scene into 4MB worth of network weights. Previous scene coordinate regression systems required 7-times more storage, or had to sacrifice ac-curacy for scene compression. • Our approach requires only posed RGB images for mapping. Previous fast mapping relocalizers relied on priviledged knowledge like depth maps or a scene mesh for speedy mapping. 2. 
Related Work Visual relocalization requires some representation of the environment we want to relocalize in. We refer to these representations as “maps”, and the process of creating them as “mapping”. Our work is predominately concerned with the time needed for mapping, and, secondly, the storage de-mand of the maps created. Image Retrieval and Pose Regression. Arguably the simplest form of a map is a database of mapping images and their poses. Given a query image, we look for the most similar mapping images using image retrieval [2, 42, 57], and approximate the query pose with the top retrieved map-ping pose [12, 50]. Pose regression uses neural networks to either predict the absolute pose from a query image di-rectly, or predict the relative pose between the query image and the top retrieved mapping image. All absolute and most relative pose regression methods [11, 28, 29, 52, 58, 62, 68] train scene-specific networks which can take significant time, e.g. [28] reports multiple hours per scene for PoseNet. Some relative pose regression works report results with gen-eralist, scene-agnostic networks that do not incur additional mapping time on top of building the retrieval index [58,62]. Map-free relocalization [3] is an extreme variation that cou-ples scene-agnostic relative pose regression with a single 5045 reference frame for practically instant relocalization. Re-cently, some authors use neural radiance fields (NeRFs) [38] for camera pose estimation [36, 66]. In its early stage, this family of methods has yet to demonstrate its merits against the corpus of existing relocalisers and on standard bench-marks. Some of the aforementioned approaches have at-tractive mapping times, i.e. require only little scene-specific pre-processing. But their pose accuracy falls far behind structure-based approaches that we discuss next. Feature Matching. Feature matching-based relocalizers [12,25,40,44,49] calculate the camera pose from correspon-dences between the query image and 3D scene space. They establish correspondences via discrete matching of local feature descriptors. Thus, they require a 3D point cloud of an environment where each 3D point stores one or multiple feature descriptors for matching. These point clouds can be created by running SfM software, such as COLMAP [51]. Even if poses of mapping images are known in advance, e.g. from on-device visual odometry [1, 24, 27, 39], feature tri-angulation with SfM can take several hours, depending on the number of mapping frames. Also, the storage require-ments can be significant, mainly due to the need for storing hundreds of thousands of descriptor vectors for matching. Strategies exist to alleviate the storage burden, such as stor-ing fewer descriptors per 3D point [26,47,49], compressing descriptors [35, 65] or removing 3D points [65]. More re-cently, GoMatch [67] and MeshLoc [40] removed the need to store descriptors entirely by matching against the scene geometry. None of the aforementioned strategies reduce the mapping time – on the converse, often they incur additional post-processing costs for the SfM point clouds. To reduce mapping time, one could use only a fraction of all mapping images for SfM, or reduce the image resolution. However, this would likely also affect the pose estimation accuracy. Scene Coordinate Regression. Relocalizers in this fam-ily regress 3D coordinates in scene space for a given 2D pixel position in the query image [53]. 
Robust optimiza-tion over scene-to-image correspondences yields the de-sired query camera pose. To regress correspondences, most works rely on random forests [6, 14, 15, 53, 61] or, more re-cently, convolutional neural networks [5,7,8,10,13,20,31]. Thus, the scene representation is implicit, and the map is en-coded in the weights of the neural network. This has advan-tages as the implicit map is privacy-preserving [56, 67]: an explicit scene representation can only be re-generated with images of the environment. Also, scene coordinate regres-sion has small storage requirements. DSAC* [10] achieves state-of-the-art accuracy with 28MB networks, and accept-able accuracy with 4MB networks. Relocalization in large-scale environments can be challenging, but strategies exist that rely on network ensembles [8].The main drawback of scene coordinate regression is its long mapping time, since mapping entails training a neural network for each specific scene. DSAC++ [7] reported 6 days of training for a single scene. DSAC* reduced the training time to 15 hours – given a powerful GPU. This is still one order of magnitude slower than typical feature matching approaches need to reconstruct a scene. In our work, we show how few conceptual changes to a scene co-ordinate regression pipeline result in a speedup of two or-ders of magnitude. Thus, we pave the way for deep scene coordinate regression to be useful in practical applications. A variety of recipes allow for fast mapping if depth, ren-dered or measured, is given. Indeed, the original SCoRF pa-per [53] re | ported to train their random forest with RGB-D images under 10 minutes. Cavallari et al. [15] show how to adapt a pre-trained neural scene representation in real time for a new scene, but their approach requires depth inputs for the adaptation, and for relocalization. Dong et al. [20] use very few mapping frames with depth to achieve a mapping time of 2 minutes. The architecture described in [20] consists of a scene-agnostic feature backbone, and a scene-specific region classification head – very similar to our setup. However, their prediction head is convolu-tional, and thus misses the opportunity for highly efficient training as we will show. SANet [64] is a scene coordinate regression variant that builds on image retrieval. A scene-agnostic network interpolates the coordinate maps of the top retrieved mapping frames to yield the query scene coordi-nates. None of the aforementioned approaches is applicable when mapping images are RGB only. Depth channels for mapping can be rendered from a dense scene mesh [7], but mesh creation would increase the mapping time. Our work is the first to show fast scene coordinate regression mapping from RGB and poses alone. 3. Method Our goal is to estimate a camera pose hgiven a single RGB image I.We define the camera pose as the rigid body transformation that maps coordinates in camera space eito coordinates in scene space yi, therefore yi=hei. We can estimate the pose from image-to-scene correspondences: h=g(C),withC={(xi,yi)}, (1) whereCis the set of correspondences between 2D pixel po-sitions xiand 3D scene coordinates yi. Function gdenotes a robust pose solver. Usually gconsists of a PnP minimal solver [22] in a RANSAC [21] loop, followed by refine-ment. Refinement consists of iterative optimization of the reprojection error over all RANSAC inliers using Leven-berg–Marquardt [30, 37]. For more details concerning pose solving we refer to [10], as our focus is on correspondence prediction. 
To obtain correspondences, we follow the ap-5046 Scene-specific Fully Convolutional NetworkMappingImageScene Coordinate Prediction Ground TruthMapping PoseReprojection Loss 3D SceneScene CoordinatesTarget (unknown)Figure 2. Standard Training Loop [10] . Previous works train a coordinate regression network with one mapping image at a time. The network predicts dense scene coordinates, and is supervised with the ground truth camera pose and a reprojection loss. proach of scene coordinate regression [53]. We learn a func-tion to predict 3D scene points for any 2D image location: yi=f(pi;w),withpi=P(xi, I), (2) where fis a neural network parameterized by learnable weights w, andpiis an image patch extracted around pixel position xifrom image I. Therefore, fimplements a map-ping from patches to coordinates, f:RCI×HP×WP→R3. We have RGB images but usually take grayscale inputs with CI= 1. Typical patch dimensions are HP=WP= 81 px [7–10]. For state-of-the-art architectures there is no explicit patch extraction. A fully convolutional neural network [33] with limited receptive field slides over the input image to efficiently predict dense outputs while reusing computation of neighbouring pixels. However, for our subsequent dis-cussion, the explicit patch notation will prove useful. We learn the function fby optimizing over all mapping images IMwith their ground truth poses h∗ ias supervision: argmin wX I∈I MX iℓπ[xi,yiz}|{ f(pi;w),h∗ i], (3) where ℓπis a reprojection loss that we discuss in Sec. 3.2. We optimize Eq. 3 using minibatch stochastic gradient de-scent. The network predicts dense scene coordinates from one mapping image at a time, and all predictions are super-vised using the ground truth mapping pose, see Fig. 2. 3.1. Efficient Training by Gradient Decorrelation With the standard training, we optimize over predictions for thousands of patches in each training iteration – but they all come from the same image. Hence, their loss and their gradients will be highly correlated. A prediction yiand the prediction for the pixel next to it will be very similar, so will be the pixel loss and its gradient. Our key idea is to randomize patches over the entire training set, and construct training batches from many dif-ferent mapping views. This decorrelates gradients within a batch and leads to a very stable training signal, robustness to high learning rates, and, ultimately, fast convergence.A naive implementation of this idea would be slow if it resorted to explicit patch extraction [5]. The expressive power of convolutional layers, and their efficient computa-tion using fully convolutional architectures is key for state-of-the-art scene coordinate regression. Therefore, we pro-pose to split the regression network into a convolutional backbone, and a multi-layer perceptron (MLP) head: f(pi;w) =fH(fi;wH),withfi=fB(pi;wB),(4) where fBis the backbone that predicts a high-dimensional feature fiwith dimensionality Cf, and fHis the regression head that predicts scene coordinates: fB:RCI×HP×WP→RCfandfH:RCf→R3.(5) Similar to [20], we argue that fBcan be implemented us-ing a scene-agnostic convolutional network -a generic fea-ture extractor. In addition to [20], we argue that fHcan be implemented using a MLP instead of another convolutional network. Fig. 2 signifies our network split. Convolution layers with 3×3kernels are blue, and 1×1convolutions are green. The latter are MLPs with shared weights. This stan-dard network design is used in pipelines like DSAC* [10]. 
Note how function fHneeds no spatial context, i.e. dif-ferently from the backbone, fHdoes not need access to neighbouring pixels for its computation. Therefore, we can easily construct training batches for fHwith random sam-ples across all mapping images. Specifically, we construct a fixed size training buffer by running the pre-trained back-bonefBover the mapping images. This buffer contains mil-lions of features fiwith their associated pixels positions xi, camera intrinsics Kiand ground truth mapping poses h∗ i. We generate this buffer once, in the first minute of train-ing. Afterwards, we start the main training loop that iterates over the buffer. At the beginning of each epoch, we shuf-fle the buffer to mix features (essentially patches) across all mapping data. In each training step, we construct batches of several thousand features, potentially computing a pa-rameter update over thousands of mapping views at once. Not only is the gradient computation extremely efficient for our MLP regression head, but the gradients are also decor-related which allows us to use high learning rates for fast convergence. Fig. 3 shows our training procedure. 3.2. Curriculum Training Previous state-of-the-art scene coordinate regression pipelines use a multi-stage training process. Firstly, they optimize a pixel-level reprojection loss. Secondly, they do end-to-end training, where they propagate a pose error back through a differentiable pose solver [5,7]. End-to-end train-ing lets the network focus on reliable scene structures while ignoring outlier predictions. However, end-to-end training is extremely costly. For example, in [10], end-to-end train-ing incurs half of the training time for 10% of the parameter 5047 Training Buffer Generation (1 Minute) Training BufferShuffleScene-agnostic Convolutional BackboneScene-specific Regression MLP 🎲MappingImage Training Loop (4 Minutes)Reprojection LossScene Coordinate Prediction Ground TruthMapping Poses 3D SceneScene CoordinatesTarget (unknown)Figure 3. ACE Training Loop. Training consists of two stages: Buffer generation (left) and the main training loop (right). To create a training buffer, we pass mapping images through a scene-agnostic backbone that extracts high-dimensional feature vectors. Each colored box in the buffer represents one such feature, and features with the same color came from the same mapping image. In the main loop, we train a scene-specific MLP that predicts scene coordinates from backbone features. We assemble training batches from random features and their associated mapping poses. Thus, we supervise the scene-specific MLP with many, diverse mapping views in each training iteration. updates. To mimic the effects of end-to-end training, we construct a curriculum over a much simpler pixel-wise re-projection loss. We use a moving inlier threshold through-out the training process that starts loose, and gets more re-strictive as training progresses. Therefore, the network can focus on predictions that are already good, and neglect less precise predictions that would be filtered by RANSAC dur-ing pose estimation. Our training loss is based on the pixel-wise reprojection loss of DSAC* [10]: ℓπ[xi,yi,h∗ i] =( ˆeπ(xi,yi,h∗ i)ifyi∈ V ||yi−¯yi||0 otherwise .(6) This loss optimizes a robust reprojection error ˆeπfor all valid coordinate predictions V. Valid predictions are be-tween 10cm and 1000m in front of the image plane, and have a reprojection error below 1000px. 
For invalid pre-dictions, the loss optimizes the distance to a dummy scene coordinate ¯yithat is calculated from the ground truth cam-era pose assuming a fixed image depth of 10m. The main difference between DSAC* and our approach is in the defi-nition of the robust reprojection error ˆeπ. DSAC* uses the reprojection error eπup to a threshold τ, and the square root of the reprojection error beyond. Instead, we use tanh clamping of the reprojection error: ˆeπ(xi,yi,h∗ i) =τ(t) tanheπ(xi,yi,h∗ i) τ(t) (7) We dynamically re-scale the tanh according to a threshold τthat varies throughout training: τ(t) =w(t)τmax+τmin,with w(t) =p 1−t2,(8) where t∈(0,1)denotes the relative training progress. This curriculum implements a circular schedule of threshold τ, which remains close to τmaxin the beginning of training, and declines towards τminat the end of training.3.3. Backbone Training As backbone, we can use any dense feature description network [19, 32, 43, 59]. However, existing solutions are often optimized towards sparse feature matching. Their de-scriptors are meant to be informative at key points. In con-trast, we need descriptors that are distinctive for any posi-tion in the input image. Thus, we present a simple way to train a feature description network tailored towards scene coordinate regression. We adhere to the network architec-ture of DSAC* [10]. We use the early convolutional layers as our backbone, and split off the subsequent MLP as our scene-specific regression head. To train the backbone, we resort to the image-level training of DSAC* [10] ( cf. Fig. 2) but couple it with our training curriculum of Eq. 6. Instead of training the backbone with one regression head for a single scene, we train it with Nregression heads forNscenes, in parallel. This bottleneck architecture forces the backbone to predict features that are useful for a wide range of scenes. We train the backbone on 100 scenes from ScanNet [17] for 1 week, resulting in 11MB of weights that can be used to extract dense descriptors on any new scene. See the Supplement for more details on the training process. 3.4. Further Improvements We train the entire network with half-precision floating-point weights. This gives us an additional speed boost, es-pecially on budget GPUs. We also store our networks with float16 precision. This allows us to increase the depth of our regression heads while maintaining 4MB maps. On top of our loss curriculum (see Sec. 3.2), we use a one cycle learn-ing rate schedule [54], i.e. we increase the learning rate in the middle of training, and reduce it towards the end. We found a small but consistent advantage in overparameteriz-ing the scene coordinate representation: we predict homo-geneous coordinates y′= (x, y, z, w )⊤and apply a w-clip, enforcing wto be positive by applying a softplus operation. 5048 7 Scenes 12 ScenesMapping w/ Mesh/DepthMapping TimeMap Size SfM poses D- |
Goel_Interactive_Segmentation_of_Radiance_Fields_CVPR_2023 | Abstract Radiance Fields (RF) are popular to represent casually-captured scenes for new view synthesis and several applications beyond it. Mixed reality on personal spaces needs understanding and manipulating scenes represented as RFs, with semantic segmentation of objects as an important step. Prior segmentation efforts show promise but don't scale to complex objects with diverse appearance. We present the ISRF method to interactively segment objects with fine structure and appearance. Nearest neighbor feature matching using distilled semantic features identifies high-confidence seed regions. Bilateral search in a joint spatio-semantic space grows the region to recover accurate segmentation. We show state-of-the-art results of segmenting objects from RFs and compositing them into another scene, changing appearance, etc., and an interactive segmentation tool that others can use. Project Page: https://rahul-goel.github.io/isrf/ | 1. Introduction Scene representation is a crucial step for any scene understanding or manipulation task. Relevant scene parameters, be it shape, appearance, or illumination, can be represented using various modalities like 2D (depth/texture) maps, point clouds, surface meshes, voxels, parametric functions, etc. Each modality has its strengths and weaknesses. For example, shape correspondence is straightforward between point clouds compared to surface meshes but compromises rendering fidelity. Thus, choosing an appropriate representation has a major impact on downstream analyses and applications. Neural implicit representations have emerged as a promising modality for 3D analysis recently. Although initially proposed only for shapes [28, 34], they have been extended to encode complete directional radiance at a point [30], other rendering parameters like lightfields, specularity, textual context, object semantics, etc. [1, 9, 11, 12, 16, 19, 50]. The representation was extended beyond static inward-looking and front-facing scenes to complex outward-looking unbounded 360° views, dynamic clips, occluded egocentric videos, and unconstrained images. Radiance fields have also been used beyond Novel View Synthesis (NVS) for other applications [5, 26, 35, 43, 46, 48, 52, 55, 58]. Segmenting objects of the scene representation is a first step towards its understanding and manipulation for different downstream tasks. There have been a few efforts at segmenting and editing of radiance fields. Recently, N3F [47] and DFF [21] presented preliminary solutions to this in the neural space of radiance fields. Both use distillation for feature matching between user-provided cues and the learned 3D feature volume, with N3F using user-provided patches and DFF using textual prompts or patches as the segmentation cues. These methods struggle to segment objects with a wide appearance variation. The NVOS system provides segmentation with strokes but has poor quality and non-interactive computations [37].
Figure 2. ISRF System overview: We capture a 3D scene as a voxelized radiance field and distill the semantic feature into it. Once captured, the user can easily mark regions using a brush tool on a reference view (green stroke). The features corresponding to the marked pixels are collected and clustered using K-Means. The voxel grid is then matched using NNFM (nearest neighbor feature matching) to obtain a high-confidence seed using a tight threshold. The seed is then grown using bilateral search to smoothly cover the boundaries of the object, conditioning the growth in the spatio-semantic domain.
In this paper, we present a simple and efficient method to interactively segment objects in a radiance field representation. Our ISRF method uses an intuitive process with the user providing easy strokes to guide it interactively. We use the fast and memory-efficient TensoRF representation [7] to train and render. TensoRF uses an explicit voxel representation that is more amenable to manipulation. We include a DINO feature [6] at every voxel to facilitate semantic matching from 2D to 3D. DINO features are trained on a large collection of images and are known to capture semantics effectively. We condense the DINO features from the user-specified regions to create a fixed-length set using K-Means. A nearest neighbor feature matching (NNFM) on this set in the 3D voxels identifies a high-confidence seed region of the object to be segmented. The seed region is grown using a bilateral filtering-inspired search to include neighboring proximate voxels in a joint feature-geometric space. We show results of segmenting several challenging objects in forward-facing [29] and 360-degree [2] scenes. The explicit voxel space we use facilitates simple modification for segmenting objects. We also show examples of compositing objects from one RF into another. In summary, the following are the core contributions of ISRF:
◦ An easily interpretable and qualitatively improved 3D object segmentation framework for radiance fields.
◦ Interactive modification of segmentation to capture fine structure, starting with high-confidence matching. Our representation allows a spatio-semantic bilateral search to make this possible. The framework can also use other generalized distances to grow the region for specific applications.
◦ A hybrid implicit-explicit representation that is memory-efficient and fast to render, and also facilitates the distillation of semantic information for improved segmentation. Our results show improved accuracy and fine-grain object details in very challenging situations over contemporary efforts.
◦ An easy-to-use, GUI-based tool to interactively segment objects from an RF representation to facilitate object replacement, alteration, etc.
◦ Consistent 2D/3D segmentation masks for a few scenes and objects, created manually using our method, to facilitate future work in segmentation, manipulation, and understanding of RFs. |
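The bilateral search described above can be sketched as an iterative dilation of the seed that is gated jointly by spatial adjacency and feature similarity, analogous to a bilateral filter's joint domain/range weighting. The 6-connected neighborhood, the use of the mean seed feature as the semantic reference, and the thresholds below are assumptions for illustration, not the exact procedure of the paper.

```python
import torch
import torch.nn.functional as F

def bilateral_grow(feats: torch.Tensor,  # (X, Y, Z, D) per-voxel semantic features
                   seed: torch.Tensor,   # (X, Y, Z) boolean high-confidence seed
                   sigma_f: float = 0.4,
                   iters: int = 20) -> torch.Tensor:
    """Grow the seed region: a voxel is added when it touches the current
    region (spatial proximity) and its feature is close to the mean seed
    feature (semantic proximity)."""
    region = seed.clone()
    ref = feats[seed].mean(dim=0)                    # (D,) reference descriptor
    close = (feats - ref).norm(dim=-1) < sigma_f     # semantic gate, fixed per call
    kernel = torch.zeros(1, 1, 3, 3, 3)              # 6-neighborhood (plus center)
    kernel[0, 0, 1, 1, :] = 1.0
    kernel[0, 0, 1, :, 1] = 1.0
    kernel[0, 0, :, 1, 1] = 1.0
    for _ in range(iters):
        r = region.float()[None, None]               # (1, 1, X, Y, Z)
        dilated = F.conv3d(r, kernel, padding=1)[0, 0] > 0
        region = region | (dilated & close)
    return region
```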
Chen_gSDF_Geometry-Driven_Signed_Distance_Functions_for_3D_Hand-Object_Reconstruction_CVPR_2023 | Abstract Signed distance functions (SDFs) are an attractive framework that has recently shown promising results for 3D shape reconstruction from images. SDFs seamlessly generalize to different shape resolutions and topologies but lack explicit modelling of the underlying 3D geometry. In this work, we exploit the hand structure and use it as guidance for SDF-based shape reconstruction. In particular, we address reconstruction of hands and manipulated objects from monocular RGB images. To this end, we estimate poses of hands and objects and use them to guide 3D reconstruction. More specifically, we predict kinematic chains of pose transformations and align SDFs with highly articulated hand poses. We improve the visual features of 3D points with geometry alignment and further leverage temporal information to enhance the robustness to occlusion and motion blur. We conduct extensive experiments on the challenging ObMan and DexYCB benchmarks and demonstrate significant improvements of the proposed method over the state of the art. | 1. Introduction Understanding how hands interact with objects is becoming increasingly important for widespread applications, including virtual reality, robotic manipulation and human-computer interaction. Compared to 3D estimation of sparse hand joints [24, 38, 51, 53, 67], joint reconstruction of hands and object meshes [11, 18, 21, 26, 62] provides rich information about hand-object interactions and has received increased attention in recent years. To reconstruct high-quality meshes, some recent works [9, 17, 61] explore multi-view image inputs. Multi-view images, however, are less common both for training and testing scenarios. In this work, we focus on a more practical and user-friendly setting where we aim to reconstruct hand and object meshes from monocular RGB images. Given the ill-posed nature of the task, many existing methods [7, 19, 21, 54, 62] employ parametric mesh models (e.g., MANO [46]) to impose prior knowledge and reduce ambiguities in 3D hand reconstruction.
Figure 1. We aim to reconstruct 3D hand and object meshes from monocular images (top). Our method gSDF (middle) first predicts 3D hand joints (blue) and object locations (red) from input images. We use estimated hand poses and object locations to incorporate strong geometric priors into SDF by generating hand- and object-aware kinematic features for each SDF query point. Our resulting gSDF model generates accurate results for real images with various objects and grasping hand poses (bottom).
MANO hand meshes, however, have relatively limited resolution and can be suboptimal for the precise capture of hand-object interactions. To reconstruct detailed hand and object meshes, another line of efforts [11, 26] employs signed distance functions (SDFs). Grasping Field [26] makes the first attempt to model hand and object surfaces using SDFs. However, it does not explicitly associate 3D geometry with image cues and has no prior knowledge incorporated in SDFs, leading to unrealistic meshes. AlignSDF [11] proposes to align SDFs with respect to global poses (i.e., the hand wrist transformation and the
object translation) and produces improved results. However, it is still challenging to capture geometric details for more complex hand motions and manipulations of diverse objects, which involve the articulation of multiple fingers. To address limitations of prior works, we propose a geometry-driven SDF (gSDF) method that encodes strong pose priors and improves reconstruction by disentangling pose and shape estimation (see Figure 1). To this end, we first predict sparse 3D hand joints from images and derive full kinematic chains of local pose transformations from joint locations using inverse kinematics. Instead of only using the global pose as in [11], we optimize SDFs with respect to poses of all the hand joints, which leads to a more fine-grained alignment between the 3D shape and articulated hand poses. In addition, we project 3D points onto the image plane to extract geometry-aligned visual features for signed distance prediction. The visual features are further refined with spatio-temporal contexts using a transformer model to enhance the robustness to occlusions and motion blur. We conduct extensive ablation experiments to show the effectiveness of different components in our approach. The proposed gSDF model greatly advances state-of-the-art accuracy on the challenging ObMan and DexYCB benchmarks. Our contributions can be summarized as follows: (i) To embed strong pose priors into SDFs, we propose to align the SDF shape with its underlying kinematic chains of pose transformations, which reduces ambiguities in 3D reconstruction. (ii) To further reduce the misalignment induced by inaccurate pose estimations, we propose to extract geometry-aligned local visual features and enhance the robustness with spatio-temporal contexts. (iii) We conduct comprehensive experiments to show that our approach outperforms state-of-the-art results by a significant margin.
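The geometry-aligned visual features described above can be sketched as projecting every 3D SDF query point onto the image plane with the camera intrinsics and bilinearly sampling a 2D feature map at the projected pixel. The pinhole model, tensor shapes, and the assumption that the feature map is at image resolution are illustrative; this is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def sample_aligned_features(points: torch.Tensor,    # (B, N, 3) query points in camera coords
                            feat_map: torch.Tensor,  # (B, C, H, W) image feature map
                            K: torch.Tensor          # (B, 3, 3) camera intrinsics
                            ) -> torch.Tensor:
    """Project each query point with the pinhole model and bilinearly sample
    the feature map at that pixel. Returns per-point features of shape (B, N, C)."""
    H, W = feat_map.shape[-2:]
    uvw = torch.bmm(points, K.transpose(1, 2))         # (B, N, 3): [fx*X + cx*Z, fy*Y + cy*Z, Z]
    uv = uvw[..., :2] / uvw[..., 2:].clamp(min=1e-6)   # pixel coordinates (u, v)
    grid = torch.stack([2 * uv[..., 0] / (W - 1) - 1,  # normalise to [-1, 1] for grid_sample
                        2 * uv[..., 1] / (H - 1) - 1], dim=-1)
    sampled = F.grid_sample(feat_map, grid.unsqueeze(2), align_corners=True)  # (B, C, N, 1)
    return sampled.squeeze(-1).transpose(1, 2)
```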
Guo_Hierarchical_Fine-Grained_Image_Forgery_Detection_and_Localization_CVPR_2023 | Abstract Differences in forgery attributes of images generated in CNN-synthesized and image-editing domains are large, and such differences make a unified image forgery detection and localization (IFDL) challenging. To this end, we present a hierarchical fine-grained formulation for IFDL representation learning. Specifically, we first represent forgery attributes of a manipulated image with multiple labels at different levels. Then we perform fine-grained classification at these levels using the hierarchical dependency between them. As a result, the algorithm is encouraged to learn both comprehensive features and the inherent hierarchical nature of different forgery attributes, thereby improving the IFDL representation. Our proposed IFDL framework contains three components: multi-branch feature extractor, localization and classification modules. Each branch of the feature extractor learns to classify forgery attributes at one level, while the localization and classification modules segment the pixel-level forgery region and detect image-level forgery, respectively. Lastly, we construct a hierarchical fine-grained dataset to facilitate our study. We demonstrate the effectiveness of our method on 7 different benchmarks, for both tasks of IFDL and forgery attribute classification. Our source code and dataset can be found: github.com/CHELSEA234/HiFi-IFDL. | 1. Introduction Chaotic and pervasive multimedia information sharing offers better means for spreading misinformation [1], and the forged image content could, in principle, sustain recent "infodemics" [3]. Firstly, CNN-synthesized images made extraordinary leaps culminating in recent synthesis methods, such as Dall·E [52] or Google Imagen [57], based on diffusion models (DDPM) [24], which even generate realistic videos from text [23, 60]. Secondly, the availability of image editing toolkits produced substantially low-cost access to image forgery or tampering (e.g., splicing and inpainting). In response to such an issue of image forgery, the computer vision community has made considerable efforts, which however branch separately into two directions: detecting either CNN synthesis [62, 64, 73] or conventional image editing [17, 26, 43, 63, 68].
Figure 1. (a) In this work, we study image forgery detection and localization (IFDL), regardless of forgery method domains. (b) The distribution of the forgery region depends on individual forgery methods. Each color represents one forgery category (x-axis). Each bubble represents one image forgery dataset. The y-axis denotes the average of the forgery area. The bubble's area is proportional to the variance of the forgery area.
As a result, these methods may be ineffective when deployed in real-life scenarios, where forged images can possibly be generated from either the CNN-synthesized or the image-editing domain. To push the frontier of image forensics [59], we study the image forgery detection and localization (IFDL) problem (Fig. 1a) regardless of the forgery method domains, i.e., CNN-synthesized or image editing. It is challenging to develop a unified algorithm for the two domains, as images generated by different forgery methods differ largely from each other in terms of various forgery attributes.
For example, a forgery attribute can indicate whether a forged image is fully synthesized or partially manipulated, or whether the forgery method used is a diffusion model generating images from Gaussian noise, or an image editing process that splices two images via Poisson editing [51].
Figure 2. (a) We represent the forgery attribute of each manipulated image with multiple labels, at different levels. (b) For an input image, we encourage the algorithm to classify its fine-grained forgery attributes at different levels, i.e., a 2-way classification (fully synthesized or partially manipulated) on level 1. (c) We perform the fine-grained classification via the hierarchical nature of different forgery attributes, where each depth-$l$ node's classification probability is conditioned on the classification probabilities of neighbor nodes at depth $(l-1)$. [Key: Fu. Sy.: Fully Synthesized; Pa. Ma.: Partially Manipulated; Diff.: Diffusion model; Cond.: Conditional; Uncond.: Unconditional]
Therefore, to model such complex forgery attributes, we first represent the forgery attribute of each forged image with multiple labels at different levels. Then, we present a hierarchical fine-grained formulation for IFDL, which requires the algorithm to classify fine-grained forgery attributes of each image at different levels, via the inherent hierarchical nature of different forgery attributes. Fig. 2a shows the interpretation of the forgery attribute with a hierarchy, which evolves from the general forgery attribute, fully synthesized vs. partially manipulated, to specific individual forgery methods, such as DDPM [24] and DDIM [61]. Then, given an input image, our method performs fine-grained forgery attribute classification at different levels (see Fig. 2b). The image-level forgery detection benefits from this hierarchy, as the fine-grained classification learns a comprehensive IFDL representation to differentiate individual forgery methods. Also, for the pixel-level localization, the fine-grained classification features can serve as a prior to improve the localization. This holds since the distribution of the forgery area is prominently correlated with forgery methods, as depicted in Fig. 1b. In Fig. 2c, we leverage the hierarchical dependency between forgery attributes in fine-grained classification. Each node's classification probability is conditioned on the path from the root to itself. For example, the classification probability at the DDPM node is conditioned on the classification probabilities of all nodes in the path Forgery → Fully Synthesized → Diffusion → Unconditional → DDPM. This differs from prior work [44, 45, 68, 71], which assumes a "flat" structure in which attributes are mutually exclusive. Predicting the entire hierarchical path helps understand forgery attributes from coarse to fine, thereby capturing dependencies among individual forgery attributes.
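One way to read the path-conditioned probabilities of Fig. 2c is as a product of per-level conditional probabilities along the root-to-leaf path, e.g. Forgery → Fully Synthesized → Diffusion → Unconditional → DDPM. The sketch below assumes one classification head per level and a simple product rule; the level sizes and indices are illustrative, not the paper's exact conditioning mechanism.

```python
import torch
import torch.nn.functional as F

def path_probability(level_logits, path):
    """level_logits: list of (B, C_l) logit tensors, one per hierarchy level.
    path: node index chosen at each level, from root to leaf.
    Returns the joint path probability, i.e. the product of per-level softmax scores."""
    prob = torch.ones_like(level_logits[0][:, 0])
    for logits, idx in zip(level_logits, path):
        prob = prob * F.softmax(logits, dim=-1)[:, idx]
    return prob

# Example with a batch of 2 images and 4 levels (level sizes are made up).
logits = [torch.randn(2, 2), torch.randn(2, 3), torch.randn(2, 4), torch.randn(2, 9)]
p_ddpm = path_probability(logits, path=[0, 1, 2, 5])  # indices are illustrative
```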
To this end, we propose the Hierarchical Fine-grained Network (HiFi-Net). HiFi-Net has three components: a multi-branch feature extractor, a localization module, and a detection module. Each branch of the multi-branch extractor classifies images at one forgery attribute level. The localization module generates the forgery mask with the help of a deep-metric-learning-based objective, which improves the separation between real and forged pixels. The classification module first overlays the forgery mask with the input image and obtains a masked image where only forged pixels remain. Then, we use partial convolution to process masked images, which further helps learn IFDL representations. Lastly, to facilitate our study of the hierarchical fine-grained formulation, we construct a new dataset, termed the Hierarchical Fine-grained (HiFi) IFDL dataset. It contains 13 forgery methods, which are either recent CNN-based synthesis methods or representative image editing methods. The HiFi-IFDL dataset also induces a hierarchical structure on forgery categories to enable learning a classifier for various forgery attributes. Each forged image is also paired with a high-resolution ground truth forgery mask for the localization task. In summary, our contributions are as follows:
⋄ We study the task of image forgery detection and localization (IFDL) for both image editing and CNN-synthesized domains. We propose a hierarchical fine-grained formulation to learn a comprehensive representation for IFDL and forgery attribute classification.
⋄ We propose an IFDL algorithm, named HiFi-Net, which not only performs well on forgery detection and localization, but also identifies a diverse spectrum of forgery attributes.
⋄ We construct a new dataset (HiFi-IFDL) to facilitate the hierarchical fine-grained IFDL study. When evaluated on 7 benchmarks, our method outperforms the state of the art (SoTA) on the IFDL tasks and achieves competitive performance on forgery attribute classification.
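Since the classification branch is described as processing a masked image (only forged pixels kept) with partial convolution, a minimal partial-convolution layer in the spirit of Liu et al. is sketched below: the response under each window is re-normalized by the number of valid pixels and the mask is propagated. This is a generic sketch of the technique, not HiFi-Net's actual layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Module):
    """Convolution over a masked image: outputs are rescaled by the fraction of
    valid pixels under each kernel window; a window with any valid pixel stays valid."""
    def __init__(self, c_in, c_out, k=3, stride=1, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, stride, padding)
        self.register_buffer("ones", torch.ones(1, 1, k, k))
        self.stride, self.padding = stride, padding

    def forward(self, x, mask):                 # x: (B, C, H, W), mask: (B, 1, H, W) in {0, 1}
        with torch.no_grad():
            valid = F.conv2d(mask, self.ones, stride=self.stride, padding=self.padding)
        out = self.conv(x * mask)               # convolve only the kept (forged) pixels
        scale = self.ones.numel() / valid.clamp(min=1.0)
        bias = self.conv.bias.view(1, -1, 1, 1)
        out = (out - bias) * scale + bias       # re-normalise, leaving the bias untouched
        new_mask = (valid > 0).float()
        return out * new_mask, new_mask

# Usage: overlay the predicted forgery mask, then run the partial convolution.
img = torch.rand(1, 3, 64, 64)
forgery_mask = (torch.rand(1, 1, 64, 64) > 0.5).float()
feat, new_mask = PartialConv2d(3, 16)(img, forgery_mask)
```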
Gunawan_Modernizing_Old_Photos_Using_Multiple_References_via_Photorealistic_Style_Transfer_CVPR_2023 | Abstract This paper firstly presents old photo modernization using multiple references by performing stylization and enhancement in a unified manner. In order to modernize old photos, we propose a novel multi-reference-based old photo modernization (MROPM) framework consisting of a network MROPM-Net and a novel synthetic data generation scheme. MROPM-Net stylizes old photos using multiple references via photorealistic style transfer (PST) and further enhances the results to produce modern-looking images. Meanwhile, the synthetic data generation scheme trains the network to effectively utilize multiple references to perform modernization. To evaluate the performance, we propose a new old photo benchmark dataset (CHD) consisting of diverse natural indoor and outdoor scenes. Extensive experiments show that the proposed method outperforms other baselines in performing modernization on real old photos, even though no old photos were used during training. Moreover, our method can appropriately select styles from multiple references for each semantic region in the old photo to further improve the modernization performance. *Soo Ye Kim is currently affiliated with Adobe Research. †Hyeonjun Sim is currently affiliated with Qualcomm. ‡Corresponding author. | 1. Introduction Old photos taken a long time ago may contain important information that carries cultural and heritage values, e.g., photos of Queen Elizabeth II's coronation. Such old images may contain multiple degradations, e.g., scratches, and old photo artifacts, e.g., color fading, often preventing people from understanding the scene. To restore these images, a skilled expert needs to perform laborious manual processes such as degradation restoration and modernization, i.e., colorization or enhancement, to make them look modern [44]. Consequently, early studies [8, 39] try to restore damaged old photos automatically by using traditional inpainting techniques. However, solely re-synthesizing damaged regions in the image is inadequate to ensure old photos look modern, as the overall style remains similar. Recent work [28] formulates the task as time-travel rephotography, which aims to translate old photos into the modern photo space. The authors considered a multi-task problem consisting of two main tasks: (i) restoration of old photos with both unstructured (noise, blur) and structured (scratch, crack) degradations; (ii) modernization, which aims to change old photos' characteristics to look like modern images, e.g., better color saturation and contrast, by using colorization [28, 48] or enhancement [42]. However, simply using an enhancement method [42] fails to modernize old photos, as shown in Fig. 1, since the overall look still remains similar to old photos, e.g., with a sepia color. In this paper, we propose to modernize old color photos of natural scenes by changing their styles and enhancing them to look modern. For this, a novel unified framework is proposed which leverages multiple modern photo references in solving the modernization task of old photos by utilizing photorealistic style transfer (PST).
Although one prior work [48] is also reference-based, it only relies on a single reference to colorize greyscale portrait photos. However, in natural scene cases, it is challenging to find a single modern photo as a reference that can well match the whole semantics of an old photo. Moreover, changing only the color is not sufficient to alter the overall look of an image [12]. Thus, our framework uses multiple references to modernize old photos by changing the style instead of only the color. Since there is no public old photo benchmark dataset of natural scenes, we propose a new Cultural Heritage Dataset (CHD) with 644 indoor and outdoor old color photos collected from various national museums in Korea. Our multiple-reference-based old photo modernization framework (MROPM) consists of two main parts: (i) MROPM-Net and (ii) a novel training strategy that enables the network to utilize multiple references. The MROPM-Net consists of two different subnets. The first is a single stylization subnet that transfers both global and local styles from a modern photo into an old photo without any semantic segmentation; specifically, we propose an improved version of WCT2 [51], inspired by its universal generalization, as the backbone of the single stylization subnet, and present a new architecture that can perform both global PST and local PST without requiring any semantic segmentation. The second is a merging-refinement subnet that merges multiple stylization results from multiple references based on semantic similarities and further refines the merged result to produce a modernized version of the old photo. To effectively train the MROPM-Net, we propose a synthetic data generation scheme that uses style-variant (i.e., color jittering and unstructured degradation) and style-invariant (i.e., rotation, flipping, and translation) transformations. Our MROPM can modernize old photos better than the state-of-the-art (SOTA) old photo restoration method [42], even without using any old photos during training, thanks to the generalization of PST. Our contributions are summarized as follows:
• We propose the first old photo modernization framework (MROPM) that allows the usage of multiple references to guide the modernization process.
• Our photorealistic multi-stylization network and training strategy enable the MROPM-Net to utilize multiple style references in modernizing old photos.
• Our training strategy based on synthetic data allows the MROPM-Net to modernize real old photos even without using any old photos during training.
• We propose a new old photo dataset of natural scenes, called the Cultural Heritage Dataset (CHD), with 644 outdoor and indoor cultural heritage images.
2. Related Work Reference-based color-related tasks. One way to change the overall look of an image is by changing color, which is one of the style components [12]. To change the color of old photos, one can employ two methods: exemplar-based colorization [10, 26, 40, 46, 49, 50] and color transfer or recolorization [1, 11, 20, 24]. However, exemplar-based colorization methods cannot utilize the color information in the input images for matching, although color is an important feature representing object semantics [38], limiting these methods for the modernization of old color photos. Color transfer aims to transfer the reference image's color statistics into the input image.
Early deep learning works [11, 24] use deep feature matching from features extracted with a pre-trained VGG19 [37] to perform the color transfer, which can also be extended to multi-reference cases [11]. Due to the long execution time of the optimization process, recent works develop end-to-end networks, where Lee et al. [20] utilize color histogram analogy, and Afifi et al. [1] utilize a color-controlled generative adversarial network (GAN). However, recent works can only use a single reference, and finding a single reference image containing similar semantics to the input old photo can be challenging. Thus, from the perspective of color transfer, our work is the first end-to-end network that can utilize multiple references to handle content mismatch without any slow optimization technique. Photorealistic style transfer (PST). PST aims at achieving photorealistic rendering of an image with the style of another image. Since the development of post-processing and regularization techniques [27, 31], PST has gained much popularity. Recent works can be categorized into architecture [2, 6, 7, 23, 35, 47, 51] and feature transformation [15, 21, 22] improvements to effectively and efficiently produce photorealistic results. Specifically, WCT2 [51] utilizes wavelet-based skip connections and progressive stylization to achieve better PST, where the method can work universally without re-training for pre-defined styles. Due to these benefits, we base our network architecture on WCT2. However, WCT2 produces unnatural style transfer results when performing global and local stylization with unreliable semantic segmentation (shown in the Supplementary Material), which hinders its application to old photos. Thus, our MROPM-Net is designed to enable local stylization without any semantic segmentation, which, in consequence, can perform multi-style PST in one unified framework without specifying any masks. To the best of our knowledge, this is the first work in multi-style PST, although there is one work in multi-style artistic style transfer (AST) [14]. Note that AST is different from PST in that it utilizes learning-dependent feature transformation, which can cause severe visual artifacts in PST. Old photo restoration. Early works in old photo restoration focus on detecting and restoring structured degradation (scratches and cracks) of images using traditional inpainting techniques [8, 39]. Besides the structured degradation, [25, 42, 48] incorporate additional spatially-uniform unstructured degradation, e.g., blur and noise, using synthetic degradation and formulate the problem as mixed degradation restoration. However, restoring mixed degradation is not enough to ensure that old photos look modern. Consequently, Luo et al. [28] formally introduce the time-travel rephotography problem, which aims to translate old photos to look like ones taken in the modern era. This problem adds modernization, synthesizing the missing colors and enhancing the details, on top of degradation restoration. To solve the modernization problem, Luo et al. [28] use a StyleGAN2-generated [18] sibling image to serve as a reference for old portrait photos. However, generating complex natural scene photos via GAN to be used as references is challenging [3], making the method unable to be applied to natural scene old photos. Another work [48] proposes to use a single reference image to colorize an old greyscale photo.
However, using a single reference is not enough to cover the whole semantics of old photos (shown in Fig. 1). Thus, different from previous methods, we propose to modernize old photos by stylizing and enhancing them in a unified manner using multiple references to better cover the entire semantics of old photos. 3. Proposed Cultural Heritage Dataset (CHD) Although some public datasets such as the Historical Wiki Face Dataset (HWFD) [28] and RealOld [48] have been released recently, these datasets only contain portrait or face photos, which are much simpler compared to natural scenes. In addition, these datasets only contain greyscale photos and disregard color photos produced during the 20th century using reversal films [29], which have specific degradations such as color dye fading and have not been analyzed before. Therefore, we propose a Cultural Heritage Dataset (CHD) consisting of 644 old color photos produced in the 20th century. Specifically, we collect these old photos in the form of reversal films or papers from three national museums in Korea, which are then scanned in resolutions varying from 4K to 8K. The photos have been well preserved and stored carefully due to their value, containing little structured degradation, e.g., scratches, but varying degrees of unstructured degradation, e.g., noise. These photos contain indoor and outdoor scenes of cultural heritage, such as special exhibitions and excavation ruins. After collection, all photos are divided into train and test sets by randomly splitting with a proportion of 8 (514 photos) : 2 (130 photos).
Figure 2. The overall framework of our multiple-reference-based old photo modernization network (MROPM-Net).
The train set is only used for other baselines that need to be trained using real old photos. Since our task is reference-based old photo modernization, we further collect modern photos as references by crawling images with similar contexts from the internet. In total, we obtain 130 old photos in the test set, each of which has one or two references selected manually. Further details can be found in the Supplementary Material. 4. Proposed Method 4.1. Overall Framework Fig. 2 shows our proposed multi-reference-based old photo modernization network (MROPM-Net) with a shared single stylization subnet S and a merging-refinement subnet M. We denote an old photo input as content $c \in \mathbb{R}^{H \times W \times 3}$ and $N$ modern photos as styles $s = \{s_i\}_{i=1}^{N} \in \mathbb{R}^{N \times H \times W \times 3}$. Our goal is to modernize $c$ using $s$. In the first step, we utilize S, which is built on a photorealistic style transfer (PST) backbone, to stylize $c$ using each $s_i$, yielding $N$ stylized features and correlation matrices $\{SF_i, CM_i\}_{i=1}^{N}$. After having multiple stylization results, we merge the features $\{SF_i\}_{i=1}^{N}$ based on the semantic similarity $\{CM_i\}_{i=1}^{N}$ between $c$ and $s$, and further refine the merging result via M. Specifically, M selects the appropriate styles for each semantic region based on the multiple stylization results $\{SF_i\}_{i=1}^{N}$ to produce an intermediate merged image output $\hat{c}_m$, e.g., selecting the most appropriate feature for a sky region from $SF_1$ that contains a sky style, not from $SF_{i \neq 1}$, which do not contain sky styles. Then, $\hat{c}_m$ is further refined to get the final result $\hat{c}$. Given relevant references, $\hat{c}$ becomes a modern version of the old photo input $c$ with a modern style and enhanced details. 4.2. Network Architecture Single stylization subnet S. Fig.
3 shows a detailed structure of S. Given multiple references, our single stylization subnet is shared for all input pairs and takes a single pair of an old photo $c$ and a reference $s_i$ at a time. Given a pair $(c, s_i)$, S stylizes $c$ based on the style code of $s_i$ locally and globally, resulting in a stylized feature $SF_i$, a stylized old photo $\hat{c}_{s_i}$, and a correlation matrix $CM_i$. This subnet S consists of two main parts: (i) an improved PST network and (ii) a style code predictor.
Figure 3. The architecture of the single stylization subnet S.
For the PST network, we improve some drawbacks of the concatenated version of WCT2 [51]. We observed that the stylization only affects the last decoder block due to the design of its skip connection, where this issue is called a "short circuit" in [2]. Thus, instead of transferring three different high-frequency components as in WCT2, we propose to simplify it by transferring a single high-frequency component in level-0 of the Laplacian pyramid representation [4]. Second, we only apply feature transformation in the network's decoder part, especially the last two decoder blocks, which achieves the best trade-off between the stylization effect and photorealism. Third, we use the differentiable adaptive instance normalization (AdaIN) [13] instead of the non-differentiable WCT [22] to learn and predict the local style rather than compute it. The second part of S is a style code predictor. This part aims to predict style codes $\psi = \{\mu, \sigma\}$ consisting of mean and standard deviation (std), which are the statistics used to perform stylization in AdaIN [13]. We propose to predict $\psi$ instead of computing it as in AdaIN to perform local style transfer without requiring any semantic segmentation. The first step (yellow) of the style code predictor is to extract local style codes $\psi_l^j = \{\hat{\mu}_l^j, \hat{\sigma}_l^j\}$ and global style codes $\psi_g^j = \{\hat{\mu}_g^j, \hat{\sigma}_g^j\}$ from the $j$-th level feature $F_{s_i}^{Dj}$ extracted by the last two decoder blocks ($j = 1, 2$) of the pre-trained PST network, as shown in Fig. 3. In this regard, $\psi_l^j$ is extracted using a local statistic extractor, which consists of a local mean filter $H_\mu$ and a local std filter $H_\sigma$ with a kernel size of 3, and convolution blocks to refine both filtered outputs. Meanwhile, the global statistic extractor extracts $\psi_g^j$ by computing channel-wise mean and std values, which are then spatially repeated to the same spatial size as $\psi_l^j$. After style code extraction, the second step (green) is to align $\psi_l^j$ to $c$ by using non-local attention [43]. Specifically, we extract multi-level feature maps $\{F_c^k\}_{k=1}^{4}$ and $\{F_{s_i}^k\}_{k=1}^{4}$ for both $c$ and $s_i$, respectively, map them into the same feature
space using shared convolution blocks, and perform matrix multiplication between the mapped features to obtain the correlation matrix $CM_i$. Then, we align $\psi_l^j$ to $c$ by using $CM_i$ via matrix multiplication. The aligned style code $\psi_a^j$ is further refined to prevent interpolation artifacts by using residual blocks [9], resulting in a refined version $\hat{\psi}_a^j = \{\hat{\mu}_a^j, \hat{\sigma}_a^j\}$. After obtaining $\hat{\psi}_a^j$, we fuse it with $\psi_g^j$ via the fusion module to obtain a fused style code $\psi_f^j = \{\hat{\mu}_f^j, \hat{\sigma}_f^j\}$. The fusion module performs channel-wise concatenation of $\hat{\psi}_a^j$ and $\psi_g^j$, which is then fed into the following convolution blocks as shown in the blue part of Fig. 3. Finally, after performing all the operations from the local and global statistic extractors to the fusion module, we obtain $\psi_f^1$ and $\psi_f^2$. These fused style codes are then used for stylizing $c$. We use our PST network with AdaIN to perform the stylization as shown in the right part of Fig. 3. Merging-refinement subnet M. After stylizing an old photo $c$ with $N$ different modern photos $s = \{s_i\}_{i=1}^{N}$ using S, we obtain multiple stylized features and correlation matrices $\{SF_i, CM_i\}_{i=1}^{N}$. The next step is to select the most appropriate styles from $\{SF_i\}_{i=1}^{N}$ for each semantic region via the merging part of M, as shown in Fig. 4. For this, a spatial attention module (SAM) [45] is employed, which strengthens and dampens semantically related and unrelated spatial features, respectively, in the merging process of the stylized features. The SAM computes spatial attention weights $W_i^{sa}$ by using $CM_i$ for the corresponding $SF_i$.
Figure 4. The architecture of the merging-refinement subnet M.
Figure 5. Our synthetic data generation pipeline.
Then, we normalize all the spatial attention weights by using Softmax, thus having $W_{nor}^{sa} = \{W_{nor,i}^{sa}\}_{i=1}^{N}$. All normalized attention weights $W_{nor,i}^{sa}$ are multiplied with their corresponding $SF_i$, whose results are summed and then fed into the final convolution blocks to obtain a merging result $\hat{c}_m$ as an intermediate multi-style PST image. We further refine $\hat{c}_m$ via the U-Net [36] based refinement subnet to produce a final modern version
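The merging step just described can be sketched directly: the per-reference spatial attention maps are normalized with a softmax across the N references and used as blending weights for the N stylized feature maps. How the attention maps are computed from the correlation matrices is abstracted away here, and the tensor shapes are assumptions.

```python
import torch

def merge_stylized(sf: torch.Tensor,   # (N, B, C, H, W) stylized features, one per reference
                   w_sa: torch.Tensor  # (N, B, 1, H, W) spatial attention weights
                   ) -> torch.Tensor:
    """Softmax across the N references, then a weighted sum of the stylized features."""
    w_nor = torch.softmax(w_sa, dim=0)   # references compete at every spatial location
    return (w_nor * sf).sum(dim=0)       # (B, C, H, W) merged feature to be refined

# Usage with two references.
merged = merge_stylized(torch.randn(2, 1, 64, 32, 32), torch.randn(2, 1, 1, 32, 32))
```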
Ahn_Interactive_Cartoonization_With_Controllable_Perceptual_Factors_CVPR_2023 |