Choi_Balanced_Spherical_Grid_for_Egocentric_View_Synthesis_CVPR_2023
Abstract We present EgoNeRF, a practical solution to reconstruct large-scale real-world environments for VR assets. Given a few seconds of casually captured 360 video, EgoNeRF can efficiently build neural radiance fields. Motivated by the recent acceleration of NeRF using feature grids, we adopt spherical coordinates instead of conventional Cartesian coordinates. A Cartesian feature grid is inefficient for representing large-scale unbounded scenes because it has a spatially uniform resolution regardless of distance from viewers. The spherical parameterization better aligns with the rays of egocentric images, and yet enables factorization for performance enhancement. However, the naïve spherical grid suffers from singularities at the two poles and cannot represent unbounded scenes. To avoid singularities near the poles, we combine two balanced grids, which results in a quasi-uniform angular grid. We also partition the radial grid exponentially and place an environment map at infinity to represent unbounded scenes. Furthermore, with our resampling technique for grid-based methods, we can increase the number of valid samples to train the NeRF volume. We extensively evaluate our method on our newly introduced synthetic and real-world egocentric 360 video datasets, and it consistently achieves state-of-the-art performance.

1. Introduction

With the recent advance in VR technology, there exists an increasing need to create immersive virtual environments. While a synthetic environment can be created by expert designers, various applications also require transferring a real-world environment. Spherical light fields [4–6, 26, 28] can visualize photorealistic rendering of the real-world environment with the help of dedicated hardware with carefully calibrated multiple cameras. A few works [3, 16] also attempt to synthesize novel view images by reconstructing an explicit mesh from an egocentric omnidirectional video. However, their methods consist of complicated multi-stage pipelines and require pretraining for optical flow and depth estimation networks. In this paper, we build a system that can visualize a large-scale scene without sophisticated hardware or neural networks trained with general scenes. We utilize panoramic images, as suggested in spherical light fields. However, we acquire input with a commodity omnidirectional camera with two fish-eye lenses instead of dedicated hardware. As shown in Fig. 1 (a), the environment can be captured with the omnidirectional camera attached to a selfie stick within less than five seconds. Then the collected images observe a large-scale scene that surrounds the viewpoints.
Figure 2. (a) When the camera trajectory is short relative to the scene size, the proposed balanced spherical grid (left) exhibits a uniform hitting rate for grid cells, whereas the conventional Cartesian grid (right) suffers from non-uniform ray-grid hits. The orange shade indicates the relative density of the hit count of the grid cells. (b) Experiments show that spherical coordinates achieve a nearly uniform ray-grid hit distribution, while the Cartesian coordinate is biased toward the center, especially when we use a fine-resolution grid.

We introduce new synthetic and real-world datasets of omnidirectional videos acquired from both indoor and outdoor scenes. Combined with Neural Radiance Fields (NeRF) [22], the images can train a neural volume that can render fine details or view-dependent effects without explicit 3D models. To this end, we present Egocentric Neural Radiance Fields, or EgoNeRF, which is the neural volume representation tailored to egocentric omnidirectional visual input. Although NeRF and its MLP-based variants show remarkable performance in view synthesis, they suffer from lengthy training and rendering times. Recent Cartesian feature grids can lead to faster convergence [7, 31] for rendering a bounded scene with an isolated object, but they have several limitations for our datasets, which mostly contain inside-out views of large scenes: (1) The uniform grid size, regardless of distance from the camera, is insufficient to represent fine details of near objects and extravagant for coarse integrated information from far objects. (2) The Cartesian grid suffers from non-uniform ray-grid hits in the egocentric scenario, as demonstrated in Fig. 2; thus, as pointed out in [31], prior arts need careful training strategies such as progressive scaling [7, 31] or a view-count-adaptive per-voxel learning rate [31]. EgoNeRF models the volume using a spherical coordinate system to cope with the aforementioned limitations. Figure 3 shows that EgoNeRF converges faster than MLP-based methods (NeRF [22] and mip-NeRF 360 [2]) and achieves higher performance than Cartesian grid methods (TensoRF [7] and DVGO [31]).

Figure 3. Training curve comparison in OmniBlender scenes.

Our spherical grid is designed to be balanced in any direction, which leads to a more efficient data structure for the large-scale environment. The naïve spherical grid contains high-valence vertices at the two poles, and, when adopted as a feature grid for neural volume rendering, the polar regions suffer from undesirable artifacts. We exploit a quasi-uniform angular grid by combining two spherical grids [18]. In the radial direction, the grid intervals increase exponentially, which not only allows our representation to cover large spaces but also makes each spherical frustum have a similar length in the angular and radial directions. We add an environment map at infinite depth, which is especially useful for outdoor environments with distant backgrounds such as skies. Last but not least, we propose an efficient hierarchical sampling method exploiting our density feature grid without maintaining an additional coarse density grid. We demonstrate that our proposed approach can lead to faster convergence and high-quality rendering with a small memory footprint in various scenarios for large-scale environments. EgoNeRF is expected to create a virtual rendering of large scenes from data captured by non-expert users, which cannot be easily modeled with 3D assets or conventional NeRF.
2. Related Works

Visualizing Omnidirectional View of Scenes Panoramic images are widely used in many applications for remote experiences. After being captured by photo-stitching apps or dedicated hardware, they allow users to rotate around the captured position. However, we need additional information to allow full 6 DoF movement in the scene. Prior works propose sophisticated camera rigs to capture spherical light fields [4–6, 26, 28]. Given multi-view images, they enable synthesizing images at novel viewpoints by reconstructing 3D meshes or multi-sphere images instead of the multi-plane images used for ordinary images. With additional depth information, recent works demonstrate novel view synthesis with a single panoramic image [12, 14]. The depth channel is acquired from an RGBD camera or approximated by coarse planar facades. In contrast, we assume more casual input, using a commodity 360° camera with two fish-eye lenses to capture a short video clip of the large-scale scene. A few works also explored the same setup [3, 16] and represented the scene with a deformed proxy mesh with texture maps using pre-trained neural networks for optical flow and depth estimation. Our pipeline is simpler as we train a neural network with the captured sequence of images without any pre-trained network. We combine the visualization pipeline for large-scale scenes with the NeRF formulation and can capture complex view-dependent effects and fine structures, unlike reconstructed textured meshes.

Practical Variants of NeRF NeRF [22] flourished in the field of novel view synthesis, showing photorealistic quality with its simple formulation. However, the original NeRF formulation exhibits clear drawbacks, such as lengthy training and rendering times, and the difficulty of deformation or scene edits. Many follow-up works exploded, overcoming the limitations in various aspects [1, 2, 8, 21, 25, 27, 30]. Here we specifically focus on practical extensions for fast rendering and training. NeRF represents a scene as a single MLP that maps coordinates to color and volume density. It is slow in rendering and optimization because the volume rendering requires multiple forward passes of the MLP. To accelerate the rendering speed, radiance can be represented with an explicit voxel grid storing features [13, 20, 35]. However, these methods train the network by distilling information from a pre-trained NeRF, which even lengthens the training time. More recent works exploit various data structures to directly optimize the feature grid [7, 10, 24, 31]. They have shown that employing an explicit feature grid achieves fast optimization without sacrificing quality. The feature grids are defined on the Cartesian coordinate system, which assumes a scene within a bounding box. These are not suitable for representing large-scale scenes whose viewpoints observe the outside of the captured locations.

The naïve strategy of choosing ray samples wastes most samples, which leads to slow convergence since many regions are either free space or occluded by other objects in the real world. To increase sample efficiency, the original NeRF [22] employs a hierarchical sampling strategy for the volume density and maintains two density MLPs for coarse and fine resolution, respectively. In the same context, Müller et al. [24] maintain additional multi-scale occupancy grids to skip ray marching steps. Hu et al. [15] allocate dense momentum voxels for valid sampling, and Sun et al. [31] also use an extra coarse density voxel grid.
Maintaining separate coarse feature grids or neural networks requires additional memory and increases the computational burden. We propose an efficient sampling strategy and quickly train a volume that represents a large-scale environment.

3. Feature Grid Representation for EgoNeRF
EgoNeRF utilizes feature grids to accelerate the neural volume rendering of NeRF. Feature grids in previous works employ a Cartesian coordinate system, which regularly partitions the volume along the x, y, z axes [13, 20, 35]. To better express the egocentric views captured from omnidirectional videos, we use a spherical coordinate system. We modify the spherical coordinates in both the angular and radial partitions to efficiently express outward views of the surrounding environment, as described in Sec. 3.1. For rendering and training, the values are interpolated from the feature grid, which can be further factorized to reduce memory and accelerate learning [7] (Sec. 3.2). With our balanced feature grid, individual cells receive a uniform hitting rate of rays.

3.1. Balanced Spherical Grid

Our balanced spherical grid is composed of an angular partition and a radial partition.

Angular Partitions The desirable angular partition should result in regular shapes and be easily parameterized. When we partition regularly on the angle parameters, the naïve spherical coordinate system results in irregular grid partitions, which severely distort the two polar regions. Existing regular partitions do not maintain an orthogonal-axis parameterization [11], which hinders further factorization. As a simple resolution, we only use the quasi-uniform half of the ordinary spherical coordinate system and combine two of them [18]. The two grids are referred to as the Yin grid and the Yang grid, respectively, which have identical shapes and sizes as shown in Fig. 1 (b) and Fig. 4 (a). Together they can cover the entire sphere with minimal overlap, similar to the two regions of a tennis ball. The Yin grid is defined as

$(\pi/4 \le \theta \le 3\pi/4) \cap (-3\pi/4 \le \phi \le 3\pi/4)$,   (1)

where $\theta$ is colatitude and $\phi$ is longitude. The axis of the other component grid, namely the Yang grid, is located at the equator of the Yin grid:

$\begin{pmatrix} x_{\text{Yin}} \\ y_{\text{Yin}} \\ z_{\text{Yin}} \end{pmatrix} = \mathbf{M} \begin{pmatrix} x_{\text{Yang}} \\ y_{\text{Yang}} \\ z_{\text{Yang}} \end{pmatrix}, \quad \mathbf{M} = \begin{pmatrix} -1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}$.   (2)

We discretize the angular grid of the Yin and Yang grids with $N^y_\theta$ and $N^y_\phi$ partitions for the $\theta^y$ and $\phi^y$ axes, respectively, where $y \in \{\text{Yin}, \text{Yang}\}$. The partition is uniform in angles, leading to the grid sizes

$\Delta\theta^y = \frac{\pi}{2}\frac{1}{N^y_\theta}, \quad \Delta\phi^y = \frac{3\pi}{2}\frac{1}{N^y_\phi}$.   (3)
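To make the Yin-Yang construction above concrete, the following is a minimal NumPy sketch (ours, not the authors' code) that tests whether a direction falls inside the Yin region of Eq. (1) and maps Yang-grid angles into the global frame with the matrix M of Eq. (2). The function names and the example resolutions are our own assumptions.

```python
import numpy as np

# Rotation mapping Yang-grid Cartesian coordinates into the global (Yin) frame, Eq. (2).
M = np.array([[-1.0, 0.0, 0.0],
              [ 0.0, 0.0, 1.0],
              [ 0.0, 1.0, 0.0]])

def in_yin_region(theta, phi):
    """Membership test of Eq. (1): colatitude theta and longitude phi in radians."""
    return (np.pi / 4 <= theta <= 3 * np.pi / 4) and (-3 * np.pi / 4 <= phi <= 3 * np.pi / 4)

def yang_to_global(theta_yang, phi_yang):
    """Convert Yang-grid spherical angles to a unit vector in the global frame."""
    v = np.array([np.sin(theta_yang) * np.cos(phi_yang),
                  np.sin(theta_yang) * np.sin(phi_yang),
                  np.cos(theta_yang)])
    return M @ v

# Uniform angular cell sizes of Eq. (3) for hypothetical resolutions N_theta, N_phi.
N_theta, N_phi = 64, 192
d_theta = (np.pi / 2) / N_theta
d_phi = (3 * np.pi / 2) / N_phi
print(in_yin_region(np.pi / 2, 0.0), d_theta, d_phi)
```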
Radial Partitions By adopting the spherical coordinate system, the grid cells cover larger regions as r increases. This is desired in the egocentric setup, as the panoramic images capture more detailed close-by views of central objects while distant objects occupy a small area on the projected images. We further make the grid along the r axis increase exponentially for far regions such that the resulting cells exhibit similar lengths in the angular and radial directions. Specifically, if we denote the radial scales of both the Yin and Yang grids as $r^y$,

$r^y_i = r_0 k^{i-1}, \quad R_{\max} = r_0 k^{N^y_r - 1}$,   (4)

where $R_{\max}$ is the radius of the scene bounding sphere and the constant value $r_0$ is the radius of the first spherical shell. We set the grid interval to $r_0$ wherever the exponential spacing would be less than $r_0$. We can optionally use an environment map for outdoor or large indoor environments. Our spherical grid is still bounded by $R_{\max}$, limiting the size of the environment. The environment map, denoted as $\mathcal{E} \in \mathbb{R}^{H \times W \times 3}$, is a simple equirectangular image and represents what is visible at an almost infinite distance.

Figure 4. Overview of our method. (a) We represent radiance fields as features stored in balanced feature grids $\mathcal{G}_\sigma, \mathcal{G}_a$, (b) which are further decomposed into vector and matrix components. (c) The hierarchical sampling is conducted by obtaining a coarse density grid from the density feature grid on the fly during optimization. (d) The balanced feature grids are optimized with a photometric loss.

3.2. Feature Grid as Radiance Field

Now we describe our radiance field representation with the balanced spherical feature grid. Given a set of egocentric images with corresponding camera parameters, EgoNeRF aims to reconstruct a 3D scene representation and synthesize novel view images. Instead of regressing the volume density $\sigma$ and color $c$ from an MLP [22], we build explicit feature grids of the density $\mathcal{G}_\sigma$ and the appearance $\mathcal{G}_a$, which serve as the mapping function. Both grids are composed of our balanced spherical grids of resolution $2N^y_r \times N^y_\theta \times N^y_\phi$, as defined in Sec. 3.1. The density grid $\mathcal{G}_\sigma \in \mathbb{R}^{2N^y_r \times N^y_\theta \times N^y_\phi}$ has a single channel which stores the explicit volume density value, and the appearance grid $\mathcal{G}_a \in \mathbb{R}^{2N^y_r \times N^y_\theta \times N^y_\phi \times C}$ stores C-dimensional neural appearance features. The volume density and color at position $\mathbf{x}$ and viewing direction $\mathbf{d}$ are obtained by

$\sigma(\mathbf{x}) = \mathcal{T}(\mathcal{G}_\sigma, \mathbf{x}), \quad c(\mathbf{x}, \mathbf{d}) = f_{\text{MLP}}(\mathcal{T}(\mathcal{G}_a, \mathbf{x}), \mathbf{d})$,   (5)

where $\mathcal{T}$ denotes trilinear interpolation, and $f_{\text{MLP}}$ is a tiny MLP that decodes the neural feature to color. Inspired by [7], we further decompose the feature tensor into vectors and matrices as shown in Fig. 4 (b):

$\mathcal{G}^y_\sigma = \sum_{n=1}^{N_\sigma} \left( \mathbf{v}^{y,R}_{\sigma,n} \otimes \mathbf{M}^{y,\Theta\Phi}_{\sigma,n} + \mathbf{v}^{y,\Theta}_{\sigma,n} \otimes \mathbf{M}^{y,\Phi R}_{\sigma,n} + \mathbf{v}^{y,\Phi}_{\sigma,n} \otimes \mathbf{M}^{y,R\Theta}_{\sigma,n} \right) = \sum_{n=1}^{N_\sigma} \sum_{m \in R\Theta\Phi} \mathcal{A}^{y,m}_{\sigma,n}$,   (6)

$\mathcal{G}^y_a = \sum_{n=1}^{N_a} \left( \mathcal{A}^{y,R}_{a,n} \otimes \mathbf{b}^y_{3n-2} + \mathcal{A}^{y,\Theta}_{a,n} \otimes \mathbf{b}^y_{3n-1} + \mathcal{A}^{y,\Phi}_{a,n} \otimes \mathbf{b}^y_{3n} \right)$,   (7)

$\mathcal{G}_\sigma = \bigcup_{y \in Y} \mathcal{G}^y_\sigma, \quad \mathcal{G}_a = \bigcup_{y \in Y} \mathcal{G}^y_a, \quad Y = \{\text{Yin}, \text{Yang}\}$,   (8)

where $\otimes$ represents the outer product and $\mathbf{v}, \mathbf{b}, \mathbf{M}$ represent vector and matrix factors. This low-rank tensor factorization significantly reduces the space complexity from $O(n^3)$ to $O(n^2)$. With the minimal overhead of storing two grids, we can maintain regular angular components and yet factorize the grid using spherical parameterization. The full decomposed formulation is described in the supplementary material.
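As an illustration of the vector-matrix decomposition in Eq. (6), the NumPy sketch below (ours) reconstructs one component grid from rank-one factors. The resolutions and the random factors are placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical per-component resolutions (R, Theta, Phi) and rank N_sigma.
Nr, Nt, Np_, N_sigma = 32, 64, 192, 4
rng = np.random.default_rng(0)

# Vector and matrix factors v^{y,*}_{sigma,n} and M^{y,*}_{sigma,n} of Eq. (6).
v_R  = rng.standard_normal((N_sigma, Nr))
v_T  = rng.standard_normal((N_sigma, Nt))
v_P  = rng.standard_normal((N_sigma, Np_))
M_TP = rng.standard_normal((N_sigma, Nt, Np_))
M_PR = rng.standard_normal((N_sigma, Np_, Nr))
M_RT = rng.standard_normal((N_sigma, Nr, Nt))

# Sum of outer products, each aligned to the (R, Theta, Phi) axis order.
G_sigma = (np.einsum('nr,ntp->rtp', v_R, M_TP)
           + np.einsum('nt,npr->rtp', v_T, M_PR)
           + np.einsum('np,nrt->rtp', v_P, M_RT))
print(G_sigma.shape)  # (32, 64, 192): full grid stored as O(n^2) factors
```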
4. Training EgoNeRF

We utilize the balanced spherical grids to represent the volume density $\sigma$ and color $c$, which are stored in $\mathcal{G}_\sigma$ and $\mathcal{G}_a$, respectively. In this section, we describe the technical details of the optimization process of our proposed method.

4.1. Hierarchical Density Adaptation

As the scenes typically contain sparsely occupied regions, we adapt the hierarchical sampling strategy of the original NeRF [22] to feature grids. While other recent variants using feature grids [15, 24, 31] maintain a dedicated data structure for the coarse grid, we exploit our dense geometry feature grid $\mathcal{G}_\sigma$ for the first coarse sampling stage without allocating additional memory for a coarse grid. The hierarchical sampling strategy first samples $N_c$ coarse points along the ray to obtain a density estimate $\sigma$, from which we can sample $N_f$ fine points with importance sampling. However, evaluating $\sigma$ with the dense $\mathcal{G}_\sigma$ at the coarsely sampled points might skip important surface regions. Therefore, we obtain the $\sigma$ value from a coarser density feature grid, which can be obtained on the fly by applying a non-learnable convolution kernel K:

$\sigma(\mathbf{x}_{\text{coarse}}) = \mathcal{T}(\mathcal{G}^c_\sigma, \mathbf{x}_{\text{coarse}}) = \mathcal{T}(K * \mathcal{G}_\sigma, \mathbf{x}_{\text{coarse}})$.   (9)

We use an average pooling kernel as K. It is reasonable to define a coarse grid by convolving the dense grid because our density grid $\mathcal{G}_\sigma$ stores the volume density itself, which has a physical meaning, rather than neural features. From the volume density values of the coarsely sampled points, we calculate weights for importance sampling by

$w_i = \tau_i (1 - e^{-\sigma_i \delta_i}), \quad i \in [1, N_c]$,   (10)

where $\delta_i$ is the distance between coarse samples and $\tau_i = e^{-\sum_{j=1}^{i-1} \sigma_j \delta_j}$ is the accumulated transmittance. Then the $N_f$ fine locations are sampled from the filtered probability distribution. Finally, the volume density $\sigma$ and color $c$ at the $N_c + N_f$ samples are used to render pixels.

4.2. Optimization

The images of EgoNeRF are synthesized by applying the volume rendering equation along the camera ray [22] and the optional environment map. Specifically, the points $\mathbf{x}_i = \mathbf{o} + t_i \mathbf{d}$ along the camera ray from camera position $\mathbf{o}$ and ray direction $\mathbf{d}$ are accumulated to find the pixel value by

$\hat{C} = \sum_{i=1}^{N} \tau_i (1 - e^{-\sigma(\mathbf{x}_i) \delta_i}) c(\mathbf{x}_i, \mathbf{d}) + \tau_{N+1} c_{\text{env}}(\mathbf{d})$.   (11)

$N = N_c + N_f$ is the number of samples, as described in Sec. 4.1. $\sigma(\mathbf{x})$ and $c(\mathbf{x}, \mathbf{d})$ are obtained from our balanced feature grids in Eq. (5). Since the size of our feature grid increases exponentially along the r direction, we distribute the $N_c$ coarse samples exponentially rather than uniformly. The second term in Eq. (11) is fetched from the environment map,

$c_{\text{env}}(\mathbf{d}) = \mathcal{E}(u, v; \mathbf{d})$,   (12)

where the sampling position $(u, v)$ depends only on the viewing direction $\mathbf{d}$. The effect of the environment map is further discussed in Sec. 5.3. Finally, we optimize the photometric loss between rendered images and training images,

$\mathcal{L} = \frac{1}{|\mathcal{R}|} \sum_{r \in \mathcal{R}} \left\| \hat{C}(r) - C(r) \right\|^2_2$,   (13)

where $\mathcal{R}$ is a randomly sampled ray batch and $\hat{C}(r), C(r)$ are the rendered and ground-truth colors of the pixel corresponding to ray $r$. With this simple photometric loss, our feature grids $\mathcal{G}_\sigma, \mathcal{G}_a$, decoding MLP $f_{\text{MLP}}$, and environment map $\mathcal{E}$ are jointly optimized. For real-world datasets, in which camera poses are not perfect, we additionally optimize a TV loss [29] on our feature grid to reduce noise. Furthermore, since our balanced feature grid guarantees a nearly uniform ray-grid hitting rate, EgoNeRF does not need the coarse-to-fine reconstruction approach used for robust optimization in other feature grid-based methods [7, 31].
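The following minimal NumPy sketch (ours, not the released implementation) walks through Eqs. (10)-(11): given densities and colors at samples along one ray, it computes the accumulated transmittance, the importance-sampling weights, and the rendered pixel color with the environment-map term appended at infinity.

```python
import numpy as np

def render_ray(sigma, color, delta, c_env):
    """sigma: (N,) densities, color: (N, 3), delta: (N,) sample spacings, c_env: (3,)."""
    alpha = 1.0 - np.exp(-sigma * delta)                  # per-sample opacity
    # Accumulated transmittance tau_i = exp(-sum_{j<i} sigma_j delta_j); tau_1 = 1.
    tau = np.concatenate([[1.0], np.exp(-np.cumsum(sigma * delta))])
    weights = tau[:-1] * alpha                            # Eq. (10) weights, reused in Eq. (11)
    pixel = (weights[:, None] * color).sum(axis=0) + tau[-1] * c_env  # Eq. (11)
    return pixel, weights

# Toy example with made-up samples along a single ray.
N = 8
rng = np.random.default_rng(1)
pixel, w = render_ray(rng.uniform(0, 2, N), rng.uniform(0, 1, (N, 3)),
                      np.full(N, 0.1), np.array([0.5, 0.6, 0.7]))
print(pixel, w.sum())
```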
5. Experiments

We demonstrate that EgoNeRF can quickly capture and synthesize novel views of large-scale scenes. We describe full implementation details, including the hyperparameter setup, in the supplementary material.

Datasets Since many of the existing datasets for NeRF are dedicated to a setup where a bounded object is captured from outside-in viewpoints, we propose new synthetic and real datasets of large-scale environments captured with omnidirectional videos. OmniBlender is a realistic synthetic dataset of 11 large-scale scenes with detailed textures and sophisticated geometries in both indoor and outdoor environments, with 25 images for both train and test, respectively. It consists of omnidirectional images along a relatively small circular camera path. The spherical images are rendered using Blender's Cycles path tracing renderer [9] at 2000×1000 resolution. Ricoh360 is a real-world 360° video dataset captured with a Ricoh Theta V camera at 1920×960 resolution. We record video on a circular path by rotating an omnidirectional camera fixed to a selfie stick, as shown in Fig. 1 (a). The dataset consists of 11 diverse indoor and outdoor scenes, with 50 images for train and
Chen_SeqTrack_Sequence_to_Sequence_Learning_for_Visual_Object_Tracking_CVPR_2023
Abstract In this paper, we present a new sequence-to-sequence learning framework for visual tracking, dubbed SeqTrack. It casts visual tracking as a sequence generation problem, which predicts object bounding boxes in an autoregressive fashion. This is different from prior Siamese trackers and transformer trackers, which rely on designing complicated head networks, such as classification and regression heads. SeqTrack only adopts a simple encoder-decoder transformer architecture. The encoder extracts visual features with a bidirectional transformer, while the decoder generates a sequence of bounding box values autoregressively with a causal transformer. The loss function is a plain cross-entropy. Such a sequence learning paradigm not only simplifies the tracking framework, but also achieves competitive performance on benchmarks. For instance, SeqTrack gets 72.5% AUC on LaSOT, establishing a new state-of-the-art performance. Code and models are available at https://github.com/microsoft/VideoX.
1. Introduction

Visual object tracking is a fundamental task in computer vision. It aims to estimate the position of an arbitrary target in a video sequence, given only its location in the initial frame. Existing tracking approaches commonly adopt a divide-and-conquer strategy, which decomposes the tracking problem into multiple subtasks, such as object scale estimation and center point localization. Each subtask is addressed by a specific head network. For example, SiamRPN [27] and its follow-up works [3, 7, 48, 55, 58] adopt classification heads for object localization and regression heads for scale estimation, as sketched in Fig. 1(a). STARK [53] and transformer-based trackers [4, 10, 17, 44] design corner head networks to predict the bounding box corners of target objects, as visualized in Fig. 1(b).

†Corresponding authors: Houwen Peng (houwen.peng@microsoft.com), Dong Wang (wdice@dlut.edu.cn).

Figure 1. Comparison of tracking frameworks. (a) The framework with an object classification head and a bounding box regression head. (b) The framework with corner prediction heads. (c) Sequence-to-sequence tracking framework without complicated head networks.

Such a divide-and-conquer strategy has demonstrated superior performance on tracking benchmarks and has thereby become the mainstream design in existing models. However, two deficiencies still exist. First, each subtask requires a customized head network, leading to a complicated tracking framework. Second, each head network requires one or more learning loss functions, e.g., cross-entropy loss [7, 27], ℓ1 loss [7, 27, 53, 55], and generalized IoU loss [7, 53, 55], which make training difficult due to extra hyperparameters. To address these issues, in this paper we propose a new Sequence-to-sequence Tracking (SeqTrack) framework, as shown in Fig. 1(c). By modeling tracking as a sequence generation task, SeqTrack gets rid of complicated head networks and redundant loss functions. It is based upon the intuition that if the model knows where the target object is, we could simply teach it how to read the bounding box out, rather than explicitly performing additional classification and regression using a divide-and-conquer strategy. To this end, we convert the four values of a bounding box into a sequence of discrete tokens and make the model learn to generate this sequence token by token. We adopt a simple encoder-decoder transformer to model the generation. The encoder extracts visual features of video frames, while the decoder generates the sequence of bounding box values using the extracted features. The generation is executed in an autoregressive fashion, which means the model generates a token depending on previously observed ones. At each step, a newly generated token value is fed back into the model to produce the next one.
We impose a causal mask on the self-attention modules in the decoder to prevent tokens from attending to subsequent tokens. Such a causal masking mechanism ensures that the generation of the token at position i only depends on its preceding tokens at positions less than i. The visual features are integrated into the decoder through cross-attention layers [46]. The generation ends when the model outputs the four token values of the bounding box. The output sequence is directly used as the result (a minimal decoding sketch is given after the contribution summary below). Experiments demonstrate that our SeqTrack method is effective, achieving new state-of-the-art performance on several tracking benchmarks. For instance, SeqTrack-B256 obtains a 74.7% AO score on GOT-10k [20], outperforming the recent OSTrack-256 tracker [55] by 3.7% under aligned settings, i.e., using the same encoder architecture and input resolution. Moreover, compared to the recent state-of-the-art tracker MixFormer [10], SeqTrack-B256 runs 1.4 times faster (40 vs. 29 fps) while achieving a 0.7% higher AUC score on LaSOT [16]. It is worth noting that all these prior methods rely heavily on well-designed head networks and the corresponding complicated loss functions [30, 41]. In contrast, our SeqTrack only adopts a plain encoder-decoder transformer architecture with a simple cross-entropy loss. In summary, the contributions of this work are two-fold:

• We propose a sequence-to-sequence learning method for visual tracking. It casts tracking as a generation task, which offers a new perspective on tracking modeling.

• We present a new family of sequence tracking models, which strike a good trade-off between speed and accuracy. Experiments verify the efficacy of the new models.
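To illustrate the sequence-generation formulation described above, here is a minimal, framework-agnostic Python sketch (ours, not the released SeqTrack code) of greedy autoregressive decoding of the four box tokens. The `decoder_step` callable, the vocabulary size, and the special tokens are hypothetical stand-ins for the causal transformer decoder.

```python
import numpy as np

N_BINS = 4000                      # hypothetical quantization bins for x, y, w, h
START, END = N_BINS, N_BINS + 1    # hypothetical special tokens

def quantize_box(box, img_size):
    """Map an (x, y, w, h) box in pixels to four discrete tokens."""
    return [min(int(v / img_size * N_BINS), N_BINS - 1) for v in box]

def decode_box(visual_features, decoder_step):
    """Greedy autoregressive decoding: feed each generated token back as input.

    decoder_step(visual_features, tokens) -> logits over the token vocabulary;
    it stands in for a causal transformer decoder with cross-attention.
    """
    tokens = [START]
    for _ in range(4):                       # x, y, w, h
        logits = decoder_step(visual_features, tokens)
        tokens.append(int(np.argmax(logits)))
    return tokens[1:]                        # the predicted box tokens

# Toy run with a dummy decoder that always predicts token 0.
dummy = lambda feats, toks: np.eye(N_BINS + 2)[0]
print(decode_box(np.zeros((16, 256)), dummy))
```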
Fang_Self-Supervised_Non-Uniform_Kernel_Estimation_With_Flow-Based_Motion_Prior_for_Blind_CVPR_2023
Abstract Many deep learning-based solutions to blind image deblurring estimate the blur representation and reconstruct the target image from its blurry observation. However, these methods suffer from severe performance degradation in real-world scenarios because they ignore important prior information about motion blur (e.g., real-world motion blur is diverse and spatially varying). Some methods have attempted to explicitly estimate non-uniform blur kernels with CNNs, but accurate estimation is still challenging due to the lack of ground truth for spatially varying blur kernels in real-world images. To address these issues, we propose to represent the field of motion blur kernels in a latent space by normalizing flows, and design CNNs to predict the latent codes instead of motion kernels. To further improve the accuracy and robustness of non-uniform kernel estimation, we introduce uncertainty learning into the process of estimating latent codes and propose a multi-scale kernel attention module to better integrate image features with estimated kernels. Extensive experimental results, especially on real-world blur datasets, demonstrate that our method achieves state-of-the-art results in terms of both subjective and objective quality as well as excellent generalization performance for non-uniform image deblurring. The code is available at https://see.xidian.edu.cn/faculty/wsdong/Projects/UFPNet.htm.
1. Introduction

Blind single image deblurring is a classic low-level vision problem that aims to recover the unknown sharp image from its observed blurry image without knowing the blur kernel. The uniform degradation model assumes that a blurry image is generated by a spatially invariant convolution process, which can be mathematically formulated as

$\mathbf{y} = \mathcal{B}(\mathbf{x}, \mathbf{k}) + \mathbf{n}$,   (1)

where $\mathbf{x}$ and $\mathbf{y}$ are the sharp image and the blurry image, respectively, $\mathcal{B}(\cdot, \mathbf{k})$ represents the blurring operator with the blur kernel $\mathbf{k}$, and $\mathbf{n}$ denotes additive Gaussian noise.

*Corresponding author

Figure 1. Non-uniform kernel estimation and deblurring results of the proposed UFPNet on the RealBlur-J dataset. Panels: Blurry, DeepRFT, Stripformer, MSDI-Net, NAFNet, UFPNet (Ours).

The simple case assumes the blur operation in Eq. (1) is uniform and the corresponding blur kernel is shift-invariant [11, 43]. Several methods have been proposed to estimate the blur kernel and the sharp image simultaneously [6, 34, 42]. However, in the real world, there are several factors that can cause blur degradation, such as camera shake and object movement. Although camera shake usually causes uniform and global background blurring, fast-moving objects often produce local blurring against a stationary background [50]. Therefore, the uniform blur in Eq. (1) is inappropriate for characterizing local blurring in the real world.

Traditional approaches to blind image deblurring first estimate the underlying blur kernels and then reconstruct the sharp image by iterative optimization [12, 28, 40, 44]. To constrain the solution space, both image- and blur-related priors are exploited. In [29], the dark channel prior is used to estimate the blur kernel and reconstruct the sharp image. In [45], a novel extreme channel prior is proposed to facilitate the process of simultaneous image and kernel estimation. More recently, deep learning-based solutions have been proposed for blind image deblurring. Existing methods can be categorized into two classes. One class explicitly estimates the non-uniform blur kernel using convolutional neural networks (CNNs) [1, 2, 33, 37]. The other class uses CNNs to directly reconstruct the original sharp image end-to-end without estimating the blur kernel [7, 19, 23, 26, 31, 46–48, 51]. The DeepDeblur method [27] designs a multi-scale CNN to mimic conventional coarse-to-fine optimization and directly restores sharp images without assuming any restricted blur kernel model. SRN [38] proposes a scale-recurrent network with an encoder-decoder ResBlocks structure at each scale. Kupyn et al. propose DeblurGAN [19] and DeblurGAN-v2 [20] to reconstruct sharp images by adversarial training.
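As a concrete reading of the uniform degradation model in Eq. (1) above, the sketch below (ours, using SciPy) convolves a sharp image with a single shift-invariant kernel and adds Gaussian noise; the kernel and noise level are arbitrary choices for illustration.

```python
import numpy as np
from scipy.ndimage import convolve

def blur_uniform(x, k, noise_std=0.01, seed=0):
    """Eq. (1): y = B(x, k) + n with a spatially invariant kernel k."""
    rng = np.random.default_rng(seed)
    y = convolve(x, k, mode='reflect')                   # B(x, k): same kernel at every pixel
    return y + rng.normal(0.0, noise_std, x.shape)       # additive Gaussian noise n

# Toy example: a 9x9 horizontal motion-blur kernel applied to a random grayscale image.
k = np.zeros((9, 9)); k[4, :] = 1.0 / 9.0
x = np.random.default_rng(1).uniform(0, 1, (64, 64))
y = blur_uniform(x, k)
print(y.shape)
```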
Unfortunately, both types of methods mentioned above have fundamental limitations. First, since the characteristics of blur in real scenarios are complex, accurate estimation of a non-uniform (i.e., spatially varying) blur kernel is challenging. For example, there exists an inevitable uncertainty in kernel estimation because a blurry image may have multiple kernel candidates due to its ill-posed nature. Therefore, incorrect blur kernels will lead to severe performance degradation in real-world image deblurring. Second, end-to-end methods ignore the information of the motion prior: the formation of image blur is usually associated with the motion trajectory of the camera and objects, which can be exploited effectively for image deblurring. The above observations inspire us to tackle the problem of blind image deblurring from a different perspective. The motivation for our work is threefold. On the one hand, since there is no ground truth for the blur kernels of real blur datasets, we attempt to simulate non-uniform motion kernels to facilitate kernel estimation in a self-supervised manner. On the other hand, we advocate a latent space approach to non-uniform blur kernel estimation, which is inspired by recent work on normalizing flows [13, 14, 16, 25]. Third, we introduce uncertainty learning into the process of estimating the latent code, aiming to improve both the accuracy and robustness of non-uniform kernel estimation.

In this paper, we propose to model the spatially varying motion blur prior by introducing normalizing flows and uncertainty learning in the latent space of kernel estimation. To address the issue of non-uniform blur that varies from pixel to pixel, we propose to represent the motion blur kernels in a latent space by a normalizing flow and design CNNs to predict spatially varying latent codes instead of motion kernels. This latent space approach can be interpreted as a generalization of the existing flow-based kernel prior (FKP) [24] from uniform to non-uniform blur by incorporating kernel generation from simulated random trajectories (e.g., DeblurGAN [19]). To further improve the accuracy and robustness of kernel estimation, we introduce uncertainty learning into the process of estimating latent codes and propose a multi-scale kernel attention module to better integrate image features with estimated kernels (a minimal sketch of this latent-code formulation follows the contribution list below). The technical contributions of this paper are listed below.

• We propose to represent the non-uniform motion blur kernels in a latent space by normalizing flow. Our latent space approach allows CNNs to predict spatially varying latent codes rather than motion kernels. For the first time, we show how to estimate spatially varying motion blur on a pixel-by-pixel basis.

• To further improve performance and robustness, we introduce uncertainty learning into the latent code estimation process. The network learns the variance of the latent code to quantify the corresponding uncertainty, which leads to a more accurate prediction than a deterministic model.

• We propose a novel multi-scale kernel attention module to integrate image features and kernel information, which can be plugged into encoder-decoder architectures to incorporate the estimated kernels with the deblurring network.

• In view of the lack of ground truth for non-uniform motion kernels in real-world images, we tackle training set generation in a self-supervised manner. Extensive experimental results on benchmark datasets show that the proposed method significantly outperforms existing state-of-the-art methods and demonstrates excellent generalization performance from GoPro to other real-world blur datasets.
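The following is a heavily simplified sketch (ours) of the latent-code idea behind the first two contributions: a network predicts a per-pixel mean and variance for the kernel latent code, a sample is drawn with the reparameterization trick to model uncertainty, and a pretrained flow (mocked here) maps the latent code back to a blur kernel. All module names are placeholders, not the UFPNet API.

```python
import numpy as np

def sample_latent(mu, log_var, seed=0):
    """Reparameterized sample z = mu + sigma * eps, modeling latent-code uncertainty."""
    rng = np.random.default_rng(seed)
    return mu + np.exp(0.5 * log_var) * rng.standard_normal(mu.shape)

def kernels_from_latents(z, flow_inverse, kernel_size=19):
    """Decode per-pixel latent codes into per-pixel blur kernels via a flow's inverse.

    flow_inverse: placeholder for a pretrained normalizing flow mapping latent
    space -> kernel space (softmax keeps each kernel non-negative and normalized).
    """
    k = flow_inverse(z)                                   # (H, W, kernel_size**2)
    k = np.exp(k - k.max(axis=-1, keepdims=True))
    k /= k.sum(axis=-1, keepdims=True)
    return k.reshape(*z.shape[:2], kernel_size, kernel_size)

# Toy run: 8x8 latent map with 16-dim codes and a mock linear "flow".
H, W, D, K = 8, 8, 16, 19
mu, log_var = np.zeros((H, W, D)), np.full((H, W, D), -2.0)
mock_flow = lambda z: z @ np.random.default_rng(2).standard_normal((D, K * K))
print(kernels_from_latents(sample_latent(mu, log_var), mock_flow).shape)
```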
Chen_Generative_Semantic_Segmentation_CVPR_2023
Abstract We present Generative Semantic Segmentation (GSS), a generative learning approach for semantic segmentation. Uniquely, we cast semantic segmentation as an image-conditioned mask generation problem. This is achieved by replacing the conventional per-pixel discriminative learning with a latent prior learning process. Specifically, we model the variational posterior distribution of latent variables given the segmentation mask. To that end, the segmentation mask is expressed with a special type of image (dubbed a maskige). This posterior distribution allows generating segmentation masks unconditionally. To achieve semantic segmentation on a given image, we further introduce a conditioning network. It is optimized by minimizing the divergence between the posterior distribution of maskige (i.e., segmentation masks) and the latent prior distribution of input training images. Extensive experiments on standard benchmarks show that our GSS can perform competitively with prior art alternatives in the standard semantic segmentation setting, whilst achieving a new state of the art in the more challenging cross-domain setting.
1. Introduction

The objective of semantic segmentation is to predict a label for every single pixel of an input image [32]. Conditioning on each pixel's observation, existing segmentation methods [4, 9, 50, 56] naturally adopt the discriminative learning paradigm, along with dedicated efforts on integrating task prior knowledge (e.g., spatial correlation) [9, 23, 46, 56]. For example, existing methods [4, 50, 56] typically use a linear projection to optimize the log-likelihood classification for each pixel. Despite the claim of subverting per-pixel classification, bipartite matching-based semantic segmentation [8, 9] still cannot avoid the per-pixel max log-likelihood.

*Li Zhang (lizhangfd@fudan.edu.cn) is the corresponding author with School of Data Science, Fudan University.

Figure 1. Schematic comparison between (a) conventional discriminative learning and (b) our generative learning based model for semantic segmentation. Our GSS introduces a latent variable z and, given the segmentation mask c, it learns the posterior distribution of z subject to the reconstruction constraint. Then, we train a conditioning network to model the prior of z by aligning with the corresponding posterior distribution. This formulation can thus generate the segmentation mask for an input image.

In this paper, we introduce a new approach, Generative Semantic Segmentation (GSS), that formulates semantic segmentation as an image-conditioned mask generation problem. This conceptually differs from the conventional formulation of discriminative per-pixel classification learning, based on the log-likelihood of a conditional probability (i.e., the classification probability of image pixels). Taking the manner of image generation instead [24, 44], we generate whole segmentation masks with an auxiliary latent variable distribution introduced. This formulation is not only simple and more task-agnostic, but also facilitates the exploitation of off-the-shelf big generative models (e.g., DALL·E [39], trained with 3 billion iterations on a 300 million open-image dataset, far beyond both the data scale and training cost of semantic segmentation).

However, achieving semantic segmentation in a generic generation framework (e.g., the Transformer architecture [15]) is non-trivial due to drastically different data formats. To address this obstacle, we propose the notion of a maskige that expresses the segmentation mask in RGB image form (see the sketch at the end of this introduction). This enables the use of a pretrained latent posterior distribution (e.g., VQVAE [39]) of existing generative models. Our model takes a two-stage optimization: (i) Learning the posterior distribution of the latent variables conditioned on the semantic segmentation masks so that the latent variables can simulate the target segmentation masks. To achieve this, we introduce a fixed pre-trained VQVAE [39] and a couple of lightweight transformation modules, which can be trained with minimal cost or manually set up without requiring any additional training. In either case, the process is efficient and does not add significant overhead to the overall optimization.
(ii) Minimizing the distance between the posterior distribution and the prior distribution of the latent variables given the input training images and their masks, enabling the generation of semantic masks to be conditioned on the input images. This can be realized by a generic encoder-decoder style architecture (e.g., a Transformer).

We summarize the contributions as follows. (i) We propose a Generative Semantic Segmentation approach that reformulates semantic segmentation as an image-conditioned mask generation problem. This represents a conceptual shift from the conventional discriminative learning based paradigm. (ii) We realize a GSS model in an established conditional image generation framework, with minimal need for task-specific architecture and loss function modifications while fully leveraging the knowledge of off-the-shelf generative models. (iii) Extensive experiments on several semantic segmentation benchmarks show that our GSS is competitive with prior art models in the standard setting, whilst achieving a new state of the art in the more challenging and practical cross-domain setting (e.g., MSeg [26]).
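To make the maskige notion concrete, the sketch below (ours, not the GSS code) maps a class-index mask to an RGB-like maskige with a fixed class-to-color table and recovers the mask by nearest-color lookup; this mirrors the kind of lightweight, training-free transformation module mentioned above, with an arbitrary palette of our own choosing.

```python
import numpy as np

def make_palette(num_classes, seed=0):
    """A fixed, randomly chosen class -> RGB table (one possible training-free choice)."""
    return np.random.default_rng(seed).integers(0, 256, (num_classes, 3)).astype(np.float32)

def mask_to_maskige(mask, palette):
    """Express an (H, W) class-index mask as an (H, W, 3) RGB-like maskige."""
    return palette[mask]

def maskige_to_mask(maskige, palette):
    """Recover the class-index mask by nearest-color lookup against the palette."""
    d = ((maskige[..., None, :] - palette[None, None]) ** 2).sum(-1)  # (H, W, num_classes)
    return d.argmin(-1)

palette = make_palette(num_classes=21)
mask = np.random.default_rng(1).integers(0, 21, (4, 4))
assert (maskige_to_mask(mask_to_maskige(mask, palette), palette) == mask).all()
```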
Jiang_Instant-NVR_Instant_Neural_Volumetric_Rendering_for_Human-Object_Interactions_From_Monocular_CVPR_2023
Abstract Convenient 4D modeling of human-object interactions is essential for numerous applications. However, monocular tracking and rendering of complex interaction scenarios remain challenging. In this paper, we propose Instant-NVR, a neural approach for instant volumetric human-object tracking and rendering using a single RGBD camera. It bridges traditional non-rigid tracking with recent instant radiance field techniques via a multi-thread tracking-rendering mechanism. In the tracking front-end, we adopt a robust human-object capture scheme to provide sufficient motion priors. We further introduce a separated instant neural representation with a novel hybrid deformation module for the interacting scene. We also provide an on-the-fly reconstruction scheme of the dynamic/static radiance fields via efficient motion-prior searching. Moreover, we introduce an online key frame selection scheme and a rendering-aware refinement strategy to significantly improve the appearance details for online novel-view synthesis. Extensive experiments demonstrate the effectiveness and efficiency of our approach for the instant generation of human-object radiance fields on the fly, notably achieving real-time photo-realistic novel view synthesis under complex human-object interactions. Project page: https://nowheretrix.github.io/Instant-NVR/.
1. Introduction

Accurate tracking and photo-realistic rendering of human-object interactions are critical for numerous human-centric applications like telepresence, tele-education, or immersive experiences in VR/AR. However, a convenient solution from monocular input, especially for an on-the-fly setting, remains extremely challenging in the vision community. Early high-end solutions [6, 9, 13, 18] require dense cameras for high-fidelity reconstruction. Recent approaches [11, 12, 17, 46, 47, 59, 63] need fewer RGB or RGBD video inputs (from 3 to 8 views) by using volumetric tracking techniques [19, 32]. Yet, the multi-view setting is still undesirable for consumer-level daily usage. Differently, the monocular method with a single, handy commercial RGBD camera is more practical and attractive.

*Equal Contribution.

Figure 1. Our Instant-NVR adopts a separated instant neural representation to achieve photo-realistic rendering for human-object interacting scenarios.

For monocular human-object modeling, most approaches [2, 15, 53, 57, 65, 66] track the rigid and skeletal motions of the object and human using a pre-scanned template or parametric model. Besides, the monocular volumetric methods [32, 41, 43, 58, 64] obtain detailed geometry through depth fusion, while the recent advance [44] further extends it to the human-object setting. However, they fail to generate realistic appearance results, restricted by the limited geometry resolution. Recent neural rendering advances, represented by Neural Radiance Fields (NeRF) [29], have recently enabled photo-realistic rendering with dense-view supervision. Notably, some recent dynamic variants of NeRF [21, 28, 50, 51, 55, 60, 67] obtain compelling novel-view synthesis of human activities even under monocular capturing. However, they rely on tedious and time-consuming per-scene training to fuse the temporal observations into the canonical space, and are thus unsuitable for on-the-fly usage like telepresence. Only recently, Instant-NGP [30] has enabled fast radiance field generation in seconds, bringing the possibility of on-the-fly radiance field modeling. Yet, the original Instant-NGP can only handle static scenes. Few researchers explore on-the-fly neural rendering strategies for human-object interactions, especially in the monocular setting.

In this paper, we present Instant-NVR, an instant neural volumetric rendering system for human-object interacting scenes using a single RGBD camera. As shown in Fig. 1, Instant-NVR enables instant photo-realistic novel view synthesis via on-the-fly generation of the radiance fields for both the rigid object and the dynamic human. Our key idea is to bridge traditional volumetric non-rigid tracking with instant radiance field techniques. Analogous to the tracking-mapping design in SLAM, we adopt a multi-thread tracking-rendering mechanism. The tracking front-end provides online motion estimates of both the performer and the object, while the rendering back-end reconstructs the radiance fields of the interaction scene to provide instant novel view synthesis with photo-realism. For the tracking front-end, we first utilize off-the-shelf instant segmentation to distinguish the human and the object from the input RGBD stream.
Then, we adopt an efficient non-rigid tracking scheme for both the performer and the rigid object, where we use both embedded deformation [45] and SMPL [27] to model human motions. For the rendering back-end, inspired by Instant-NGP [30], we adopt a separated instant neural representation. Specifically, both the dynamic performer and the static object are represented as implicit radiance fields with multi-scale feature hashing in the canonical space and share volumetric rendering for novel view synthesis. For the dynamic human, we further introduce a hybrid deformation module to efficiently utilize the non-rigid motion priors. Then, we modify the training process of the radiance fields into a key-frame based setting, so as to enable gradual and on-the-fly optimization of the radiance fields within the rendering thread. For the dynamic one, we further propose to accelerate our hybrid deformation module with a hierarchical and GPU-friendly strategy for motion-prior searching. Yet we observe that naively selecting key-frames at fixed time intervals causes an uneven distribution of the captured regions of the dynamic scene. This results in unbalanced radiance field optimization and severe appearance artifacts during free-view rendering. To that end, we propose an online key-frame selection scheme with a rendering-aware refinement strategy. It jointly considers the visibility and motion distribution across the selected key-frames, achieving real-time and photo-realistic novel-view synthesis for human-object interactions. To summarize, our main contributions include:

• We present the first instant neural rendering system under human-object interactions from an RGBD sensor.

• We introduce an on-the-fly reconstruction scheme for dynamic/static radiance fields using the motion priors through a tracking-rendering mechanism.

• We introduce an online key frame selection scheme and a rendering-aware refinement strategy to significantly improve the online novel-view synthesis.

2. Related Work

Traditional Human Volumetric Capture. Human volumetric capture and reconstruction have been widely investigated to achieve detailed geometry reconstruction and accurate tracking. A series of works have been proposed to make volumetric fusion more robust with SIFT features [16], multi-view systems [11, 12], scene flow [54], a human articulated skeleton prior [62, 64], extra IMU sensors [70], data-driven priors [43, 44], learned correspondences [5], neural deformation graphs [4, 23], or implicit functions [17, 63]. Starting from the pioneering work DynamicFusion [32], which benefits from GPU solvers, the high-end solutions [11, 12] rely on multi-view camera systems and complex calibration. VolumeDeform [16] combines depth-based correspondences with sparse SIFT features to reduce drift. KillingFusion [41] and SobolevFusion [42] support topology changes via more constraints on the motion fields. Thanks to the human parametric model [27], DoubleFusion [64] proposes a two-layer representation to capture the scene more robustly. UnstructuredFusion [59] extends it to an unstructured multi-view setup. RobustFusion [44] further handles the challenging human-object interaction scenarios. Besides, Function4D [63] and NeuralHOFusion [17] marry non-rigid tracking with implicit modeling. However, these methods are dedicated to obtaining detailed geometry without focusing on high-quality texture, and most methods cannot handle human-object interactions.
Comparably, our approach bridges traditional volumetric capture and neural rendering advances, achieving photo-realistic rendering results under human-object interactions.

Static Neural Scene Representations. Coordinate-based neural scene representations of static scenes produce impressive novel view synthesis results and show huge potential. Various data representations are adopted to obtain better performance and characteristics, such as point clouds [1, 48, 56], voxels [26], textured meshes [25, 49], occupancy [33, 40], or SDF [34, 52]. Meanwhile, since the vanilla NeRF requires hours of training and is time-consuming, some NeRF extensions [30, 39, 61] are proposed to accelerate both training and rendering. Plenoctrees [61] utilizes an octree to skip the empty regions. Plenoxels [39] parameterizes the encoding using spherical harmonics on an explicit 3D volume. Instant-NGP [30] utilizes multi-scale feature hashing and TCNN to speed up. Though its rendering speed makes on-the-fly training seem possible, it has no specific design for streaming input and can only recover static scenes. Comparably, our Instant-NVR achieves on-the-fly efficiency based on Instant-NGP [30].

Figure 2. Our approach consists of two stages. The tracking front-end (Sec. 4.1) captures human and object motions, while the rendering back-end (Sec. 4.2) separately reconstructs the human-object radiance fields on the fly, for instant novel view synthesis with photo-realism.

Dynamic Neural Scene Representations. Novel view synthesis in dynamic scenes is an important research problem. D-NeRF [38] and Non-rigid NeRF [50] leverage a displacement field to represent the motion, while Nerfies [35] and HyperNeRF [36] use an SE(3) field. Moreover, some researchers focus on human reconstruction and utilize the human prior. NeuralBody anchors latent codes on the SMPL [27] vertices. HumanNeRF [68] combines SMPL warping and a deformation net to construct the motion field. TAVA [20] learns the skinning weights for joints via root-finding and can generalize to novel poses. DeVRF [24] incorporates a 4D motion volume into the NeRF pipeline. NDR [7] defines a bijective function which is naturally compatible with cycle consistency. However, most methods rely on multi-view camera input and the training is costly. Comparably, our Instant-NVR bridges non-rigid volumetric capture with instant radiance field training, achieving photo-realistic rendering results from a monocular RGBD stream.

3. Overview

From monocular RGBD input, Instant-NVR bridges real-time non-rigid capture with instant neural rendering, allowing for high-quality novel-view synthesis under human-object interactions. As illustrated in Fig. 2, our system consists of two cooperating threads: a tracking front-end (Sec. 4.1) and a neural rendering back-end (Sec. 4.2).

Tracking Front-end. We extend traditional volumetric tracking [32, 59, 64] to a human-object setting. For non-rigid human capture, we adopt embedded deformation (ED) [45] and SMPL [27] as motion representations. For the object, we directly track its rigid motions via the Iterative Closest Point (ICP) algorithm. This thread provides accurate per-frame human-object motion
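As a structural illustration of the multi-thread tracking-rendering mechanism described above, here is a minimal Python sketch (ours, with placeholder functions) in which a tracking thread pushes per-frame motion priors into a queue and a rendering thread consumes them to update the radiance fields. It shows only the thread plumbing under these assumptions, not the actual tracking or rendering math.

```python
import queue, threading

motion_queue = queue.Queue(maxsize=64)   # per-frame motion priors from the front-end

def tracking_front_end(frames, estimate_motion):
    """Producer: estimate_motion is a placeholder for segmentation + ED/SMPL/ICP tracking."""
    for frame in frames:
        motion_queue.put(estimate_motion(frame))
    motion_queue.put(None)               # sentinel: no more frames

def rendering_back_end(update_radiance_fields):
    """Consumer: update_radiance_fields stands in for key-frame selection + field optimization."""
    while True:
        prior = motion_queue.get()
        if prior is None:
            break
        update_radiance_fields(prior)

# Toy run with dummy callables.
frames = range(5)
t1 = threading.Thread(target=tracking_front_end, args=(frames, lambda f: {"frame": f}))
t2 = threading.Thread(target=rendering_back_end, args=(lambda p: print("update with", p),))
t1.start(); t2.start(); t1.join(); t2.join()
```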
Gao_Collecting_Cross-Modal_Presence-Absence_Evidence_for_Weakly-Supervised_Audio-Visual_Event_Perception_CVPR_2023
Abstract With only video-level event labels, this paper targets the task of weakly-supervised audio-visual event perception (WS-AVEP), which aims to temporally localize and categorize events belonging to each modality. Despite recent progress, most existing approaches either ignore the unsynchronized property of audio-visual tracks or discount the complementary modality for explicit enhancement. We argue that, for an event residing in one modality, the modality itself should provide ample presence evidence of this event, while the other complementary modality is encouraged to afford the absence evidence as a reference signal. To this end, we propose to collect Cross-Modal Presence-Absence Evidence (CMPAE) in a unified framework. Specifically, by leveraging uni-modal and cross-modal representations, a presence-absence evidence collector (PAEC) is designed under Subjective Logic theory. To learn the evidence in a reliable range, we propose a joint-modal mutual learning (JML) process, which calibrates the evidence of diverse audible, visible, and audi-visible events adaptively and dynamically. Extensive experiments show that our method surpasses state-of-the-arts (e.g., absolute gains of 3.6% and 6.1% in terms of event-level visual and audio metrics). Code is available at github.com/MengyuanChen21/CVPR2023-CMPAE.
1. Introduction

Research in computer vision places a significant emphasis on the visual aspects of event perception; nevertheless, in the real world with multisensory modalities, natural events are distinguished by a great deal more than just their appearance [11, 30, 52, 53, 56, 66]. For instance, think of playing a specific musical instrument in a concert hall, a barking dog, or starting a car with the engine sound. To properly comprehend an event, it is necessary to take acoustics into account and engage in joint audio-visual perception.

Figure 1. With only video-level annotations, weakly-supervised audio-visual event perception (WS-AVEP) aims to predict the temporal boundaries of various only audible (in orange), only visible (in green), or audi-visible (in blue) events in a video.

The target of audio-visual event perception (AVEP) is to temporally categorize video events. However, collecting precise temporal audio-visual annotations is a bottleneck and consequently limits the scalability of a fully-supervised learning framework. As a result, Tian et al. [52, 53] propose to perceive audio-visual events in a weakly-supervised manner, where only easily available video-level labels are needed during model training. As depicted in Figure 1, given videos which may have various audible, visible, or audi-visible events, weakly-supervised audio-visual event perception (WS-AVEP) is commonly optimized by utilizing the video-level annotations.

To date in the literature, current WS-AVEP approaches mainly embrace two types of pipelines: (1) To comprehensively incorporate both modalities, some pioneering methods [53] assume that each event in a video is simultaneously audible and visible. Based on this characteristic, numerous cross-modal fusion strategies are proposed, including cross attention [60, 61, 63] and modality interaction [47, 62]. Although achieving promising performance, the rigorous assumption may not always hold in practice due to some audio-visual non-correspondence caused by out-of-screen objects and background noises.

Table 1. Comparison with the state-of-the-art methods on two tasks, AVVP and AVE. Note that the two tasks have different goals and properties. Please refer to the text for more details.

Task       | CMBS [61] | JoMoLD [6] | Ours
AVVP [52]  | 51.7      | 57.3       | 60.1
AVE [53]   | 74.2      | 71.8       | 74.8

To this end, targeting unsynchronized audio and visual information modeling, (2) Tian et al. [52] suggest a more general setting that recognizes event categories and temporal boundaries bound to sensory modalities, which breaks the modality consistency restriction. Since video-level labels do not indicate detailed modality information, further research focuses on mining audio- or visual-specific information by learning from modality-specific noises [6], heterogeneous information [58], or hierarchical features [22]. Nonetheless, these approaches discount the complementary modality for explicitly enhancing the prediction of the other modality.
Although the multimodal multiple instance learning (MMIL) framework [6, 30, 52] can perform cross-modal enhancement for the feature learning, it still neglects the explicit and extra assistance of the complementary clues for individual modality prediction. Consequently, as shown in Table 1, state-of-the-art methods of the two pipelines can only achieve significant performance in one single WS-AVEP setting, showing that current methods are in a dilemma of making full use of both uni-modal and cross-modal information. To tackle the above issues, we argue that, for an event residing in one modality (no matter whether the event is modality-specific or audi-visible), the modality itself should provide ample presence evidence of this event, while the other complementary modality is encouraged to afford the absence evidence as a reference signal. On the one hand, to fully tap the potential of each modality, it is desirable to make the modality self-reliable for determining the evidence strength of a present event in the corresponding track. On the other hand, for judging which events are absent, relying on a single modality is insufficient, whereas the other track can hand over complementary but not dominant assistance [66]. For example, although a baby is out-of-screen and the event "baby cry" only appears in the audio modality, we can still infer that the audio track might not contain outdoor events because the perceived visual scene is considered to be indoors. Similarly, when the audio track is salient, some vigorous activity may be less likely to occur in the visual track. Motivated by the above observations, we aim to capture the presence and absence evidence for individual events by using uni-modal and cross-modal information. To obtain reliable evidence that can explicitly reflect and measure the event presence/absence intensity in each modality, we cannot simply rely on conventional convolutional neural networks, which are based on classification probability and tend to be overconfident and unreliable [43, 50, 55]. Recently, evidential deep learning (EDL) [36, 50], which can quantify uncertainty in model predictions trustfully by collecting subjective evidence, has attracted increasing attention and been successfully used in a variety of computer vision tasks [1, 3, 5, 17, 27, 57]. In this paper, we propose to collect Cross-Modal Presence-Absence Evidence (CMPAE) for WS-AVEP in a unified framework. As shown in Figure 2, we design a presence-absence evidence collector (PAEC) by using uni-modal and cross-modal representations. Here, the presence evidence of events in each track is derived from the modality itself, whereas the other modality acts as a cross-modal selector for generating the absence evidence. The evidence of each temporal snippet is then accumulated to video-level evidence and optimized in accordance with Subjective Logic theory [23, 64]. To learn the evidence in a reliable range, we propose a joint-modal mutual learning (JML) process, which calibrates the evidence of diverse audible, visible, and audi-visible events adaptively and dynamically. By virtue of the above design, the proposed PAEC and JML modules can cooperate with each other in a unified framework for effective presence-absence evidence learning. Our main contributions can be summarized as follows: •We propose a novel cross-modal presence-absence evidence learning framework for weakly-supervised audio-visual event perception, which jointly enjoys the merits of uni-modal discrimination and cross-modal enhancement under Subjective Logic theory.
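For readers unfamiliar with Subjective Logic, the snippet below is a minimal sketch, not the paper's PAEC module, of how non-negative presence and absence evidence for a single event could be mapped to a binomial subjective opinion (belief, disbelief, and an explicit uncertainty mass); the function and variable names are hypothetical, and the fusion of uni-modal and cross-modal features that would produce the evidence is omitted.

```python
import torch

def binomial_opinion(presence_ev, absence_ev, base_rate=0.5, W=2.0):
    """Map non-negative presence/absence evidence to a Subjective Logic binomial
    opinion. presence_ev, absence_ev: tensors of shape (num_events,), values >= 0.
    W is the non-informative prior weight (2 for a binary frame of discernment)."""
    strength = presence_ev + absence_ev + W
    belief = presence_ev / strength        # support for "event is present"
    disbelief = absence_ev / strength      # support for "event is absent"
    uncertainty = W / strength             # mass left unassigned when evidence is scarce
    expected_p = belief + base_rate * uncertainty
    return belief, disbelief, uncertainty, expected_p

# toy example: strong presence evidence from the audio track, a little absence
# evidence handed over by the visual track
b, d, u, p = binomial_opinion(torch.tensor([9.0]), torch.tensor([1.0]))
```

With little evidence from either source the uncertainty mass dominates, which is precisely why keeping the learned evidence in a reliable range, the stated goal of the JML process, matters.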
•With the cooperative presence-absence evidence collector and the joint-modal mutual learning process, we inject the uni-modal and cross-modal information into the learned evidence and calibrate it to a reliable range. •We conduct extensive and in-depth experiments on several popular and standard WS-AVEP datasets [52, 53]. The encouraging results compared with the state of the art demonstrate the effectiveness of our method.
Gao_High-Fidelity_and_Freely_Controllable_Talking_Head_Video_Generation_CVPR_2023
Abstract Talking head generation aims to generate video based on a given source identity and target motion. However, current methods face several challenges that limit the quality and controllability of the generated videos. First, the generated face often has unexpected deformation and severe distortions. Second, the driving image does not explicitly disentangle movement-relevant information, such as poses and expressions, which restricts the manipulation of different attributes during generation. Third, the generated videos tend to have flickering artifacts due to the inconsistency of the extracted landmarks between adjacent frames. In this paper, we propose a novel model that produces high-fidelity talking head videos with free control over head pose and expression. Our method leverages both self-supervised learned landmarks and 3D face model-based landmarks to model the motion. We also introduce a novel motion-aware multi-scale feature alignment module to effectively transfer the motion without face distortion. Furthermore, we enhance the smoothness of the synthesized talking head videos with a feature context adaptation and propagation module. We evaluate our model on challenging datasets and demonstrate its state-of-the-art performance. More information is available at https://yuegao.me/PECHead.
1. Introduction Talking head video generation is a process of synthesizing a talking head video with a given source identity and target motion. This process is also called face reenactment when using a driving head to define the relative movement to the source identity [4]. This generation technique can be used in various applications, including video conferencing [51], movie effects [39], and entertainment [6]. Due to the rapid development of deep learning [20] and generative adversarial networks (GAN) [17, 27], impressive works have been conducted on talking head generation [22, 51], face reenactment [2, 24, 40, 49, 57, 59, 61, 65, 66], and image animation [46, 47, 52, 67]. The image animation works further extend to animating objects beyond the face and head [45, 46]. Early works on talking head generation require multiple source and driving images to generate one result [4, 13]. Recent works focus on one-shot generation [51, 61, 66], i.e., using only one source frame to generate the target by transferring the pose information of one driving frame. Currently, the mainstream works [45–47] follow a self-supervised learning pipeline. They mainly utilize self-supervised learned landmarks to model the movement of the identity between the source and driving images. The learned landmark pairs are first detected from both source and driving images, and then a dense flow field is estimated from the two sets of learned landmarks to transform the source features and guide the reconstruction of the driving image. To further improve the performance, recent approaches propose to utilize additional information, such as 3D learned landmarks [51] and depth maps [22], or enhance the model structure, for example, by adopting the flexible thin-plate spline transformation [14, 67] or representing the motion as a combination of latent vectors [52]. However, there are still many challenges with these methods. First, the generated face often has unexpected deformation and severe distortions. The learned-landmarks-based approaches [46, 51, 67], such as FOMM [46], which only utilize 2D learned landmarks without face shape constraints, produce frontalization results with apparent face distortions (see Fig. 1a). The predefined-landmarks-based methods [13, 24, 59, 63] model the movement between the source and driving images only based on the predefined facial landmarks, so that non-facial parts of the head (such as the hair and neck) are not well handled. Second, all the movement information needs to be obtained via one single driving image. It is difficult to decouple and manipulate this movement-relevant information, including poses and expressions, when generating the new image. Third, in order to achieve smooth and natural movements in generated videos, prior methods [46, 47, 67] typically incorporate techniques to smooth the extracted landmarks learned between adjacent frames. However, the sensitivity and inconsistency of the extracted landmarks pose a challenge in achieving smoothness, resulting in generated videos that are prone to flickering.
To address the above challenges, we propose the Pose and Expression Controllable Head model (PECHead), which can generate high-fidelity video face reenactment results and enable talking head video generation with full control over head pose and expression. The proposed method first incorporates the learned landmarks and the predefined face landmarks to model the overall head movement and the detailed facial expression changes in parallel. We utilize a single-image face reconstruction model [12] to obtain the face landmarks and project them into 2D image space. This approach constrains the face to a physically reasonable shape, thereby reducing distortion during motion transfer, as demonstrated in the last row of Fig. 1a. In this work, we introduce the use of learned sparse landmarks for global motion and predefined dense landmarks for local motion, with the Motion-Aware Multi-Scale Feature Alignment (MMFA) module serving to align these two groups of features. Then we use different coefficients as input conditions to control the estimation of both predefined and learned landmarks, so that we can realize head pose and expression manipulation (Fig. 1b). Moreover, inspired by recent low-level video processing works [8, 33], we propose the Context Adaptation and Propagation (CAP) module to further improve the smoothness of the generated video. Our proposed method is evaluated on multiple talking head datasets, and experimental results indicate that it achieves state-of-the-art performance, generating high-fidelity face reenactment results and talking head videos with the ability to control the desired head pose and facial expression. Our contributions can be summarized as follows: • We propose a novel method, PECHead, that generates high-fidelity face reenactment results and talking head videos. Our approach leverages head movements to control the estimation of learned and predefined landmarks, enabling free control over the head pose and expression in talking head generation. • We incorporate the learned and predefined face landmarks for global and local motion estimation with the proposed Motion-Aware Multi-Scale Feature Alignment module, which substantially enhances the quality of synthesized images. • We introduce a video-based pipeline with the Context Adaptation and Propagation module to further improve the smoothness and naturalness of the generated videos. • Extensive qualitative and quantitative results across several datasets demonstrate the superiority of the proposed framework for high-fidelity video face reenactment and freely controllable talking head generation.
Forte_Reconstructing_Signing_Avatars_From_Video_Using_Linguistic_Priors_CVPR_2023
Abstract Sign language (SL) is the primary method of communication for the 70 million Deaf people around the world. Video dictionaries of isolated signs are a core SL learning tool. Replacing these with 3D avatars can aid learning and enable AR/VR applications, improving access to technology and online media. However, little work has attempted to estimate expressive 3D avatars from SL video; occlusion, noise, and motion blur make this task difficult. We address this by introducing novel linguistic priors that are universally applicable to SL and provide constraints on 3D hand pose that help resolve ambiguities within isolated signs. Our method, SGNify, captures fine-grained hand pose, facial expression, and body movement fully automatically from in-the-wild monocular SL videos. We evaluate SGNify quantitatively by using a commercial motion-capture system to compute 3D avatars synchronized with monocular video. SGNify outperforms state-of-the-art 3D body-pose- and shape-estimation methods on SL videos. A perceptual study shows that SGNify's 3D reconstructions are significantly more comprehensible and natural than those of previous methods and are on par with the source videos. Code and data are available at sgnify.is.tue.mpg.de. 1. Introduction It is estimated that over 466 million people have disabling hearing loss [13] and more than 70 million people use sign language (SL) as their primary means of communication [52]. Increasing use of digital communication motivates research on capturing, understanding, modeling, and synthesizing expressive 3D SL avatars. Existing datasets and dictionaries used in SL recognition (SLR), translation (SLT), and production (SLP) are primarily limited to 2D video because the technology required to capture 3D movement is prohibitively expensive, requires expertise to operate, and may limit the movements of the signer. Dictionaries of isolated signs are a core SL learning tool, and many SLs have online 2D video dictionaries. The Deaf community is actively seeking 3D dictionaries of isolated signs to aid learning [40]. The current approach to creating such 3D signing dictionaries is fully manual, requiring an artist or a HamNoSys [21] expert, and the resulting avatars often move unnaturally [3]. We aim to automatically reconstruct expressive 3D signing avatars from monocular SL video, which we term Sign Language Capture (SLC). We focus on SLC of isolated signs. 3D reconstruction of human pose and shape has received significant attention, but accurate 3D hand-pose estimation remains challenging from in-the-wild video. Challenges include the high number of degrees of freedom present in hands [5], frequent occurrence of self-contact and self-occlusions [37, 46], low resolution, and motion blur caused by fast motions [50] that cause hand pose to be unrecognizable in many frames (see Fig. 1). To address these issues, we exploit the linguistic nature of SL itself to develop novel priors that help disambiguate hand poses in SL videos, leading to accurate 3D reconstructions. This is a novel use of linguistic "side information" to improve 3D reconstruction. Based on hand movements and poses, Battison [4] defines five linguistic classes that contain all SL signs.
We build on that work to define eight classes and formalize these as mathematical priors on 3D hand shape. We combine Battison's first two classes and place all one-handed signs in class 0, while two-handed signs are arranged in classes 1, 2, or 3, depending on how the non-dominant hand participates in the articulation of the sign. We then divide each of these four classes into two subclasses depending on whether the pose of the active hand(s) changes during the articulation of the sign. We introduce two class-dependent SL linguistic constraints that capture 1) symmetry and 2) hand-pose invariance. Under Battison's SL symmetry condition [4], when both hands actively move, the articulation of the fingers must be identical; the same is true for one class of two-handed signs in which only the dominant hand moves. We formalize this concept as a regularization term that encourages the pose of the two hands to be similar for such signs. Coupling the hand poses in this way effectively increases the image evidence for a pose, which improves estimates for challenging videos. Our invariance constraint uses the observation that hand pose is either static or transitions smoothly from one pose to another during the articulation of the sign; other significant changes to hand pose are not common in SL. Specifically, we extract a characteristic "reference pose sequence" (RPS) to describe each local hand pose during the sign articulation, and we penalize differences between the RPS and the estimated hand pose in each frame. These two priors of symmetry and hand-pose invariance are universally applicable to all sign languages. The hands alone, however, are not sufficient to accurately reproduce SL. Information is conveyed holistically in SL through hand gestures, facial expressions, and upper-body movements in 3D space. To combine these, we use a 3D whole-body model, SMPL-X [41], that jointly models this information (see Fig. 1). Our novel hand-pose constraints are formulated to be incorporated into the loss function for training a neural network regressor or into the objective function of optimization-based methods. In general, optimization-based methods are more computationally intensive but produce more accurate results when limited training data is available, so we take this approach here and build on the SMPLify-X method [41]. To appropriately incorporate our terms into the objective function, we need to know the class of the sign. We train a simple model that extracts features from the raw video and determines the class to which the depicted sign belongs. While SMPLify-X is a good foundation for the hands and body, we find that it does not capture expressive facial motions well. Consequently, we use a more expressive face regressor, SPECTRE [19], to capture the face parameters. We call our method SGNify. To quantitatively evaluate SGNify, we capture a native German Sign Language (DGS) signer with a frontal RGB camera synchronized with a 54-camera Vicon motion capture system and recover ground-truth meshes from the Vicon markers [34]. We run SGNify on the RGB video and compute the 3D vertex-to-vertex (V2V) error between our resulting avatars and the ground-truth meshes. We find that SGNify reconstructs SMPL-X meshes more accurately than the competition. We conduct a perceptual evaluation in which we present proficient signers with a video of either an estimated SMPL-X avatar or the real-person source video and task them with identifying the sign being performed.
Participants also rate their ease in recognizing the sign and the naturalness of the articulation. Our results show that SGNify reconstructs 3D signs that are as recognizable as the original videos and consistently more recognizable, easier to understand, and more natural than the existing state of the art. We also evaluate SGNify in a multi-view setting and on continuous signing videos. Despite not being designed for the latter, SGNify captures the meaning in continuous SL. SGNify represents a step towards the automatic reconstruction of natural 3D avatars from sign-language videos. Our key contribution is the introduction of novel linguistic priors that are universal and helpful to constrain the problem of hand-pose estimation from SL video. SGNify is designed to work on video from different SL dictionaries across languages, backgrounds, people, trimming, image resolution, and framing, as visible in Sup. Mat. and in the video on our project page. This capability is critical to capture 3D signing at scale, which will enable progress on learning SL avatars. Our code and motion-capture dataset are available for research purposes at sgnify.is.tue.mpg.de. 2. Related Work Expressive 3D Humans From RGB Images: Until recently, human-pose estimation has focused on the estimation of 2D [12] or 3D [51] joints of the hands and body, as well as those of facial features [6] from single images. In addition to methods that estimate a sparse set of landmarks, there are multiple methods that estimate the parameters of morphable models for the hand [22, 32, 36, 58], face [11, 16, 18, 20], and body [8, 25, 27–29, 33, 39, 57]. The advent of expressive 3D body models like SMPL-X [41], Adam [26], and GHUM [54] has enabled research on estimating the full 3D body surface [9, 17, 41, 44, 49, 53, 55]. Such body models are ideal for representing the expressiveness of SL but have rarely been applied to this domain [30]. Human Pose for Sign Language: To enable detailed 3D pose estimation from images, How2Sign [14] provides 3D skeleton reconstructions for three hours of data captured in a Panoptic Studio [24]. However, the skeletal representation lacks the richness of a full 3D body model and omits surface details that are important for communication [38]. Kratimenos et al. [30] use SMPLify-X to estimate 3D pose
and shape on the GSLL sign-language dataset [48]. They compare SL recognition accuracy using features from raw RGB images, OpenPose [7] 2D skeletons, and SMPL-X bodies and observe the best automated recognition results with SMPL-X, illustrating the benefit of using a 3D model. They also highlight the importance of capturing the face and body; in an ablation study, they show that neglecting the face and body harms recognition accuracy [30]. However, their SMPL-X reconstructions use existing off-the-shelf methods and lack visual realism. SMPLify-X [41] and other recent 3D pose-reconstruction methods [17, 44], as well as keypoint detectors, struggle when applied to SL video due to challenging self-occlusion, hand–hand and hand–body interactions [38], motion blur [50], and cropping inherent to SL. SignPose [31] is a 3D-pose-lifting method for SL; it uses manually created synthetic SL animations to infer a textured avatar from single RGB images. SignPose requires all OpenPose keypoints above the pelvis to be detected, which is unrealistic in noisy SL videos. We address these challenges by incorporating sign-language knowledge in the form of linguistic constraints. Since the early 2000s, the integration of linguistic information has been known to be beneficial to both SLR [10] and SLP [35], but this strategy has not previously been applied to SLC. 3. Method We introduce SGNify, an offline method for reconstructing 3D body shape and pose of SL from monocular RGB video. SGNify centers around a key insight: SL signs follow universal linguistic rules that can be formulated as class-specific priors and used to improve hand-pose estimation. Our full pipeline is shown in Fig. 2. 3.1. SMPLify-SL: Baseline for Sign-Language Video Our baseline method builds on SMPLify-X [41], which estimates SMPL-X [41] parameters from RGB images. SMPL-X is a 3D body model, representing whole-body pose and shape, including finger articulations and facial expressions. SMPL-X is a function, $M(\theta, \beta, \psi)$, parameterized by body pose $\theta$ (including hand pose $\theta^h$), body shape $\beta$, and facial expressions $\psi$, that outputs a 3D body mesh. To create a strong baseline, we extend SMPLify-X to video by adapting it in the following ways: (1) We cope with the upper-body framing typical of SL videos by changing the heuristic used for camera initialization and the estimation of the out-of-view lower-body joints. (2) Since human motion is locally smooth in time, we initialize $\theta_t \in \mathbb{R}^{|\theta|}$ with $\theta_{t-1}$ and include a zero-velocity loss on the hands and body to encourage smooth reconstructions. (3) We estimate shape parameters ($\beta$) over multiple frames by taking the median of the parameter estimates and not optimizing them during the per-frame reconstruction. (4) To better capture the frequent hand–hand and hand–body interactions (mainly with the face and the chest), we employ the more robust self-contact loss of Müller et al. [39] instead of the original SMPLify-X interpenetration term. (5) For each frame, we pre-compute the facial expressions ($\psi$) and jaw poses with SPECTRE [19]. These parameters are substituted into SMPL-X at the end of the optimization. SPECTRE can be swapped for any method whose expression parameters are consistent with those of SMPL-X, e.g., EMOCA [11]. We denote the baseline SMPLify-SL. 3.2. Linguistic Constraints State-of-the-art optimization- and regression-based human pose estimation methods struggle on SL video, particularly with the estimation of hand pose.
We address this challenge by formulating linguistic constraints as additional losses on hand pose and integrating them into the SMPLify-SL objective function. First, we adapt the five sign-classification and morpheme-structure conditions introduced for American Sign Language (ASL) by Battison [4] to divide signs into four primary classes:
Class 0: one-handed signs in which only the dominant hand articulates the sign.
Class 1: two-handed signs in which both hands are active. They share the same poses and perform the same movement in a synchronous or alternating pattern. This class includes all signs that follow Battison's symmetry condition [4].
Class 2: two-handed signs in which the dominant hand is active, the non-dominant hand is passive (its position and pose do not change during the articulation of the sign), and the two hands have the same initial pose.
Class 3: two-handed signs in which the dominant hand is active, the non-dominant hand is passive, and the two hands have different hand poses. All signs in this class follow Battison's dominance condition [4].
We further divide each class into two subclasses: subclass a contains signs in which the hand pose of the active hand(s) does not change throughout the articulation of the sign (static), and subclass b contains all signs in which the hand pose changes (transitioning). Note that the division into these classes is not limited to ASL; Eccarius et al. [15] show that the phonological and prosodic properties of ASL can be successfully transferred to other sign-language lexicons.
Figure 2. Given a video of a sign-language (SL) sign as input, our method preprocesses the data to first extract 2D keypoints. The hand keypoints are used to select candidate frames for estimating the reference hand poses ($\theta^h_{ref,i}$, $\theta^h_{ref,f}$, and, for static hand poses, also $\theta^h_{ref}$). The initial and final reference hand poses ($\theta^h_{ref,i}$ and $\theta^h_{ref,f}$), together with wrist-keypoint features detected across the sequence, are then fed into our sign-group classifier, which automatically classifies signs in monocular SL video into six groups based on linguistic rules universally applicable to SL [4]. Using the predicted group labels and the relevant reference hand poses, SGNify applies the appropriate linguistic constraints to improve SL 3D hand-pose estimation, especially when the video frame is ambiguous.
Table 1. Linguistic constraints defining the eight sign classes.
  Class   Hand-Pose Symmetry   Hand-Pose Invariance (Dominant)   Hand-Pose Invariance (Non-dominant)
  0a      ✗                    static                            ✗
  0b      ✗                    transitioning                     ✗
  1a      ✓                    static                            static
  1b      ✓                    transitioning                     transitioning
  2a      ✓                    static                            static
  2b      ✗                    transitioning                     static
  3a      ✗                    static                            static
  3b      ✗                    transitioning                     static
We then convert these linguistic classes into two 3D pose constraints: hand-pose symmetry and hand-pose invariance. Signs in the same class share the same constraints (see Tab. 1 and Tab. S.1 in Sup. Mat.). Below we describe only the new terms added to the SMPLify-X objective. Please see Sup. Mat. for the full SGNify objective.
3.2.1 Hand-Pose Symmetry
We encourage the left and right hand poses to match for the relevant classes (classes 1a, 1b, and 2a in Tab. 1):
$L_s = \lambda_s \, \| \theta^r_t - r(\theta^l_t) \|_2^2,$  (1)
where $\theta^r_t$ is the finger articulation of the right hand, and $r(\theta^l_t)$ is a reflection function to represent the articulation of the fingers of the left hand as if it were a right hand.
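A minimal PyTorch-style sketch of the symmetry term in Eq. (1) is given below, purely as an illustration: it assumes per-joint axis-angle hand poses and a common mirroring convention (negating the y and z components), whereas the exact reflection r(.) and weight λ_s used by SGNify are not specified here, and the function names are hypothetical.

```python
import torch

def mirror_hand_pose(pose_aa: torch.Tensor) -> torch.Tensor:
    """Reflect an axis-angle hand pose so a left hand is expressed as a right hand.
    pose_aa: (J, 3) per-joint axis-angle rotations. Negating the y and z components
    is a common convention; the reflection r(.) used by SGNify may differ."""
    return pose_aa * torch.tensor([1.0, -1.0, -1.0])

def symmetry_loss(theta_right: torch.Tensor, theta_left: torch.Tensor,
                  lambda_s: float = 1.0) -> torch.Tensor:
    """Eq. (1): squared L2 difference between the right-hand pose and the mirrored
    left-hand pose, applied only for classes 1a, 1b, and 2a."""
    return lambda_s * ((theta_right - mirror_hand_pose(theta_left)) ** 2).sum()

# toy usage with 15 hand joints in axis-angle
theta_r = torch.zeros(15, 3, requires_grad=True)
theta_l = torch.zeros(15, 3, requires_grad=True)
loss = symmetry_loss(theta_r, theta_l, lambda_s=0.1)
```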
This loss penalizes differences in finger poses between the hands.
3.2.2 Hand-Pose Invariance
Each sign has a characteristic reference hand pose sequence (RPS). The RPS defines the hand pose that we expect at each time $t$ during the articulation of the sign. The hand-pose-invariance constraint penalizes differences between the reference hand pose $\theta^h_{ref,t} \in RPS^h$ and the estimated hand pose $\theta^h_t$:
$L^h_i = \lambda_i \, \| \theta^h_{ref,t} - \theta^h_t \|_2^2,$  (2)
where $h$ represents either the left or the right hand. Throughout each sign, the hand pose either stays static or transitions between two poses. When static, only one hand pose, $\theta^h_{ref}$, is representative of the RPS. Signs where the hand pose is transitioning are characterized by two reference hand poses, $\theta^h_{ref,i}$ and $\theta^h_{ref,f}$, corresponding respectively to the initial and final poses. We interpolate $\theta^h_{ref,i}$ and $\theta^h_{ref,f}$ with spherical linear interpolation [45] to obtain intermediate poses. We presently do not consider signs with repeated hand-pose transitions, e.g., STORY in ASL, which occur in a small percentage of signs (~3%).
3.3. Automatization
To work fully automatically, SGNify must 1) estimate the poses needed to enforce the hand-pose-invariance constraint and 2) classify which sign group is present in a video sequence (see Fig. 2). To estimate the reference hand poses ($\theta^h_{ref}$, $\theta^h_{ref,i}$, and $\theta^h_{ref,f}$), our method selects candidate frames in the core part of the sign using hand-keypoint detection confidences, and it uses SMPLify-X (adapted to SL cropping) to reconstruct a preliminary 3D hand pose for each candidate frame. With static hand poses, $\theta^h_{ref}$ is obtained by taking the average hand poses of these candidates. With transitioning hand poses, the core part of a sign is divided into two intervals, and $\theta^h_{ref,i}$ and $\theta^h_{ref,f}$ correspond to the average hand poses of the candidate frames in the first and second intervals, respectively (see Sup. Mat. for more details). The constraints applied to each sign depend on its sign group; we have six sign groups because classes 1a & 2a share the same constraints, as do 2b & 3b (Tab. 1). There is insufficient paired data to train a CNN classifier, so we use an intuitive and interpretable decision tree trained on extracted 2D and 3D pose features. Our features are invariant to the handedness of the signer and
Chen_DeepMapping2_Self-Supervised_Large-Scale_LiDAR_Map_Optimization_CVPR_2023
Abstract LiDAR mapping is important yet challenging in self-driving and mobile robotics. To tackle such a global point cloud registration problem, DeepMapping [1] converts the complex map estimation into a self-supervised training of simple deep networks. Despite its broad convergence range on small datasets, DeepMapping still cannot produce satisfactory results on large-scale datasets with thousands of frames. This is due to the lack of loop closures and exact cross-frame point correspondences, and the slow convergence of its global localization network. We propose DeepMapping2 by adding two novel techniques to address these issues: (1) organization of training batches based on map topology from loop closing, and (2) a self-supervised local-to-global point consistency loss leveraging pairwise registration. Our experiments and ablation studies on public datasets such as KITTI, NCLT, and Nebula demonstrate the effectiveness of our method.
1. Introduction Mapping is a fundamental ability for autonomous mobile agents. It organizes an agent's local sensor observations into a map, i.e., a global spatial representation of the environment. A pre-built map is useful in robotics, self-driving, and augmented reality for agents to localize themselves [2–6]. Various simultaneous localization and mapping (SLAM) methods can create maps of new environments from 2D and/or 3D sensors [7–13]. In particular, LiDAR-based mapping is often adopted to build large-scale maps in self-driving and mobile robotics due to LiDAR's direct and accurate 3D point cloud sensing capability. Similar to visual SLAM, LiDAR SLAM methods typically contain front-end and back-end modules [14–17]. The front-end module tracks sensor movements by LiDAR/inertial/wheel odometry and provides constraints between sequential frames of point clouds by either iterative closest point (ICP) or 3D feature detection and correspondence matching algorithms. The back-end uses those constraints in a pose/pose-landmark graph optimization [18, 19] to minimize the odometry drift, similar to the bundle adjustment in visual SLAM and Structure-from-Motion (SfM). However, without accurate GNSS/IMU as odometry, large-scale LiDAR mapping results can be unsatisfactory (see Fig. 1), due to errors in LiDAR odometry and difficulties in correspondence matching and loop closing, especially outdoors. To tackle these issues, researchers have started to explore deep learning methods. Some of them focus on replacing sub-tasks in LiDAR mapping with deep networks [20–23], following the common machine learning paradigm: train-then-test. Yet such methods can face generalization issues when the training dataset domain is different from the testing one. Differently, DeepMapping [1] proposes a new paradigm: training-as-optimization for point cloud mapping. It encapsulates the global registration in a point-cloud-based PoseNet [24] (L-Net), and evaluates the map quality using another binary occupancy network (M-Net) with a binary cross-entropy (BCE) loss. This converts the continuous map optimization into a self-supervised training of binary classifications. Since no testing is needed, it does not face any generalization issues because mapping is done once training is finished. However, despite its superior performance on small-scale datasets, we found DeepMapping often fails on large-scale datasets due to the following challenges: (1) No-explicit-loop-closure: DeepMapping gradually optimizes L-Net using frames in each mini-batch that are temporal neighbors, and only relies on M-Net to control the global map consistency. This is like incremental registration that is doomed to drift when the number of frames is large. SLAM solves this by loop closing, but it is not yet clear how loop closing can be incorporated into DeepMapping. (2) No-local-registration: Although previous works have shown local registration to be locally accurate [25–28], DeepMapping only uses it in the ICP-based pose initialization but not in the optimization. This is due to a common problem faced by all LiDAR registration methods, the lack of point correspondences in LiDAR point clouds: the same 3D point rarely appears again in another scan, because of the sparse sensor resolution and long-range sensing.
(3) Slow-convergence-in-global-registration: L-Net regresses a single frame of point cloud into its global pose, which is supervised only by the M-Net and BCE loss. Unlike pairwise registration, this global registration lacks enough inference cues to output correct poses, thus leading to slow convergence when the dataset is large. We propose DeepMapping2, which is able to effectively optimize maps on large-scale LiDAR datasets. It extends DeepMapping with two novel techniques. The first one addresses challenge (1) by organizing data frames into training batches based on map topology from loop closing. This allows a frame and its topological/spatial neighbors to be grouped into the same batch. We find this to be the best way of adding loop closing into DeepMapping, which uses free-space inconsistencies via M-Net and the BCE loss to generate self-supervision, because such inconsistencies happen mostly between unregistered neighboring frames. The second technique is a novel self-supervised local-to-global point consistency loss that leverages precomputed pairwise registration. For each point in a frame, we compute this new consistency as the L2 distance between two versions of its global coordinate: one obtained with its own frame's estimated global pose, and one obtained with a neighboring frame's global pose composed with the relative pose between the two frames from the pairwise registration (a small sketch of this loss is given after the contribution list below). This allows us to address challenge (2) without relying on point correspondences between different frames: even if two neighboring frames do not have enough common points as correspondences for pairwise local registration, we can still incorporate the local registration's results during training. It also addresses challenge (3) because now L-Net is supervised by stronger gradients from not only the BCE loss, but also the new consistency loss. Our contributions are summarized as follows: • Our DeepMapping2 is the first self-supervised large-scale LiDAR map optimization method as far as we know, and this generic method achieves state-of-the-art mapping results on various indoor/outdoor public datasets, including KITTI [29], NCLT [30], and the challenging underground dataset Nebula [31]. • Our analysis reveals why DeepMapping fails to scale up and leads to the two novel techniques (batch organization and local-to-global point consistency loss) to incorporate loop closing and local registration in the DeepMapping framework. Their necessity and effectiveness are further validated in our ablation study.
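To make the consistency term above concrete, here is a minimal sketch under stated assumptions: poses are 4x4 homogeneous transforms mapping local (sensor-frame) points to the global frame, and T_j_from_i maps frame i's coordinates into frame j's, as produced by a pairwise registration such as ICP; names and shapes are illustrative, not the paper's implementation.

```python
import torch

def local_to_global_consistency(points_i, T_i, T_j, T_j_from_i):
    """points_i: (N, 3) points of frame i in its local (sensor) coordinates.
    T_i, T_j: (4, 4) estimated global poses of frames i and j (local -> global).
    T_j_from_i: (4, 4) relative pose from pairwise registration (frame i -> frame j).
    Returns the mean L2 distance between two versions of the global coordinates."""
    ones = torch.ones(points_i.shape[0], 1)
    p_h = torch.cat([points_i, ones], dim=1)               # homogeneous points, (N, 4)
    g_direct = (T_i @ p_h.T).T[:, :3]                      # global coords via frame i's own pose
    g_via_neighbor = (T_j @ T_j_from_i @ p_h.T).T[:, :3]   # via neighbor j plus the relative pose
    return (g_direct - g_via_neighbor).norm(dim=1).mean()
```

Because the relative transform is precomputed by local registration, this loss back-propagates only into the estimated global poses, complementing the BCE self-supervision.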
Jiang_DoNet_Deep_De-Overlapping_Network_for_Cytology_Instance_Segmentation_CVPR_2023
Abstract Cell instance segmentation in cytology images has significant importance for biology analysis and cancer screening, while it remains challenging due to 1) the extensive overlapping translucent cell clusters that cause ambiguous boundaries, and 2) the confusion of mimics and debris as nuclei. In this work, we propose a De-overlapping Network (DoNet) with a decompose-and-recombine strategy. A Dual-path Region Segmentation Module (DRM) explicitly decomposes the cell clusters into intersection and complement regions, followed by a Semantic Consistency-guided Recombination Module (CRM) for integration. To further introduce the containment relationship of the nucleus in the cytoplasm, we design a Mask-guided Region Proposal Strategy (MRP) that integrates the cell attention maps for inner-cell instance prediction. We validate the proposed approach on the ISBI2014 and CPS datasets. Experiments show that our proposed DoNet significantly outperforms other state-of-the-art (SOTA) cell instance segmentation methods. The code is available at https://github.com/DeepDoNet/DoNet.
1. Introduction Cytology images have been essential for cancer screening and early diagnosis, enabling qualitative and quantitative identification of cellular morphology, nuclei size, nuclear-cytoplasmic ratio, and other cytological features [12, 15, 25]. However, examining tens of thousands of cells under the microscope visually is inherently tedious and suffers from inter-/intra-observer variability. Computational techniques enable efficient and accurate characterization of cells from cytology images [12, 16]. Among all computational techniques, cell segmentation has been a fundamental and widely-studied task, since the acquisition of cell-level identification is a prerequisite for further assessment and analysis [3, 23].
Figure 1. The schematic illustration of the proposed DoNet with the decompose-and-recombine strategy, which maps each overlapping cell into the intersection, complement, and instance layers to address the overlapping issue in cytology instance segmentation.
Deep learning (DL) methods show promising results for cell-nuclei segmentation in histopathology images [5, 6, 14, 17]. However, cytology segmentation remains challenging for the following two reasons. Firstly, cells in a cytology image are prone to cluster with each other, leading to the overlapping issue. In cytology images, the translucent cytoplasm of neighboring cells (seen in Figure 1) tends to occlude each other with low-contrast staining, leading to ambiguous cellular boundary predictions. This phenomenon is particularly evident in cervical cell images. Secondly, hard mimics are widespread in the background, along with other technical artefacts such as bubbles, which can mislead instance segmentation models [4]. Taking cervical cell images as an example, widespread white blood cells and mucus stains lead to false predictions for nuclei. To address these challenges, several works [24, 33] propose the segment-then-refine paradigm, while others [39, 40] utilize the detection-based framework, e.g., Mask R-CNN [11]. However, they fail to model the interaction between intersection and complement sub-regions within the translucent cell cluster explicitly, resulting in a limited understanding of cross-region relationships. Amodal instance segmentation tackles the occlusion problem by inferring the integral object based on the partially visible region [22]. Based on the fact that humans can infer the occluded region of an object despite the ambiguity, these methods attempt to learn the integrated object mask (amodal mask) for better occlusion reasoning capability [9, 37] via synthesizing occluded data-label pairs and aggregating global information to enhance perceptual ability. Compared to natural scenes, cell instances in cytology images are mostly semi-transparent. Therefore, an occlusion (overlapping) region exists in both the occluding and occluded instances. However, treating semi-transparent overlapping regions as general occlusion regions is not optimal, since they have different appearances compared to non-overlapping regions, and could in fact provide richer shape information than general occlusion regions.
Motivated by amodal perception, we propose a decompose-and-recombine strategy for translucent cell instance segmentation, named De-overlapping Network (DoNet). Figure 1 provides the schematic diagram. For each cell cluster with more than one cellular sub-region, DoNet starts by implicitly learning the hidden interaction of sub-regions by predicting instance masks from clusters. Then, it explicitly models the components and their relationships via the intersection layer, complement layer, and instance layer, to enhance its perceptual capability. Initially, we adopt Mask R-CNN to get the coarse predictions, followed by a novel Dual-path Region Segmentation Module (DRM) that combines features and coarse masks from the first stage to decompose cell clusters into intersection and complement sub-regions. Then, the Semantic Consistency-guided Recombination Module (CRM) is designed to encourage consistency between the refined instances and integral sub-region predictions. Furthermore, to impose the morphological constraint that nuclei stay inside the cellular regions, we propose a Mask-guided Region Proposal Module (MRP) to encourage the model to focus on the intra-cellular area during nuclei segmentation (a toy mask-level illustration of the decompose-and-recombine idea follows the contribution list below). The overall contributions are summarized as follows: • A novel de-overlapping network for cell instance segmentation with a decompose-and-recombine strategy, decomposing the cell regions with the DRM, as well as implicitly and explicitly modeling the semantic relationship between intersection, complement, and instance (cell) components via the CRM. These designs equip the network with enhanced perceptual capability in overlapping cellular sub-regions. • A mask-guided region proposal module (MRP) that leverages the cytoplasm attention map for intra-cellular nuclei refinement, which imposes the biological prior of cellular instances into the module, effectively mitigating the influence of mimickers widespread in the background. • Extensive experiments on two overlapping cytology image segmentation datasets, namely ISBI2014 [24] and CPS [39], demonstrating that our proposed DoNet outperforms other state-of-the-art (SOTA) methods by a large margin.
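As a toy, mask-level illustration of the decompose-and-recombine idea referenced above (not the DRM/CRM networks themselves), two overlapping binary instance masks can be split into a shared intersection region and per-instance complement regions, and each instance can then be recovered as the union of its complement with the intersection; the NumPy masks and names below are hypothetical.

```python
import numpy as np

def decompose(mask_a: np.ndarray, mask_b: np.ndarray):
    """Split two overlapping binary instance masks into the shared intersection
    region and each instance's complement (instance minus intersection)."""
    intersection = mask_a & mask_b
    complement_a = mask_a & ~intersection
    complement_b = mask_b & ~intersection
    return intersection, complement_a, complement_b

def recombine(complement: np.ndarray, intersection: np.ndarray) -> np.ndarray:
    """Recover an instance mask as the union of its complement and the intersection."""
    return complement | intersection

# toy 5x5 example with two overlapping square "cells"
a = np.zeros((5, 5), dtype=bool); a[0:3, 0:3] = True
b = np.zeros((5, 5), dtype=bool); b[1:4, 1:4] = True
inter, comp_a, comp_b = decompose(a, b)
assert np.array_equal(recombine(comp_a, inter), a)
```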
Chatziagapi_AVFace_Towards_Detailed_Audio-Visual_4D_Face_Reconstruction_CVPR_2023
Abstract In this work, we present a multimodal solution to the problem of 4D face reconstruction from monocular videos. 3D face reconstruction from 2D images is an under-constrained problem due to the ambiguity of depth. State-of-the-art methods try to solve this problem by leveraging visual information from a single image or video, whereas 3D mesh animation approaches rely more on audio. However, in most cases (e.g. AR/VR applications), videos include both visual and speech information. We propose AVFace, which incorporates both modalities and accurately reconstructs the 4D facial and lip motion of any speaker, without requiring any 3D ground truth for training. A coarse stage estimates the per-frame parameters of a 3D morphable model, followed by a lip refinement, and then a fine stage recovers facial geometric details. Due to the temporal audio and video information captured by transformer-based modules, our method is robust in cases when either modality is insufficient (e.g. face occlusions). Extensive qualitative and quantitative evaluation demonstrates the superiority of our method over the current state of the art.
1. Introduction Reconstructing the 4D geometry of the human face has been a long-standing research problem in computer vision and graphics. Accurate spatio-temporal (4D) face reconstruction has extensive applications in AR/VR, video games, virtual communication, the movie industry, etc. However, recovering the per-frame 3D head pose and facial geometry from 2D images is an ill-posed problem due to the ambiguity of depth. Current approaches are largely based on 3D morphable models (3DMMs). Usually, they take a single image [18, 22] or video [60] as input and predict the 3DMM parameters. Some have also tried to predict additional geometric facial details, using the 3DMM fitting as a prior [13, 18, 22]. However, most of these video-only methods are either speaker-specific [13, 23, 29], requiring overfitting to a specific speaker to recover their facial details, or fail to accurately capture fine details, like wrinkles and lip movements. In addition, they cannot handle face occlusions, since they solely rely on the visual input. On the other hand, audio-driven 3D mesh animation approaches [17, 21, 52] learn better lip motion, but they require 4D ground truth scans for training, which are rare and expensive to capture. Furthermore, such audio-only methods cannot capture any speaker-specific characteristics or facial expressions, as they do not use any visual information. There is limited work in audio-visual 4D face reconstruction [1, 15, 44], but these methods also require 4D ground truth scans and do not recover any facial geometric details. In this work, we propose AVFace, which learns to reconstruct detailed 4D face geometry from monocular talking face videos, leveraging both audio and video modalities. Without requiring any 3D ground truth scans, it can recover accurate 4D facial and lip motion for any speaker. A coarse stage estimates a coarse geometry per frame, based on a 3DMM and using both image and speech features. Then, a SIREN MLP [58] further improves the lip position by learning an implicit representation of the lip shape conditioned on speech (a minimal sketch of such a sine-activated MLP is given after the contribution list below). Finally, a fine stage recovers geometric facial details, guided by pseudo-ground truth face normals and producing a high-fidelity reconstruction of the input speaker's face per frame. Due to the temporal audio and video information captured by transformer-based modules, our method is robust in cases when either modality is insufficient (e.g. face occlusions). To better handle such hard cases that are frequent in talking face videos, we further fine-tune our coarse stage with synthetic face occlusions. In brief, the contributions of our work are as follows: • We propose AVFace, a novel audio-visual method for detailed 4D face reconstruction, that follows a coarse-to-fine optimization approach, trained only on monocular talking face videos without any 3D ground truth. • We introduce an audio-driven lip refinement network, and a fine stage guided by pseudo-ground truth face normals to accurately recover fine geometric details. • Our temporal modeling, along with fine-tuning on synthetic face occlusions, makes our network robust to cases when either modality is insufficient.
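For readers unfamiliar with SIREN [58], the sketch below shows a minimal sine-activated MLP that maps a 3D point plus a conditioning (e.g., speech) feature to an offset, following the standard SIREN formulation (sine activations with frequency w0 and the matching initialization); the conditioning-by-concatenation, the layer sizes, and the names are assumptions, not AVFace's actual lip-refinement network.

```python
import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by sin(w0 * x), with SIREN-style initialization."""
    def __init__(self, in_dim, out_dim, w0=30.0, first=False):
        super().__init__()
        self.w0 = w0
        self.linear = nn.Linear(in_dim, out_dim)
        bound = 1.0 / in_dim if first else math.sqrt(6.0 / in_dim) / w0
        nn.init.uniform_(self.linear.weight, -bound, bound)

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))

class ConditionedSiren(nn.Module):
    """Tiny SIREN MLP: (3D point, condition vector) -> 3D offset."""
    def __init__(self, cond_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            SineLayer(3 + cond_dim, hidden, first=True),
            SineLayer(hidden, hidden),
            nn.Linear(hidden, 3),
        )

    def forward(self, points, cond):
        cond = cond.expand(points.shape[0], -1)   # broadcast the condition to all points
        return self.net(torch.cat([points, cond], dim=-1))

# toy usage: 100 lip vertices conditioned on a speech feature
offsets = ConditionedSiren()(torch.rand(100, 3), torch.rand(1, 64))
```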
Chen_Divide_and_Conquer_Answering_Questions_With_Object_Factorization_and_Compositional_CVPR_2023
Abstract Humans have the innate capability to answer diverse questions, which is rooted in the natural ability to correlate different concepts based on their semantic relationships and decompose difficult problems into sub-tasks. On the contrary, existing visual reasoning methods assume training samples that capture every possible object and reasoning problem, and rely on black-boxed models that commonly exploit statistical priors. They have yet to develop the capability to address novel objects or spurious biases in real-world scenarios, and also fall short of interpreting the rationales behind their decisions. Inspired by humans' reasoning of the visual world, we tackle the aforementioned challenges from a compositional perspective, and propose an integral framework consisting of a principled object factorization method and a novel neural module network. Our factorization method decomposes objects based on their key characteristics, and automatically derives prototypes that represent a wide range of objects. With these prototypes encoding important semantics, the proposed network then correlates objects by measuring their similarity on a common semantic space and makes decisions with a compositional reasoning process. It is capable of answering questions with diverse objects regardless of their availability during training, and overcoming the issues of biased question-answer distributions. In addition to the enhanced generalizability, our framework also provides an interpretable interface for understanding the decision-making process of models. Our code is available at https://github.com/szzexpoi/POEM.
1. Introduction One of the fundamental goals in artificial intelligence is to develop systems that are able to reason with the complexity of real-world data to make decisions. Most existing visual question answering (VQA) methods [2, 13, 28, 29, 33, 34, 38, 49, 57] assume a complete overlap between objects involved in training and testing, and commonly rely on the spurious distributions of questions and answers [39]. As a result, they have limited generalizability toward real-life visual reasoning, and also lack the ability to justify the reasoning process that leads to the answers. "All mammals are animals. All elephants are mammals. Therefore, all elephants are animals [5]." The wide application of syllogistic logic reflects key characteristics of the ways humans reason about the world. Unlike models [2, 38, 49] that utilize implicit features and heavily exploit statistical priors, humans correlate diverse objects from the compositional perspective based on their shared characteristics [26] and tackle problems with a structured reasoning process, which is both generalizable and interpretable. To address the complexity of real-world problems, this study aims to develop object factorization and compositional reasoning capabilities in models. As shown in Figure 1, our approach bridges diverse objects by projecting them onto a common space formed by discriminative prototypes (e.g., round shape, stuffed toy), and formulates the reasoning process with atomic steps [48] representing essential reasoning skills (e.g., Find, Relate). The prototypes are derived with object factorization, and they represent important semantics of objects (e.g., honey jar → <round shape, container ...>, teddy bear → <bear, stuffed toy ...>). With an improved understanding of semantic relationships, our framework correlates objects (e.g., honey jar and container, stuffed toy and teddy bear) based on their commonalities in characteristics, leading to enhanced robustness against the diversity of objects and data biases. It also allows interpretations of the model's reasoning process consistent with the ways humans describe their own thinking [8]. Compared to previous studies [2, 21, 33, 34, 48, 49], our major distinction lies in (1) the composition in two important dimensions of visual reasoning, i.e., objects and the reasoning process, and (2) a tight coupling between them.
Figure 1. Overview of our method that represents objects with semantically meaningful prototypes and makes decisions via an explicit reasoning process. Honey jar is a novel object unseen during training. Note that our prototypes are not limited to a set of manually defined categories, but learned from factorizing objects to encode broader characteristics (e.g., shapes, colors, object categories).
Instead of using black-boxed features or a direct mapping between question and answer that is vulnerable to object
diversity or data biases, our method decomposes objects into bases representing discriminative semantics, and develops a prototypical neural module network to explicitly bridge objects with a compositional reasoning paradigm. The proposed method naturally approaches generalizability with its compositional nature, handling novel objects and variable data distributions. It also provides a transparent interface for interpreting how models parse objects based on their characteristics and incorporate them for visual reasoning. To summarize, our major contributions are as follows: 1. We identify the significance of tightly coupling the compositionality between objects and the reasoning process, and for the first time investigate its effectiveness in generalizable and interpretable reasoning.
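To illustrate the idea of relating objects through a common prototype space (independent of the paper's actual architecture, whose details are not given here), the sketch below projects object features onto a set of learned prototype vectors via cosine similarity, so that two objects sharing characteristics obtain similar prototype-coefficient vectors; all names and dimensions are hypothetical.

```python
import torch
import torch.nn.functional as F

def prototype_coefficients(obj_feats: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
    """Project object features (N, D) onto K prototype vectors (K, D) with cosine
    similarity, returning an (N, K) coefficient matrix over the shared semantic space."""
    obj_n = F.normalize(obj_feats, dim=-1)
    proto_n = F.normalize(prototypes, dim=-1)
    return obj_n @ proto_n.t()

# toy example: a novel object can still be compared with seen ones via the shared space
prototypes = torch.randn(32, 256)            # e.g., "round shape", "container", ...
seen, novel = torch.randn(5, 256), torch.randn(1, 256)
sim = prototype_coefficients(novel, prototypes) @ prototype_coefficients(seen, prototypes).t()
```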
Chowdhury_SceneTrilogy_On_Human_Scene-Sketch_and_Its_Complementarity_With_Photo_and_CVPR_2023
Abstract In this paper, we extend scene understanding to include that of human sketch. The result is a complete trilogy of scene representation from three diverse and complementary modalities: sketch, photo, and text. Instead of learning a rigid three-way embedding and being done with it, we focus on learning a flexible joint embedding that fully supports the "optionality" that this complementarity brings. Our embedding supports optionality on two axes: (i) optionality across modalities, i.e., use any combination of modalities as query for downstream tasks like retrieval; (ii) optionality across tasks, i.e., simultaneously utilising the embedding for either discriminative (e.g., retrieval) or generative tasks (e.g., captioning). This provides flexibility to end-users by exploiting the best of each modality, therefore serving the very purpose behind our proposal of a trilogy in the first place. First, a combination of information-bottleneck and conditional invertible neural networks disentangles the modality-specific component from the modality-agnostic one in sketch, photo, and text. Second, the modality-agnostic instances from sketch, photo, and text are synergised using a modified cross-attention. Once learned, we show our embedding can accommodate a multi-facet of scene-related tasks, including those enabled for the first time by the inclusion of sketch, all without any task-specific modifications. Project Page: https://pinakinathc.github.io/scenetrilogy
1. Introduction Scene understanding sits at the very core of computer vision. As object-level research matures [24, 32], an encouraging shift can be observed in recent years on scene-level tasks, e.g., scene recognition [113], scene captioning [55], scene synthesis [34], and scene retrieval [13, 57]. Scene research has generally progressed from that of a single modality [113, 114] to the very recent focus on multi-modality [3, 13, 19]. The latter setting not only triggered a series of practical applications [34, 57, 101, 115] but importantly helped to cast insights into scene understanding on a conceptual level (i.e., what is really being perceived by humans).
Figure 1. Some scenes are easy to describe via sketch; for others, text is better. We provide the option to sketch, write, or both (sketch+text). For "optionality" across tasks, we disentangle sketch, text, and photo into a discriminative (e.g., retrieval) part $f^{ag}$ shared across modalities, and a generative (e.g., captioning) part specific to one modality ($f^{sp}_s$, $f^{sp}_t$, $f^{sp}_p$). This supports a multi-facet of scene-related tasks without task-specific modifications.
To date, research on multi-modal scene understanding has mainly focused on two modalities, text and photo [59, 61, 62], via applications such as text-based scene retrieval (TBIR) [35] and scene captioning [23, 61, 62]. This paper follows the said trend of multi-modal scene understanding and extends it to also include human scene-sketch. Sketch is identified because of its unique characteristics of being both expressive and subjective, evident in an abundance of object-level sketch research [11], and very recently at the scene level [19]. To verify there is indeed useful complementarity that sketch can bring to multi-modal scene understanding, we first conducted two pilot studies: (i) on expressivity, we compare text and sketch in terms of scene image retrieval, and (ii) on subjectivity, we test a novel task of subjective captioning where sketch or parts-of-speech [26] are used as guidance for image captioning. On (i), results show there is significant disagreement in terms of retrieval accuracy when one is used as query over the other, indicating there is complementary information between the two modalities. On (ii), sketch is shown to offer more subjectivity as a guiding signal than text, when quantified using common metrics such as BLEU-4 [67] and CIDEr [95].
1(b)), where each of the three modalities is disentangled into a modality-specific component (f^sp_s, f^sp_p, f^sp_t, for sketch, photo, and text) and a shared modality-agnostic component (f^ag). The idea is that the modality-specific component holds information specific to each modality (e.g., drawing style for sketch, texture for photo, and grammatical knowledge for text). It follows that filtering away the modality-specific parts from each of the three modalities leaves a shared modality-agnostic part that carries abstract semantics common to all three modalities (as shown in Fig. 1(b)). How optionality is supported in such a disentangled space then becomes trivial (Fig. 1(c),(d)). To achieve optionality across tasks, we simply use modality-agnostic information as the joint embedding to perform discriminative tasks (e.g., cross-modal retrieval); for cross-modal generative tasks (e.g., captioning), we combine modality-agnostic information (from the source) with modality-specific information (from the target) to generate the target modality. Optionality across modalities is a little harder; here we make use of a cross-attention [50] mechanism to capture the synergy across the modality-agnostic components. Benefiting from our optionality-enabled embedding, we can perform a multi-facet of tasks without any task-specific modifications: (i) Fig. 1(c) shows cross-modal discriminative tasks such as sketch-based image retrieval (SBIR) using (f^ag_s ↔ f^ag_p), text-based image retrieval (TBIR) using (f^ag_t ↔ f^ag_p), or sketch+text based image retrieval (STBIR) using (f^ag_s + f^ag_t ↔ f^ag_p). (ii) Fig. 1(d) shows cross-modal generative tasks such as image captioning (photo branch) using f^ag_p + f^sp_t → f_t to generate textual descriptions f_t. Similarly, for sketch captioning (sketch branch) we use f^ag_s + f^sp_t → f_t. (iii) Last but not least, to demonstrate what the expressiveness of human sketch can bring to scene understanding, we introduce a novel task of subjective captioning, where we guide image captioning using sketch as a signal (subjective branch) as f^ag_p + f^ag_s → f_t. In summary, our contributions are: (i) We extend multi-modal scene understanding to include human scene-sketches, thereby completing a trilogy of scene representation from three diverse and complementary modalities. (ii) We provide optionality to end-users by learning a flexible joint embedding that supports optionality across modalities and optionality across tasks. (iii) Using computationally efficient techniques such as an information bottleneck, conditionally invertible neural networks, and a modified cross-attention mechanism, we model this flexible joint embedding. (iv) Once learned, our embedding accommodates a multi-facet of scene-related tasks like retrieval and captioning.
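A minimal sketch of the "optionality across modalities" idea on the retrieval side is given below, assuming the disentanglement network has already produced L2-normalised modality-agnostic embeddings. The tensor sizes and the simple additive fusion of sketch and text queries are illustrative stand-ins, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

# Hypothetical modality-agnostic embeddings (f^ag): a gallery of N photos plus
# one sketch query and one text query. In the paper these come from the
# disentanglement network; here random vectors stand in for them.
N, d = 1000, 256
f_ag_photo = F.normalize(torch.randn(N, d), dim=-1)
f_ag_sketch = F.normalize(torch.randn(d), dim=-1)
f_ag_text = F.normalize(torch.randn(d), dim=-1)

def retrieve(query, gallery, topk=5):
    """Rank gallery photos by cosine similarity to a modality-agnostic query."""
    return (gallery @ query).topk(topk).indices

# Optionality across modalities: one shared gallery serves SBIR, TBIR and STBIR.
sbir = retrieve(f_ag_sketch, f_ag_photo)                         # sketch only
tbir = retrieve(f_ag_text, f_ag_photo)                           # text only
stbir = retrieve(F.normalize(f_ag_sketch + f_ag_text, dim=-1),   # sketch + text
                 f_ag_photo)
print(sbir, tbir, stbir)
```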
Huang_RefSR-NeRF_Towards_High_Fidelity_and_Super_Resolution_View_Synthesis_CVPR_2023
Abstract We present Reference-guided Super-Resolution Neural Radiance Field (RefSR-NeRF), which extends NeRF to super-resolution and photorealistic novel view synthesis. Despite NeRF's extraordinary success in the neural rendering field, it suffers from blur in high-resolution rendering because its inherent multilayer perceptron struggles to learn high-frequency details and incurs a computational explosion as resolution increases. Therefore, we propose RefSR-NeRF, an end-to-end framework that first learns a low-resolution NeRF representation and then reconstructs the high-frequency details with the help of a high-resolution reference image. We observe that simply introducing pre-trained models from the literature tends to produce unsatisfactory artifacts due to the divergence in the degradation model. To this end, we design a novel lightweight RefSR model to learn the inverse degradation process from NeRF renderings to the target HR ones. Extensive experiments on multiple benchmarks demonstrate that our method exhibits an impressive trade-off among rendering quality, speed, and memory usage, outperforming or on par with NeRF and its variants while achieving a 52× speedup with minor extra memory usage. Code will be available at: Mindspore and Pytorch
1. Introduction Neural Radiance Field (NeRF) [29], which was first proposed by Mildenhall et al. in 2020, is leading a trend in the neural rendering field for its realism and representation parsimony, showing great potential in various downstream industrial applications such as immersive view synthesis [1, 2, 41], 3D scene reconstruction [24, 60], autonomous driving [38, 18, 31, 50], aerial surveying [9, 21], digital human deformation [47, 62, 52, 36], and robot navigation and environment simulation [38]. In essence, NeRF synthesizes photorealistic renderings by encoding the volumetric density and color of a scene within the weights of a coordinate-based multilayer perceptron (MLP) [2], and its magic manifests in reconstructing an intact 3D spatial representation from a handful of sparse observations while simultaneously retaining a highly compact scene representation [58]. While this approach works well when the training and testing images observe the scene content at low resolution, NeRF and its follow-ups exhibit significant blurry effects when the resolution goes up. This can be attributed to the following two reasons. First of all, MLPs perform poorly at regressing high-frequency details from uniformly sampled low-dimensional 5D coordinates [39]. Although positional encoding based on Fourier features greatly alleviates this inherent low-frequency bias in neural networks, there is still considerable room for improvement in photorealism. In addition, as the resolution increases, or in other words, when the scene becomes more complex, NeRF requires a larger MLP to encode more high-frequency information [38]. However, simply enlarging the MLP only achieves minor gains in detail restoration [33] and significantly degrades rendering efficiency. Moreover, due to the dense sampling strategy and frequent MLP queries, rendering a NeRF is agonizingly slow. It takes more than one day to train for a scene and 30 seconds to render an 800×800 image even running on a high-performance desktop GPU, and this issue becomes even more unacceptable as the resolution continues to ascend. To accelerate rendering, several follow-up works have been proposed from different perspectives. One of the most straightforward solutions is introducing a voxel-grid representation to NeRF to model local properties of geometries [22, 48, 37, 58, 10, 49, 13, 30, 5]. Despite the fact that this paradigm achieves two or three orders of magnitude of speedup [37, 10], such methods consume massive storage and compromise rendering quality, which is unbearable for resource-limited mobile devices. To tackle the issue of lacking high-frequency details, a surge of interest has gone into introducing more advanced positional encoding algorithms [1, 39]. However, these approaches brought minor improvements. To this end, we propose that since MLPs have a natural defect in learning high-frequency details, it is better to let MLPs learn low-frequency information only and introduce a high-resolution reference frame to provide high-frequency details in each scene. This motivates our final solution, as shown in Figure 1: an end-to-end reference-guided super-resolution NeRF framework which is a deft combination of NeRF and the experience from the RefSR community.
Specifically, we first downsample the HR training images to low-resolution ones, then optimize the low-resolution NeRF with patch shuffle, followed by a lightweight RefSR model that takes an HR reference image as its input to perform the upsampling and produce our final HR renderings. We observe that simply introducing pre-trained RefSR models from the literature tends to produce unsatisfactory artifacts due to the divergence of the degradation model from HR to NeRF rendering. This prompted us to design a novel, effective, and lightweight RefSR model. To sum up, RefSR-NeRF realizes a well-exemplified trade-off among rendering quality, speed, and memory usage, outperforming or on par with NeRF and its variants while achieving a 52× speedup with minor extra memory and storage usage. The main contributions of this paper can be summarized as follows: • We propose a novel end-to-end RefSR-NeRF framework that extends NeRF to high-resolution and photorealistic novel view synthesis. • RefSR-NeRF can act as a novel NeRF acceleration paradigm, which can significantly alleviate the problem of NeRF computation and cache explosion as resolution increases. • Extensive experiments show that RefSR-NeRF qualitatively and quantitatively outperforms baseline works by a large margin while being 52× faster.
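As a rough illustration of the downsample-then-upsample pipeline described above, the sketch below fuses a bilinearly upsampled low-resolution rendering with features from an HR reference image. The module, channel widths, and residual design are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRefSR(nn.Module):
    """Illustrative reference-guided upsampler: fuse a bilinearly upsampled
    low-resolution NeRF rendering with features from an HR reference frame.
    Not the paper's RefSR model, just a minimal stand-in."""
    def __init__(self, ch=32, scale=4):
        super().__init__()
        self.scale = scale
        self.enc_lr = nn.Conv2d(3, ch, 3, padding=1)
        self.enc_ref = nn.Conv2d(3, ch, 3, padding=1)
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, lr_render, hr_reference):
        up = F.interpolate(lr_render, scale_factor=self.scale,
                           mode="bilinear", align_corners=False)
        feat = torch.cat([self.enc_lr(up), self.enc_ref(hr_reference)], dim=1)
        # Predict a residual: the LR branch carries the low frequencies,
        # the reference branch supplies the missing high-frequency detail.
        return up + self.fuse(feat)

lr = torch.rand(1, 3, 100, 100)     # rendering from the low-resolution NeRF
ref = torch.rand(1, 3, 400, 400)    # high-resolution reference image
hr = TinyRefSR()(lr, ref)           # (1, 3, 400, 400)
```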
Jeon_Polarimetric_iToF_Measuring_High-Fidelity_Depth_Through_Scattering_Media_CVPR_2023
Abstract Indirect time-of-flight (iToF) imaging allows us to capture dense depth information at a low cost. However, iToF imag-ing often suffers from multipath interference (MPI) arti-facts in the presence of scattering media, resulting in se-vere depth-accuracy degradation. For instance, iToF cam-eras cannot measure depth accurately through fog because ToF active illumination scatters back to the sensor before reaching the farther target surface. In this work, we pro-pose a polarimetric iToF imaging method that can capture depth information robustly through scattering media. Our observations on the principle of indirect ToF imaging and polarization of light allow us to formulate a novel computa-tional model of scattering-aware polarimetric phase mea-surements that enables us to correct MPI errors. We first devise a scattering-aware polarimetric iToF model that can estimate the phase of unpolarized backscattered light. We then combine the optical filtering of polarization and our computational modeling of unpolarized backscattered light via scattering analysis of phase and amplitude. This allows us to tackle the MPI problem by estimating the scattering energy through the participating media. We validate our method on an experimental setup using a customized off-the-shelf iToF camera. Our method outperforms baseline methods by a significant margin by means of our scattering model and polarimetric phase measurements.
1. Introduction Time-of-Flight (ToF) imaging is the cornerstone of modern 3D imaging technology and has received great attention across diverse fields, including computer graphics and vision. Its notable applications include autonomous driving, 3D motion capture, digital-human reconstruction, human-computer interfaces, robotics, etc. Modern ToF cameras can be broadly categorized into direct and indirect systems. Direct ToF measures the round-trip time of photons emitted from an illumination source until they travel back to the ToF detector. Indirect ToF, referred to as amplitude-modulated continuous-wave ToF, utilizes a temporally modulated illumination source and computationally estimates the round-trip time of photons from modulation phase changes [21].

Figure 1. We introduce a polarimetric iToF imaging method that can estimate depth robustly through scattering media. (a) A photograph of the input scene without fog. (b) Ground-truth depth measured without fog. (c) Input iToF amplitude map captured with fog. (d) Depth estimated by a conventional iToF camera with fog. (e) Depth improved by naïve cross-polarization filtering. (f) Our iToF depth measurement result is fairly close to the GT depth.

The indirect acquisition principle lowers the system-building cost by departing from the necessity of the picosecond-accurate illumination, detector, and synchronization module used in direct ToF. Furthermore, indirect ToF achieves low-cost instant 3D imaging of the entire field of view with flood-fill illumination. As a result, indirect ToF cameras have achieved remarkable success in commercial markets, e.g., Microsoft Azure Kinect and PMD sensors. However, it is also the indirect imaging scheme that poses critical limitations on robust 3D imaging. One of the notable resulting challenges is multi-path interference (MPI). Light emitted from the ToF illumination module travels through a scene and reaches the ToF sensor. During light transport, some photons interact with only one scene point via direct reflection, thus providing accurate depth information of that point. However, other photons undergo multiple reflections on different scene points because of indirect reflection. If a pixel on the ToF sensor receives a mixture of direct and indirect photons, the measured phase shift no longer corresponds to the analytical phase shift of the target scene depth. Thus, it degrades the accuracy of the reconstructed depth. The MPI problem becomes more severe in the presence of scattering media such as fog (see Figure 1(a) for an example) because light photons experience numerous indirect reflections with the scattering particles. In this case, the scattered light energy often exceeds that of light interacting with a target scene point, resulting in extremely inaccurate scene depth estimation as shown in Figure 1(c), i.e., the measured distance through fog tends to be closer than the actual distance.
This acts as a critical hurdle for indirect ToF cameras to be deployed in the wild, e.g., fire-rescuing robots, au-tonomous driving under fog, and underwater navigation. In this paper, we propose a polarimetric iToF imaging method robust to scattering environments. Our key idea is to revisit the polarization of light and the scattering theory about intensity attenuation and depolarization. Our method allows for accurate scene depth estimation even in the pres-ence of severe scattering media, as shown in Figure 1(d). We leverage the polarization property of light that the backscattered light from scattering particles better main-tains the polarization state of the emitted photons than the light that travels farther to a surface [6]. We first configure the orthogonal polarization modulation of ToF illumination and detection to initially filter out the polarized backscat-tered light optically. While existing methods [7, 13, 36, 39] also demonstrate the effectiveness of this cross-polarization setup, one critical problem of cross-polarization setup is that the assumption on the polarized state of backscattered light does not hold in practice because backscattered light undergoes a change of polarization throughout scattering events toward an unpolarized state [37]. This results in lim-ited depth accuracy. To handle this, we devise a computational method that can eliminate the remaining unpolarized backscattered light based on the indirect ToF’s signal representation: phase and amplitude. First, we estimate the phase of unpolar-ized backscattered light by revisiting the scattering model of intensity attenuation and depolarization [33]. Second, the amplitude of unpolarized backscattered light is estimated based on the observation that the amplitude-offset ratio is consistent for non-scattered light. Then, our method sub-tracts the unpolarized backscattered light from the initial cross-polarization measurements, resulting in the estimates of scattering-free indirect ToF measurements. Our polari-metric iToF imaging method can enhance depth accuracy significantly, outperforming existing baselines for depth es-timation through scattering media, as shown in Figure 1(d).In summary, our contributions are: • A scattering-aware polarimetric phasor model specifi-cally designed for polarimetric iToF imaging, based on the scattering theory of light intensity attenuation and depolarization. • An efficient scattering phasor optimization that can es-timate the phase of unpolarized backscattered light via scattering analysis of phase and amplitude in iToF.
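To make the phasor view of MPI concrete, the toy numpy sketch below models each iToF return as a complex phasor whose phase encodes round-trip distance, and shows how subtracting an estimated backscatter component restores the target depth. The modulation frequency, the amplitudes, and the assumption of a perfectly estimated backscatter phasor are illustrative choices, not the paper's calibration or scattering model.

```python
import numpy as np

C = 3e8          # speed of light [m/s]
FREQ = 20e6      # modulation frequency [Hz] (assumed value)

def phasor(amplitude, depth_m):
    """iToF return as a complex phasor; the phase encodes round-trip time."""
    phase = 4 * np.pi * FREQ * depth_m / C
    return amplitude * np.exp(1j * phase)

def depth_from_phasor(z):
    return np.angle(z) % (2 * np.pi) * C / (4 * np.pi * FREQ)

# Toy scene: a target at 5 m plus fog backscatter concentrated near 1 m.
direct = phasor(amplitude=0.3, depth_m=5.0)
backscatter = phasor(amplitude=0.7, depth_m=1.0)
measured = direct + backscatter                 # MPI-corrupted measurement

print(depth_from_phasor(measured))              # biased toward the fog (< 5 m)

# If the backscatter's amplitude and phase can be estimated (optically via
# cross-polarization plus a scattering model, as in the paper), subtracting
# its phasor recovers the direct component and hence the correct depth.
estimated_backscatter = phasor(0.7, 1.0)        # assumed perfect estimate here
print(depth_from_phasor(measured - estimated_backscatter))   # ~5.0 m
```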
Chen_DisCo-CLIP_A_Distributed_Contrastive_Loss_for_Memory_Efficient_CLIP_Training_CVPR_2023
Abstract We propose DisCo-CLIP, a distributed memory-efficient CLIP training approach, to reduce the memory consumption of the contrastive loss when training contrastive learning models. Our approach decomposes the contrastive loss and its gradient computation into two parts, one to calculate the intra-GPU gradients and the other to compute the inter-GPU gradients. According to our decomposition, only the intra-GPU gradients are computed on the current GPU, while the inter-GPU gradients are collected via allreduce from other GPUs instead of being repeatedly computed on every GPU. In this way, we can reduce the GPU memory consumption of the contrastive loss computation from O(B^2) to O(B^2/N), where B and N are the batch size and the number of GPUs used for training. Such a distributed solution is mathematically equivalent to the original non-distributed contrastive loss computation, without sacrificing any computation accuracy. It is particularly efficient for large-batch CLIP training. For instance, DisCo-CLIP can enable contrastive training of a ViT-B/32 model with a batch size of 32K or 196K using 8 or 64 A100 40GB GPUs, compared with the original CLIP solution which requires 128 A100 40GB GPUs to train a ViT-B/32 model with a batch size of 32K.
1. Introduction Vision-language representation learning from massive image-text pairs has recently attracted tremendous attention for its great potential in many applications such as zero-shot classification and text-image retrieval. Representative works include CLIP [27], ALIGN [17], Florence [47], CoCa [46], and BASIC [26], which all leverage hundreds of millions or even billions of image-text pairs collected from the Web to learn a semantic-rich and language-aligned visual representation [22]. As the web-collected data inevitably contain noise, CLIP [27] for the first time applies contrastive learning on 400M image-text pairs, which implies a weak but more proper assumption about the data: the relevance between paired image and text is greater than that between unpaired image and text. Owing to its demonstrated performance in CLIP, contrastive learning has been widely adopted in subsequent works. Accordingly, several image-text datasets of increasingly larger scale have also been developed and made publicly available, such as Conceptual 12M [3], YFCC 100M [38], WIT 37.6M [37], LAION-400M [35], and LAION-5B [34]. The goal of contrastive learning in CLIP is to learn an alignment between image and text via two encoders. That is, it encourages paired image and text (called a positive pair) to be similar and meanwhile enforces unpaired image and text (called a negative pair) to be dissimilar. For any positive image-text pair, as there is normally an unlimited number (up to the total number of images or texts in a dataset) of negative image-text pairs, it is crucial to include a sufficiently large number of negative pairs in the contrastive loss to make the representation learning effective, as validated in all related works such as CLIP [27], Florence [47], OpenCLIP [16], and BASIC [26]. Specifically, BASIC shows that a larger batch size, together with a larger dataset and a larger model, theoretically leads to better generalization performance. However, a fundamental technical challenge in training a CLIP-like model is how to enlarge its batch size under the constraint of limited GPU memory. For instance, when the batch size is 65,536, the similarity matrix for all image-text pairs in the batch will cost about 16GB using Float32. As the backbone part also consumes a significant portion of GPU memory, especially for large backbones such as ViT-Large or ViT-Huge [9], scaling up the batch size presents a great challenge, usually requiring hundreds of V100 or A100 GPUs [16, 27, 47], which are inaccessible to most research scientists. In this work, we develop a distributed solution called DisCo-CLIP for contrastive loss computation, which can save a large amount of memory for the contrastive loss and make CLIP training more memory-efficient. Our method starts from a decomposition of the original contrastive loss. Based on this decomposition, we divide the contrastive loss into two parts, one to calculate the intra-GPU loss and gradients, and the other to calculate the inter-GPU loss and gradients. For a mini-batch on the n-th GPU (hereinafter called its hosting GPU), its intra-GPU gradients are calculated on its hosting GPU, and its inter-GPU gradients are collected from other GPUs.
DisCo is an exact solution, mathematically equivalent to the original non-distributed contrastive loss, but more memory- and computation-efficient. It can decrease the memory cost of the contrastive loss from O(B^2) to O(B^2/N), where B and N are the batch size and the number of GPUs. When N equals 64, around 97% (see Sec. 4.1 for details) of the memory, and similarly the computational cost, of the contrastive loss can be saved. Thus, using DisCo in CLIP, we can enable contrastive training with a larger batch size. Using 8 Nvidia A100 40GB GPUs, DisCo-CLIP can enable contrastive training of a ViT-B/32 model with a batch size of 32,768. Using 64 A100 40GB GPUs, DisCo-CLIP can train the same model with a larger batch size of 196K. Our contributions are twofold. • We propose a novel distributed contrastive loss solution called DisCo for memory-efficient CLIP training, which can significantly reduce the memory consumption of the contrastive loss computation. Such a solution enables a larger batch size for contrastive training using the same computing resources without sacrificing any computation accuracy. • We further validate that training with a larger batch size can further improve the performance of contrastive learning models.
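A single-process numerical sketch of the loss decomposition is given below: it checks that summing per-shard (B/N)×B blocks of the similarity matrix reproduces the full B×B contrastive loss exactly. The temperature, the sizes, and the omission of the actual all-gather/allreduce communication are simplifications for illustration, not DisCo-CLIP's implementation.

```python
import torch
import torch.nn.functional as F

B, N, d = 64, 8, 32                      # batch size, "GPUs", feature dim
img = F.normalize(torch.randn(B, d), dim=-1)
txt = F.normalize(torch.randn(B, d), dim=-1)
labels = torch.arange(B)

# Non-distributed reference: the full B x B logits on one device.
full = F.cross_entropy(img @ txt.t() / 0.07, labels) + \
       F.cross_entropy(txt @ img.t() / 0.07, labels)

# DisCo-style decomposition: each "GPU" n only materialises the (B/N) x B
# block of logits for its own shard; summing the per-shard losses reproduces
# the full loss exactly (gradient exchange via allreduce is omitted here).
shard = B // N
pieces = []
for n in range(N):
    rows = slice(n * shard, (n + 1) * shard)
    block_i2t = img[rows] @ txt.t() / 0.07          # (B/N, B) instead of (B, B)
    block_t2i = txt[rows] @ img.t() / 0.07
    pieces.append(F.cross_entropy(block_i2t, labels[rows], reduction="sum") +
                  F.cross_entropy(block_t2i, labels[rows], reduction="sum"))
decomposed = torch.stack(pieces).sum() / B

print(torch.allclose(full, decomposed, atol=1e-5))   # True
```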
Chen_GM-NeRF_Learning_Generalizable_Model-Based_Neural_Radiance_Fields_From_Multi-View_Images_CVPR_2023
Abstract In this work, we focus on synthesizing high-fidelity novel view images for arbitrary human performers, given a set of sparse multi-view images. It is a challenging task due to the large variation among articulated body poses and heavy self-occlusions. To alleviate this, we introduce an effec-tive generalizable framework Generalizable Model-based Neural Radiance Fields (GM-NeRF) to synthesize free-viewpoint images. Specifically, we propose a geometry-guided attention mechanism to register the appearance code from multi-view 2D images to a geometry proxy which can alleviate the misalignment between inaccurate geome-try prior and pixel space. On top of that, we further conduct neural rendering and partial gradient backpropagation for efficient perceptual supervision and improvement of the per-ceptual quality of synthesis. To evaluate our method, we conduct experiments on synthesized datasets THuman2.0 and Multi-garment, and real-world datasets Genebody and ZJUMocap. The results demonstrate that our approach out-performs state-of-the-art methods in terms of novel view synthesis and geometric reconstruction.
1. Introduction 3D digital human reconstruction has a wide range of applications in movie production, telepresence, 3D immersive communication, and AR/VR games. Traditional digital human production relies on dense camera arrays [10, 14] or depth sensors [12, 20] followed by complex graphics rendering pipelines for high-quality 3D reconstruction, which limits its availability to the general public. Reconstructing 3D humans from 2D images captured by sparse RGB cameras is very attractive due to its low cost and convenience. This field has been studied for decades [21, 46, 50]. However, reconstruction from sparse RGB cameras is still quite challenging because of: 1) heavy self-occlusions of the articulated human body; 2) inconsistent lighting and sensor parameters between different cameras; 3) highly non-rigid and diverse clothes. (Code is available at https://github.com/JanaldoChen/GM-NeRF.)

Figure 1. The effect of an inaccurately estimated SMPL. Compared with GNR [8] and KeypointNeRF [26], our method still yields a reasonable result.

In recent years, with the rise of learning-based methods, we can reconstruct high-quality digital humans from sparse cameras. Learning-based methods [32, 36, 43, 49, 52] have made great progress; however, they lack multi-view geometric consistency due to the mere usage of a 2D neural rendering network. To address this problem, many recent works [5, 47, 54] adopt neural radiance fields as 3D representations, which achieves outstanding performance on novel view synthesis. However, these methods are not robust to unseen poses without the guidance of a human geometric prior. To better generalize to unseen poses, NeuralBody [31] introduces a statistical body model, SMPL [23], into neural radiance fields, which can reconstruct vivid digital humans from a sparse multi-view video. However, NeuralBody is designed for identity-specific scenarios, which means it requires laborious data collection and long training to obtain the model for one person. Such a limitation restricts its application in general real-world scenarios. In this work, we focus on synthesizing high-fidelity novel view images for arbitrary human performers from a set of sparse multi-view images. Towards this goal, some very recent works [7, 8, 19, 26] propose to aggregate multi-view pixel-aligned features using SMPL as a geometric prior. However, these methods usually assume perfect geometry (e.g., accurate SMPL [23] estimation from 2D images), which is not applicable in practical applications. In practice, the geometry error does affect the reconstruction performance significantly. As illustrated in the red box of Fig. 1, when the estimated SMPL does not align well with the RGB image, prior SMPL-dependent methods [8, 26] yield blurry and distorted results. Such a performance gap is caused by the misalignment between the 3D geometry (i.e., SMPL) and the pixel space (i.e., pixel-aligned features and the ground-truth image). Specifically, the misalignment will cause: 1) blur and distortion when fusing the geometry and pixel-aligned features; 2) unsuitable supervision during training with a pixel-wise loss like L1 or L2.
To alleviate the issue of misalignment, we propose to take the geome-try code as a proxy and then register the appearance code onto the geometry through a novel geometry-guided atten-tion mechanism. Furthermore, we leverage perceptual loss to reduce the influence of misalignment and promote sharp image synthesis, which is evaluated at a higher level with a larger perceptual field. It is non-trivial to apply perceptual loss in NeRF-based methods as the perceptual loss requires a large patch size as input which is memory-consuming through volume rendering. We introduce 2D neural ren-dering and partial gradient backpropagation to alleviate the memory requirement and enhance the perceptual quality. To summarize, our work contributes as follows: •A novel generalizable model-based framework GM-NeRF is proposed for the free-viewpoint synthesis of arbi-trary performers. •To alleviate the misalignment between 3D geometry and the pixel space, we propose geometry-guided attention to aggregate multi-view appearance and geometry proxy. •To enable perceptual loss supervision to further allevi-ate misalignment issues, we adopt several efficient designs including 2D neural rendering and partial gradient back-propagation.
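The sketch below illustrates the general shape of a geometry-guided attention block, with one geometry-proxy token per 3D sample querying pixel-aligned appearance features from several source views. The dimensions, the residual/LayerNorm wrapper, and the use of a stock multi-head attention layer are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class GeometryGuidedAttention(nn.Module):
    """Illustrative sketch: a geometry code (e.g. sampled from an SMPL-anchored
    feature volume) queries pixel-aligned appearance features from V source
    views, so appearance is registered onto the geometry proxy rather than
    fused by plain averaging."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, geom_code, multiview_feats):
        # geom_code:        (P, 1, dim)  one token per 3D sample point
        # multiview_feats:  (P, V, dim)  pixel-aligned features from V views
        fused, _ = self.attn(query=geom_code, key=multiview_feats,
                             value=multiview_feats)
        return self.norm(geom_code + fused)          # residual + normalisation

P, V, dim = 4096, 4, 64                              # sample points, views, channels
geom = torch.randn(P, 1, dim)
app = torch.randn(P, V, dim)
out = GeometryGuidedAttention(dim)(geom, app)        # (P, 1, dim)
```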
Gu_Mobile_User_Interface_Element_Detection_via_Adaptively_Prompt_Tuning_CVPR_2023
Abstract Recent object detection approaches rely on pretrained vision-language models for image-text alignment. However, they fail to detect Mobile User Interface (MUI) elements, since these carry additional OCR information, which describes their content and function but is often ignored. In this paper, we develop a new MUI element detection dataset named MUI-zh and propose an Adaptively Prompt Tuning (APT) module to take advantage of discriminative OCR information. APT is a lightweight and effective module that jointly optimizes category prompts across different modalities. For every element, APT uniformly encodes its visual features and OCR descriptions to dynamically adjust the representation of frozen category prompts. We evaluate the effectiveness of our plug-and-play APT upon several existing CLIP-based detectors for both standard and open-vocabulary MUI element detection. Extensive experiments show that our method achieves considerable improvements on two datasets. The dataset is available at github.com/antmachineintelligence/MUI-zh.
1. Introduction While significant progress has been made in object detection [2,17,23,24,28] with the development of deep neural networks, less attention has been paid to its challenging variant in the Mobile User Interface (MUI) domain [1]. Instead of personal computers and books, people nowadays spend more time on mobile phones due to the convenience of various apps for daily life. However, apps may carry risks, including illegal gambling [10, 19], malware [31,32], security [4,8], privacy [14,15], copy/fake [27], and fraudulent behaviors [6, 13], which need to be detected and flagged as required by government authorities and app markets. These risks may occur in one element or even hide in the subpage reached after clicking one element. As a result, there is a great practical need for an accurate, robust, and even open-vocabulary MUI element detection approach. Such technology can benefit a great variety of scenarios as mentioned above, towards building a better mobile ecosystem [13, 30].

Figure 1. Two MUI samples from the VINS and MUI-zh datasets. Compared to VINS, we additionally obtain OCR descriptions as supplemental information in MUI-zh; we further link OCR descriptions and element annotations with the same color.

This paper proposes MUI element detection as a variant of the object detection task and develops a corresponding dataset named MUI-zh. In general, object detection aims to classify and locate each object, such as an animal or a tool, in one raw image, while in MUI data our primary concern is detecting elements, e.g., products and clickable buttons in screenshots. The main difference between the two tasks is that MUI data often have discriminative OCR descriptions as supplemental information for every element, which significantly influence detection results. To better explain this, we show two MUI data examples from VINS [1] and our MUI-zh in Figure 1. VINS only provides the category annotation and bounding box for every element, as an object detection dataset does, whereas MUI-zh additionally provides the OCR descriptions and links them with the elements for further usage. Since the OCR descriptions are texts and will be an additional input modality, it is natural to leverage recent Open-Vocabulary object Detection (OVD) models [3, 11, 20, 22, 36, 37, 40] as the MUI element detection baseline because of their rich vision-language knowledge learned from pretrained CLIP [21]. OVD detectors usually detect and classify objects by calculating the similarity between visual embeddings and textual concepts split from captions.

Figure 2. Decision boundaries of the baseline and of adding APT during vision-language alignment. The stars are category prompts, and the circles are element vision embeddings. Element 1 is misclassified by the baseline, while our APT tunes its category prompts adaptively and thus successfully matches it with its category.

However, according to our experiments, existing OVD methods cannot achieve satisfactory performance on MUI datasets. The reason mainly comes from two aspects. Firstly, the samples for training OVD detectors are appearance-centric, while MUI data are not: besides appearance, the category of an MUI element is often closely related to its textual explanation obtained by OCR tools. Thus, the OCR description of an element can be viewed as a discriminative modality to distinguish it from other categories, but this modality neither exists in nor is used by OVD models. Secondly, category prompts containing only the category name are not optimal for vision-language alignment, since they may not be precise enough to describe an MUI element. For example, we show four buttons (blue) and one icon (red) in Figure 2. The baseline (an OVD detector) only uses "a photo of category name" to perform alignment and misclassifies button 1 as an icon. To alleviate the above issues, we propose a novel lightweight and plug-and-play Adaptively Prompt Tuning (APT) module for MUI element detection. Firstly, it takes OCR descriptions as input, using a unimodal block to obtain rich element information (e.g., content and function) for vision-language alignment; secondly, it adaptively encodes vision and OCR description features into embeddings to adjust the representation of the frozen category prompts, which further reduces the impact of language ambiguity during matching. As shown in Figure 2, the gray dotted lines indicate the decision boundaries of the OVD baseline and its variant with APT during the recognition phase. Element 1 is misclassified by the baseline since its embedding is close to the frozen category prompt of "icon" and far away from its ground truth "button". Our APT adaptively tunes the two category prompts (noted by the green arrow) for every element and successfully recognizes element 1. As a result, we demonstrate that APT can achieve noticeable performance gains on top of previous OVD detectors, which will benefit many mobile layout analyses [34, 35] and risk hunters [4,10]. We summarize our contributions as follows. • We develop a high-quality MUI dataset (called MUI-zh) containing 18 common categories with OCR descriptions as supplemental information. Besides MUI-zh, we will also provide the OCR descriptions for the existing dataset VINS to facilitate future research. • Inspired by the MUI data characteristics, we further propose a novel Adaptive Prompt Tuning (APT) module to finetune category prompts for standard and open-vocabulary MUI element detection. • Experiments on two datasets demonstrate that our APT, as a plug-and-play module, achieves competitive improvements upon four recent CLIP-based detectors.
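A minimal sketch of the prompt-adaptation idea is given below: the category prompt embeddings stay frozen, and for every element a small network predicts an offset from its fused visual and OCR embeddings before cosine-similarity classification. The fusion MLP, dimensions, and the simple additive adjustment are assumptions for illustration, not the paper's APT implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptivePromptTuning(nn.Module):
    """Illustrative APT-style module: frozen category prompts are adjusted
    per element by an offset computed from visual + OCR embeddings."""
    def __init__(self, frozen_prompts, dim=512):
        super().__init__()
        self.register_buffer("prompts", F.normalize(frozen_prompts, dim=-1))
        self.fuse = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                  nn.Linear(dim, dim))

    def forward(self, vis_emb, ocr_emb):
        offset = self.fuse(torch.cat([vis_emb, ocr_emb], dim=-1))    # (B, dim)
        # Each element sees its own adapted copy of the K category prompts.
        adapted = F.normalize(self.prompts.unsqueeze(0) + offset.unsqueeze(1),
                              dim=-1)                                # (B, K, dim)
        return (adapted * F.normalize(vis_emb, dim=-1).unsqueeze(1)).sum(-1)

K, dim, B = 18, 512, 3                 # 18 MUI categories, CLIP-like dim, batch
apt = AdaptivePromptTuning(torch.randn(K, dim), dim)
logits = apt(torch.randn(B, dim), torch.randn(B, dim))   # (B, K) similarities
```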
Jin_Perspective_Fields_for_Single_Image_Camera_Calibration_CVPR_2023
Abstract Geometric camera calibration is often required for applications that understand the perspective of the image. We propose Perspective Fields as a representation that models the local perspective properties of an image. Perspective Fields contain per-pixel information about the camera view, parameterized as an Up-vector and a Latitude value. This representation has a number of advantages: it makes minimal assumptions about the camera model and is invariant or equivariant to common image editing operations like cropping, warping, and rotation. It is also more interpretable and aligned with human perception. We train a neural network to predict Perspective Fields, and the predicted Perspective Fields can be converted to calibration parameters easily. We demonstrate the robustness of our approach under various scenarios compared with camera calibration-based methods and show example applications in image compositing. Project page: https://jinlinyi.github.io/PerspectiveFields/ 1. Introduction Take a look at the left-most photo in the teaser (Fig. 1-A). Can you tell if the photo was captured by an everyday camera and if it has been geometrically edited? The horizon location at the bottom of the image and the parallel vertical lines of the buildings do not follow a typical camera model: the horizon at the bottom of the image indicates the camera was tilted up (pitch ≠ 0), but this would instead produce converging vertical lines in the image due to perspective projection (Fig. 1-B). Alternatively, suppose the camera has 0 pitch, preserving the vertical lines of the buildings; the horizon line would then be in the middle of the image (Fig. 1-C). This contradiction is explained by the shift of the photo, yielding a non-centered principal point and breaking a usual assumption of many camera calibration systems. Many single-image camera calibration works make use of a simplified pinhole camera model [22] that assumes a centered principal point [10,24,29] and is parameterized by extrinsic properties such as roll and pitch and intrinsic properties such as field of view. However, estimating the calibration of a camera is challenging for images in the wild since they are captured by various types of cameras and lenses. Moreover, like the example in Fig. 1, the images are often
For example, the Up-vectors can be in-ferred by vertical edges in the image, and the Latitude is 0 at the horizon, positive above, and negative below. Since Perspective Fields have this translation-equivariance prop-erty, they are especially well suited to prediction by con-volutional neural networks. We train a neural network to predict Perspective Fields from a single image by extracting crops from 360◦panoramas where ground truth supervision can be easily obtained (see Fig. 2). We also use a teacher-student distillation method to transfer Perspective Fields to object-cutouts, which lets us train models to predict Per-spective Fields for object-centric images. For applications that require traditional camera parame-ters ( e.g. roll, pitch, field of view and principal point), we propose ParamNet to efficiently derive camera parameters from Perspective Fields. Our method works on image crops and outperforms existing methods in single image camera parameter estimation. In addition, Perspective Fields can be used in image compositing to align the camera view be-tween a foreground object and the background based on a local Perspective Field matching metric. We show with a user study that this metric for view alignment more closely matches human perspective than existing camera models. Our contributions are summarized as follows. • We propose Perspective Fields, a local and non-parametric representation of images with no assump-tion of camera projection models. • We train a network to predict Perspective Fields that works on both scene-level and object-centric images, and we propose ParamNet to efficiently derive camera parameters from Perspective Fields. Our Perspective Fields achieve better performance on recovering cam-era parameters than existing approaches. On cropped images, we reduce the pitch error by 40% over [29]. • We propose a metric of Perspective Fields to esti-mate the low-level perspective consistency between two images. We show that this consistency measure is stronger in correlation with human perception of per-spective mismatch than previous metrics such as Hori-zon line [24, 47]. 2. Related Work Calibration for perspective images. Most calibration methods aimed at consumer cameras assume a pinhole cam-era model [22] to estimate both its intrinsics and extrinsics. Traditional camera calibration processes require a reference object like chessboards or planar grids [5, 13, 14, 20, 21, 23, 36, 41, 43, 52], or multiple images [18, 22, 42]. Other methods strongly rely on the Manhattan world assump-tion to estimate camera parameters via vanishing points [8, 9, 12, 22, 28, 37, 40]. Recently, deep learning meth-ods directly predict camera parameters from single images, including horizon line [47] and focal length [46]. Hold-Geoffroy et al. [24] further extend a CNN to simultaneously predict camera roll, pitch, and FoV . UprightNet [48] pre-dicts 3D surface geometry to optimize for camera rotation. A few works [29, 30, 50] combine learned features with de-tected vanishing points to improve performance. However, these methods are limited to perspective images with a cen-tered principal point and often do not work on images in the wild where the centered pinhole assumption does not hold due to cropping, warping, or other similar edits. Calibration for non-pinhole camera models. 
Besides the common pinhole camera model, prior works have proposed different non-linear models such as Brown-Conrady for small distortions [16], the division model [17] for fisheye cameras, and the unified spherical model [7,19,35]. Assuming certain distortion models, learning-based methods can recover the focal length and distortion parameters [3,10,31,34]. With a known 3D shape and its correspondences, [11, 38] can recover lens distortions. Instead of relying on a specific lens model, we propose a generic representation that stores the up and latitude information for each pixel. This local representation encompasses multiple camera projection models. Our versatile Perspective Field can be used to recover the parameters of a specific model, if desired. Perspective-aware object placement. Many works aim to automate the image compositing process by directly learning to match lighting, scale, etc. [27, 44, 51, 53]. To plausibly composite an object into a background image, one can match their camera parameters. One way to achieve this is to match the horizon lines between two images [24,26]. All these methods share the same limitations as the perspective image calibration methods due to their assumptions. 3. Method We first define Perspective Fields and show some examples on various images. Then we show how we train a network to recover Perspective Fields from a single image. Finally, we demonstrate some downstream applications that Perspective Fields enable, including camera parameter recovery, image compositing, and object cutout calibration.

Figure 2. Example ground truth Perspective Fields for different camera parameters. Images (A)-(E) are generated from the 360° panorama (middle top). Images (A, B, C) use perspective projection (Up-vectors point to the vertical vanishing point; the horizon is a straight line at Latitude 0°), and (B) has a shifted principal point to preserve parallel lines. (D) is a rectangular crop from the equirectangular input (Up-vectors point vertically), and (E) has radial distortion [7, 10, 35]. For each view, we visualize the Up-vector field with green arrows and the Latitude field using a blue-red color map with contour lines (Latitude colormap from −π/2 to π/2).

3.1. Definition of Perspective Fields Each pixel x ∈ R² on the image frame originates from a light ray R ∈ R³ emitted from a 3D point in the world frame X ∈ R³. When the ray travels through the camera, it is bent by the lens and projected onto the image frame. We assume an arbitrary projection function x = P(X) that maps a point in the world to the image plane. We denote the gravity direction in the world frame by a unit vector g. For each pixel location x, the Perspective Field
consists of a unit Up-vector u_x and a Latitude value φ_x. The Up-vector u_x is the projection of the up direction at X, or
$$\mathbf{u}_{\mathbf{x}} = \lim_{c \to 0} \frac{\mathcal{P}(\mathbf{X} - c\,\mathbf{g}) - \mathcal{P}(\mathbf{X})}{\left\lVert \mathcal{P}(\mathbf{X} - c\,\mathbf{g}) - \mathcal{P}(\mathbf{X}) \right\rVert_2}. \quad (1)$$
The limit is not required for perspective projection since it preserves straight lines. The Latitude φ_x of this pixel is defined as the angle between the ray R and the horizontal plane, or
$$\varphi_{\mathbf{x}} = \arcsin\!\left( \frac{\mathbf{R} \cdot \mathbf{g}}{\lVert \mathbf{R} \rVert_2} \right). \quad (2)$$
This representation is applicable to arbitrary camera models. In Fig. 2, we illustrate the Perspective Field representation of images captured from commonly used cameras extracted from a 360° panorama. Although our representation is general, we mainly focus on perspective projection to compare with existing works and leave extensive applications to other camera models for future work. 3.2. Estimating Perspective Fields Our goal is to train a neural network to estimate Perspective Fields from a single image. To do this, we introduce PerspectiveNet (Fig. 3 left), an end-to-end network that takes a single RGB image as input and outputs a per-pixel value for the Up-vector and Latitude. Unlike previous camera calibration works where the network outputs a single vector of camera parameters [24, 29], the output of our system has the same dimension as the input, making it amenable to pixel-to-pixel architectures [6,39,49]. We train our PerspectiveNet on crops from 360° panoramas with a cross-entropy loss L_pers (see Sec. 3.3). Camera parameters from Perspective Fields. When camera parameters for specific models are needed, we can recover them from Perspective Fields. For instance, if we parameterize perspective projection by roll, pitch, and field of view following [24, 29], and optionally the principal point location, we can represent these with a vector θ. As extracting these parameters requires combining potentially noisy Perspective Field estimates, we extract them by training a neural network named ParamNet that maps the Perspective Fields to the camera parameters, as shown in Fig. 3. This network is trained directly with a sum of ℓ1 losses, L_param = Σ_i ||θ_i − θ̂_i||_1. Perspective Fields as a metric for perspective mismatch. Our representation is easy to interpret: the Up-vectors align with structures that are upright, such as trees and vertical lines on buildings, and the Latitude values align with the viewpoint direction, e.g., whether the top of an upright object is visible. Therefore, we propose to use Perspective Field agreement as a measure of the image compositing quality between a foreground object and a background scene. We propose the Perspective Field Discrepancy (PFD), defined as the weighted sum of the difference between the Up-vectors and the Latitude values, or
$$E_{\mathrm{PFD}} = \lambda \arccos(\mathbf{u}_1 \cdot \mathbf{u}_2) + (1 - \lambda)\,\lVert l_1 - l_2 \rVert_1, \quad (3)$$
where u_i is the Up-vector and l_i is the Latitude value. The weight λ = 0.5 is used in our experiments. Both the Up-vector and the Latitude are in an angular space, so we can take a weighted sum of their angular differences; a small numerical sketch of Eqs. (1)-(3) follows below.
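Below is a small, self-contained numerical sketch of Eqs. (1)-(3) for a centred pinhole camera: Up-vectors via a finite-difference projection of a world-up displacement, Latitude as the elevation of each viewing ray, and the per-pixel PFD. The axis and sign conventions, the toy rotation parameterization, and the finite-difference step are simplifications for illustration, not the paper's implementation.

```python
import numpy as np

def pinhole_fields(h, w, fov_deg=60.0, roll_deg=0.0, pitch_deg=20.0):
    """Toy ground-truth Perspective Fields (Up-vector, Latitude) for a centred
    pinhole camera, following Eqs. (1)-(2); axis/sign conventions simplified."""
    f = 0.5 * w / np.tan(np.radians(fov_deg) / 2)
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    r, p = np.radians(roll_deg), np.radians(pitch_deg)
    Rz = np.array([[np.cos(r), -np.sin(r), 0], [np.sin(r), np.cos(r), 0], [0, 0, 1]])
    Rx = np.array([[1, 0, 0], [0, np.cos(p), -np.sin(p)], [0, np.sin(p), np.cos(p)]])
    R = Rx @ Rz                                  # camera-to-world rotation (toy)
    up_world = np.array([0.0, 1.0, 0.0])         # world up, i.e. opposite of gravity g

    v, u = np.mgrid[0:h, 0:w]
    rays_cam = np.stack([u - cx, -(v - cy), np.full((h, w), f)], axis=-1)
    rays_world = rays_cam @ R.T
    rays_world /= np.linalg.norm(rays_world, axis=-1, keepdims=True)

    latitude = np.arcsin(rays_world @ up_world)  # Eq. (2): ray vs. horizontal plane

    def project(X):                              # world point -> pixel (u, v)
        Xc = X @ R                               # world -> camera coordinates
        return np.stack([cx + f * Xc[..., 0] / Xc[..., 2],
                         cy - f * Xc[..., 1] / Xc[..., 2]], axis=-1)

    # Eq. (1): image direction of a small world-up displacement of each point
    d = project(rays_world + 1e-4 * up_world) - project(rays_world)
    up_vec = d / np.linalg.norm(d, axis=-1, keepdims=True)
    return up_vec, latitude

def pfd(up1, lat1, up2, lat2, lam=0.5):
    """Per-pixel Perspective Field Discrepancy, Eq. (3)."""
    dot = np.clip((up1 * up2).sum(-1), -1.0, 1.0)
    return lam * np.arccos(dot) + (1 - lam) * np.abs(lat1 - lat2)

up_a, lat_a = pinhole_fields(64, 64, pitch_deg=20)
up_b, lat_b = pinhole_fields(64, 64, pitch_deg=-10)
print(pfd(up_a, lat_a, up_b, lat_b).mean())      # APFD: PFD averaged over pixels
```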
We aggregate the metric by averaging the PFD over all the pixels, denoted as APFD.

Figure 3. Left: We use a pixel-to-pixel network (PerspectiveNet) to predict Perspective Fields from a single image. Right: When classical camera parameters are needed, we use a ConvNet (ParamNet) to extract this information directly from the Perspective Fields.

The experiment in Sec. 4.3 shows that the proposed metric strongly correlates with human perception. Object cutout calibration. Image composition often involves compositing a segmented object with a scene. As foreground objects contain little to no background information, camera calibration methods, including our scene-level Perspective Field prediction network, fail on such images due to the domain gap between the panorama training data and real object images (see Table 2). We can easily train Perspective Fields on objects by taking COCO [32] and performing distillation training using our scene-level model as a teacher. Since the Perspective Fields are stored per pixel, we can crop out an object in the image and its corresponding pseudo ground truth Perspective Field to form a new training pair. 3.3. Implementation details To learn Perspective Fields from single images, we use the architecture of SegFormer [49] with a Mix Transformer-B3 encoder, which was originally used for semantic segmentation tasks. The transformer-based encoder is effective at enforcing global consistency in the Perspective Fields. We use two decoder heads to output a per-pixel probability over discretized Latitude and Up-vector bins. We use a cross-entropy loss L_pers = ℓ_CE, which we empirically found better than regression. The ParamNet uses ConvNeXt-tiny [33] to predict a vector of camera parameters trained with an ℓ1 loss. 4. Experiments Overview. In the following experiments, we study three questions. First (Sec. 4.1), can methods that recover a global set of camera parameters (e.g., pitch) produce accurate Perspective Fields? We verify that directly producing Perspective Fields yields more accurate camera calibrations, especially on cropped images. We then ask in Sec. 4.2 the reverse: whether our Perspective Field method can be used to recover global camera parameters well.
We find that our method matches and often outperforms previous methods on images with a centered principal point and substantially outperforms these methods on cropped images. Next, we ask whether errors in Perspective Fields match human judgments, so that evaluation in terms of Perspective Field error is meaningful. We conduct a user study in Sec. 4.3 to compare our proposed metric with human perception and show that humans are more sensitive to the Perspective Field discrepancy than to other existing measurements of image perspective. We finally show image editing applications of Perspective Fields in Sec. 4.4. 4.1. Predicting Perspective Fields We first evaluate our PerspectiveNet on both natural scenes and object-centric images. Training data and training details. We train our network on a diverse dataset of panorama scenes which includes 30,534 indoor, 51,157 natural, and 110,879 street views from 360Cities (https://www.360cities.net/). Although we can generate arbitrary types of image projections from the panoramas, we choose to train on perspective images for a fair comparison with previous methods. To do this, we uniformly sample crops from the panoramas with camera roll in [−45°, 45°], pitch in [−90°, 90°], and FoV in [30°, 120°]. Our training and validation sets consist of 190,830/1,740 panorama images, respectively. We augment training data with random color jittering, blurring, horizontal flipping, rotation, and cropping. We later show results on other camera models such as fisheye images. Ours-distill: We distill our network on COCO [32] images by using pseudo ground truth predicted by our scene-level network. We crop out the foreground object and the pseudo ground truth to generate the training pairs, and randomly (70% of the time) remove the background of the image using segmentation masks as data augmentation to generalize to object cutouts. Test data. We test generalization of different methods on publicly available datasets including Stanford2D3D [4] and TartanAir [45], where ground-truth camera parameters are available. None of the methods compared were trained on the test set. Stanford2D3D is an indoor panorama dataset from which arbitrary camera views can be extracted. TartanAir is a photo-realistic dataset captured by drones with extreme viewpoints and diverse scenes (indoor, outdoor, natural, and man-made structures) rendered with different lighting and weather conditions. Assuming perspective projec-

Table 1. Quantitative evaluation for scene-level Perspective Field prediction. Perturb: None on centered principal point images; Crop on uncentered principal point images. We re-implement Percep. [24] using the same backbone and training data as ours. None of the methods have been trained on Stanford2D3D [4] or TartanAir [45]. Results on warped test data and qualitative results are in the supp. Each dataset reports Up (°) and Latitude (°) errors as Mean↓ / Median↓ / %<5°↑.

Method | Perturb | Stanford2D3D [4] Up | Stanford2D3D [4] Latitude | TartanAir [45] Up | TartanAir [45] Latitude
Upright [28] | None | 3.63 / 3.28 / 64.97 | 7.03 / 7.03 / 41.12 | 3.53 / 3.19 / 65.36 | 5.63 / 5.59 / 49.71
Percep. [24] | None | 3.58 / 3.
Ji_Spatial-Temporal_Concept_Based_Explanation_of_3D_ConvNets_CVPR_2023
Abstract Convolutional neural networks (CNNs) have shown re-markable performance on various tasks. Despite its widespread adoption, the decision procedure of the network still lacks transparency and interpretability, making it diffi-cult to enhance the performance further. Hence, there has been considerable interest in providing explanation and in-terpretability for CNNs over the last few years. Explain-able artificial intelligence (XAI) investigates the relation-ship between input images or videos and output predic-tions. Recent studies have achieved outstanding success in explaining 2D image classification ConvNets. On the other hand, due to the high computation cost and complex-ity of video data, the explanation of 3D video recognition ConvNets is relatively less studied. And none of them are able to produce a high-level explanation. In this paper, we propose a STCE (Spatial-temporal Concept-based Expla-nation) framework for interpreting 3D ConvNets. In our approach: (1) videos are represented with high-level su-pervoxels, similar supervoxels are clustered as a concept, which is straightforward for human to understand; and (2) the interpreting framework calculates a score for each con-cept, which reflects its significance in the ConvNet decision procedure. Experiments on diverse 3D ConvNets demon-strate that our method can identify global concepts with dif-ferent importance levels, allowing us to investigate the im-pact of the concepts on a target task, such as action recog-nition, in-depth. The source codes are publicly available at https://github.com/yingji425/STCE .
*These authors contributed equally to this work. †Jien Kato (jien@fc.ritsumei.ac.jp) is the corresponding author. 1. Introduction With the rapid development of large-scale datasets and powerful computational devices, convolutional neural networks (CNNs) have been widely used in various computer vision tasks, such as image classification [16, 17, 37], semantic segmentation [24, 42], object detection [22, 27] and so on. Although CNN models show competitive performance in these tasks, current neural networks are still regarded as black boxes. Due to the large number of parameters and high nonlinearity [25], the underlying prediction mechanism is opaque. This reduces the reliability of neural networks in high-stakes real-world applications such as autonomous driving and medical image analysis [18, 29]. In recent years, explainable artificial intelligence (XAI) has become a popular topic to help comprehend model predictions and increase the credibility of CNNs. In general, explanation methods can be divided into local methods and global methods. Local methods concentrate on understanding predictions on individual data instances, while global methods attempt to explain the overall logic of the target ConvNets at the class or dataset level. In this paper, we focus on global explanation, which is crucial to comprehending the overall behavior of the black boxes. There are already some methods that provide explanations for 2D image classification ConvNets [6, 14, 26, 28, 36], and most of them are local techniques. Zhou et al. [43] generated a Class Activation Map (CAM) using global average pooling for each image to highlight the discriminative regions that the 2D ConvNet uses to predict a class. Ribeiro et al. proposed Local Interpretable Model-agnostic Explanations (LIME) [28] to interpret the model by approximating its predictions in a local similarity neighborhood of a target image. However, these methods are not only limited to a single prediction, but they are also difficult for humans to comprehend: the highlighted regions are pixel-level and devoid of human-understandable semantic interpretation. More recently, interpretation with high-level concepts has attracted considerable attention. Kim et al. [19] introduced concept activation vectors (CAVs), which use directional derivatives to quantify the importance of user-defined concepts to the network prediction. Based on [19], Ghorbani et al. [12] proposed ACE (Automatic Concept-based Explanation) to discover the relationship between image segments and image classification predictions. Despite solid achievements in 2D image classification interpretation, only a few studies have attempted to interpret 3D action recognition ConvNets, primarily due to the huge computational cost and rich spatial-temporal content of video data. Existing 3D explanation methods are mainly extended from 2D local explanation methods. Stergiou et al. [33] proposed Saliency Tubes, which applies Grad-CAM [31] to 3D ConvNets: the activation maps of the 3D ConvNet's final convolutional layer are combined to produce heatmaps of input videos. Li et al. [21] adapt extremal perturbations (EP) [8] to the video case by adding a spatial-temporal smoothness constraint.
However, these methods have two major drawbacks: (1) the discriminative 3D regions are based on a single frame and lack spatial-temporal consistency; and (2) the regions are pixel-level and lack high-level semantic information. To address these issues, we extend 2D ACE [12] to 3D and propose a high-level global interpretation. For each class, videos are segmented into multiple spatial-temporal supervoxels. Similar supervoxels are grouped to form a meaningful concept. Our method can assign a score for each concept according to its contribution when network predicting. When interpreting the decision procedure of 3D action recognition ConvNets, instead of highlighting essen-tial pixels for a single video, our method can answer two fundamental questions at the class level: which objects or motions in the video are significant for a particular action recognition class andwhich object or motion is the most crucial clue in this class . Our main contributions can be summarized as follows: 1. We propose a novel Spatial-temporal Concept-based Ex-planation (STCE) for 3D ConvNets. The discrimina-tive regions are spatial-temporal continuous and human-understandable. To the best of our knowledge, STCE is among the first to achieve action recognition interpreta-tion based on high-level video supervoxels.
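To make the described pipeline more concrete, the Python sketch below illustrates one way it could be organized: supervoxel features from a frozen 3D ConvNet are clustered into "concepts" with an off-the-shelf algorithm, and each concept receives an importance score. The helper names (`model.backbone`, the precomputed supervoxel masks) and the occlusion-style scoring rule are illustrative assumptions, not the paper's released implementation.

```python
# Illustrative sketch only: cluster supervoxel features into "concepts" and
# score each concept by how much masking it changes the target-class score.
import numpy as np
import torch
from sklearn.cluster import KMeans

def concept_scores(videos, masks_per_video, model, target_class, n_concepts=25):
    """videos: list of (C,T,H,W) tensors; masks_per_video: list of lists of
    boolean (T,H,W) supervoxel masks produced by an off-the-shelf segmenter."""
    feats, owners = [], []                                  # one feature vector per supervoxel
    for vid_idx, (video, masks) in enumerate(zip(videos, masks_per_video)):
        for m_idx, m in enumerate(masks):
            crop = video * torch.from_numpy(m).float()      # keep only this supervoxel
            with torch.no_grad():
                f = model.backbone(crop[None]).flatten(1)   # penultimate features (assumed API)
            feats.append(f.squeeze(0).numpy())
            owners.append((vid_idx, m_idx))

    # group similar supervoxels across the class into concepts
    concepts = KMeans(n_clusters=n_concepts, n_init=10).fit_predict(np.stack(feats))

    scores = np.zeros(n_concepts)
    for c in range(n_concepts):
        drops = []
        for vid_idx, video in enumerate(videos):
            # union of this concept's supervoxels inside the current video
            union = np.zeros(video.shape[1:], dtype=bool)
            for (v, m_idx), k in zip(owners, concepts):
                if k == c and v == vid_idx:
                    union |= masks_per_video[v][m_idx]
            if not union.any():
                continue
            with torch.no_grad():
                p_full = model(video[None]).softmax(-1)[0, target_class]
                occluded = video * torch.from_numpy(~union).float()
                p_occ = model(occluded[None]).softmax(-1)[0, target_class]
            drops.append((p_full - p_occ).item())
        scores[c] = np.mean(drops) if drops else 0.0
    return scores   # larger prediction drop => more important concept
```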
Du_Global_and_Local_Mixture_Consistency_Cumulative_Learning_for_Long-Tailed_Visual_CVPR_2023
Abstract In this paper, our goal is to design a simple learning paradigm for long-tail visual recognition, which not only improves the robustness of the feature extractor but also alleviates the bias of the classifier towards head classes while reducing the training skills and overhead. We pro-pose an efficient one-stage training strategy for long-tailed visual recognition called Global and Local Mixture Con-sistency cumulative learning (GLMC). Our core ideas are twofold: (1) a global and local mixture consistency loss im-proves the robustness of the feature extractor. Specifically, we generate two augmented batches by the global MixUp and local CutMix from the same batch data, respectively, and then use cosine similarity to minimize the difference. (2) A cumulative head-tail soft label reweighted loss mitigates the head class bias problem. We use empirical class fre-quencies to reweight the mixed label of the head-tail class for long-tailed data and then balance the conventional loss and the rebalanced loss with a coefficient accumulated by epochs. Our approach achieves state-of-the-art accuracy on CIFAR10-LT, CIFAR100-LT, and ImageNet-LT datasets. Additional experiments on balanced ImageNet and CIFAR demonstrate that GLMC can significantly improve the gen-eralization of backbones. Code is made publicly available at https://github.com/ynu-yangpeng/GLMC.
*Corresponding author. 1. Introduction Thanks to the available large-scale datasets, e.g., ImageNet [10], MS COCO [27], and the Places [46] database, deep neural networks have achieved dominant results in image recognition [15]. Distinct from these well-designed balanced datasets, data naturally follows a long-tail distribution in real-world scenarios, where a small number of head classes occupy most of the samples. In contrast, the tail classes only have a few samples. Figure 1. An overview of our GLMC: two types of mixed-label augmented images are processed by an encoder network and a projection head to obtain the representations hg and hl. Then a prediction head transforms the two representations to outputs ug and ul. We minimize their negative cosine similarity as an auxiliary loss in the supervised loss. sg(·) denotes the stop-gradient operation. (Figure labels: share weights, forward propagation, stop-grad propagation, predictor head, projection head, classifier head.) Moreover, the tail classes are critical for some applications, such as medical diagnosis and autonomous driving. Unfortunately, learning directly from long-tailed data may cause model predictions to over-bias toward the head classes. There are two classical rebalancing strategies for long-tailed distributions: resampling the training data [7, 13, 35] and designing cost-sensitive reweighting loss functions [3, 20]. For the resampling methods, the core idea is to oversample the tail-class data or undersample the head classes in the SGD mini-batch to balance training. As for the reweighting strategy, it mainly increases the loss weight of the tail classes to strengthen them. However, learning to rebalance the tail classes directly would damage the original distribution [45] of the long-tailed data, either increasing the risk of overfitting on the tail classes or sacrificing the performance of the head classes. Therefore, these methods usually adopt a two-stage training process [1, 3, 45] to decouple representation learning and classifier fine-tuning: the first stage trains the feature extractor on the original data distribution, then fixes the representation and trains a balanced classifier. Although multi-stage training significantly improves long-tail recognition performance, it also increases the required training tricks and overhead. In this paper, our goal is to design a simple learning paradigm for long-tail visual recognition, which not only improves the robustness of the feature extractor but also alleviates the bias of the classifier towards head classes while reducing the training skills and overhead. For improving representation robustness, recent contrastive learning techniques [8, 18, 26, 47] that learn the consistency of augmented data pairs have achieved excellent results. Still, they typically train the network in a two-stage manner, which does not meet our simplification goals, so we adapt them as an auxiliary loss in our supervised loss. For the head-class bias problem, the typical approach is to initialize a new classifier for resampling or reweighting training.
Inspired by the cumula-tive weighted rebalancing [45] branch strategy, we adopt a more efficient adaptive method to balance the conventional and reweighted classification loss. Based on the above analysis, we propose an efficient one-stage training strategy for long-tailed visual recogni-tion called Global and Local Mixture Consistency cumula-tive learning (GLMC). Our core ideas are twofold: (1) a global and local mixture consistency loss improves the ro-bustness of the model. Specifically, we generate two aug-mented batches by the global MixUp and local CutMix from the same batch data, respectively, and then use cosine sim-ilarity to minimize the difference. (2) A cumulative head-tail soft label reweighted loss mitigates the head class bias problem. Specifically, we use empirical class frequencies to reweight the mixed label of the head-tail class for long-tailed data and then balance the conventional loss and the rebalanced loss with a coefficient accumulated by epochs. Our method is mainly evaluated in three widely used long-tail image classification benchmark datasets, which include CIFAR10-LT, CIFAR100-LT, and ImageNet-LT datasets. Extensive experiments show that our approach outperforms other methods by a large margin, which ver-ifies the effectiveness of our proposed training scheme. Additional experiments on balanced ImageNet and CIFAR demonstrate that GLMC can significantly improve the gen-eralization of backbones. The main contributions of our work can be summarized as follows:• We propose an efficient one-stage training strategy called Global and Local Mixture Consistency cumu-lative learning framework (GLMC), which can effec-tively improve the generalization of the backbone for long-tailed visual recognition. • GLMC does not require negative sample pairs or large batches and can be as an auxiliary loss added in super-vised loss. • Our GLMC achieves state-of-the-art performance on three challenging long-tailed recognition bench-marks, including CIFAR10-LT, CIFAR100-LT, and ImageNet-LT datasets. Moreover, experimental results on full ImageNet and CIFAR validate the effectiveness of GLMC under a balanced setting.
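The two core ideas above can be summarized in a short PyTorch sketch: a SimSiam-style negative-cosine consistency between the global MixUp and local CutMix views (with stop-gradient, as in the paper's Figure 1), and an epoch-accumulated blend of a conventional and a frequency-reweighted soft-label loss. The model interface, the specific reweighting formula and the (1 − (t/T)²) schedule are plausible placeholders rather than the authors' exact code.

```python
# Minimal sketch of the two GLMC ideas, with assumed details.
import torch
import torch.nn.functional as F

def neg_cosine(p, z):
    # SimSiam-style consistency: stop-gradient on the target branch.
    return -F.cosine_similarity(p, z.detach(), dim=-1).mean()

def soft_ce(logits, soft_targets):
    soft_targets = soft_targets / soft_targets.sum(dim=-1, keepdim=True)
    return -(soft_targets * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()

def glmc_step(model, x_mixup, y_mixup, x_cutmix, y_cutmix,
              class_freq, epoch, total_epochs):
    # Two augmented views of the same batch: global MixUp and local CutMix.
    # model is assumed to return (predictor output, projector output, class logits).
    u_g, h_g, logits_g = model(x_mixup)
    u_l, h_l, logits_l = model(x_cutmix)

    # (1) global-local mixture consistency (symmetric, as in SimSiam).
    loss_con = 0.5 * (neg_cosine(u_g, h_l) + neg_cosine(u_l, h_g))

    # (2) cumulative head-tail reweighting: soft labels scaled by inverse
    # empirical class frequency (one plausible choice of weighting).
    w = (1.0 / class_freq) / (1.0 / class_freq).sum() * len(class_freq)
    ce  = soft_ce(logits_g, y_mixup) + soft_ce(logits_l, y_cutmix)
    rce = soft_ce(logits_g, y_mixup * w) + soft_ce(logits_l, y_cutmix * w)

    # Coefficient accumulated over epochs: start from the conventional loss,
    # gradually emphasize the rebalanced loss.
    alpha = 1.0 - (epoch / total_epochs) ** 2
    return alpha * ce + (1.0 - alpha) * rce + loss_con
```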
Huang_Towards_Accurate_Image_Coding_Improved_Autoregressive_Image_Generation_With_Dynamic_CVPR_2023
Abstract Existing vector quantization (VQ) based autoregressive models follow a two-stage generation paradigm that first learns a codebook to encode images as discrete codes, and then completes generation based on the learned code-book. However, they encode fixed-size image regions into fixed-length codes and ignore their naturally different in-formation densities, which results in insufficiency in impor-tant regions and redundancy in unimportant ones, and fi-nally degrades the generation quality and speed. More-over, the fixed-length coding leads to an unnatural raster-scan autoregressive generation. To address the problem, we propose a novel two-stage framework: (1) Dynamic-Quantization VAE (DQ-VAE) which encodes image re-gions into variable-length codes based on their informa-tion densities for an accurate &compact code represen-tation. (2) DQ-Transformer which thereby generates im-ages autoregressively from coarse-grained (smooth regions with fewer codes) to fine-grained (details regions with more codes) by modeling the position and content of codes in each granularity alternately, through a novel stacked-transformer architecture and shared-content, non-shared position input layers designs. Comprehensive experiments on various generation tasks validate our superiorities in both effectiveness and efficiency. Code will be released athttps://github.com/CrossmodalGroup/ DynamicVectorQuantization .
*Zhendong Mao is the corresponding author. 1. Introduction The vision community has witnessed the rapid progress of deep generative models, pushing image generation quality to an unprecedented level. As a fundamental task, generating realistic images from arbitrary inputs (e.g., class labels) can empower humans to create rich and diverse visual content and bring numerous real-world applications. Unifying the realism of local details and the consistency of global structure is the eternal pursuit of all image generation. Figure 1. Illustration of our motivation. (a) Existing fixed-length coding ignores information densities, which results in insufficiency in dense-information regions like region ② and redundancy in sparse-information regions like region ①, generating poor details and inconsistent structure. Our information-density-based variable-length coding encodes accurately and produces rich details and consistent structure. (b) Comparison of the existing unnatural raster-scan autoregressive generation order and our natural and more effective coarse-to-fine autoregressive generation order. Error map: ℓ1 loss of each 32×32 region between original images and reconstructions; higher (redder) is worse. Existing examples are taken from [13]. Recently, vector quantization (VQ) [37] has been a foundation for various types of generative models, as evidenced by numerous large-scale diffusion models like LDM [32], autoregressive models like DALL-E [30], etc. These models follow a two-stage generation paradigm, i.e., the first stage learns a codebook by VQ to encode images as discrete codes, where each code represents a local visual pattern, while the second stage learns to generate codes of local regions and then restores them to images. The importance lies in that the local details can be well encoded in the first stage and thus the second stage can effectively focus on global structure modeling, leading to better generation quality and scalability. Existing models mainly focus on the second stage to better generate codes for improving generation quality, such as raster-scan autoregression [11, 30, 43], bi-direction [7, 24, 44], or diffusion [5, 14, 32]. Only a few works aim to improve the fundamental code representation itself in the first stage, including a perceptual and adversarial loss for a context-rich codebook [13], residual quantization [23], and more expressive transformer backbones [42], etc. Their commonality is that they all focus on encoding more information of all image regions together. However, existing encoding works inherently fail to effectively encode image information into an accurate and compact code representation, because they ignore the naturally different information densities of different image regions and encode fixed-size regions into fixed-length codes. As a result, they suffer from two limitations: (1) insufficient coding for important regions with dense information, which fails to encode all necessary information for faithful reconstruction and therefore degrades the realism of local details in both stages.
(2) redundant coding for unimportant ones with sparse information, bring-ing huge redundant codes that mislead the second stage to focus on the redundancy and therefore significantly hinder the global structure modeling on important ones. As shown in Figure 1(a), the fixed-length codes result in large recon-struction errors in important cheetah regions and produce poor local details ( e.g., face, hair) in both stages. Mean-while, the fixed-length codes are overwhelmed for unimpor-tant background regions, which misleads the second stage to generate redundant background and inconsistent cheetah structure. Moreover, as shown in Figure 1(b), since all re-gions are encoded into fixed-length codes, there is no way for the second stage to distinguish their varying importance and thus results in an unnatural raster-scan order [13] for ex-isting autoregressive models [11, 23, 30, 42, 43], which fails to consider the image content for an effective generation. To address this problem, inspired by the classical in-formation coding theorems [18, 33, 34] and their dynamic coding principle , we propose information-density-based variable-length coding for an accurate and compact code representation to improve generation quality and speed. Moreover, we further propose a natural coarse-to-fine au-toregressive model for a more effective generation. Specif-ically, we propose a novel two-stage generation frame-work: (1) Dynamic-Quantization VAE (DQ-VAE) which first constructs hierarchical image representations of mul-tiple candidate granularities for each region, and then usesa novel Dynamic Grained Coding module to assign the most suitable granularity for each region under the con-straint of a proposed budget loss , matching the percent-age of each granularity to the desired expectation holis-tically. (2) DQ-Transformer which thereby generates im-ages autoregressively from coarse-grained (smooth regions with fewer codes) to fine-grained (details regions with more codes) to more effectively achieve consistent struc-tures. Considering the distribution of different granulari-ties varying, DQ-Transformer models the position and con-tent of codes in each granularity alternately through a novel stacked-transformer architecture . To effectively teach the difference between different granularities, we further design shared-content andnon-shared-position input layers. Our main contributions are summarized as follows: Conceptual contribution. We point to the inherent in-sufficiency and redundancy in existing fixed-length cod-ingsince they ignore information density . For the first time, we propose information-density-based variable-length coding for accurate &compact code representations. Technical contribution. (1) We propose DQ-VAE to dynamically assign variable-length codes to regions based on their different information densities through a novel Dy-namic Grained Coding module andbudget loss . (2) We pro-pose DQ-Transformer to generate images autoregressively from coarse-grained to fine-grained for the first time, which models the position and content of codes alternately in each granularity by stacked-transformer architecture with shared-content andnon-shared position input layers design. Experimental contribution. 
Comprehensive experi-ments on various generations validate our superiority, e.g., we achieve 7.4% quality improvement and faster speed compared to existing state-of-the-art autoregressive model on unconditional generation, and 17.3% quality improve-ment compared to existing million-level parameters state-of-the-art models on class-conditional generation.
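Although the text above does not spell out the Dynamic Grained Coding module, the following sketch shows one plausible way to realize per-region granularity selection under a budget constraint: a Gumbel-softmax gate picks one of K candidate granularities per region, and a budget loss pulls the empirical usage of each granularity toward target ratios. The module and loss forms here are illustrative assumptions, not the paper's exact design.

```python
# Rough sketch of the idea behind dynamic-grained coding.
import torch
import torch.nn.functional as F

class DynamicGrainedGate(torch.nn.Module):
    def __init__(self, feat_dim, num_grains, target_ratios):
        super().__init__()
        self.score = torch.nn.Linear(feat_dim, num_grains)
        self.register_buffer("target", torch.tensor(target_ratios))

    def forward(self, region_feats, tau=1.0):
        # region_feats: (B, R, D) coarse features, one per image region
        logits = self.score(region_feats)                       # (B, R, K)
        gate = F.gumbel_softmax(logits, tau=tau, hard=True)     # one-hot grain choice

        # Budget loss: match the fraction of regions assigned to each grain
        # to the desired expectation (e.g. 30% coarse / 70% fine).
        usage = gate.mean(dim=(0, 1))                           # (K,)
        budget_loss = F.mse_loss(usage, self.target)
        return gate, budget_loss

# Usage: blend K candidate code maps with the (straight-through) gate, e.g.
#   gate, budget = gating(region_feats)
#   z = sum(gate[..., k:k+1] * candidate_codes[k] for k in range(K))
```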
Hu_Collaboration_Helps_Camera_Overtake_LiDAR_in_3D_Detection_CVPR_2023
Abstract Camera-only 3D detection provides an economical so-lution with a simple configuration for localizing objects in 3D space compared to LiDAR-based detection systems. However, a major challenge lies in precise depth estima-tion due to the lack of direct 3D measurements in the in-put. Many previous methods attempt to improve depth es-timation through network designs, e.g., deformable layers and larger receptive fields. This work proposes an or-thogonal direction, improving the camera-only 3D detec-tion by introducing multi-agent collaborations. Our pro-posed collaborative camera-only 3D detection ( CoCa3D ) enables agents to share complementary information with each other through communication. Meanwhile, we op-timize communication efficiency by selecting the most in-formative cues. The shared messages from multiple view-points disambiguate the single-agent estimated depth and complement the occluded and long-range regions in the single-agent view. We evaluate CoCa3D in one real-world dataset and two new simulation datasets. Results show thatCoCa3D improves previous SOTA performances by 44.21% on DAIR-V2X, 30.60% on OPV2V+, 12.59% on CoPerception-UAVs+ for AP@70. Our preliminary results show a potential that with sufficient collaboration, the cam-era might overtake LiDAR in some practical scenarios. We released the dataset and code.
1. Introduction As a fundamental task of computer vision, 3D object de-tection aims to localize objects in the 3D physical space given an agent’s real-time sensor inputs. It is crucial in a wide range of applications, including autonomous driv-ing [10,33,43], surveillance systems [39], robotics [12] and unmanned aerial vehicles [7]. Depending on sensor setups, there are multiple technical solutions to realize 3D object detection. In this spectrum, one extreme emphasizes rais-ing the upper bound of detection performance, which uses *Corresponding author. Figure 1. Collaborative camera-only 3D detection can disam-biguate the single-view estimated depth, address the long-range and occlusion issues, and approach LiDAR in 3D detection. high-end LiDAR sensor [2, 3, 10, 21, 28, 42, 45] to collect precise 3D measurements. However, this approach is too expensive to scale up. The other extreme solution empha-sizes cost effectiveness, which tries to use thrifty sensor se-tups, e.g. only using cameras to detect 3D objects in real-time [1, 8, 9, 23, 25, 26, 30, 32, 41, 43]. However, camera-only 3D detection is significantly and consistently worse than LiDAR-based detection in most scenarios [19]. In this paper, we propose an orthogonal direction for im-proving camera-only 3D detection performances by intro-ducing multi-agent collaborations. Hypothetically, empow-ered by advanced communication systems, multiple agents equipped only with cameras could share visual information with each other. This would bring three outstanding ben-efits. First, different viewpoints from multiple agents can largely resolve the depth ambiguity issue in camera-only 3D detection, bridging the gap with expensive LiDARs on depth estimation. Second, multi-agent collaboration avoids inevitable limitations in single-agent 3D detection, such as occlusion and long-range issues, and potentially enables more holistic 3D detection; that is, detecting all the ob-jects existed in the 3D scene, including those beyond vi-sual range. Since LiDAR also suffers from limited field of view, this potentially enables collaborative cameras to out-perform LiDAR. Third, the total expense of a large fleet of vehicles is significantly reduced as cameras are much This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 9243 cheaper than LiDAR. However, multi-agent collaboration also brings new challenges. Different from many multi-view geometry problems, here we also have to concern communication bandwidth constraints. Thus, each agent needs to select the most informative cues to share. Following this design rationale, we propose a novel col-laborative camera-only 3D detection framework CoCa3D . It includes three parts: i) single-agent camera-only 3D de-tection, which achieves basic depth estimation and 3D de-tection for each agent; ii) collaborative depth estimation, which disambiguates the estimated depths by promoting spatial consistency across multiple agents’ viewpoints; and iii) collaborative detection feature learning, which comple-ments detection features by sharing key detection messages with each other. Compared to recent collaborative percep-tion methods [11,14] that are dealing with LiDAR, CoCa3D specifically designs novel collaborative depth estimation to customize the task of camera-only 3D detection. 
To evaluate CoCa3D , we conduct comprehensive ex-periments on one real-world dataset, DAIR-V2X [40], and two new simulation datasets, OPV2V+ and CoPerception-UA Vs+, which are extended based on original OPV2V [37] and CoPerception-UA Vs [6] with more collaborative agents that cover three types of agents (cars, infrastructures and drones). Our results show that i) with 10 collaborative agents, CoCa3D enables camera-only detectors to overtake LiDAR-based detectors on OPV2V+; and ii) CoCa3D con-sistently outperforms previous works in the performance-bandwidth trade-off across multiple datasets by a large mar-gin, improving the previous SOTA performances by 30.60% on OPV2V+, 12.59% on CoPerception-UA Vs+, 44.21% on DAIR-V2X for AP@70. To sum up, our contributions are: •We propose a novel collaborative camera-only 3D de-tection framework CoCa3D , which improves the detection ability of cameras with multi-agent collaboration, promot-ing more holistic 3D detection. •We propose core communication-efficient collabora-tion techniques, which explore the spatially sparse yet crit-ical depth messages and tackle the depth ambiguity, occlu-sion, and long-range issues by fusing complementary infor-mation from different viewpoints, achieving more accurate and complete 3D representation. •We expand two previous collaborative datasets with more agents, and conduct extensive experiments, validating that i) CoCa3D significantly bridges the performance gap between camera and LiDAR on OPV2V+ and DAIR-V2X; and ii) CoCa3D achieves the state-of-the-art performance-bandwidth trade-off.
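As a rough illustration of the communication-efficiency idea, the toy sketch below packs only the top-k most confident BEV cells into each agent's message and fuses the received messages into the ego feature map. The confidence-based top-k criterion and the element-wise max fusion are stand-ins for the paper's actual message selection and fusion, and the warping of messages into a shared frame is assumed to happen elsewhere.

```python
# Toy sketch of bandwidth-aware message selection and fusion.
import torch

def pack_message(bev_feat, confidence, budget_ratio=0.1):
    """bev_feat: (C, H, W) features in a shared BEV frame; confidence: (H, W)."""
    C, H, W = bev_feat.shape
    k = max(1, int(budget_ratio * H * W))
    idx = confidence.flatten().topk(k).indices        # most informative cells only
    return idx, bev_feat.flatten(1)[:, idx]           # sparse message (indices + features)

def fuse_messages(ego_feat, messages):
    """messages: list of (idx, feats) already warped into the ego BEV frame."""
    C, H, W = ego_feat.shape
    fused = ego_feat.clone().flatten(1)               # (C, H*W)
    for idx, feats in messages:
        fused[:, idx] = torch.maximum(fused[:, idx], feats)   # element-wise max fusion
    return fused.view(C, H, W)
```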
Clarke_RealImpact_A_Dataset_of_Impact_Sound_Fields_for_Real_Objects_CVPR_2023
Abstract Objects make unique sounds under different perturba-tions, environment conditions, and poses relative to the listener. While prior works have modeled impact sounds and sound propagation in simulation, we lack a standard dataset of impact sound fields of real objects for audio-visual learning and calibration of the sim-to-real gap. We present REALIMPACT , a large-scale dataset of real object impact sounds recorded under controlled conditions. RE-ALIMPACT contains 150,000 recordings of impact sounds of 50 everyday objects with detailed annotations, includ-ing their impact locations, microphone locations, contact force profiles, material labels, and RGBD images. *We make preliminary attempts to use our dataset as a reference to current simulation methods for estimating object impact sounds that match the real world. Moreover, we demon-strate the usefulness of our dataset as a testbed for acoustic and audio-visual learning via the evaluation of two bench-mark tasks, including listener location classification and vi-sual acoustic matching.
1. Introduction Object sounds permeate our everyday natural environments as we both actively interact with them and passively perceive events in our environment. The sound of a drinking glass bouncing on the floor assuages our fear that the glass would shatter. The click made by a knife making contact with a cutting board assures us that we have diced cleanly through a vegetable. And listening to the sound a painted mug makes when we tap it informs us of whether it is made of ceramic or metal. What we perceive from sound complements what we perceive from vision by reinforcing, disambiguating, or augmenting it. Understanding the cause-and-effect relationships in these sounds at a fine-grained level can inform us about an object's material properties and geometry, as well as its contact and other environmental conditions. Capturing these relationships from real-world data can help us improve our models toward more realistic physical simulations, with applications in virtual reality, animation, and training learning-based frameworks in simulation. (The project page and dataset are available at https://samuelpclarke.com/realimpact/.) The sounds we perceive from objects are the result of many intricate physical processes: they encode important properties about the object itself (e.g., geometry, material, mechanical properties), as well as the surrounding environment (e.g., room size, other passive objects present, materials of furniture in the room). More specifically, when a hard object is struck, it vibrates according to its mass and stiffness, and the shape of the object determines the mode shapes of the dominant vibration patterns (§3.1). Acoustic waves are then emitted into the medium, typically air, bouncing around in the room and interacting with surrounding objects and the room itself before reaching our ear or a microphone to be perceived as pressure fluctuations (§3.2). Prior work has explored using physical simulation [26, 54] or learning-based methods [28, 29] to reconstruct the sound generation process virtually, as well as building 3D environments with simulated spatial audio for embodied audio-visual learning [7, 15, 18, 35, 42]. However, there has been little work on building physical apparatuses and feasible measurement processes to quantify the sounds made by everyday objects, despite their importance and intimate relationship with our daily lives. As a result, the evaluations of the methods above are largely established on subjective metrics such as user studies. To address this gap, we introduce REALIMPACT, a dataset containing 150k recordings of 50 everyday objects, each being struck from 5 distinct impact positions. For each impact point, we capture sounds at 600 field points to provide comprehensive coverage of the frequency components of the sounds and how they are distributed spatially. REALIMPACT thus provides all the inputs most current simulation frameworks need to simulate each sound, while also providing the ground truth recording for comparison. We show that REALIMPACT can be used for various downstream auditory and audio-visual learning tasks, such as listener location classification (§5.2) and visual acoustic matching (§5.3). These results demonstrate that sound fields can help improve machine perception and understanding of the world, and motivate further studies of even more accurate simulation methodologies to reduce the sim-to-real gap for future applications. We make three contributions. First, we design an automated setup for collecting high-fidelity, annotated recordings of sounds by controlled striking of everyday objects. Second, using this setup, we acquire a large dataset of spatialized object sounds, REALIMPACT. Third, we motivate the utility of REALIMPACT by (a) using it to perform comparisons to results generated by current state-of-the-art sound simulation frameworks and (b) evaluating two benchmark tasks for acoustic and audio-visual learning.
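For readers unfamiliar with the vibration model sketched above, the snippet below implements the textbook modal-synthesis view of an impact sound: a sum of exponentially damped sinusoids whose frequencies, dampings, and gains depend on the object's geometry and material. The numbers are made up for illustration; they are not RealImpact measurements or the paper's simulation code.

```python
# A textbook modal-synthesis sketch: s(t) = sum_i a_i * exp(-d_i t) * sin(2*pi*f_i t).
import numpy as np

def modal_impact_sound(freqs_hz, dampings, gains, duration=1.0, sr=48_000):
    t = np.arange(int(duration * sr)) / sr
    s = np.zeros_like(t)
    for f, d, a in zip(freqs_hz, dampings, gains):
        s += a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    return s / (np.max(np.abs(s)) + 1e-9)            # normalize to [-1, 1]

# e.g. a small "ceramic-like" mode set (hypothetical values):
audio = modal_impact_sound(freqs_hz=[520, 1380, 2900],
                           dampings=[6.0, 12.0, 25.0],
                           gains=[1.0, 0.6, 0.3])
```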
Gu_Preserving_Linear_Separability_in_Continual_Learning_by_Backward_Feature_Projection_CVPR_2023
Abstract Catastrophic forgetting has been a major challenge in continual learning, where the model needs to learn new tasks with limited or no access to data from previously seen tasks. To tackle this challenge, methods based on knowl-edge distillation in feature space have been proposed and shown to reduce forgetting [16, 17, 25]. However, most fea-ture distillation methods directly constrain the new features to match the old ones, overlooking the need for plasticity. To achieve a better stability-plasticity trade-off, we propose Backward Feature Projection (BFP), a method for contin-ual learning that allows the new features to change up to a learnable linear transformation of the old features. BFP preserves the linear separability of the old classes while al-lowing the emergence of new feature directions to accom-modate new classes. BFP can be integrated with existing experience replay methods and boost performance by a sig-nificant margin. We also demonstrate that BFP helps learn a better representation space, in which linear separability is well preserved during continual learning and linear prob-ing achieves high classification accuracy.
1. Introduction Despite their many successes, deep neural networks remain prone to catastrophic forgetting [37], whereby a model's performance on old tasks degrades significantly while it is learning to solve new tasks. Catastrophic forgetting has become a major challenge for continual learning (CL) scenarios, where the model is trained on a sequence of tasks with limited or no access to old training data. The ability to learn continually without forgetting is crucial to many real-world applications, such as computer vision [36, 46], intelligent robotics [30], and natural language processing [6, 23]. In these settings, an agent learns from a stream of new data or tasks, but training on the old data is restricted due to limitations in storage, scaling of training time, or even concerns about privacy. The continual learning problem has received significant attention and multiple solution themes have emerged. Figure 1. Feature distribution before and after training on a task in a class-incremental learning experiment on MNIST, visualized by t-SNE (left panel: feature space ẑ after training on task 1; right panel: feature space z after training on task 2). Left: before training on task 2, seen classes (1, 2) are learned to be separable along the horizontal axis for classification, while unseen classes (3, 4) are not separable. Right: after training on task 2, the new vertical axis is learned to separate the new classes (3, 4). Based on this observation, we propose the Backward Feature Projection loss L_BFP(A, z) = ‖Az − ẑ‖², which allows new feature dimensions to emerge to separate new classes in feature space and also preserves the linear separability of old classes to reduce catastrophic forgetting. Experience replay methods [8, 33], for example, store a limited number of (or generate) old training examples and use them together with new data in continual learning. Parameter regularization methods [29, 50] restrict the change of important network parameters. Knowledge distillation methods [16, 17, 31] regularize the intermediate outputs of the CL model to preserve the knowledge from old tasks. Architectural methods [34, 42, 48] adopt expansion and isolation techniques with neural networks to prevent forgetting. All these methods strive to balance learning new knowledge (plasticity) and retaining old knowledge (stability). We present a continual learning algorithm focusing on knowledge distillation (KD) in feature space. In the continual learning context, KD treats the continual learning model as the student and its old checkpoint as the teacher, and regularizes the network's intermediate outputs to reduce forgetting [4, 8, 11, 16, 17, 25, 31]. Although recent CL methods based on KD have been effective at reducing forgetting, they typically adopt the L2 distance for distillation, forcing the learned features to stay close to their exact old values. This is too restrictive and results in CL models that are more rigid in retaining old knowledge (stronger stability) but less flexible in adapting to new tasks (weaker plasticity). Our method achieves a better trade-off between stability and plasticity. In this paper, we pay attention to the feature space in CL and study its evolution.
We show that a small number of principal directions explain most of the variance in feature space and only these directions are important for classifi-cation. A large number of directions in the feature space have little variance and remain unused. When the model is trained on new tasks, new features need to be learned along those unused directions to accommodate new classes, as illustrated in Figure 1. Without handling forgetting, the old principal directions, along which the old classes are lin-early separable, will be forgotten. Our results indicate that such forgetting of learned principal directions in the feature space is an important reason for catastrophic forgetting. Based on this insight, as shown in Figure 1, we propose a Backward Feature Projection (BFP) loss, an effective fea-ture distillation loss that enforces feature consistency up to a learnable linear transformation, not imposing exact equality of features. This transformation aims to preserve the linear separability of features backward in time. We show that this linear projection is important because it can rotate, reflect, and scale features, while maintaining the linear separability of the previously learned classes in the new feature space. Projecting backward allows the features to change and new decision boundaries to be learned along the unused feature directions to classify new classes. BFP can be integrated into existing CL methods in a straightforward way and ex-periments show that this simple change boosts the perfor-mance over baselines by a large margin. Our experiments show that the proposed BFP regular-ization loss can improve the baseline methods by up to 6%-8% on the challenging Split-CIFAR10 and Split-CIFAR100 datasets, achieving state-of-the-art class-incremental learn-ing accuracy. More importantly, the linear probing experi-ments show that BFP results in a better feature space where different classes are more separable. See Figure 1 for an illustrative example. Our contributions are as follows: • We provide an analysis of feature space evolution dur-ing continual learning, distinguishing the important feature components from unimportant ones. • We propose the Backward Feature Projection (BFP) loss, which preserves the linear separability of old classes while allowing plasticity during continual learning, i.e. features are allowed to change. • When combined with simple experience replay base-lines, BFP helps learn better feature space and achieves state-of-the-art performance on challenging datasets.2. Related Work
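The BFP loss itself, as written in the Figure 1 caption, is compact enough to sketch directly: a learnable linear map A projects the new features back onto the old (frozen) features under an L2 objective. How A is optimized and how the loss is weighted against the replay objective are simplified here and should be treated as assumptions.

```python
# Sketch of the Backward Feature Projection loss, L_BFP(A, z) = ||A z - z_hat||^2,
# where z are current features, z_hat are the old checkpoint's features, and A is learnable.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BFPLoss(nn.Module):
    def __init__(self, feat_dim):
        super().__init__()
        # Learnable linear transformation (rotation/reflection/scaling allowed).
        self.A = nn.Linear(feat_dim, feat_dim, bias=False)

    def forward(self, z_new, z_old):
        # z_old comes from the checkpoint before the current task; no gradient flows into it.
        return F.mse_loss(self.A(z_new), z_old.detach())

# Typical use inside an experience-replay step (batch mixes new data and buffer samples):
#   z_new = model.features(x);  z_old = old_model.features(x)
#   loss = F.cross_entropy(model.head(z_new), y) + lambda_bfp * bfp(z_new, z_old)
```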
Gao_Decompose_More_and_Aggregate_Better_Two_Closer_Looks_at_Frequency_CVPR_2023
Abstract Encouraged by the effectiveness of encoding temporal dynamics within the frequency domain, recent human mo-tion prediction systems prefer to first convert the motion representation from the original pose space into the fre-quency space. In this paper, we introduce two closer looks at effective frequency representation learning for robust mo-tion prediction and summarize them as: decompose more and aggregate better. Motivated by these two insights, we develop two powerful units that factorize the frequency representation learning task with a novel decomposition-aggregation two-stage strategy: (1) frequency decompo-sition unit unweaves multi-view frequency representations from an input body motion by embedding its frequency fea-tures into multiple spaces; (2) feature aggregation unit de-ploys a series of intra-space and inter-space feature aggre-gation layers to collect comprehensive frequency represen-tations from these spaces for robust human motion predic-tion. As evaluated on large-scale datasets, we develop a strong baseline model for the human motion prediction task that outperforms state-of-the-art methods by large margins: 8%∼12% on Human3.6M, 3% ∼7% on CMU MoCap, and 7%∼10% on 3DPW.
*Corresponding author. 1. Introduction A 3D skeleton-based human motion prediction system forecasts future poses given a past motion. It helps machines understand human behavior and plan their own responses, which is crucial in many real-world applications, including intelligent surveillance [11, 40], human-machine interaction [16, 17] and autonomous driving [18, 32]. The core challenge behind this task lies in developing a powerful mapping function that effectively bridges past body motion to the future [23, 25, 27, 30, 33]. Figure 1. Diverse frequency distributions of body poses. For a human action, the differences in temporal smoothness between different body joints and motion samples enlarge the representation gap in its frequency space. Earlier prediction algorithms tend to extract motion patterns from the original pose space [7, 9, 10, 22, 34, 43]. Due to the subject-specific nature of pose space, their embedding representations intertwine body motion information and structure information jointly. In this case, they encapsulate an inductive bias on general human stature and thus suffer from limited robustness against body-shape perturbations. Inspired by the effectiveness of encoding temporal smoothness in the frequency domain, frequency space encourages human motion prediction systems to focus on trajectory-related cues [1, 42]. As an initial attempt, Mao et al. [28] propose to convert the motion representation from the pose space into the frequency space with the discrete cosine transform (DCT). Following this insight, recent methods widely use the DCT as a routine operation in the data preprocessing stage and extract feature embeddings from the single frequency space initialized by the DCT [21, 24, 26, 38]. In this context, the frequency features extracted from past body motions dominate the future motion prediction. A further investigation into developing a powerful frequency representation learning framework for robust human motion prediction remains fundamental yet under-explored. As sketched in Figure 1, diverse frequency distributions of body motions lie at intra-sample and inter-sample levels: (1) intra-sample difference. Since the human skeleton is a non-rigid articulated structure, different body joints exhibit different frequency appearances in their motion trajectories; (2) inter-sample difference. Different personal motion styles in the same activity bring subtle intra-class bias to different data samples, enlarging the frequency representation gap between human motion samples. These diverse frequency distributions leave human motion prediction systems unable to handle input body trajectories with unseen frequency variations. This makes multi-view augmentation learning a promising solution for robust human motion prediction. Figure 2. Network Architecture. We factorize the frequency representation learning into a two-stage decomposition-aggregation scheme: the Frequency Decomposition Unit (FDU) extracts multi-view frequency features from an input body motion by embedding its frequency representations into K spaces; the Feature Aggregation Unit (FAU) deploys L intra-space and inter-space feature aggregation layers to collect comprehensive frequency representations for robust human motion prediction.
Instead of ex-tracting features from a single frequency space initialized by the DCT, we first introduce an input body motion into multiple frequency spaces to enrich its spectral encoding. Then, we collect richer multi-view frequency representa-tions from these spaces for robust human motion prediction. Specifically, as illustrated in Figure 2, we factorize the frequency representation learning into two sequen-tial stages: (1) Frequency Decomposition Unit (FDU) un-weaves finer frequency representations from an input body motion by tuning each body joint trajectory with multiple versatile filters. By embedding the frequency representa-tion into multiple feature spaces, FDU explores multi-view frequency representations on input body poses; (2) Feature Aggregation Unit (FAU) first deploys a series of adaptive graph filters within each frequency space and then inter-leaves feature-crossing layers to promote message exchange between spaces. These intra-space and inter-space informa-tion aggregations benefit FAU in extracting comprehensive body features for robust body motion prediction. Integrat-ing both FDU and FAU components, we reformulate the frequency representation learning into a novel and powerful decomposition-aggregation scheme. The main contributions of this paper are summarized into the following: • We propose a frequency decomposition unit (FDU) that develops multiple versatile filters to embed each body joint trajectory into multiple frequency spaces.By exploring multi-view frequency representations on an input body motion, FDU enriches its encodings in the spectral domain. • Pairing with FDU, we design a feature aggregation unit (FAU) that deploys a series of intra-space and inter-space feature aggregation layers to extract comprehen-sive representations from multiple frequency spaces. By promoting message propagation within and be-tween different spaces, FAU collects richer multi-view body features for robust motion prediction. • Integrating FDU with FAU, we develop a power-ful motion prediction system that factorizes the fre-quency representation learning into a decomposition-aggregation scheme. As verified on three datasets, it significantly outperforms state-of-the-art methods in short-term and long-term motion predictions.
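A minimal sketch of the spectral side of this pipeline is given below: each joint trajectory is converted to DCT coefficients (the standard preprocessing cited above), and a small bank of K learnable spectral filters produces multi-view frequency representations of the kind the FDU is described as extracting. The filter parameterization is an assumption for illustration only, not the paper's module.

```python
# DCT preprocessing plus a simple "multi-view" spectral filter bank (illustrative).
import torch
import torch.nn as nn
from scipy.fftpack import dct

def joints_to_dct(motion):
    """motion: (T, J, 3) past poses -> (J*3, T) DCT coefficients, one row per trajectory."""
    traj = motion.reshape(motion.shape[0], -1).T          # (J*3, T)
    return torch.from_numpy(dct(traj.numpy(), norm='ortho', axis=-1)).float()

class FrequencyDecomposition(nn.Module):
    def __init__(self, num_coeffs, num_views=4):
        super().__init__()
        # One learnable spectral filter per view, applied coefficient-wise.
        self.filters = nn.Parameter(torch.randn(num_views, num_coeffs))

    def forward(self, coeffs):                            # coeffs: (C, T)
        # Each view re-weights the spectrum differently -> (K, C, T).
        return self.filters.sigmoid().unsqueeze(1) * coeffs.unsqueeze(0)
```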
Huang_Diversity-Aware_Meta_Visual_Prompting_CVPR_2023
Abstract We present Diversity-Aware Meta Visual Prompt-ing (DAM-VP), an efficient and effective prompting method for transferring pre-trained models to downstream tasks with frozen backbone. A challenging issue in visual prompt-ing is that image datasets sometimes have a large data di-versity whereas a per-dataset generic prompt can hardly handle the complex distribution shift toward the original pretraining data distribution properly. To address this issue, we propose a dataset Diversity-Aware prompting strategy whose initialization is realized by a Meta-prompt. Specif-ically, we cluster the downstream dataset into small ho-mogeneity subsets in a diversity-adaptive way, with each subset has its own prompt optimized separately. Such a divide-and-conquer design reduces the optimization diffi-culty greatly and significantly boosts the prompting perfor-mance. Furthermore, all the prompts are initialized with a meta-prompt, which is learned across several datasets. It is a bootstrapped paradigm, with the key observation that the prompting knowledge learned from previous datasets could help the prompt to converge faster and perform bet-ter on a new dataset. During inference, we dynamically select a proper prompt for each input, based on the fea-ture distance between the input and each subset. Through extensive experiments, our DAM-VP demonstrates supe-rior efficiency and effectiveness, clearly surpassing previ-ous prompting methods in a series of downstream datasets for different pretraining models. Our code is available at: https://github.com/shikiw/DAM-VP.
1. Introduction With the increasing scale of training data and model size, the pretraining-finetuning paradigm has shown remarkable achievement in many areas, including natural language pro-cessing (NLP) [4,13] and computer vision (CV) [2,7,8,19]. *Corresponding author. -100102030405060 60 65 70 75 80Top-1 Acc Gain Dataset DiversityVP-1001020304050 60 65 70 75 80Top-1 Acc Gain Dataset DiversityVPTFigure 1. Relation between dataset diversity and the performance gain got by using prompting. The gain is the performance im-provement when compared with the linear-probing accuracy, un-der the head-tuning setting. Both previous methods get a large performance gain on low-diversity datasets, while failing to boost the transfer performance on high-diversity datasets. However, fully finetuning a large pre-trained model for each small downstream task still has some problems in real-world usage. The most practical one is the storage and dis-tribution problem that we have to maintain an independent copy of the model for each task, which is quite expensive and inflexible, especially for increasing numbers of down-stream tasks [9]. To break the dilemma, many efforts [6,17,18,25,51] have been paid to efficiently transfer the given pre-trained models into a particular dataset. Prompting is an extensively studied method in the NLP area, which appends a few tokens before the input sequence to provide some task-specific knowledge to the pre-trained model, so that the model could adapt well on the downstream tasks without the fully-finetuning. In-spired by the success of prompting in NLP, some recent works [1, 26] propose visual prompting for vision models. By adding some learnable noise onto the input image or ap-pending some learnable tokens to the model input sequence, the pre-trained models show promising results on different kinds of downstream tasks. However, we argue that these methods ignore the diverse distribution property of the image dataset and using a sin-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 10878 gle prompt for all the images in each dataset is not opti-mal. In Figure 1, we show the relationship between the gain from prompting and the diversity of the dataset. Here the gain represents the accuracy improvement compared with the linear probing setting. We find that both VP [1] and VPT [26] improve the model accuracy by a large margin on the low-diversity dataset, but relatively small gains on the high-diversity datasets, which is intuitively sensible. For low-diversity datasets, such as the street view house num-ber dataset (SVHN) [37], all the images have similar content so a unified prompt is sufficient. On the contrary, when it comes to high-diversity datasets, such as the ImageNet [12] dataset, it covers very diverse classes from the wordnet and there is not any pre-defined relationship between the classes, so it is hard to use a single prompt to provide the prior for all the images, such as for “car” and “dog”. Motivated by this observation, we propose our Diversity-Aware Meta Visual Prompting (DAM-VP). It has two core designs. Firstly, to provide a proper prompt for each image from high-diversity datasets, we propose a clustering-based prompt selection method. 
In detail, given a pre-trained vi-sual model and a downstream dataset, we use the off-the-shelf clustering method to cluster the feature of the down-stream data into several coarse-grained subsets, and guide each cluster to learn its own prompt separately. Based on the strong homogeneity of the same clustered data, the opti-mization of cluster-specific visual prompts can be greatly facilitated and the data commonalities can be also easily covered. Secondly, we argue the prompt across different clusters or datasets may have some shared pattern, from which the model can be adapted to a new dataset faster and get better performance. This motivates us to introduce a meta-learning-based method that learns a meta prompt and initializes the prompt of each cluster with it. We conduct our experiments on datasets with different data diversity and evaluate the transfer performance with different pre-trained models. We report the performance on both the widely used head-tuning setting and a more chal-lenging head-freezing/missing setting. Our DAM-VP out-performs previous methods by a large margin, especially on high-diversity datasets. For example, with the ImageNet-22k pre-trained ViT-B model, DAM-VP gets 73.1%top-1accuracy under the head-tuning setting on the diverse DTD [10] dataset, surpassing previous methods VP [1] and VPT [26] with +13.6%and+7.3%respectively. Mean-while, we find DAM-VP is quite efficient that with only 10 epoch tuning, it gets 85.7%average top-1accuracy over the 10 datasets, comparable with previous methods that tunes 100 epochs ( 83.4%for VP [1] and 85.5%for VPT [26]). Our contributions can be summarized as follows: • We analyze the limitation of previous visual prompting methods, and point out that vision-suitable prompting should consider the dataset diversity.• Accordingly, we propose a novel Diversity-Aware Meta Visual Prompting (DAM-VP) method. It uses the divide-and-conquer idea by clustering high-diversity datasets into subsets and learning separate prompts for each subset, in cooperation with a meta-prompt learn-ing design. • Through extensive experiments, our DAM-VP demon-strates superior performance, achieving SOTA perfor-mance in a series of downstream datasets for different pretraining models.
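The divide-and-conquer prompting strategy can be sketched in a few lines: cluster frozen backbone features of the downstream data, keep one prompt per cluster initialized from the meta prompt, and route each test image to the prompt of its nearest cluster prototype. KMeans and the VP-style additive pixel prompt are simplifying assumptions here, not necessarily the paper's exact choices.

```python
# Sketch of diversity-aware prompting with per-cluster prompts (assumed details).
import torch
from sklearn.cluster import KMeans

class DiversityAwarePrompts:
    def __init__(self, backbone, feats, meta_prompt, n_clusters=8):
        self.backbone = backbone                               # frozen pretrained model
        km = KMeans(n_clusters=n_clusters, n_init=10).fit(feats.numpy())
        self.prototypes = torch.tensor(km.cluster_centers_, dtype=torch.float32)
        # Each cluster's prompt starts from the meta prompt learned across datasets.
        self.prompts = torch.nn.ParameterList(
            [torch.nn.Parameter(meta_prompt.clone()) for _ in range(n_clusters)])

    def select(self, images):
        with torch.no_grad():
            f = self.backbone(images)                          # (B, D) frozen features
        return torch.cdist(f, self.prototypes).argmin(dim=1)   # prompt index per image

    def apply(self, images):
        idx = self.select(images)
        prompts = torch.stack([self.prompts[i] for i in idx.tolist()])  # (B, C, H, W)
        return images + prompts                                # VP-style additive prompt
```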
Achlioptas_Affection_Learning_Affective_Explanations_for_Real-World_Visual_Data_CVPR_2023
Abstract In this work, we explore the space of emotional reac-tions induced by real-world images. For this, we first in-troduce a large-scale dataset that contains both categorical emotional reactions and free-form textual explanations for 85,007 publicly available images, analyzed by 6,283 annota-tors who were asked to indicate and explain how and why they felt when observing a particular image, with a total of 526,749 responses. Although emotional reactions are subjective and sensitive to context (personal mood, social status, past experiences) – we show that there is significant common ground to capture emotional responses with a large support in the subject population. In light of this observa-tion, we ask the following questions: i) Can we develop neural networks that provide plausible affective responses to real-world visual data explained with language? ii) Can we steer such methods towards producing explanations with varying degrees of pragmatic language, justifying different emotional reactions by grounding them in the visual stimu-lus? Finally, iii) How to evaluate the performance of such methods for this novel task? In this work, we take the first steps in addressing all of these questions, paving the way for more human-centric and emotionally-aware image anal-ysis systems. Our code and data are publicly available at https://affective-explanations.org .
1. Introduction A central goal of computer vision has been to gain a se-mantic understanding of visual stimuli [17, 78]. But what exactly do we mean by this understanding? The vast ma-jority of existing image analysis systems focus solely on image content [17]. Although models aimed at objective image analysis and captioning have achieved unprecedented success during the past years [70, 73], they largely ignore the more subtle and complex interactions that might exist between the image and its potential viewer. In this work, our primary goal is to take a step toward a more viewer-centered understanding going beyond factual image analysis by incorporating the effect that an image might have on a viewer. To capture this effect, we argue thatemotional responses provide a fundamental link between the visual world and human experience. We thus aim to understand what kinds of emotions a given image can elicit to different viewers and, most importantly, why? . Emotion perception and recognition are influenced by and integrate many factors, from neurophysiological to cul-tural, from previous subjective experiences to social and even political context [41]. Thus, capturing and potentially reproducing plausible emotional responses to visual stimuli is significantly more challenging than standard image analy-sis, as it also involves an inherently subjective perspective, which is at the core of perception and consciousness [26]. To proceed with the goal of establishing a novel approach to affective analysis of real-world images, we leverage the fact that free-form language provides the simplest access to emotional expressions [60]. Thus, inspired by recent ad-vances in affective captioning of art-works [7], we study emotional responses induced by real-world visual data in conjunction with human-provided explanations . This ap-proach links emotions with linguistic constructs, which cru-cially are easier to curate at scale compared to other me-dia (e.g., fMRI scans). Put together, our work expands on the recent effort of Achlioptas et al. [7] by considering a visio-linguistic and emotion analysis across a large set of real-world images , not only restricted to visual art. Our main contributions to this end are two-fold: first, we curate a large-scale collection of 526,749 explanations justifying emotions experienced at the sight of 85,007 dif-ferent real-world images selected from five public datasets. The collected explanations are given by 6,283 annotators spanning many different opinions, personalities, and tastes. The resulting dataset, which we term Affection , is very rich in visual and linguistic variations, capturing a wide vari-ety of both the underlying real-world depicted phenomena and their emotional effect. Second, we perform a linguistic and emotion-centric analysis of the dataset and, most impor-tantly, use it to produce deep neural listeners and speakers trained to comprehend, or generate plausible samples of visually grounded explanations for emotional reactions to images. Despite the aforementioned subjectivity and thus the more challenging nature of these tasks compared to purely descriptive visio-linguistic tasks (e.g., COCO-based caption-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 
our methods appear to learn common biases of how people react emotionally, e.g., the presence of a shark is much more likely to raise fear than the presence of a peacefully sleeping dog. Such common-sense expectations are well captured in Affection, which is why we believe even black-box approaches like ours show promising results. Finally, we explore variants of trained affective neural captioning systems, which allow some control over both the captured emotion and the level of factual visual detail used when providing an explanation (e.g., from 'The sky looks beautiful' to 'The blue colors of the sky and the sea in this sunset make me happy'). Interestingly, we demonstrate that the pragmatic variant produces richer and more diverse language across different images. In summary, this work introduces a new task, termed Affective Explanation Captioning (AEC), for real-world images. To tackle AEC, we release a new large-scale dataset, Affection, capturing 526,749 emotional reactions and explanations. We then design a variety of components, including neural speakers that enable affective captioning with various degrees of pragmatic and emotional control over their generations. Finally, all our neural speakers show strong performance on emotional Turing tests, where humans find their generations ∼60%-65% of the time likely to be uttered by other humans, reflecting the rich, discriminative references contained in Affection's explanations.
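The introduction does not describe the speaker architecture in detail; as a rough illustration of what an emotion-conditioned neural speaker could look like, the sketch below conditions a small captioning decoder on an image feature and a categorical emotion label. The module names, sizes, and the assumption of precomputed image features are hypothetical and are not the authors' architecture.

```python
# Hypothetical sketch of an emotion-conditioned neural speaker.
# Image features are assumed precomputed by a frozen backbone; the 9-way
# emotion set, hidden sizes, and GRU decoder are illustrative choices only.
import torch
import torch.nn as nn

class AffectiveSpeaker(nn.Module):
    def __init__(self, vocab_size, img_dim=2048, emo_classes=9, hid=512):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hid)        # ground the caption in the image
        self.emo_emb = nn.Embedding(emo_classes, hid)  # condition on the declared emotion
        self.tok_emb = nn.Embedding(vocab_size, hid)
        self.decoder = nn.GRU(hid, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab_size)

    def forward(self, img_feat, emotion, tokens):
        # Initial hidden state mixes visual evidence and the target emotion.
        h0 = torch.tanh(self.img_proj(img_feat) + self.emo_emb(emotion)).unsqueeze(0)
        x = self.tok_emb(tokens)                       # teacher-forced explanation prefix
        out, _ = self.decoder(x, h0)
        return self.out(out)                           # per-step vocabulary logits

# Toy usage: batch of 2 images, emotion ids, and 12-token explanations.
model = AffectiveSpeaker(vocab_size=10000)
logits = model(torch.randn(2, 2048), torch.tensor([3, 7]), torch.randint(0, 10000, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 10000])
```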
Decatur_3D_Highlighter_Localizing_Regions_on_3D_Shapes_via_Text_Descriptions_CVPR_2023
Abstract We present 3D Highlighter, a technique for localizing semantic regions on a mesh using text as input. A key feature of our system is the ability to interpret "out-of-domain" localizations. Our system demonstrates the ability to reason about where to place non-obviously related concepts on an input 3D shape, such as adding clothing to a bare 3D animal model. Our method contextualizes the text description using a neural field and colors the corresponding region of the shape using a probability-weighted blend. Our neural optimization is guided by a pre-trained CLIP encoder, which bypasses the need for any 3D datasets or 3D annotations. Thus, 3D Highlighter is highly flexible, general, and capable of producing localizations on a myriad of input shapes. Our code is publicly available at https://github.com/threedle/3DHighlighter .
1. Introduction Semantic localization of regions on 3D meshes is an important problem in computer graphics and vision with broad applications. One such application is the incorporation of semantic information into the 3D modeling process. A particularly challenging aspect of this task emerges when 3D geometric signals are insufficient for performing segmentation, e.g., where to add a shirt to a bare 3D human model. We propose 3D Highlighter, a method for automatically localizing fine-grained semantic regions on a shape based only on a text description. Our system contextualizes the text prompt and highlights the corresponding shape region using the network-predicted probabilities. Using only text, users are able to semantically identify regions on a shape. Our system takes meshes as input, making it compatible with 3D modeling workflows and tools. This highlighting task requires both object-level and part-level understanding. 3D Highlighter demonstrates the ability to reason about where to place seemingly unrelated concepts on the 3D shape, such as a hat on a candle (Fig. 1). Our system localizes attributes that are geometrically absent from a shape, which we refer to as hallucinated highlighting. Understanding a part's global shape context is challenging even when relying on salient geometric features [17, 27], let alone without them. We optimize the weights of a neural network to produce probabilities that are used to color a given 3D shape in accordance with the specified text. We leverage a pre-trained vision-language model (CLIP [31]) to guide the neural optimization towards the text-specified region. This neural optimization formulation is flexible, bypassing the need for any 3D datasets, 3D annotations, or 3D pre-training. Our system is not bound to a specific set of classes, and, as shown in Fig. 2, is not limited to object parts defined by salient geometric features. We encode the part selection as a neural field [44] over the mesh surface. Our network learns to map each point on the surface to a probability of belonging to the text-specified region. We translate the inferred probabilities to a visual attribute on the mesh surface, which can be rendered and visually understood. [Figure 2. Hallucinated part highlighting: the system reasons about where to highlight geometrically-absent regions on shapes; the resulting localizations demonstrate global understanding and localized part-awareness.] The network-predicted probabilities act as a soft-selection operator which blends the highlighter color onto the mesh. The network weights are updated by encouraging the CLIP [31] embedding of the 2D renders of the highlighted mesh to adhere to the specified text. As a result, the network implicitly learns to segment the object to adhere to the text prompt. We make several design choices that are key to the success of 3D Highlighter. Our network does not directly color the mesh. Rather, we predict a probability of being inside the text-specified highlight, which is used to blend colors on the mesh.
The network is initialized such that points have roughly a 50% probability of being highlighted, resulting in a mesh with albedo halfway between the highlight and background color. During optimization, the relative blend weight of the highlight color directly corresponds to the highlight probability. This blending enables the network to naturally and smoothly increase or decrease the segmentation probability in accordance with the text specification of the target region. In summary, we present a method for localizing semantic regions on 3D shapes. The localization is specified by a textual description, which is intuitive, flexible, and not limited to a specific training dataset. We demonstrate applications of our method to shape editing and stylization. Furthermore, our field formulation enables the 3D Highlighter to work with different mesh resolutions and triangulations. A key feature of our system is the ability to interpret out-of-domain localizations. For example, 3D Highlighter is able to figure out where to place a 'hat' on a candle as seen in Fig. 1, demonstrating the ability to reason about where to place seemingly unrelated concepts on the 3D shape.
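As a rough sketch of the probability-weighted blending described above (not the authors' released code), the snippet below maps surface points to a highlight probability with a small MLP and blends a highlight color with a neutral background albedo. The differentiable renderer and the CLIP similarity term are left abstract as assumed helper functions.

```python
# Minimal sketch of probability-weighted highlighting; the renderer and CLIP
# wrapper referenced in the comments are assumed to exist elsewhere.
import torch
import torch.nn as nn

class HighlightField(nn.Module):
    """Neural field: 3D point -> probability of belonging to the text-specified region."""
    def __init__(self, hid=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hid), nn.ReLU(),
            nn.Linear(hid, hid), nn.ReLU(),
            nn.Linear(hid, 1),
        )

    def forward(self, points):                  # points: (N, 3)
        return torch.sigmoid(self.mlp(points))  # (N, 1), close to 0.5 for near-zero logits

def blend_colors(prob, highlight_rgb, base_rgb):
    # Soft selection: the highlight color is mixed in proportion to the probability.
    return prob * highlight_rgb + (1.0 - prob) * base_rgb

points = torch.rand(100, 3)
colors = blend_colors(HighlightField()(points),
                      torch.tensor([1.0, 0.0, 0.0]),   # highlight color
                      torch.tensor([0.5, 0.5, 0.5]))   # gray background albedo
print(colors.shape)  # torch.Size([100, 3])

# One schematic optimization step (assumed helpers):
# renders = render_views(mesh, colors)                   # differentiable rendering
# loss = -clip_score(renders, "a candle wearing a hat")  # maximize text-image similarity
# loss.backward()
```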
Ding_PLA_Language-Driven_Open-Vocabulary_3D_Scene_Understanding_CVPR_2023
Abstract Open-vocabulary scene understanding aims to localize and recognize unseen categories beyond the annotated la-bel space. The recent breakthrough of 2D open-vocabulary perception is largely driven by Internet-scale paired image-text data with rich vocabulary concepts. However, this success cannot be directly transferred to 3D scenarios due to the inaccessibility of large-scale 3D-text pairs. To this end, we propose to distill knowledge encoded in pre-trained vision-language (VL) foundation models through captioning multi-view images from 3D, which allows ex-plicitly associating 3D and semantic-rich captions. Fur-ther, to foster coarse-to-fine visual-semantic representa-tion learning from captions, we design hierarchical 3D-caption pairs, leveraging geometric constraints between 3D scenes and multi-view images. Finally, by employ-ing contrastive learning, the model learns language-aware embeddings that connect 3D and text for open-vocabulary tasks. Our method not only remarkably outperforms base-line methods by 25.8% ∼44.7% hIoU and 14.5% ∼50.4% hAP50in open-vocabulary semantic and instance segmen-tation, but also shows robust transferability on challenging zero-shot domain transfer tasks. See the project website at https://dingry.github.io/projects/PLA.
1. Introduction 3D scene understanding is a fundamental perception component in real-world applications such as robot manipulation, virtual reality and human-machine interaction. Deep learning has attained remarkable success in this area [13, 38, 28]. However, deep models trained on a human-annotated dataset are only capable of understanding the semantic categories in that dataset, i.e., closed-set prediction. As a result, they fail to recognize unseen categories in the open world (see Fig. 1). This largely restricts their applicability in real-world scenarios with unbounded categories. Besides, the heavy annotation cost of 3D datasets (e.g., 22.3 minutes for one scene with 20 classes [7]) further makes it infeasible to rely on human labor to cover all real-world categories. [Figure 1. An example of 3D open-vocabulary scene understanding with "bookshelf" as an unseen class on ScanNet [7]. Panels: (a) closed-set classification, (b) open-vocabulary classification, (c) closed-set localization, (d) open-vocabulary localization. The closed-set model mistakes "bookshelf" for "cabinet" or simply misses it in (a) and (c); our open-vocabulary model correctly localizes and recognizes "bookshelf" in (b) and (d).] This motivates us to study open-vocabulary 3D scene understanding, which equips a model with the ability to localize and recognize open-set classes beyond the label space of an annotated dataset (see Fig. 1). Recently, vision-language (VL) foundation models [33, 22, 47] trained on billions of web-crawled images with semantic-rich captions [36] are capable of learning adequate vision-language embeddings to connect text and image, which are further leveraged to solve many 2D open-vocabulary tasks including object detection [15, 35], semantic segmentation [43, 26, 51], visual question answering [31], etc. Albeit significantly advancing open-vocabulary image understanding tasks, this pre-training paradigm is not directly viable in the 3D domain due to the absence of large-scale 3D-text pairs. To this end, initial efforts [50, 20] have attempted to project 3D data into 2D modalities, such as RGB images and depth maps, enabling pre-trained VL foundation models to process the 2D data and achieve object-level open-vocabulary recognition. Nevertheless, this line of methods suffers from several major issues, making it suboptimal for scene-level understanding tasks (e.g., instance segmentation). First, multiple RGB images and depth maps are required to represent a 3D sample, which incurs heavy computation and memory costs during training and inference. Second, the projection from 3D to 2D induces information loss and prohibits direct learning from rich 3D data, leading to subpar performance. Our preliminary study shows that the cutting-edge 2D open-vocabulary semantic segmentation method MaskCLIP [51] attains a mere 17.8% mIoU with a 20-fold increase in latency when applied to projected 2D images from the 3D ScanNet dataset.
Thus, considering the success of VL foundation mod-els for a variety of vision-language tasks [15, 35, 43, 26, 51, 50, 20], we ask: is it possible to elicit knowledge encoded in powerful VL foundation models to build an explicit associa-tion between 3D and language for open-vocabulary under-standing? To this end, our core idea is to exploit pre-trained VL foundation models [1, 39] to caption easily-obtained im-age data aligned with 3D data ( i.e. the point set in the corre-sponding frustum to produce the image). Note that these images can be acquired through neural rendering [9, 46] or from the 3D data collection pipeline [7]. By doing so, we can distill semantic-rich textual descriptions to the 3D domain, which allows explicit association between 3D and vocabulary-rich text for zero-shot 3D scene understanding. Given 3D-language association, the next question is en-abling a 3D network to learn language-aware embeddings from (pseudo) captions. The key challenge stems from intri-cate object compositions in 3D scene-level data (see Fig. 3), making it difficult to connect objects with corresponding words in the caption. This differs from object-centric image data containing a single centered object [33]. Fortunately, the captioned multi-view images from a 3D scene are re-lated by 3D geometry, which can be leveraged to build hi-erarchical point-caption pairs, including scene-, view-and entity-level captions. These multi-level point-caption pairs offer coarse-to-fine supervision signals, facilitating learning adequate visual-semantic representations from rich vocabu-lary by contrastive learning. Without task-specific design, ourPoint-Language Association paradigm, namely PLA, is generic for various open-vocabulary 3D scene understand-ing tasks, such as semantic and instance segmentation. Experimental results for ScanNet [7] and S3IDS [2] datasets show the effectiveness of our method in in-domain open-vocabulary tasks with only category shifts, i.e. train-ing and evaluation are conducted on the same dataset, sur-passing baselines by 25.8% ∼44.7% hIoU on semantic seg-mentation and 14.5% ∼50.4% hAP 50on instance segmen-tation. Besides, our model, trained on a dataset ( i.e. Scan-Net), can generalize to another dataset ( i.e. S3IDS) with both data distribution and category shifts, manifesting its transferability. Finally, our model can benefit from more ad-vanced foundation models that provide higher-quality cap-tion supervision, showing its scalability and extensibility.
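The contrastive point-language objective is only described at a high level here; below is a minimal sketch of what such a loss could look like, assuming pooled point features and caption embeddings are already computed. The 3D backbone, the text encoder, and the grouping into scene-, view-, and entity-level pairs are all outside this snippet and are assumptions for illustration.

```python
# Hedged sketch of a symmetric InfoNCE loss between pooled 3D point features and
# caption embeddings; feature extraction and the hierarchical pairing are assumed.
import torch
import torch.nn.functional as F

def point_caption_contrastive(point_feat, text_feat, temperature=0.07):
    """
    point_feat: (B, D) pooled features of the point set paired with each caption
    text_feat:  (B, D) caption embeddings from a (frozen) text encoder
    """
    p = F.normalize(point_feat, dim=-1)
    t = F.normalize(text_feat, dim=-1)
    logits = p @ t.t() / temperature                 # (B, B) similarity matrix
    targets = torch.arange(p.size(0), device=p.device)
    # Matched 3D-caption pairs sit on the diagonal; off-diagonal entries are negatives.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = point_caption_contrastive(torch.randn(8, 512), torch.randn(8, 512))
print(loss.item())
```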
Gupta_Visual_Programming_Compositional_Visual_Reasoning_Without_Training_CVPR_2023
Abstract We present VISPROG, a neuro-symbolic approach to solving complex and compositional visual tasks given natural language instructions. VISPROG avoids the need for any task-specific training. Instead, it uses the in-context learning ability of large language models to generate python-like modular programs, which are then executed to get both the solution and a comprehensive and interpretable rationale. Each line of the generated program may invoke one of several off-the-shelf computer vision models, image processing subroutines, or python functions to produce intermediate outputs that may be consumed by subsequent parts of the program. We demonstrate the flexibility of VISPROG on 4 diverse tasks: compositional visual question answering, zero-shot reasoning on image pairs, factual knowledge object tagging, and language-guided image editing. We believe neuro-symbolic approaches like VISPROG are an exciting avenue to easily and effectively expand the scope of AI systems to serve the long tail of complex tasks that people may wish to perform.
1. Introduction The pursuit of general purpose AI systems has lead to the development of capable end-to-end trainable models [1, 5, 8, 13, 17, 22, 24], many of which aspire to provide a simple natural language interface for a user to interact with the model. The predominant approach to building these systems has been massive-scale unsupervised pretraining followed by supervised multitask training. However, this approach requires a well curated dataset for each task that makes it challenging to scale to the infinitely long tail of complex tasks we would eventually like these systems to perform. In this work, we explore the use of large language models to tackle the long tail of complex tasks by decom-posing these tasks described in natural language into sim-pler steps that may be handled by specialized end-to-end trained models or other programs. Imagine instructing a vision system to “Tag the 7 main characters on the TV show Big Bang Theory in this image.” To perform this task, the system first needs to understand the intent of the instruction and then perform a sequence of steps -detect the faces, retrieve list of main characters on Big Bang Theory from a knowledge base, classify faces using the list of characters, and tag the image with recog-nized character’s faces and names. While different vision and language systems exist to perform each of these steps, executing this task described in natural language is beyond the scope of end-to-end trained systems. We introduce V ISPROG which inputs visual data (a sin-gle image or a set of images) along with a natural language instruction, generates a sequence of steps, a visual pro-gram if you will, and then executes these steps to produce the desired output. Each line in a visual program invokes one among a wide range of modules currently supported by the system. Modules may be off-the-shelf computer vi-sion models, language models, image processing subrou-tines in OpenCV [4], or arithmetic and logical operators. Modules consume inputs that are produced by executing previous lines of code and output intermediate results that can be consumed downstream. In the example above, the visual program generated by V ISPROG invokes a face de-tector [16], GPT-3 [5] as a knowledge retrieval system, and CLIP [20] as an open-vocabulary image classifier to pro-duce the desired output (see Fig. 1). VISPROG improves upon previous methods for gener-ating and executing programs for vision applications. For the visual question answering (VQA) task, Neural Module Networks (NMN) [2,9,10,12] compose a question-specific, end-to-end trainable network from specialized, differen-tiable neural modules. These approaches either use brittle, off-the-shelf semantic parsers to deterministically compute the layout of modules, or learn a layout generator through weak answer supervision via R EINFORCE [30]. In con-trast, V ISPROG uses a powerful language model (GPT-3) Figure 2. Modules currently supported in V ISPROG .Red modules use neural models (OWL-ViT [19], DSFD [16], Mask-Former [6], CLIP [20], ViLT [15], and Stable Diffusion [25]). Blue modules use image processing and other python subroutines. These modules are invoked in programs generated from natural language instructions. Adding new modules to extend V ISPROG’s capabilities is straightforward (Code. 1). and a small number of in-context examples to create com-plex programs without requiring any training1. 
Programs created by VISPROG also use a higher level of abstraction than NMNs and invoke trained state-of-the-art models and non-neural python subroutines (Fig. 2). These advantages make VISPROG an easy-to-use, performant, and modular neuro-symbolic system. VISPROG is also highly interpretable. First, VISPROG produces easy-to-understand programs which a user can verify for logical correctness. Second, by breaking down the prediction into simple steps, VISPROG allows a user to inspect the outputs of intermediate steps to diagnose errors and, if required, intervene in the reasoning process. Altogether, an executed program with intermediate step results (e.g., text, bounding boxes, segmentation masks, generated images, etc.) linked together to depict the flow of information serves as a visual rationale for the prediction. To demonstrate its flexibility, we use VISPROG for 4 different tasks that share some common skills (e.g., for image parsing) while also requiring some degree of specialized reasoning and visual manipulation capabilities. These tasks are: (i) compositional visual question answering; (ii) zero-shot natural language visual reasoning (NLVR) on image pairs; (iii) factual knowledge object tagging from natural language instructions; and (iv) language-guided image editing. We emphasize that neither the language model nor any of the modules are finetuned in any way. Adapting VISPROG to any task is as simple as providing a few in-context examples consisting of natural language instructions and the corresponding programs. While easy to use, VISPROG shows an impressive gain of 2.7 points over a base VQA model on the compositional VQA task, strong zero-shot accuracy of 62.4% on NLVR without ever training on image pairs, and delightful qualitative and quantitative results on knowledge tagging and image editing tasks. (We use "training" to refer to gradient-based learning, to differentiate it from in-context learning, which only involves a feedforward pass.) Our key contributions include: (i) VISPROG, a system that uses the in-context learning ability of a language model to generate visual programs from natural language instructions for compositional visual tasks (Sec. 3); (ii) demonstrating the flexibility of VISPROG on complex visual tasks such as factual knowledge object tagging and language-guided image editing (Secs. 4.3 and 4.4) that have eluded or seen limited success with a single end-to-end model; and (iii) producing visual rationales for these tasks and showing their utility for error analysis and user-driven instruction tuning to improve VISPROG's performance significantly (Sec. 5.3).
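The program format is only described informally in this introduction; the toy interpreter below illustrates the general idea of executing a line-by-line program whose steps call registered modules and pass intermediate results through a shared state. The syntax and the two registered modules are invented for illustration and are not VISPROG's actual implementation.

```python
# Toy neuro-symbolic program executor in the spirit described above.
import re

# Stand-in modules; in a real system these could wrap detectors, CLIP, an LLM, etc.
REGISTRY = {
    "ADD": lambda a, b: a + b,
    "TAG": lambda items, label: [f"{label}:{x}" for x in items],
}

def execute(program, state):
    """Run lines of the form OUT=MODULE(arg=NAME_OR_LITERAL, ...) against `state`."""
    for line in program.strip().splitlines():
        out, module, arg_str = re.match(r"(\w+)=(\w+)\((.*)\)", line.strip()).groups()
        kwargs = {}
        for pair in filter(None, [p.strip() for p in arg_str.split(",")]):
            key, val = pair.split("=")
            # Resolve either a previously computed variable or a Python literal
            # (eval is acceptable here only because this is a toy example).
            kwargs[key] = state[val] if val in state else eval(val)
        state[out] = REGISTRY[module](**kwargs)   # intermediate results stay inspectable
    return state

prog = """
TOTAL=ADD(a=X, b=2)
OUT=TAG(items=NAMES, label='person')
"""
print(execute(prog, {"X": 40, "NAMES": ["amy", "bo"]}))
# {'X': 40, 'NAMES': ['amy', 'bo'], 'TOTAL': 42, 'OUT': ['person:amy', 'person:bo']}
```

Because every step's output is stored in the shared state, the executed program doubles as the kind of inspectable, step-by-step rationale described above.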
Ghunaim_Real-Time_Evaluation_in_Online_Continual_Learning_A_New_Hope_CVPR_2023
Abstract Current evaluations of Continual Learning (CL) meth-ods typically assume that there is no constraint on train-ing time and computation. This is an unrealistic assump-tion for any real-world setting, which motivates us to pro-pose: a practical real-time evaluation of continual learn-ing, in which the stream does not wait for the model to com-plete training before revealing the next data for predictions. To do this, we evaluate current CL methods with respect to their computational costs. We conduct extensive experi-ments on CLOC, a large-scale dataset containing 39 million time-stamped images with geolocation labels. We show that a simple baseline outperforms state-of-the-art CL methods under this evaluation, questioning the applicability of ex-isting methods in realistic settings. In addition, we explore various CL components commonly used in the literature, in-cluding memory sampling strategies and regularization ap-proaches. We find that all considered methods fail to be competitive against our simple baseline. This surprisingly suggests that the majority of existing CL literature is tai-lored to a specific class of streams that is not practical. We hope that the evaluation we provide will be the first step to-wards a paradigm shift to consider the computational cost in the development of online continual learning methods.
1. Introduction Deep Neural Networks (DNNs) have demonstrated impressive success in solving complex tasks [20, 29, 42] when trained offline, for several passes, over large well-curated labeled datasets. However, in many real-world scenarios, data is only available in the form of a stream with a changing distribution. Due to this challenge, there has been a growing interest in the problem of learning from a time-varying stream, also known as Continual Learning (CL), which is a key challenge for DNNs due to a phenomenon known as catastrophic forgetting [17, 36]. In particular, when a DNN is trained with data from a new distribution, its performance significantly drops on previously learned data. [Code: github.com/Yasir-Ghunaim/RealtimeOCL] While mitigation efforts have been proposed, e.g., through regularizing the training [2, 25, 53], replaying previously seen examples [11, 23, 39], and many other approaches [16, 40, 51], current evaluation approaches are still far from real-world scenarios. For example, the majority of the literature is on Offline Continual Learning, under which methods are allowed an unlimited budget, both in time and computation. Furthermore, the majority of CL evaluations are conducted on small-scale datasets with well-defined temporal distribution boundaries in the form of learning a sequence of tasks. To that end, there has recently been a growing interest in the more realistic setting of Online Continual Learning (OCL). In such a setup [1, 4, 21, 32], CL methods are restricted to a single training pass over a shuffled split of existing offline CL benchmarks. This is certainly a step forward towards resolving some of the unrealistic assumptions of offline CL. However, current evaluations do not sufficiently address the challenges of real-time learning for high-throughput streams with rapid distribution changes. To illustrate this, consider the problem of continuously learning a Twitter stream where 350K tweets are uploaded per minute on various trending topics [41]. Every uploaded tweet needs to be predicted with a DNN for misinformation and hate speech, among other things, while simultaneously learning and adapting to them. Given the scale at which data is being updated, there is an inherent limitation on the time and computational budget affordable for learning incoming tweets, an aspect that is often overlooked in the prior art from the OCL literature. Consider an OCL method that is 10 times slower than the Twitter high-throughput stream, i.e., it takes 10 minutes to train on one minute's worth of tweets (350K tweets). This inefficiency results in an accumulation of ∼3.1 million new samples that need to be predicted and trained on. Since it is not acceptable to pause all tweets from appearing online until the method training is complete, predictions for all new samples will be performed with an older version of the model. This poses a key challenge where efficient learning from streams becomes necessary. This is because slow-training OCL methods can result in subpar performance, as they resort to predicting new stream data using an older model. This behavior worsens for streams that experience a faster change in distribution. [Figure 1. OCL Real-Time Evaluation Example: real-time evaluation, using the CLOC dataset [7], of two OCL methods A and B, where method B is twice as slow as method A. Both methods are evaluated on every incoming sample. Since A has a stream-model relative complexity of one, i.e., C_S(A) = 1, it is able to train on all the stream samples; in contrast, B, which has a relative complexity of two, requires two time steps to train on a single stream batch and thus only trains on half of the stream samples.] In this paper, we propose a real-time evaluation protocol for OCL that factors in training computational complexity. Given a stream, consider an OCL method A that is as fast as the stream, i.e., A can train on every step of revealed data before the stream presents new samples. Then, if an OCL method B is twice as expensive as A, B will update the model used for evaluation only every other stream step, i.e., the model will be updated half the number of times compared to A. Figure 1 illustrates our proposed real-time evaluation. This is in contrast to all prior art [3, 4, 6] that (1) unreasonably allows an unlimited computational budget to train on any given stream data, and (2) unfairly compares OCL methods despite their having different training complexity levels. Using our real-time evaluation protocol, we benchmark many existing OCL methods against a simple and inexpensive baseline, which mitigates forgetting by simply storing and replaying recently seen samples. Contributions. We summarize our conclusions as follows: (1) We show that under our practical real-time evaluation, our simple baseline outperforms all the considered methods from the OCL literature, including recent SOTA approaches like ACE [6]. (2) We consider a complementary setup where the stream is as slow as the most training-expensive OCL method and compare that method against the compute-equivalent baseline. Under this computationally normalized setting, we find that the compute-equivalent baseline outperforms all existing methods. (3) Our experiments are consistent, holding for all the considered continual learning strategies, and extensive, amounting to more than 2 GPU-months. Our results highlight that the current progress in OCL needs to be rethought and a paradigm shift is needed. We hope our work will lead to a new direction for continual learning that takes into account the computational cost of each method.
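A compact way to express this delayed-update evaluation is sketched below. This is our own illustration, not the authors' code: a method whose training step costs `relative_complexity` stream steps only finishes a training update every `relative_complexity` steps, so the batches in between are predicted with a stale model and never trained on.

```python
# Schematic real-time OCL evaluation loop; the learner interface (predict/train_step)
# and the label format are assumptions for illustration.

def realtime_evaluate(stream, learner, relative_complexity):
    """stream yields (x, y) batches; learner exposes predict() and train_step()."""
    correct, total = 0, 0
    for step, (x, y) in enumerate(stream):
        # Predict with whatever model version is currently deployed (possibly stale).
        preds = learner.predict(x)
        correct += sum(int(p == t) for p, t in zip(preds, y))
        total += len(y)
        # A method with relative complexity k finishes training only every k steps,
        # so it effectively trains on 1/k of the stream batches.
        if (step + 1) % relative_complexity == 0:
            learner.train_step(x, y)
    return correct / max(total, 1)
```

With `relative_complexity=1` the learner matches the stream (method A above); with `relative_complexity=2` it behaves like method B, training on only half of the stream batches.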
Chen_Effective_Ambiguity_Attack_Against_Passport-Based_DNN_Intellectual_Property_Protection_Schemes_CVPR_2023
Abstract Since training a deep neural network (DNN) is costly, the well-trained deep models can be regarded as valuable intellectual property (IP) assets. The IP protection asso-ciated with deep models has been receiving increasing at-tentions in recent years. Passport-based method, which re-places normalization layers with passport layers, has been one of the few protection solutions that are claimed to be secure against advanced attacks. In this work, we tackle the issue of evaluating the security of passport-based IP protec-tion methods. We propose a novel and effective ambiguity attack against passport-based method, capable of success-fully forging multiple valid passports with a small train-ing dataset. This is accomplished by inserting a specially designed accessory block ahead of the passport parame-ters. Using less than 10% of training data, with the forged passport, the model exhibits almost indistinguishable per-formance difference (less than 2%) compared with that of the authorized passport. In addition, it is shown that our attack strategy can be readily generalized to attack other IP protection methods based on watermark embedding. Direc-tions for potential remedy solutions are also given.
1. Introduction With the geometric growth of the computing power of computational devices in recent decades, many deep learning applications have emerged that have contributed to the human world, such as super-resolution reconstruction [7, 9, 30], image inpainting [31, 34, 35] and forgery detection [32]. It usually costs many resources to develop new DNN models, and developers will not tolerate the theft of their IP. The IP protection problem of deep models becomes more severe with the birth of Machine Learning as a Service (MLaaS) [26]. Preventing infringement of deep models is now a necessary concern when developing new algorithms and systems. Model watermarking [20, 25, 27, 28, 37] has been a popular method to protect the IP of DNN models. In the embedding process, the owners embed secret signatures (watermarks), and then in the verification process, they can claim their ownership of the model by matching the extracted signatures with the original versions. The existing model watermark methods can be roughly divided into two categories [10, 11]: feature-based and trigger-based methods. Specifically, feature-based methods [4, 8, 24, 29] apply a regularizer to embed the secret watermark into the activation functions or model weights. Uchida et al. [29] proposed to use a regularizer to embed a watermark into the model weights. Darvish et al. [8] instead embedded the fingerprints in the probability density function of the trainable weights. Aramoon et al. [3] inserted the signature into the gradient of the cross-entropy loss function with respect to the inputs. In contrast, trigger-based methods make the output target respond to specific inputs. Along this line, Adi et al. [1] used the backdoor attack as a means to watermark the model. Merrer et al. [18] designed a zero-bit watermarking algorithm that uses adversarial samples as watermarks to claim ownership. Zhang et al. [39] applied watermarks to images and then trained the network to output target labels when input images carry these watermarks. Despite their strength in retaining ownership of DNN models, most existing model watermark methods are shown to be vulnerable to the so-called ambiguity attack, in which the attacker manages to cast doubts on the ownership verification by crafting counterfeit (forged) watermarks [11]. Recently, Fan et al. [10] first designed a series of ambiguity attacks, which are effective in attacking DNN watermark methods. It was stated that for conventional watermark methods, a counterfeit watermark can be forged as long as the model performance is independent of the signature [11]. Following this proposition, Fan et al. designed a passport layer through which the functionality of the model is controlled by a signature called the passport. However, Fan et al. encountered a heavy performance drop when batch normalization layers exist. To solve this problem, Zhang et al. [38] added learnable affine transformations to the scale and bias factors. It was claimed that an attacker cannot find a substitute passport that maintains the model performance, which ensures the security of these passport-based methods against existing ambiguity attacks.
In this work, we aim to design an advanced ambiguity attack to the passport-based method, capable of generat-ing valid substitute passports with only a small number of data. Here, valid substitute passports are defined as those leading to an indistinguishable model performance, but suf-ficiently different from the original authorized passports. Clearly, with such valid substitute passports, an attacker can claim the ownership of the model. To this end, we first experimentally justify the existence of multiple valid substitute passports. Noticing the fact that it is easy to lo-calize the passport layers, we then propose our ambiguity attack by replacing passport layers with our designed two types of structures, namely Individual Expanded Residual Block (IERB) andCollective Expanded Residual Block (CERB) . Both structures are built in a way to encourage the significant changes of the parameters in the passport layers during the training, which could help us search for valid substitute passports. Benefiting from these two struc-tures and assisting with a small amount training data, we can obtain valid substitute passports, and hence, defeat the passport-based methods which are the only type of method claimed to be immune to existing ambiguity attacks. Our major contributions can be summarized as follows: • We propose a novel and effective ambiguity attack against the passport-based IP protection schemes. With less than 10% of training data, our ambiguity at-tack on passport-layer protected model can restore the functionality of the model with a less than 2% perfor-mance gap from the original accuracy. • We design two novel structures for replacing the pass-port layers, based on the multi-layer perceptron (MLP) and skip connection to assist with our ambiguity attack for searching valid substitute passports with a small amount of training data. • Experiments on both overlapping (attacker’s training dataset is part of the original training dataset) and non-overlapping datasets (attacker’s dataset and the origi-nal one come from the same source but no overlap ex-ists), and on different network structures have proved the effectiveness of our ambiguity attack. • Our attack method can be readily generalized to attack other DNN watermark methods [8, 21, 29].2. Related Works DNN watermark methods have been popular solutions for DNN model IP protection. However, these techniques might still be vulnerable to flagrant infringement from no-torious adversaries. In this section, we review the two types of representative attack methods, namely, removal at-tack [2, 5, 6, 14, 22, 33] and ambiguity attack [10, 11, 38], along with the passport-based method attempting to defend against ambiguity attacks [11]. Removal Attack : This type of attack tries to remove the watermark from the protected model, malfunctioning the ownership verification mechanism. Along this line, many fine-tuning based methods have been proposed. Chen et al. [5] combined a redesigned elastic weight consolidation al-gorithm and unlabeled data augmentation to achieve unified model watermark removal with limited data. Guo et al. [14] used a dataset transformation method called PST (Pattern embedding and Spatial-level Transformation) to preprocess the data before fine-tuning. Chen et al. [6] utilized auxil-iary unlabeled data to decrease the amount of labeled train-ing data required for effective watermark removal. Aiken et al. 
[2] provided a three-stage scheme to remove backdoor-based watermarks by exploiting another trigger-free dataset from the same domain. Liu et al. [22] designed a framework to remove backdoor-based watermarks, in which a data augmentation was proposed to imitate the behavior of the backdoor triggers. Yan et al. [33] attempted to break the passport-based method by scaling the neurons and flipping the signs of parameters. However, this method assumed that the authorized passports are available to the attacker, which is not realistic in practice. Also, these aforementioned attack methods only enable the attackers to remove the watermarks, while being unable to claim the ownership. Ambiguity Attack: Another, more threatening attack is the ambiguity attack, where the attacker can forge another substitute watermark to claim the model ownership. The concept of the ambiguity attack originally appeared in the image watermark community [19, 23], and has recently been extended to DNN watermark methods. The pioneering work was conducted by Fan et al. in [10], which pointed out the vulnerability of Uchida's watermark method [29] under the ambiguity attack. They also showed that the same weakness of Adi's DNN watermark method [1] exists, by proving that another trigger can be optimized exclusively to cause the same model response as the original one. Passport-based method: The passport-based method was originally proposed by Fan et al. [11] as a remedy enabling DNN watermark methods to defeat the ambiguity attack. This is achieved by replacing the traditional normalization layer with the so-called passport layer, whose difference mainly lies in how the affine factors are obtained. In the passport layer, the scale factor γ and bias factor β are computed with the passport as follows: γ = Avg(W_conv ∗ s_γ), β = Avg(W_conv ∗ s_β), (1) where s = {s_γ, s_β} is called the passport, W_conv is the weight of the convolutional layer preceding this layer, and Avg(·) denotes the average pooling function. To embed the passport s into the model, the network N_p is optimized on the training set D = {(x_i, y_i)}_{i=1}^{N}, where x_i is the input and y_i is the corresponding label, usi
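To make Eq. (1) concrete, a minimal passport-layer sketch is given below: the affine factors of the normalization are not free parameters but are derived from the preceding convolution weight and the passport via average pooling. Shapes, the fixed random passports, and the use of affine-free BatchNorm are simplifying assumptions; this is our reading of the formulation, not the reference implementation.

```python
# Simplified passport layer following Eq. (1): gamma and beta are derived from the
# preceding conv weight and the passport s = {s_gamma, s_beta}.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PassportConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False)
        self.bn = nn.BatchNorm2d(out_ch, affine=False)
        # Passports are fixed "keys" shaped like the conv input, one per affine factor.
        self.register_buffer("s_gamma", torch.randn(1, in_ch, 32, 32))
        self.register_buffer("s_beta", torch.randn(1, in_ch, 32, 32))

    def _affine_from_passport(self, s):
        # gamma/beta = Avg(W_conv * s): convolve the passport with the layer's own
        # weight, then average-pool to one scalar per output channel.
        resp = F.conv2d(s, self.conv.weight, padding=self.conv.padding[0])
        return resp.mean(dim=(0, 2, 3))             # (out_ch,)

    def forward(self, x):
        gamma = self._affine_from_passport(self.s_gamma)
        beta = self._affine_from_passport(self.s_beta)
        y = self.bn(self.conv(x))
        return y * gamma.view(1, -1, 1, 1) + beta.view(1, -1, 1, 1)

block = PassportConvBlock(3, 16)
print(block(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 16, 32, 32])
```

Because gamma and beta depend on the convolution weights themselves, substituting a different passport changes the affine factors and, in the intended design, degrades the model's accuracy; this coupling is exactly what the ambiguity attack described in this paper targets.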
Ding_Visual_Dependency_Transformers_Dependency_Tree_Emerges_From_Reversed_Attention_CVPR_2023
Abstract Humans possess a versatile mechanism for extracting structured representations of our visual world. When look-ing at an image, we can decompose the scene into entities and their parts as well as obtain the dependencies between them. To mimic such capability, we propose Visual Depen-dency Transformers (DependencyViT)1that can induce vi-sual dependencies without any labels. We achieve that with a novel neural operator called reversed attention that can naturally capture long-range visual dependencies between image patches. Specifically, we formulate it as a depen-dency graph where a child token in reversed attention is trained to attend to its parent tokens and send information following a normalized probability distribution rather than gathering information in conventional self-attention. With such a design, hierarchies naturally emerge from reversed attention layers, and a dependency tree is progressively in-duced from leaf nodes to the root node unsupervisedly. DependencyViT offers several appealing benefits. (i) En-tities and their parts in an image are represented by dif-ferent subtrees, enabling part partitioning from dependen-cies; (ii) Dynamic visual pooling is made possible. The leaf nodes which rarely send messages can be pruned with-out hindering the model performance, based on which we propose the lightweight DependencyViT-Lite to reduce the computational and memory footprints; (iii) DependencyViT works well on both self-and weakly-supervised pretraining paradigms on ImageNet, and demonstrates its effectiveness on 8 datasets and 5 tasks, such as unsupervised part and saliency segmentation, recognition, and detection.
1. Introduction Humans have a rich mental representation of our surrounding environments. When looking at an image (see Figure 1(a)), we can recognize the scene and can also quickly decompose it into hierarchical elements with dependencies, e.g., a laptop consisting of a screen and a keyboard is placed on the table. [Figure 1. (a) An example of a hierarchical dependency structure. (b) The dynamic pooling and information aggregation process of DependencyViT.] This ability to construct dependencies between objects (and/or their parts) serves as the cornerstone of human intelligence, enabling us to perceive, interact with, and reason about the world. [Code: https://github.com/dingmyu/DependencyViT] From the pre-deep-learning era, many classical image dependency parsing algorithms [25, 27, 66, 70, 81, 98] have been proposed, for example, the Bayesian framework [70], And-Or graphs [27], and hierarchical probabilistic models [25, 66] for parsing images into their constituent visual patterns. Apart from that, Capsule Networks [40, 61] show the potential to learn geometrically organized parts from images. After that, visual grounding methods [10, 18, 21, 23, 85] try to align the semantic meaning between visual objects and words to distill effective structures for the vision branch from language. Similarly, human-object interaction approaches [39] learn the relationships between two objects, e.g., a boy "holds" an ice cream, from manually annotated labels. Such methods struggle to learn hierarchical visual structures, such as different parts of an object, unless exhaustive and time-consuming manual annotations are provided. Recently, vision-language (VL) grammar induction [72] proposes to extract shared hierarchical object dependencies for both vision and language unsupervisedly from image-caption pairs. However, the above works suffer from two key issues: 1) the parsing relies heavily on supervision from natural language or human annotations rather than the image itself, and 2) their parsed structures are object-level, based on a pre-trained object detection model like Faster/Mask R-CNN [30, 58], hindering their generalizability in part-level and non-detector scenarios. This paper answers a question naturally raised by the above issues: can we efficiently induce visual dependencies and build hierarchies from images without human annotations? Currently, visual parsing works mainly lie in semantic and instance segmentation. Unlike detector-based works that rely on pre-trained detectors, they parse the image at the pixel level, which is resource-intensive and costly. Inspired by vision transformers [22], which take image patches as input and leverage self-attention to perform interactions between patches, we propose to build a dependency tree at the patch level. Taking patches as basic elements and building a tree structure on them has two benefits: 1) it unifies part-level and object-level dependencies, all of which are formulated as subtrees; 2) in the dependency structure, information can be aggregated from leaves to parents (as shown in Figure 1(b)) to produce a hierarchy of representations for different parts and objects along the path.
In practice, it is non-trivial to build the dependency tree with the standard transformer. Although the self-attention mechanism is designed to collect information dynamically from other patches, the number of attention heads con-straints the number of tokens that a patch can attend to. 1) However, each parent could have an arbitrary number of children in a dependency tree, while each child only has one parent. Thus it’s more straightforward for a node to select its parent instead of selecting the child. 2) Further-more, the transformer treats each patch equally, it does not distinguish between root and leaf nodes. Contributions for different subtrees should be distinct. Motivated by the above observations, in this work, we propose a dependency-inspired vision transformer, named Visual Dependency Transformers (DependencyViT). We propose three innovations to the standard self-attention, as shown in Figure 2. Firstly, to form a root-centric depen-dency parser, we introduce a reversed self-attention mecha-nism by transposing the adjacency matrix. In this way, leaf nodes can send information to their parents and form hi-erarchical subtrees. Secondly, we propose a message con-troller to determine how a node or subtree sends messages. Thirdly, a soft head selector is introduced to generate a unique dependency graph for each layer. As a result, self-attentions in DependencyViT naturally form a dependency tree parser. We did extensive studies in both supervised and self-supervised pretraining to show DependencyViT is ca-pable of capturing either object-or part-level dependencies.Intuitively, dependency parsing should ease scene under-standing, as humans can understand complex scenes at a glance based on visual dependencies. Based on this, we further introduce a lightweight model DependencyViT-Lite by proposing a dynamic pooling scheme, reducing the com-putational cost largely. Within each subtree, we prune those leaf nodes with the least information received because they have sent information to their parent node. We show the pruned nodes can be retrieved by soft aggregations from their parents, preserving the model capability and dense representation capability. We make three main contributions. (i) DependencyViT performs visual dependency parsing by reversed attention in self-or weakly-supervised manners. We demonstrate its effectiveness in both part-level and object-level parsing. (ii) We propose a visual dynamic pooling scheme for De-pendencyViT hence DependencyViT-Lite. The dependency tree can also be progressively built during the pruning pro-cess. (iii) Extensive experiments on both self-and weakly-supervised pretraining on ImageNet, as well as five down-stream tasks, show the effectiveness of DependencyViT.
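A minimal single-head sketch of the reversed-attention idea is given below. This is our reading of the mechanism, not the released code: the usual attention matrix is normalized over candidate parents so that each child distributes one unit of outgoing message, and aggregation then uses the transposed matrix, so a parent receives the sum of messages sent by its children. The soft head selector and message controller are omitted.

```python
# Single-head sketch of reversed attention for patch tokens.
import torch
import torch.nn as nn

class ReversedAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x):                        # x: (B, N, D) patch tokens
        q, k, v = self.q(x), self.k(x), self.v(x)
        # attn[b, i, j]: probability that child i picks token j as its parent
        # (rows sum to 1, so each child sends exactly one unit of message).
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        # Reversed aggregation: parent j receives the messages sent by its children,
        # i.e. features are pooled with attn^T rather than attn.
        return attn.transpose(-2, -1) @ v, attn

layer = ReversedAttention(64)
out, parent_probs = layer(torch.randn(2, 196, 64))
print(out.shape, parent_probs.shape)  # torch.Size([2, 196, 64]) torch.Size([2, 196, 196])
```

Under this scheme, tokens that receive little incoming mass act like leaf nodes and are natural candidates for the dynamic pooling described above, while heavily attended tokens behave like parents or roots.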
Jain_Enhanced_Stable_View_Synthesis_CVPR_2023
Abstract We introduce an approach to enhance novel view synthesis from images taken from a freely moving camera. The introduced approach focuses on outdoor scenes, where recovering an accurate geometric scaffold and camera poses is challenging, leading to inferior results with the state-of-the-art stable view synthesis (SVS) method. SVS and related methods fail for outdoor scenes primarily due to (i) over-relying on multiview stereo (MVS) for geometric scaffold recovery and (ii) assuming COLMAP-computed camera poses are the best possible estimates, despite it being well-studied that MVS 3D reconstruction accuracy is limited by scene disparity and that camera-pose accuracy is sensitive to key-point correspondence selection. This work proposes a principled way to enhance novel view synthesis solutions, drawing inspiration from the basics of multiple view geometry. By leveraging the complementary behavior of MVS and monocular depth, we arrive at a better per-view scene depth for nearby and far points, respectively. Moreover, our approach jointly refines camera poses with image-based rendering via multiple rotation averaging graph optimization. The recovered scene depth and camera poses enable better view-dependent on-surface feature aggregation of the entire scene. Extensive evaluation of our approach on popular benchmark datasets, such as Tanks and Temples, shows substantial improvement in view synthesis results compared to the prior art. For instance, our method shows 1.5 dB of PSNR improvement on Tanks and Temples. Similar statistics are observed when tested on other benchmark datasets such as FVS, Mip-NeRF 360, and DTU.
1. Introduction Image-based rendering, popularly re-branded as view synthesis, is a long-standing problem in computer vision and graphics [42, 44]. This problem aims to develop a method that allows the user to seamlessly explore the scene via rendering of the scene from a sparse set of captured images [2, 20, 42]. Furthermore, the rendered images must be as realistic as possible for a better user experience [34-36]. Currently, among the existing approaches, Riegler and Koltun's stable view synthesis (SVS) approach [36] has shown excellent results and demonstrated photorealism in novel view synthesis, without using synthetic gaming-engine 3D data, unlike [34]. SVS is indeed stable in rendering photorealistic images from novel viewpoints for large-scale scenes. Yet, it assumes that MVS-based [39, 40] dense 3D scene reconstruction and camera poses from COLMAP [39] are correct. The off-the-shelf algorithms used for 3D data acquisition and camera poses from images are, of course, popular, and to assume these algorithms could provide favorable 3D reconstruction and camera poses is not an outlandish assumption. Nonetheless, taking a step forward, in this paper we argue that although the choices made by SVS for obtaining the geometric scaffold and camera poses in the pursuit of improving view synthesis are commendable, we can do better by making mindful use of fundamentals from multiple-view geometry [11, 12] and recent developments in deep-learning techniques for 3D computer vision problems. To start with, we would like to emphasize that it is clearly unreasonable, especially in an outdoor setting, to assume that multi-view stereo (MVS) can provide accurate depth for all image pixels. It is natural that pixels with low disparity will not be reconstructed well using state-of-the-art MVS approaches [8, 39, 40, 46]. Even a precise selection of multiple-view images with reasonable distance between them (assuming a good baseline for stereo) may not be helpful due to loss of common scene-point visibility, foreshortening issues, etc. [43]. Such issues compel the practitioner to resort to post-processing steps for refining the MVS-based 3D geometry so that it can be useful for the rendering pipeline or the neural-rendering network at training time. Another critical component of view synthesis, which is often brushed aside in the literature, is the accurate recovery of the camera poses. In neural view synthesis approaches such as [36], if the camera pose is wrong, the feature aggregation corresponding to surface points could be misleading, providing inferior results. [Figure 1. Qualitative comparison with the popular SVS method [36] on the M60 scene of the Tanks and Temples dataset [18]: our approach renders finer details in the scene, with PSNR values of 19.1 for SVS and 20.8 for our method.]
Practi-cally speaking, this is generally not the case for outdoor scenes [5, 11, 15]. What is more surprising is that some re-cent benchmark datasets put COLMAP recovered poses as the ground-truth poses [33]. Hence, we want to get this out way upfront that a robust and better camera-pose estimates are vital for better modeling view synthesis problem. From the above predication, it is apparent that a more mindful approach is required to make view synthesis ap-proaches practically useful, automatic, and valuable for real-world application. To this end, we propose a principled and systematic approach that provides a better geometric scaffold and camera poses for reliable feature aggregation of the scene’s surface points, leading to improved novel-view synthesis results enabling superior photorealism. In practice, we can have suitable initial camera poses from images using COLMAP. Yet, it must be refined fur-ther for improved image-feature aggregation corresponding to 3D surface points for neural rendering. It is well-studied in multiple-view geometry literature that we can improve and refine camera poses just from image key-point corre-spondences [10, 11]. Accordingly, we introduce a learning-based multiple motion averaging via graph neural network for camera pose recovery, where the pose graph is initial-ized using COLMAP poses for refinement. Meanwhile, it is challenging to accurately recover the 3D geometry of scene points with low or nearly-zero disparity using MVS methods [12, 43]. Another bad news from the theoretical side is that a precise estimation of scene depth from a single image is unlikely1, which is a correct state-ment and hard to argue. The good news is that advance-ments in deep-learning-based monocular depth prediction have led to some outstanding results in several practical 1As several 3D scene points can have same image projection.applications [24, 31]. Thus, at least practically, it seems possible to infer reliable monocular depth estimates up to scale. Using single image depth prediction, we can reason about the depth of scene points with low disparities. So, our proposed strategy is to use confidence based multiple-view stereo 3D that favours pixels with near-to-mid disparity and allows monocular depth estimates for the rest of the pixels. Overall depth is recovered after scaling all the scene depth appropriately using MVS reconstructed metric. By encoding the image features via convolutional neural networks, we map the deep features to our estimated 3D ge-ometric scaffold of the scene. Since we have better camera poses and scene reconstruction, we obtain and aggregate ac-curate feature vectors corresponding to each imaging view-rays—both from the camera to the surface point and from the surface point to viewing image pixels, giving us a fea-ture tensor. We render the new image from the features ten-sor via a convolutional network and simultaneously refine the camera pose. In summary, our contributions are • A systematic and principled approach for improved stable view synthesis enabling enhanced photorealism. • The introduced approach exploits the complementary na-ture of MVS and monocular depth estimation to recover better 3D geometric scaffold of the scene. Meanwhile, the robust camera poses are recovered using graph neural network based multiple motion averaging. • Our approach proposes an improved loss function to jointly optimize and refine for poses, neural image ren-dering, and scene representation showing superior results. 
Our approach, when tested on benchmark datasets such as Tanks and Temples [18], FVS [35], Mip-NeRF 360 [1], and DTU [16], gives better image-based rendering results with generally more than 1 dB PSNR gain (see Fig. 1).
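As an illustration of the depth-fusion idea (a sketch under our own assumptions, not the paper's exact procedure): per-pixel MVS depth is kept where its confidence is high, the scale-ambiguous monocular prediction is aligned to the MVS metric on those reliable pixels, and the scaled monocular depth fills in the low-confidence, typically distant, regions. The confidence threshold and median-ratio scaling below are illustrative choices.

```python
# Hedged sketch of fusing MVS depth with scale-ambiguous monocular depth.
import numpy as np

def fuse_depth(mvs_depth, mvs_conf, mono_depth, conf_thresh=0.5):
    """
    mvs_depth:  (H, W) metric depth from multi-view stereo (0 where invalid)
    mvs_conf:   (H, W) confidence in [0, 1] of the MVS estimate
    mono_depth: (H, W) monocular depth prediction, valid only up to scale
    """
    reliable = (mvs_conf >= conf_thresh) & (mvs_depth > 0)
    # Align the monocular prediction to the MVS metric using reliable pixels only.
    scale = np.median(mvs_depth[reliable] / np.maximum(mono_depth[reliable], 1e-6))
    # Keep MVS depth where it is trustworthy; fall back to scaled monocular depth
    # for low-disparity / low-confidence pixels (typically the far background).
    fused = np.where(reliable, mvs_depth, scale * mono_depth)
    return fused, scale

H, W = 120, 160
mvs = np.random.uniform(1.0, 5.0, (H, W)); conf = np.random.rand(H, W)
mono = mvs / 3.0 + np.random.normal(0, 0.01, (H, W))   # toy: correct up to a 3x scale
fused, s = fuse_depth(mvs, conf, mono)
print(fused.shape, round(float(s), 2))                  # (120, 160) ~3.0
```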
Aydemir_TempSAL_-_Uncovering_Temporal_Information_for_Deep_Saliency_Prediction_CVPR_2023
Abstract Deep saliency prediction algorithms complement object recognition features; they typically rely on additional information such as scene context, semantic relationships, gaze direction, and object dissimilarity. However, none of these models consider the temporal nature of gaze shifts during image observation. We introduce a novel saliency prediction model that learns to output saliency maps in sequential time intervals by exploiting human temporal attention patterns. Our approach locally modulates the saliency predictions by combining the learned temporal maps. Our experiments show that our method outperforms the state-of-the-art models, including a multi-duration saliency model, on the SALICON benchmark and the CodeCharts1k dataset. Our code is publicly available at https://ivrl.github.io/Tempsal/ .
1. 1. Introduction Humans have developed attention mechanisms that al-low them to selectively focus on the important parts of a scene. Saliency prediction algorithms aim to computation-ally detect these regions that stand out relative to their sur-roundings. These predictions have numerous applications 1https://ivrl.github.io/Tempsal/in image compression [37], image enhancement [51], im-age retargeting [1], rendering [43], and segmentation [28]. Since the seminal work of Itti et al. [18], many have de-veloped solutions using both handcrafted features [7] anddeep ones [9, 17,26,34,46,48]. Nowadays, employing deep neural networks is preferred in saliency prediction as they outperform bottom-up models. These methods typically de-pend on pre-trained object recognition networks to extract features from the input image [31]. In addition to these features, scene context [47], object co-occurrence [50], and dissimilarity [2] have been exploited to improve the saliencyprediction. However, while these approaches model the scene context and objects, they fail to consider that humans dynamically observe scenes [49]. In neuroscience, the inhi-bition of return paradigm states that a suppression mecha-nism reduces visual attention towards recently attended ob-jects [39] and encourages selective attention to novel re-gions. Motivated by this principle, we develop a saliency prediction model that incorporates temporal information. Fosco et al. [14] also exploit temporal information in saliency prediction, but they consider snapshots containing observations up to 0.5, 3, and 5 seconds, thus not leverag-ing saliency trajectory but rather saliency accumulation. Bycontrast, here, we model consecutive time slices, connect-ing our approach more directly with the human gaze and thus opening the door to automated visual appeal assess-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 6461 ment in applications such as website design [32], advertise-ment [36] and infographics [13]. To achieve this, we show that when viewing images, hu-man attention yields temporally evolving patterns, and we introduce a network capable of exploiting this temporal in-formation for saliency prediction. Specifically, our model learns time-specific predictions and is able to combine them with a conventional image saliency map to obtain a tempo-rally modulated image saliency prediction. We evaluate our method on the SALICON [20] and CodeCharts1k [14] datasets which contain temporal infor-mation, unlike the other popular saliency datasets such as CAT2000 [3] and MIT1003 [23]. By showing the bene-fits of estimating temporal saliency we hope to encourage the community to publish the temporal information of their data along with the final saliency maps. Note that exist-ing works typically collect saliency data by conducting psy-chophysical experiments [3, 14, 20, 23], and the attention data recorded during these experiments already includes temporal information. Therefore, no further experiments are required. As evidenced by our experiments, using temporal in-formation boosts the accuracy of the baseline network, enabling us to consistently outperform the state-of-the-art models on the SALICON saliency benchmark. More-over, in the CodeCharts1k dataset we outperform the multi-duration model [14] in two out of three metrics. 
We summarize our contributions as follows: • We evidence the presence of temporally evolving pat-terns in human attention. • We show that temporal information in the form of a saliency trajectory improves saliency prediction in nat-ural images, providing an investigation of the SALI-CON dataset for temporal attention shifts. • We introduce a novel, saliency prediction model, called TempSAL, capable of simultaneously predict-ing conventional image saliency and temporal saliency trajectories. • We propose a spatiotemporal mixing module that learns time-dependent patterns from temporal saliency maps. Our approach outperforms the state-of-the-art image saliency models that either do not consider tem-poral information or encode it in a cumulative manner.
Cui_Biomechanics-Guided_Facial_Action_Unit_Detection_Through_Force_Modeling_CVPR_2023
Abstract Existing AU detection algorithms are mainly based on appearance information extracted from 2D images, and well-established facial biomechanics that governs 3D fa-cial skin deformation is rarely considered. In this paper, we propose a biomechanics-guided AU detection approach, where facial muscle activation forces are modelled and are employed to predict AU activation. Specifically, our model consists of two branches: 3D physics branch and 2D image branch. In 3D physics branch, we first derive the Euler-Lagrange equation governing facial deformation. The Euler-Lagrange equation represented as an ordinary differential equation (ODE) is embedded into a differen-tiable ODE solver. Muscle activation forces together with other physics parameters are firstly regressed, and then are utilized to simulate 3D deformation by solving the ODE. By leveraging facial biomechanics, we obtain physically plausible facial muscle activation forces. 2D image branch compensates 3D physics branch by employing additional appearance information from 2D images. Both estimated forces and appearance features are employed for AU detec-tion. The proposed approach achieves competitive AU de-tection performance on two benchmark datasets. Further-more, by leveraging biomechanics, our approach achieves outstanding performance with reduced training data.
1. Introduction Action unit (AU) describes a local facial behavior, rep-resenting the movement of one facial muscle or a group of facial muscles [6]. For example, AU12 (lip corner puller) is corresponding to the muscle zygomatic major . AU15 (lip corner depressor) is corresponding to the muscle depressor anguli oris . In Figure 1, we visualize the muscles zygomatic major anddepressor anguli oris . Due to the muscle activa-tion, facial changes, in terms of both appearance and skin geometry, can be observed. Action unit detection task is to automatically predict if an AU is activated or not, given a 2D image. The majority of existing AU detection methods perform AU detection based on appearance information ex-(a) Zygomaticus major(b) Depressor angulioris Figure 1. Visualization of depressor anguli oris (shown in (a)) and orbicularis oris (shown in (b)). Images are from https://en. wikipedia.org/wiki available under Public Domain. tracted from 2D images [4,16,26,46]. These algorithms are mainly data-driven, whose performance highly depends on the quantity and quality of AU annotations. Unfortunately, AU annotations are hard to obtain and prone to errors. Be-sides, data-driven AU detection algorithms can’t generalize well to unseen scenarios beyond training samples. To perform robust and generalizable AU detection un-der limited AU annotations, generic knowledge about the anatomical spatial relationships among facial muscles is considered, based on which AU relationships are de-rived [16, 17, 48, 49]. Facial biomechanics, which defines the dynamic of facial 3D deformation given muscle acti-vation forces and is represented as second-order ODEs, is rarely considered. Facial biomechanics is important since it directly connects the muscle activation to skin deforma-tion through principled physics laws, which is applicable to different subjects and independent of a specific dataset. In this paper, we propose a biomechanics-guided AU detection approach, where facial muscle activation forces are modelled given 2D images and are employed for AU detection task. Our model consists of two branches: 3D physics branch and 2D image branch. In 3D physics branch, the Euler-Lagrange equation governing 3D deformation is firstly derived, which is represented as an ordinary differ-ential equation (ODE). Muscle activation forces together with other physics parameters are regressed and utilized for physics-based reconstruction by solving the ODE. 2D im-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 8694 age branch compensates 3D branch by employing appear-ance information. Finally, the estimated muscle activation forces together with image features are employed for AU detection. Our contributions lie in three parts: • We are the first to explicitly integrate facial biome-chanics for AU detection task. Particularly, our physics branch explicitly models muscle activation forces. Af-ter training, physically plausible and anatomically meaningful forces are employed for AU detection. • We are the first to introduce a generalized coordinate using facial blendshape basis and derive the Euler-Lagrange equation in the defined generalized coordi-nate. The Euler-Lagrange equation is then embedded into a differentiable ODE solver for physics-based re-construction. 
• We empirically demonstrate the effectiveness of our proposed approach on two benchmark datasets. Fur-thermore, our method remains robust under limited AU annotations and is cross-dataset generalizable.
Cao_Iterative_Proposal_Refinement_for_Weakly-Supervised_Video_Grounding_CVPR_2023
Abstract Weakly-Supervised Video Grounding (WSVG) aims to localize events of interest in untrimmed videos with only video-level annotations. To date, most of the state-of-the-art WSVG methods follow a two-stage pipeline, i.e., firstly gen-erating potential temporal proposals and then grounding with these proposal candidates. Despite the recent progress, existing proposal generation methods suffer from two draw-backs: 1) lack of explicit correspondence modeling; and 2) partial coverage of complex events. To this end, we propose a novel IteRative pr Oposal refi Nement network (dubbed as IRON) to gradually distill the prior knowledge into each proposal and encourage proposals with more complete cov-erage. Specifically, we set up two lightweight distillation branches to uncover the cross-modal correspondence on both the semantic and conceptual levels. Then, an itera-tive Label Propagation (LP) strategy is devised to prevent the network from focusing excessively on the most discrim-inative events instead of the whole sentence content. Pre-cisely, during each iteration, the proposal with the minimal distillation loss and its adjacent ones are regarded as the positive samples, which refines proposal confidence scores in a cascaded manner. Extensive experiments and ablation studies on two challenging WSVG datasets have attested to the effectiveness of our IRON. The code will be available at https://github.com/mengcaopku/IRON.
1. Introduction Weakly-Supervised Video Grounding (WSVG) [21, 37, 41, 72, 73] aims to localize the moment of interest from an untrimmed video according to a query sentence with-out frame-wise annotations. It has drawn increasing atten-tion in both industry and academia due to its wide applica-tions, e.g., video retrieval [13, 19], video question answer-*Work done during the internship at Microsoft. †Corresponding author: Daxin Jiang (djiang@microsoft.com). FusionProposal GenerationGroundingModuleOutputVideoQueryExplicit correspondence modeling(a) Query: The kite weaves through the air, turningand twisting as it goes.Ground TruthProposal #2Proposal #1confidence score: 0.87113.3s169.1s131.6sconfidence score: 0.24142.3s112.9s148.3s (b) Figure 1. (a) The conventional WSVG pipeline ( i.e., baseline) lacks explicit correspondence modeling . (b) Partial coverage of complex events . Proposal #1 with the high confidence score ( i.e., 0.87) tends to be of short duration. The more reasonable proposal #2 has the lower confidence score. ing [1], human-computer interaction [49], etc. Currently, the overwhelming majority of state-of-the-art WSVG meth-ods follow a two-stage pipeline, i.e., they firstly generate potential proposals and then use these proposals to conduct grounding via multi-instance learning (MIL) [21,27,40,41] or query reconstruction [37, 40, 50, 72]. This paradigm commonly relies on densely-placed proposals to achieve high recall and ensure as much coverage as possible, which causes severe computation redundancy. Recent works [72, 73] reduce the number of required proposals by predict-ing Gaussian masks to highlight query-relevant segments. However, a such constraint is too rigorous and lacks flexi-bility. Thus, in this paper, we work toward designing sparse and reliable proposals without any distribution assumptions. Despite of the dominated performance achieved, it is worth noting that current proposal generation methods suf-fer from two inherent drawbacks: 1) Lack of explicit cor-respondence modeling : A simple pipeline1for the conven-tional proposal-based WSVG methods is illustrated in Fig-1We call this pipeline as baseline and refer to the appendix for details. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 6524 ure 1a. As shown, under the weakly-supervised scenario, there exist no explicit regression supervisions ( e.g., tempo-ral boundary annotations) for the proposal generation proce-dure. Accordingly, the proposal coordinate update is solely based on the outputs of the grounding module. This leads to achicken and egg situation ,i.e., the succeeding grounding module requires plausibly reliable proposals to achieve ac-curate localization results while the proposal distributions rely on decent grounding results to update. 2) Partial cov-erage of complex events. Compared to the atomic action instances in Temporal Action Localization (TAL) [8,47,71], the query sentences in WSVG are much more complex, e.g., containing multiple events. Empirically, it is easy to ex-cessively concentrate on the most discriminative parts in-stead of the whole picture [37]. For example, the case in Figure 1b aims to ground the complete process of the kite, i.e.,weaving ,turning , and twisting . However, the top-ranking proposal #1 only covers the weaving process and overlooks the other parts. 
In contrast, a more accu-rate proposal #2 has much lower confidence scores. In Fig-ure 2a, we compute the length distribution of the proposals with highest confidence score (Charades-STA [48] test set), which are always obviously shorter than their ground truths. To alleviate these aforementioned problems, we propose a novel IteRative pr Oposal refi Nement network for WSVG (dubbed as IRON ), which distills the prior knowledge into the proposal generation in a cascaded manner. For correspondence modeling, we contend that it should be conducted from two aspects: 1) Semantic-level : The overall semantics of the proposals should match the query sentence. Specifically, we respectively feed the proposal frames and the query sentence into the visual and language encoders of the pre-trained video-language (VL) model [56] to estimate their semantic similarity. Due to the powerful transfer ability of pre-trained VL models [28, 39, 44, 56], we use the estimated similarity as the semantic distillation target. Then a lightweight semantic distillation branch is leveraged to optimize towards this target, referred to as the semantic distillation loss . 2) Conceptual-level : The pro-posal ought to be sensitive to the linguistic salient concepts including object words ( e.g.,kite ),attribute words ( e.g., white ) and relationship words ( e.g.,through ). This is similar to the human way of reasoning, i.e., tending to fo-cus on the most prominent objects when assessing given videos. Here we define the concepts as the high-frequency words ( i.e., verbs, adjectives, and nouns) in the dataset cor-pus. Then, one multi-hot label is generated for each query sentence according to whether it hits the corresponding con-cept. We introduce a concept classification branch to esti-mate proposal-wise concept predictions supervised by the multi-hot label, yielding the conceptual distillation loss . To mitigate the partial coverage issue, we devise a Label Propagation (LP) algorithm, which aims to refine proposal-80 0.00.10.50.20.30.4070605040302010FrequencyNormalized Average Length BaselineGround Truth(a) # Iteration012345Normalized Average Length 0.120.140.160.180.200.220.240.26Ground Truth Average Length (b) Figure 2. (a) The gaussian distributions of normalized average length of the ground truth and the proposals with highest confi-dence scores in baseline1. Results are calculated based on the test set of Charades-STA [48]. (b) The normalized average length of proposals with highest confidence scores v.s.iteration numbers . wise confidence scores in an iterative manner. Our motiva-tion lies in that proposals with the minimal distillation loss (both semantic and conceptual distillation loss) can be re-garded as the biased indicator ,i.e., these proposals always contain some salient events-of-interest and have short dura-tions ( cf. Sec. 4.5). Therefore, during each iteration in LP, we assign the positive pseudo label to the proposal with the minimal distillation loss and its adjacent ones2. Based on the generated pseudo label, the proposal confidence scores are rectified via a binary cross-entropy loss in the conse-quent stage. After the multi-step refinements, our IRON gradually converges to more complete intervals instead of parts ( cf. Figure 2b). In summary, we make three contributions in this paper: • We propose to model explicit correspondence for each proposal at both semantic and conceptual levels, which distills in-depth knowledge from the well-trained VL model and the linguistic structure of the query sentence. 
• To avoid biased and partial grounding results, a label propagation algorithm is crafted to refine proposal-wise confidence scores iteratively. • Extensive experiments on both Charades-STA and Ac-tivityNet Captions datasets have witnessed the state-of-the-art performance of our proposed IRON.
Foo_Unified_Pose_Sequence_Modeling_CVPR_2023
Abstract We propose a Unified Pose Sequence Modeling ap-proach to unify heterogeneous human behavior understand-ing tasks based on pose data, e.g., action recognition, 3D pose estimation and 3D early action prediction. A major obstacle is that different pose-based tasks require different output data formats. Specifically, the action recognition and prediction tasks require class predictions as outputs, while 3D pose estimation requires a human pose output, which limits existing methods to leverage task-specific network ar-chitectures for each task. Hence, in this paper, we propose a novel Unified Pose Sequence (UPS) model to unify het-erogeneous output formats for the aforementioned tasks by considering text-based action labels and coordinate-based human poses as language sequences. Then, by optimiz-ing a single auto-regressive transformer, we can obtain a unified output sequence that can handle all the aforemen-tioned tasks. Moreover, to avoid the interference brought by the heterogeneity between different tasks, a dynamic rout-ing mechanism is also proposed to empower our UPS with the ability to learn which subsets of parameters should be shared among different tasks. To evaluate the efficacy of the proposed UPS, extensive experiments are conducted on four different tasks with four popular behavior understand-ing benchmarks.
1. Introduction Pose sequences, which capture the movements of the hu-man body via human joint coordinates, are well-known to be an efficient and effective representation of human mo-tion and behaviour [58,80]. This is mainly because pose se-quences often provide enough information to characterize complex motion patterns [31], while being robust against superficial visual variations such as the background, cloth-ing texture and illumination conditions [43,44]. At the same *equal contribution †corresponding authortime, by using depth sensors such as the Kinect, pose data can also be conveniently obtained in real-time to facilitate downstream applications. Therefore, the potential of pose sequences to tackle behaviour understanding has attracted a lot of attention in recent years. Notably, the usage of pose sequences has been widely explored across many practical applications, including human-robot interaction [1, 59], augmented reality [4, 51] and security surveillance [18, 71]. Specifically, pose se-quences, as informative inputs, can facilitate certain aspects in these applications, such as action recognition [12, 44, 48, 66, 67, 86–88], 3D pose estimation [41, 45, 84, 92, 95, 96] and early action prediction [23,35,39,77,78], making these tasks popular and important areas of research. However, existing methods for each task still often re-quire task-specific architectures, e.g., hourglass networks for pose estimation [84] and specialized GCN architectures for action recognition [11,69], while the performing of mul-tiple pose-based tasks with a single model is not well ex-plored. Therefore, in order to perform multiple tasks, users will often need to design and train multiple separate models, which can be inconvenient and inefficient. Hence, in this work, we seek to simplify and unify the modeling for several popular and important pose-based tasks: 3D action recognition, 2D action recognition, 3D pose estimation and 3D early action prediction. This is a challenging goal that has not been achieved before, requir-ing a single model to cover a large scope involving 2D tasks, 3D tasks, as well as 2D to 3D lifting. By unifying these pose-based tasks and removing the need to design and train separate task-specific models to tackle different pose-based tasks, we can greatly reduce the difficulty and complexity involved in tackling these tasks. Moreover, a unified model is also an elegant way of handling multiple tasks that brings us one step closer in our pursuit of general purpose vision systems [25], i.e., an efficient multi-purpose AI model akin to the human brain. To this end, we propose a Unified Pose Sequence (UPS) model to unify the architecture and output format for mul-tiple popular pose-based tasks. Our UPS is a single uni-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 13019 fied model that simultaneously tackles multiple tasks with-out task-specific designs or branches, i.e., with a unified decoder. In order to unify the output formats of different tasks (which can be very different) to be produced by a sin-gle decoder, our UPS predicts a sequence of output tokens , similar to language modeling tasks. Specifically, our UPS’s decoder auto-regressively produces a sequence of output to-kens, such that the output sequence can potentially be of different lengths to meet the requirements of multiple tasks. 
Additionally, these output tokens can be interpreted as text embeddings, which are a powerful and general represen-tation that can be mapped into various predictions as re-quired. Moreover, to mitigate the potential destructive inter-ference [60, 90] brought by the heterogeneity between dif-ferent tasks, we propose a dynamic routing mechanism for our UPS that facilitates parameter sharing between tasks. In summary, our contributions are as follows: • We propose a Unified Pose Sequence (UPS) model that can tackle several popular pose-based tasks through a single unified framework. UPS simultaneously tack-les multiple tasks without task-specific designs or branches by modeling the output as a sequence of to-kens, enabling it to handle different output formats. • On four popular pose-based tasks (3D action recog-nition, 2D action recognition, 3D pose estimation and 3D early action prediction), UPS achieves good perfor-mance that is comparable to state-of-the-art methods.
Ho_Learning_Locally_Editable_Virtual_Humans_CVPR_2023
Abstract In this paper, we propose a novel hybrid representa-tion and end-to-end trainable network architecture to model fully editable and customizable neural avatars. At the core of our work lies a representation that combines the mod-eling power of neural fields with the ease of use and in-herent 3D consistency of skinned meshes. To this end, we construct a trainable feature codebook to store local geom-etry and texture features on the vertices of a deformable body model, thus exploiting its consistent topology under articulation. This representation is then employed in a generative auto-decoder architecture that admits fitting to unseen scans and sampling of realistic avatars with var-ied appearances and geometries. Furthermore, our repre-sentation allows local editing by swapping local features between 3D assets. To verify our method for avatar cre-ation and editing, we contribute a new high-quality dataset, dubbed CustomHumans, for training and evaluation. Our experiments quantitatively and qualitatively show that our method generates diverse detailed avatars and achieves bet-ter model fitting performance compared to state-of-the-art methods. Our code and dataset are available at https: //ait.ethz.ch/custom-humans .
1. Introduction 3D Avatars are an important aspect of many emerging applications such as 3D games or the Metaverse. Allowing for easy personalization of such avatars, holds the promise of increased user engagement. Traditionally, editing 3D as-sets requires knowledge of computer graphics tools and re-lies on standardized data formats to represent shapes and appearances. While methods for reconstruction or genera-tive modeling of learned avatars achieve impressive results, it is unknown how such neural avatars can be edited and customized. Thus, the goal of our work is to contribute a simple, yet powerful data-driven method for avatar creation and customization (Fig. 1): our method enables (a) the abil-ity to transfer partial geometric and appearance details be-tween 3D assets, and (b) the ability to author details via 2D-3D transfer. The resulting avatars (c) retain consistent local details when posed. Feature EditingInput Avatar Texture Drawing Unseen Scans (a)(b)(c)Reposing Figure 1. Creating locally editable avatars: Given an input avatar, (a) the avatar can be edited by transferring clothing geome-try and color details from existing, yet unseen 3D assets. (b) Users can customize clothing details such as logos and letters via draw-ing on 2D images. (c) The avatars retain local detail consistently under pose changes. Existing methods do not allow for such capabilities. While 3D generative models of articulated human bod-ies [5, 21, 25, 42, 75] leverage differentiable neural render-ing to learn from images, they cannot control local de-tails due to highly entangled color and geometry in the 2D supervision signal. Generative models trained on 3D data [9, 13, 36, 44, 45] can produce geometric details for surfaces and clothing. However, the diversity of generated samples is low due to the lack of high-quality 3D human scans and not all methods model appearance. At the core of the issue lies the question of represen-tation: graphics tools use meshes, UV , and texture maps which provide consistent topologies under deformation. However, human avatar methods that are built on mesh-based representations and linear blend skinning (LBS) are limited in their representational power with respect to chal-lenging geometry (e.g., puffy garments) and flexible topolo-gies (e.g., jackets), even with adaptations of additional dis-placement parameters [36] and mesh subdivision [66]. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 21024 Inspired by the recent neural 3D representations [40, 62, 70, 72], we propose a novel hybrid representation for digi-tal humans. Our representation combines the advantages of consistent topologies of LBS models with the representa-tional power of neural fields. The key idea is to decompose the tasks of deformation consistency on one hand and lo-cal surface and appearance description on the other. For the former, we leverage existing parametric body models (e.g., SMPL [33] and SMPL-X [48]). For the latter, we leverage the fixed topology of the poseable mesh to store local fea-ture codebooks. A decoder, shared across subjects, is then conditioned on the local features to predict the final signed distance and color values. Since only local information [15] is exposed to the decoder, overfitting and memorization can be mitigated. 
We experimentally show that this is crucial for 3D avatar fitting and reposing. Complementing this hybrid representation, we propose a training pipeline in the auto-decoding generative frame-work [9, 46, 52]. To this end, we jointly optimize multi-subject feature codebooks and the shared decoder weights via 3D reconstruction and 2D adversarial losses. The 3D losses help in disentangling appearance and geomet-ric information from the input scans, while the latter im-proves the perceptual quality of randomly generated sam-ples. To showcase the hybrid representation and the gen-erative model we implement a prototypical avatar editing workflow shown in Fig. 1. Furthermore, to enable research on high-quality 3D avatars we contribute training data for generative 3D hu-man models. We record a large-scale dataset (more than 600 scans of 80 subjects in 120 garments) using a volumet-ric capture stage [11]. Our dataset consists of high-quality 3D meshes alongside accurately registered SMPL-X [48] models and will be made available for research purposes. Finally, we assess our design decisions in detailed evalua-tions, both on existing and the proposed datasets. In summary, our contributions are threefold: (a) a novel hybrid representation for 3D virtual humans that allows for local editing across subjects, (b) a generative pipeline of 3D avatars creation that allows for fitting to unseen 3D scans and random sampling, and (c) a new large-scale high-quality dataset of 3D human scans containing diverse sub-jects, body poses and garments.
Jin_Learning_Instance-Level_Representation_for_Large-Scale_Multi-Modal_Pretraining_in_E-Commerce_CVPR_2023
Abstract This paper aims to establish a generic multi-modal foundation model that has the scalable capability to mas-sive downstream applications in E-commerce. Recently, large-scale vision-language pretraining approaches have achieved remarkable advances in the general domain. However, due to the significant differences between natu-ral and product images, directly applying these frameworks for modeling image-level representations to E-commerce will be inevitably sub-optimal. To this end, we propose an instance-centric multi-modal pretraining paradigm called ECLIP in this work. In detail, we craft a decoder archi-tecture that introduces a set of learnable instance queries to explicitly aggregate instance-level semantics. Moreover, to enable the model to focus on the desired product in-stance without reliance on expensive manual annotations, two specially configured pretext tasks are further proposed. Pretrained on the 100 million E-commerce-related data, ECLIP successfully extracts more generic, semantic-rich, and robust representations. Extensive experimental results show that, without further fine-tuning, ECLIP surpasses existing methods by a large margin on a broad range of downstream tasks, demonstrating the strong transferability to real-world E-commerce applications.
1. Introduction Nowadays, the flourishing growth of E-commerce has brought great convenience to people’s daily life. And a wide range of product-based application tasks has subsequently emerged, such as item classification [18, 29], product re-trieval [7, 35], commodity recommendation [21, 28], and so on. Compared to developing individual task-specific mod-els, building a general-purpose foundation model that works for massive E-commercial applications simultaneously can enhance applicability and reduce training costs. *Corresponding Author. PROYA ruby face creamfor ladies Agroup of people on horseback next to a churchStainless steel frying panForeground: horse, people, churchForeground:frying pan,coffeemachine General DomainE-commerce Domain Multiple Images For a Product Italian semi-automatic home coffee maker (a)(b)(c)Figure 1. Domain difference between natural and product im-ages. For natural images, it is the frequent case that most pixels are semantically correlated to the textual sentence. However, in E-commerce, such correlation is much more sparse ( e.g., “frying pan” or ”coffee machine” only occupy small portions of the entire images). Moreover, images for a product are often provided in a group from multiple sources such as (a) advertisement videos, (b) product pages, (c) customer comments (see the bottom examples). Recent developments in vision-language pretraining (VLP) [9, 12, 17, 20, 31, 34] have demonstrated remarkable advances in diverse VL downstream tasks. Profiting from large-scale image-text pairs, these methods are able to learn generic multimodal representations that are reused across various tasks. In E-commerce scenario, the related data nat-urally contains cross-modal information to describe a cor-responding product. Motivated by the tremendous success achieved by VL modeling, several approaches [4,33,35,37] have made attempts at designing a commerce-specific mul-timodal representation learning paradigm. They imitate the existing VLP methods (e.g., CLIP [20], VilBERT [17]) to learn the image-level representations of the product via pre-training on abundant commerce image-text pairs. Though promising results have been achieved, directly applying these VLP methods in the general domain to E-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 11060 commerce still suffers from inherent deficiencies. The prop-erties of natural and product images appear to be dramati-cally different. Given a natural image-text pair, almost ev-ery pixel in the natural image is mentioned by the corre-sponding textual description. In contrast, as shown in Fig-ure 1, in a real E-commerce scenario, the images are mostly product-oriented. Only very few instances are related to the product description. Simply treating the whole image as a monolithic entity to perform cross-modal alignment with text will inevitably confound the foreground and noisy background. Hence, to establish a foundation model that generalizes well to diverse E-commerce applications, it is of great significance to learn the product-related instance-level representation. With this goal in mind, a crucial challenge needs to be addressed: How can we enable the model to fo-cus on the product instance in the presence of background interference? 
A straightforward way to tackle this problem would be to resort to object-level human annotations, but it is labori-ous and infeasible to scale on larger data from the Internet. In this work, we strive to derive the capability of ground-ing product instances from uncurated data. Our motiva-tion is built on the natural characteristics of E-commerce data itself. As illustrated in Figure 1, a product usually has multiple image samples from different sources ( e.g., mer-chant, customer comments, attached advertisement videos, etc.). Although the appearance of these samples may be diverse due to the changes of camera view or scenes, they all include the identical product entity. This fact strongly spurs us to pursue an instance-centric multi-modal learning paradigm by leveraging such explicit correlation. The proposed pretraining framework, dubbed as ECLIP (E for “E-commerce”), employs two separate encoders to embed the images and texts of products. Our key idea is to develop a decoder architecture built upon the above-mentioned encoders, which aims to aggregate the instance-centric product representations without additional hand-crafted annotation. Inspired by [1,15,30], the decoder intro-duces a set of learnable tokens that we refer to as instance query . At each decoder block, these instance queries are updated via interacting with the encoded visual features. Through the stack of multiple blocks, they will gradually probe the potential product instance from the entire image. Moreover, each instance query is conditioned on a concrete text or image called multi-modal prompt . Such a design renders it dedicated to a particular instance type indicated by the content of its associated prompt. Therefore, by spec-ifying the content of multi-modal prompt, the decoder can adaptively discover the corresponding instance. During pre-training, there is only one positive prompt for a given sam-ple. The rest are negative ones sampled from other products. To effectively optimize the generated instance represen-tations, we newly craft two pretext tasks: inter-productand intra-product multi-modal learning. The first one is in charge of pulling the representations of the identical product closer to each other and pushing away the unmatched ones. It is noteworthy that the appearance of the positive image samples varies a lot except for the presented product. Bring-ing their representations closer than negative pairs in the feature space will implicitly encourage the instance query to focus on the visual region that corresponds to the desired product. The second one aims to ensure that only positive queries can aggregate the semantics of the foreground in-stance, rather than negative ones. Coupling these two novel pretext tasks together, we find that the whole framework is capable of learning a generic product representation. Our core contributions can be summarized as follows: (1) We propose ECLIP, an effective and simple multi-modal representation learning paradigm in the E-commerce scenario. Going beyond regular global representations, it can successfully obtain instance-centric product representa-tions via a decoder architecture. (2) By fully exploiting the natural characteristics of E-commerce data and the proposed pretext tasks, ECLIP ob-tains the fine-grained alignment capability to ground the de-sired product instance (see Figure 4a) without reliance on any manual annotation. 
(3) Pre-trained on large-scale product data, the resulting foundation model can seamlessly generalize to downstream E-commerce applications. Comprehensive experimental re-sults further demonstrate the superiority of ECLIP: without any fine-tuning, it achieves substantial improvements over the existing state-of-the-art methods on diverse real-world E-commerce tasks.
Chen_AnchorFormer_Point_Cloud_Completion_From_Discriminative_Nodes_CVPR_2023
Abstract Point cloud completion aims to recover the completed 3D shape of an object from its partial observation. A common strategy is to encode the observed points to a global fea-ture vector and then predict the complete points through a generative process on this vector. Nevertheless, the results may suffer from the high-quality shape generation problem due to the fact that a global feature vector cannot sufficient-ly characterize diverse patterns in one object. In this pa-per, we present a new shape completion architecture, name-ly AnchorFormer, that innovatively leverages pattern-aware discriminative nodes, i.e., anchors, to dynamically capture regional information of objects. Technically, AnchorFormer models the regional discrimination by learning a set of an-chors based on the point features of the input partial ob-servation. Such anchors are scattered to both observed and unobserved locations through estimating particular offset-s, and form sparse points together with the down-sampled points of the input observation. To reconstruct the fine-grained object patterns, AnchorFormer further employs a modulation scheme to morph a canonical 2D grid at in-dividual locations of the sparse points into a detailed 3D structure. Extensive experiments on the PCN, ShapeNet-55/34 and KITTI datasets quantitatively and qualitatively demonstrate the efficacy of AnchorFormer over the state-of-the-art point cloud completion approaches. Source code is available at https://github.com/chenzhik/AnchorFormer.
1. Introduction As a 3D data description, point cloud can characterize various attributes of real-world objects. Although the point cloud data is readily acquired via laser scanners or depth cameras, factors like occlusion, transparency of surface, or the limit of sensor resolution, often cause geometric infor-mation loss and result in incomplete point cloud. As a re-sult, it is an essential task of point cloud completion to im-prove the data quality for the downstream tasks, e.g., point 3.33x10-3
Chrysos_Regularization_of_Polynomial_Networks_for_Image_Recognition_CVPR_2023
Abstract Deep Neural Networks (DNNs) have obtained impres-sive performance across tasks, however they still remain as black boxes, e.g., hard to theoretically analyze. At the same time, Polynomial Networks (PNs) have emerged as an alternative method with a promising performance and improved interpretability but have yet to reach the per-formance of the powerful DNN baselines. In this work, we aim to close this performance gap. We introduce a class of PNs, which are able to reach the performance of ResNet across a range of six benchmarks. We demonstrate that strong regularization is critical and conduct an exten-sive study of the exact regularization schemes required to match performance. To further motivate the regularization schemes, we introduce D-PolyNets that achieve a higher-degree of expansion than previously proposed polynomial networks. D-PolyNets are more parameter-efficient while achieving a similar performance as other polynomial net-works. We expect that our new models can lead to an understanding of the role of elementwise activation func-tions (which are no longer required for training PNs). The source code is available at https://github.com/ grigorisg9gr/regularized_polynomials .
1. Introduction Deep neural networks (DNNs) are dominating the re-search agenda in computer vision since the previous decade owing to their stellar performance in image recognition [17, 24] and object detection [27, 28]. The design of tailored normalization schemes [21], data augmentation [11] and specific architectural blocks [19, 42] have further fostered this trend. However, our theoretical understanding of DNNs pales in comparison. There is little progress in making DNNs interpretable, or a principled understanding of the training dynamics or the role of the network depth. So far, a handful of works have attempted to mitigate that lack of understanding by designing principled archi-tectures. Combining neural networks with the research on kernel methods has emerged for designing principled archi-Cifar-10 Cifar-100 STL-10 Tiny ImageNet ImageNet Datasets40.050.060.070.080.090.0100.0Accuracy (%) ResNet18 Net -PolyNets D-PolyNets 10 100 10 200 1000 # Class32*32 32*32 96*96 64*64224*224# ResolutionFigure 1. The proposed networks ( R-PolyNets, D-PolyNets) en-able polynomial networks to reach the performance of the power-ful neural networks across a range of tasks. tectures with guarantees. In [30], the kernel feature map of the training data is used for achieving invariance to certain transformations. Recently, high-performing kernels were used for defining a principled architecture [40]. Using fixed components such as wavelets has been considered for re-placing the learnable convolutions [33]. Another approach approximates the target function with a polynomial expan-sion. Polynomial Nets (PNs) rely on capturing higher-order correlations of the input data for expressing the output with-out the use of elementwise activation functions [37]. De-spite the progress in the principled design of networks, the aforementioned works have yet to achieve a performance comparable to standard baselines, such as the performance of the seminal residual neural networks (ResNet) [17]. In this work, we aim to close the gap between well-established neural network architectures and principled ar-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 16123 chitectures by focusing on the PNs. In particular, we con-centrate on the recent parametrization of Π-Nets [5] that has outperformed the aforementioned principled methods. We validate our hypothesis that the performance of PNs can be significantly improved through strong regularization schemes. To this end, we introduce a class of polynomial networks, called R-PolyNets. In our study, we explore which regularization schemes can improve the performance of PNs. For instance, we find that initializations proposed for neural networks [15, 36] are not optimal for PNs. Over-all, our exploration enables R-PolyNets to achieve perfor-mance on par with the (unregularized) ResNet, which is the de facto neural network baseline. To further motivate our regularization schemes, we design a new class of polynomial expansions achieving a higher total degree of expansion than previous PNs. In R-PolyNets, the final degree of expansion is obtained by a sequential concatenation of a series of lower-degree polyno-mial expansions. That is, R-PolyNets concatenate Npoly-nomials of second-degree to obtain a 2Npolynomial expan-sion. 
Instead, we use outputs from previous polynomials in the current expansion, increasing the previous total degree. Our goals are twofold: a) transfer representations from earlier polynomials, b) increase the total degree of poly-nomial expansion. The proposed regularization schemes are critical for training these dense polynomials, named D-PolyNets. We showcase that D-PolyNets are more ex-pressive than previously proposed polynomial expansions. Overall, our contributions can be summarized as follows: • We introduce a class of regularized polynomial net-works, called R-PolyNets, in sec. 3. • We propose densely connected polynomials, called D-PolyNets. D-PolyNets use multiple terms from a pre-vious polynomial as input to the current polynomial re-sulting in a higher-degree of expansion than previous PNs (sec. 4). • Our thorough validation in both image and audio recognition illustrates the critical components for achieving performance equivalent to vanilla DNNs.
Chen_TexPose_Neural_Texture_Learning_for_Self-Supervised_6D_Object_Pose_Estimation_CVPR_2023
Abstract In this paper , we introduce neural texture learning for 6D object pose estimation from synthetic data and a fewunlabelled real images. Our major contribution is a novel learning scheme which removes the drawbacks of previous works, namely the strong dependency on co-modalities or additional refinement. These have been previously necessary to provide training signals for convergence. We formulate such a scheme as two sub-optimisation problems on texture learning and pose learning. We separately learn to pre-dict realistic texture of objects from real image collections and learn pose estimation from pixel-perfect synthetic data. Combining these two capabilities allows then to synthesise photorealistic novel views to supervise the pose estimator with accurate geometry. To alleviate pose noise and segmen-tation imperfection present during the texture learning phase,we propose a surfel-based adversarial training loss together with texture regularisation from synthetic data. We demon-strate that the proposed approach significantly outperforms the recent state-of-the-art methods without ground-truth pose annotations and demonstrates substantial generalisation im-provements towards unseen scenes. Remarkably, our scheme improves the adopted pose estimators substantially even when initialised with much inferior performance.
1. Introduction For spatial interaction with objects, one needs an under-standing of the translation and rotation of targets within 3D space. Inferring these 6D object pose parameters from asingle RGB image is a core task for 3D computer vision. This visually retrieved information has a wide range of appli-cations in AR/VR [ 13,33], autonomous driving [ 12,30,57], and robotic manipulation [ 24,51,53]. Noteworthy, accuracy and runtime have both recently made a huge leap forward thanks to deep learning [ 19,23,31,39,40]. Unfortunately, most of these methods heavily rely on a massive amount of labelled data for supervision to learn precise models withstrong generalisation capabilities [ 16,39,53,59]. However, it is very labor-intensive and time consuming to generate accurate annotations for pose data [ 15,53]. Meanwhile, this process also easily suffers from labelling errors as precise annotation in 3D is highly challenging [ 14]. Therefore, most of the benchmarks have only few hundreds images, which does not allow proper learning of large models. In fact, methods training with such low amount of real data tend to strongly overfit to the data domain and fail to generalise to new scenes [ 18]. As a consequence, many different approaches have been proposed in the literature to tackle this problem. The sim-plest solution is to employ cut-and-paste strategy to increase domain invariance [ 11,26]. Nonetheless, this requires highly accurate manual annotations and most models tend to overfit to the original domain. Another alternative is to rely ona large amount of synthetically generated data to prevent overfitting. Though this process is relatively cheap and fast, rendered images can exhibit drastic visual discrepancy in comparison with real images even when advanced physically-based renderers [ 9] are used. A handful of approaches try to close the domain gap via the use of generative adversarial networks to translate the synthetic images into the real do-main [ 2,58]. Unfortunately, these methods do not achieve promising results as the generated images are still easilydistinguishable from real imagery. Notably, very recently a new line of work that proposes to self-supervise the pose estimator on real data has emerged. After training in sim-ulation they fine-tune on the new datasets [ 46,47]. While these methods achieve impressive results that are even on par with fully supervised methods, they still suffer from sig-nificant performance drop when not leveraging additional supervisory signals such as depth data [ 46] or ground truth camera poses [ 25,42]. In this work, we propose a novel way to conduct self-supervised 6D pose estimation that is free from any additional supervision sources and further yields state-of-the-art performance. The core idea of previous attempts [ 44,46,47,54] to adapt pretrained pose estimators to the real domain is to conduct render-and-compare using a differentiable renderer. With This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 4841 CAD models and poses given, a renderer can output several attributes of the current object pose (e.g. mask, colour, depth) which allows to refine the pose through iterative comparison between the renderings and observations. 
However, when relying on 2D visual contents (mask and colour), such strate-gies tend to fail due to measured silhouette imperfection, lack of textures, and domain discrepancies. This undesirable behaviour is also discussed in [ 47]. Thus, different from the previous attempts that heavily rely on render-and-compare for self-supervision [ 46,47], we instead propose to regard realistic textures of the objects as an intermediate representation before conducting training for the pose estimator. Our approach is formulated as two interconnected sub-optimisation problems on texture learn-ing and pose learning. In the core, we first learn realistictextures of objects from raw image collections, then syn-thesise training data to supervise the pose estimator with pixel-perfect labels and realistic appearance. The key chal-lenge of our proposed scheme lies in capturing accurate texture under noise introduced by poses initialised by a pre-trained pose estimator during supervision. To this end, inaddition to leveraging synthetic data to establish geome-try priors, we learn robust supervision through adversarial training by conditioning synthesised colours on local sur-face information. Furthermore, we establish regularisation from synthetic textures to compensate segmentation artefacts during a texture-learning phase. We demonstrate that the proposed approach significantly outperforms the recent state-of-the-art methods without ground-truth pose annotations and demonstrates substantial generalisation improvements towards unseen domains. Our method significantly improves even difficult objects with little variance in appearance and geometry through self-supervision. Impressively, Our ap-proach demonstrates a robust self-improving ability for the employed pose estimators even when initialised with much inferior pose estimates than stronger baselines [ 46]. To summarise, our main contributions are: •We formulate a new learning scheme, TexPose , that decomposes self-supervision for 6D object pose into Texture learning and Pose learning. •We propose a surfel-conditioned adversarial training loss and a synthetic texture regularisation term to han-dle pose errors and segmentation imperfection during texture learning, further delivering self-improving abil-ity to the pose estimators. •We show significant improvements over recent strong baselines with additional supervision signals. Our pose estimators demonstrates a substantial generalisation ability even on unseen scenes.2. Related Work Model-based 6D Pose Estimation To retrieve the 6D pose, early methods use local or global features and search for keypoints correspondence on CAD models [ 1,6,7,28]. In recent years, learning-based methods dominate the field and solve the task using convolution neural networks (CNN) under the supervision from annotated data to extract deep features. There are two major approaches for pose estimation, in par-ticular, correspondence-based and regression/classification-based approaches. Correspondence-based methods establish 2D-3D correspondences [ 37,39,40,42,59], prior to lever-aging a variant of the RANSAC&PnP paradigm to solvefor pose. Regression-based approaches, on the other hand,directly regress or classify the pose of the detected object. Initially these methods usually have shown a lower perfor-mance due to the existence of ambiguities [ 29] such as pose symmetries [ 53]. Therefore, methods like SSD-6D [ 19] dis-cretise the rotation space to circumvent this issue. 
Recently, with better continuous representations for rotation [ 60], these methods gradually demonstrate high effectiveness [ 10,48]. Self-Supervised Pose Learning Considering tremendous effort to collect large amount of annotations for 6D object pose [ 17,50], several recent works have been proposed to explore the possibility of self-supervised pose learning using labeled synthetic data together with unlabelled real sensor data. [ 8] designed a novel labelling pipeline using a manip-ulator to generate reliable pose annotations for supervision. Self6D [ 47] proposed a self-supervision workflow by first pretraining a pose estimator with synthetic data, which was then adapted to the real world through a render-and-compare strategy by imposing consistencies between the rendered depth and sensed depth under the current pose estimate.CPS ++ [32] used similar approach for categorical-level object pose estimation, and the shape is jointly deformedduring optimisation. AAE [ 45] parameterise SO(3) space using a latent embedding obtained from synthetic images, which is later employed for orientation retrieval. Both works demonstrate the strong requirement for depth data in order to allow for a reasonable performance either in training or testing. However, due to the uninformative appearance and the sim-to-real domain gap, render-and-compare strategies easily diverge when there is no depth data available as shown in [47]. Hence, consecutive works introduce other supervi-sory signals to prune the need for depth. DSC-PoseNet [ 54] employs a key point consistency regularisation for dual-scale images with labelled 2D bounding box. Sock et al .[44] use photometric consistency from multiple views to refine the raw estimate, while ground-truth masks are required for su-pervision. Recently, Self6D++ [ 46] proposed to initialise therender-and-compare process with a powerful deep pose refiner [ 26] to guide the learning process of the pose esti-4842 mator, which currently yields state-of-the-art performance among all methods without manual annotations. Chen et al.[4] leverages a tailored heuristics for pseudo labelling under student-teacher learning scheme. Novel View Synthesis With a few images given, novel view synthesis aims to render scene content from unseenviewpoints, which can be regarded as a process to acquire ”textures” of the scene. NOL [ 38] is proposed to address the diffic
Ichikawa_Fresnel_Microfacet_BRDF_Unification_of_Polari-Radiometric_Surface-Body_Reflection_CVPR_2023
Abstract Computer vision applications have heavily relied on the linear combination of Lambertian diffuse and microfacet specular reflection models for representing reflected radiance, which turns out to be physically incompatible and limited in applicability. In this paper, we derive a novel analytical reflectance model, which we refer to as Fresnel Microfacet BRDF model, that is physically accurate and generalizes to various real-world surfaces. Our key idea is to model the Fresnel reflection and transmission of the surface microgeometry with a collection of oriented mirror facets, both for body and surface reflections. We carefully derive the Fresnel reflection and transmission for each microfacet as well as the light transport between them in the subsurface. This physically-grounded modeling also allows us to express the polarimetric behavior of reflected light in addition to its radiometric behavior. That is, FMBRDF unifies not only body and surface reflections but also light reflection in radiometry and polarization and represents them in a single model. Experimental results demonstrate its effectiveness in accuracy, expressive power, image-based estimation, and geometry recovery.
1. Introduction Reflection is a fundamental physical phenomenon of light that serves as a key creator of our rich visual world. Models of light reflection lie at the heart of visual infor-mation processing both for synthesis and analysis. In com-puter vision, reflectance models play an essential role in 3D reconstruction, inverse rendering, and material estimation. The goal is to invert light reflection to deduce its physical in-gredients, such as the surface geometry, from images. Nat-urally, devising simple yet accurate models that are faithful to the underlying physics becomes vital. Analytical mod-els provide a sound basis for solving these inverse problems as parameter estimation and physically-based models lend semantic interpretations of the results. Physically-based analytical reflectance models have been studied extensively. Parametric representations of the Bidirectional Reflectance Distribution Function (BRDF) are of particular importance, as they enable pixel-wise esti-mation of its parameters. Most models widely adopted in computer vision are built on two representative mod-els corresponding to the two distinct reflection components, namely body reflection and surface reflection. Body reflec-tion refers to the light that transmits into the subsurface and is eventually emitted from the surface. It is also called dif-fuse reflection as it is comparatively scattered in directions. The Lambertian reflectance model [17] which models it as uniform distribution in the angular domain dominates com-puter vision applications due to its simple linear form. Surface reflection is the light that immediately reflects off the surface. It is also referred to as specular reflection as it primarily concentrates around the perfect mirror reflec-tion direction of incident light. Torrance and Sparrow [29] introduced the idea of modeling the microgeometry within a single pixel that causes this angular spread of surface reflec-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 16489 tion with oriented mirror microfacets. Then on, many mod-els have built on this key idea of oriented microfacets [5,30]. Oren and Nayar [22] applied the idea to body reflection by assuming Lambertian instead of mirror microfacets. A linear combination of these diffuse and specular re-flection models, most often Lambertian or Oren-Nayar plus Torrance-Sparrow, have been widely used in vision appli-cations. There are, however, three problems that funda-mentally limit the accuracy of such a reflection represen-tation. The first is that the two reflection components are modeled on inconsistent microgeometry. Lambertian and other body reflection models assume a single Lambertian microfacet or an oriented distribution of Lambertian micro-facets [22], while specular reflection models assume mirror microfacets [29]. This is physically implausible and also hinders physical interpretation of the parameter estimates. The second is that past diffuse reflection models do not account for light transport inside the microgeometry. The Oren-Nayar model ignores discrepancies in incident and exitant microfacets. This can be fine for mesoscopic and macroscopic geometry ( i.e., Bidirectional Texture Func-tion) as demonstrated in their work [22], but leads to signif-icant inaccuracy for microgeometry ( i.e., regular imaging conditions). 
Incident light to one microfacet will likely exit from a different microfacet whose effect cannot be ignored for accurate body reflection representation. The third is that estimation of the parameter values ( i.e., reflectometry) of such linear combinations of diffuse and specular reflection models is inherently unstable. Specular reflection is usually either sparse ( e.g., a shiny surface with a narrow highlight) or weak ( e.g., a rough surface with a broad specular lobe). This makes estimation of specular parameter values while disentangling diffuse and specular components challenging. Most works thus require multiple images captured from different imaging conditions. In this paper, we derive a novel analytical reflectance model that is physically accurate and generalizes to vari-ous real-world surfaces. Our key idea is to build up from the very atomic behavior of light reflection, namely Fresnel reflection. We model surface microgeometry with a col-lection of oriented mirror facets, both for body and surface reflections. We carefully derive the Fresnel reflection and transmission for each microfacet as well as the light trans-port between them in the subsurface. By modeling the full Fresnel behavior of light for an analytically oriented distri-bution of mirror microfacets, we arrive at a generalized re-flection model that subsumes past representative models as special cases. This physically-grounded modeling allows us to describe the polarimetric behavior of reflected light by a rough surface, in addition to its radiometric behavior. As a result, our novel reflectance model, which we refer to as Fresnel Microfacet BRDF model (FMBRDF), unifies not only body and surface reflections but also light reflection inModel MSR MBR FT MLT Pol. T-S [29] + Lambertian ✓ T-S [29] + O-N [22] ✓ ✓ Baek et al. [2] ✓ ✓ ✓ Ours ✓ ✓ ✓ ✓ ✓ Table 1. Our Fresnel Microfacet BRDF model is, to our knowl-edge, the first physically-based reflection model that accurately expresses microfacet surface reflection (MSR), microfacet body reflection (MBR), Fresnel transmission (FT), microscopic light transport (MLT), and polarization (Pol.) in a single model. radiometry and polarization in a single model. We experimentally validate our FMBRDF model by evaluating its accuracy with a wide range of measured BRDFs and images of real surfaces. The results show that FMBRDF can accurately model both the intensity and po-larization, particularly in comparison with past representa-tive models. We also show that FMBRDF can be estimated from a single polarimetric image. In the supplemental ma-terial, we demonstrate the use of FMBRDF for joint esti-mation of reflectance and geometry from multiple images taken under different light source directions. To the best of our knowledge, FMBRDF is the first re-flectance model to seamlessly unify body and surface reflec-tions with the same microgeometry and also describe both its radiometric and polarimetric light reflections in a single model. We believe FMBRDF will provide an invaluable ba-sis for accurate radiometric and polarimetric image analysis and serve as a backbone for a wide range of computer vision applications. All code and data can be found on our project page.
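Because the model is built up from per-microfacet Fresnel reflection and transmission, a standalone reminder of the unpolarised Fresnel equations for a dielectric interface may be useful. This is textbook optics rather than the FMBRDF model itself, and the refractive indices are arbitrary example values.

```python
import numpy as np

def fresnel_reflectance(cos_theta_i, n1=1.0, n2=1.5):
    """Unpolarised Fresnel reflectance at a dielectric interface.

    cos_theta_i: cosine of the incident angle (scalar or array in [0, 1]).
    n1, n2:      refractive indices of the incident and transmitting media.
    Returns R in [0, 1]; the transmitted fraction is 1 - R.
    """
    cos_theta_i = np.clip(cos_theta_i, 0.0, 1.0)
    sin_theta_t2 = (n1 / n2) ** 2 * (1.0 - cos_theta_i ** 2)  # Snell's law
    total_internal = sin_theta_t2 > 1.0
    cos_theta_t = np.sqrt(np.clip(1.0 - sin_theta_t2, 0.0, None))
    r_s = (n1 * cos_theta_i - n2 * cos_theta_t) / (n1 * cos_theta_i + n2 * cos_theta_t)
    r_p = (n2 * cos_theta_i - n1 * cos_theta_t) / (n2 * cos_theta_i + n1 * cos_theta_t)
    R = 0.5 * (r_s ** 2 + r_p ** 2)
    return np.where(total_internal, 1.0, R)
```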
Jin_A_Unified_Pyramid_Recurrent_Network_for_Video_Frame_Interpolation_CVPR_2023
Abstract Flow-guided synthesis provides a common framework for frame interpolation, where optical flow is estimated to guide the synthesis of intermediate frames between consecutive inputs. In this paper, we present UPR-Net, a novel Unified Pyramid Recurrent Network for frame interpolation. Cast in a flexible pyramid framework, UPR-Net exploits lightweight recurrent modules for both bi-directional flow estimation and intermediate frame synthesis. At each pyramid level, it leverages estimated bi-directional flow to generate forward-warped representations for frame synthesis; across pyramid levels, it enables iterative refinement for both optical flow and intermediate frame. In particular, we show that our iterative synthesis strategy can significantly improve the robustness of frame interpolation on large motion cases. Despite being extremely lightweight (1.7M parameters), our base version of UPR-Net achieves excellent performance on a large range of benchmarks. Code and trained models of our UPR-Net series are available at: https://github.com/srcn-ivl/UPR-Net.
1. Introduction Video frame interpolation (VFI) is a classic low-level vision task. It aims to increase the frame rate of videos, by synthesizing non-existent intermediate frames between consecutive frames. VFI technique supports many practi-cal applications including novel view synthesis [10], video compression [21], cartoon creation [32], etc. Despite great potential in applications, video frame in-terpolation remains an unsolved problem, due to challenges like complex and large motions, occlusions, and illumina-tion changes in real-world videos. Depending on whether or not optical flow is incorporated to compensate for inter-frame motion, existing methods can be roughly classified into two categories: flow-agnostic methods [5,6,25,28], and flow-guided synthesis [2, 14, 19, 26, 27, 29, 30]. With recent advances in optical flow [12,13,34,35], flow-guided synthe-sis has developed into a popular framework with compelling performance for video frame interpolation. 29.2029.4029.6029.8030.0030.2030.4030.6030.8031.00 0 10 20 30 40 50PSNR on SNU -FILM hard subset Number of parameters (millions)UPR -Net BMBC (ECCV’20)IFRNet large (CVPR’22 )ABME (ICCV’21) IFRNet (CVPR’22) CAIN (AAAI’20) CDFI (CVPR’21)AdaCoF (CVPR’20)UPR -Net large RIFE (ECCV’22 )DAIN (CVPR’19)VFIformer (CVPR’22 ) XVFI (ICCV’21 )UPR -Net LARGEFigure 1. Comparison of performance and model size on the hard subset of SNU-FILM benchmark [6]. Our UPR-Net series achieve state-of-the-art accuracy with extremely small parameters. Most of existing flow-guided methods follow a sim-ilar procedure: estimating optical flow for desired time step, warping input frames and their context features based on optical flow, and synthesizing intermediate frame from warped representations. Where technical choices may di-verge in this procedure, is the warping operation and the optical flow it requires. Backward-warping is traditionally used for frame interpolation [2,14,19,29,30], but acquiring high-quality bilateral intermediate flow for it is often chal-lenging. Forward-warping can directly use linearly-scaled bi-directional flow between input frames (which is easier to obtain), and thus has recently emerged as a promising di-rection for frame interpolation [26, 27]. In common flow-guided synthesis pipeline [1,18,27,30], optical flow is typically estimated from coarse to fine by a pyramid network, but intermediate frame is synthesized just once by a synthesis network. Despite promising per-formance on low-resolution videos, this practice misses the opportunity of iteratively refining the interpolation for high-resolution inputs. Second, for large motion cases, an im-portant issue has been overlooked by previous works: even This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 1578 when estimated motion is visually plausible, in many cases, the obvious artifacts in warped frames ( e.g., large holes in forward-warped frames) may also degrade the interpolation performance. Last, existing methods typically rely on heavy model architectures to achieve good performance, block-ing them from being deployed on platforms with limited resources, e.g., mobile devices. Aiming at these issues, we introduce UPR-Net, a novel Unified Pyramid Recurrent Network for frame interpola-tion. 
Within a pyramid framework, UPR-Net exploits lightweight recurrent modules for both bi-directional flow estimation and forward-warping based frame synthesis. It enables iterative refinement of both optical flow and inter-mediate frame across pyramid levels, producing compelling results on complex and large motion cases. Our work draws inspirations from many existing works, but is significantly distinguished from them in three as-pects. First, UPR-Net inherits the merit of recent pyramid recurrent bi-directional flow estimators [15,31], allowing to customize the number of pyramid levels in testing to esti-mate extremely large motions. But, it goes one step further, by exploiting pyramid recurrent network for coarse-to-fine frame synthesis, and unifying motion estimation and frame synthesis within a single pyramid recurrent network. Second, we reveal that our coarse-to-fine iterative syn-thesis can significantly improve the robustness of frame in-terpolation on large motion cases. At high-resolution pyra-mid levels, forward-warped frames may suffer from obvious holes due to large motions, resulting in poor interpolation for many cases. We show that this issue can be remedied to a large extent, by feeding the frame synthesis module with the intermediate frame estimate upsampled from previous lower-resolution pyramid level. Third, both of our optical flow and frame synthesis mod-ules are extremely lightweight. Yet, they are still carefully integrated with the key ingredients from modern researches on optical flow [34, 35] and frame synthesis [26]. Specifi-cally, at each pyramid level, UPR-Net firstly extracts CNN features for input frames, then constructs a correlation vol-ume for simultaneous bi-directional flow estimation. It pre-dicts refined intermediate frame from forward-warped input frames and their CNN features, along with upsampled inter-mediate frame estimate. We conduct extensive experiments to verify the effec-tiveness of UPR-Net for frame interpolation. Our base ver-sion of UPR-Net only has 1.7M parameters. Yet, it achieves excellent performance on both low-and high-resolution benchmarks, when trained with low-resolution data. Fig-ure 1 gives a comparison of accuracy and model size on the hard subset of SNU-FILM [6], where our UPR-Net series achieve state-of-the-art accuracy with much fewer papram-eters. In addition, we validate various design choices of UPR-Net by ablation studies.2. Related Work Pyramid recurrent optical flow estimator. PWC-Net [34] has been traditionally used for optical flow by frame interpolation methods [1, 26, 27]. However, the fixed number of pyramid levels makes it difficult to handle ex-tremely large motions beyond the training phase. Recently, pyramid recurrent optical flow estimators [31,38] are devel-oped to handle large motion, by sharing the structure across pyramid levels and customizing the pyramid levels in test-ing. A larger number of pyramid levels can better handle large motions that often appear in high-resolution videos. Previous pyramid recurrent estimators typically employ a plain U-Net as the base estimator at each pyramid level. However, U-Net is over-simplified for optical flow due to the lack of correlation volume [34, 35]. Very recently, EBME [15] incorporates correlation volume into pyramid recurrent network for simultaneous bi-directional flow esti-mation. We follow the basic idea in [15] for bi-directional flow, but modify it to better adapt to our unified pyramid network for frame interpolation. Coarse-to-fine image synthesis. 
Coarse-to-fine process-ing is a mature technology for high-resolution image syn-thesis, where low-resolution images are firstly synthesized, and then iteratively refined until generating the desired high-resolution output. It has many successful applications, including photographic image synthesis conditioned on se-mantic layouts [4], adversarial image generation [7], and recent diffusion model based image synthesis [9]. However, coarse-to-fine synthesis has been largely over-looked by existing frame interpolation methods. Zhang et al. [38] iteratively estimate the occlusion mask within a pyramid recurrent framework, but still needs an extra re-finement network to obtain the final result. XVFI [31] estimates multi-scale intermediate frames during training, but does not perform iterative refinement of intermediate frame. IFRNet [16] gradually refines the intermediate fea-ture (rather than frame) until generating the desired output, but it is not recurrent, and has limited capacity in handling large motion. In this work, we iteratively refine the inter-mediate frame within a pyramid recurrent framework. Artifacts in warped frames. Although warping can com-pensate for per-pixel motion, it often creates distortion and artifacts. If a pixel is moved to a new location, and no other pixels are moved to fill the old location, this pixel will ap-pear twice in backward-warped frame [18, 23], or leave a hole at original location in forward-warped frame [26]. To robustly synthesize intermediate frame from warped frames, existing frame interpolation methods typically feed the synthesis network with both warped frames and their context features [11, 26, 27]. The synthesis network can 1579 ······ ······up-sampled bi-directional flowforward warping layer bi-directional flow modulecost volume layer prediction layers forward warping layer frame synthesis moduleencoder layersrefined bi-directional flowfeature encoder refined interpolationup-sampled interpolationdecoder layers image pyramidsFigure 2. Overview of our UPR-Net. Given two input frames, we first construct image pyramids for them, then apply a recurrent structure across pyramid levels to repeatedly refine estimated bi-directional flow and intermediate frame. Our recurrent structure consists of a feature encoder that extracts multi-scale features for input frames, a bi-directional flow module that refines bi-directional flow with correlation-injected features, and a frame synthesis module that refines intermediate frame estimate with forward-warped representations. leverage rich contextual cues to infer the intermediate frame from warped representations, even when artifacts exist in warped frames. In this work, we observe that the synthesis network does work well for small motion cases where forward-warped frames contain slight artifacts. However, in presence of large motions, in many cases, the obvious holes in forward-warped frames may lead to artifacts in interpolation. We show that our iterative synthesis can significantly improve the robustness of frame interpolation on large motion cases. Additionally, during forward-warping, the conflicted pixels mapped to the same target should be addressed by simple averaging or certain weighted averaging operation (e.g., softmax splatting [27]). In this work, we adopt the average splatting [27] as forward-warping for simplicity. 3. Our Approach 3.1. Unified Pyramid Recurrent Network We illustrate the overall pipeline of UPR-Net in Figure 2. 
It unifies bi-directional flow estimation and frame synthesis within a pyramid structure, and shares the weights across pyramid levels. This macro pyramid recurrent architecture has two advantages: (i) reducing the parameters of the full pipeline; (ii) allowing to customize the number of pyramid levels in testing to handle large motions. Given a pair of consecutive frames I0,I1, and the desired time step t(0≤t≤1), our goal is to synthesize the non-existent intermediate frame It. UPR-Net tackles this task via an iterative re
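To make the forward-warping operation concrete, the following is a minimal NumPy sketch of average splatting in the spirit of [27]: every source pixel is scattered to its flow-displaced target location and conflicting contributions are averaged, leaving holes where nothing lands. It is a nearest-neighbour simplification for illustration, not the exact operator used in UPR-Net.

```python
import numpy as np

def average_splat(img, flow):
    """Forward-warp `img` (H, W, C) by `flow` (H, W, 2), averaging pixels that
    land on the same target location; unfilled targets stay zero (holes)."""
    H, W, C = img.shape
    ys, xs = np.mgrid[0:H, 0:W]
    tx = np.round(xs + flow[..., 0]).astype(int)   # target x coordinates
    ty = np.round(ys + flow[..., 1]).astype(int)   # target y coordinates
    valid = (tx >= 0) & (tx < W) & (ty >= 0) & (ty < H)
    out = np.zeros((H, W, C))
    weight = np.zeros((H, W, 1))
    idx = (ty[valid], tx[valid])
    np.add.at(out, idx, img[valid])        # accumulate colours per target pixel
    np.add.at(weight, idx, 1.0)            # count contributions per target pixel
    return out / np.maximum(weight, 1.0)   # average conflicts; holes remain zero
```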
Dabral_Mofusion_A_Framework_for_Denoising-Diffusion-Based_Motion_Synthesis_CVPR_2023
Abstract Conventional methods for human motion synthesis have either been deterministic or have had to struggle with the trade-off between motion diversity and motion quality. In response to these limitations, we introduce MoFusion, i.e., a new denoising-diffusion-based framework for high-quality conditional human motion synthesis that can synthesise long, temporally plausible, and semantically accurate motions based on a range of conditioning contexts (such as music and text). We also present ways to introduce well-known kinematic losses for motion plausibility within the motion-diffusion framework through our scheduled weighting strategy. The learned latent space can be used for several interactive motion-editing applications like in-betweening, seed-conditioning, and text-based editing, thus providing crucial abilities for virtual-character animation and robotics. Through comprehensive quantitative evaluations and a perceptual user study, we demonstrate the effectiveness of MoFusion compared to the state of the art on established benchmarks in the literature. We urge the reader to watch our supplementary video at https://vcai.mpi-inf.mpg.de/projects/MoFusion/.
1. Introduction 3D human motion synthesis is an important generative computer vision problem that often arises in robotics, vir-tual character animation and video games and movie pro-duction ( e.g., for crowd dynamics simulation). It saw im-pressive progress over the last years; several works recently tackled it with reinforcement learning [41, 60, 64], deep generative models [2, 42, 43, 50] or using deterministic ap-proaches [12, 29, 35]. Despite the progress, multiple open challenges remain, such as improving motion variability, enabling higher motion realism and enhancing synthesis fi-delity under user-specified conditioning. Under condition-ing, we understand influencing the model outputs according to a control signal ( e.g., “walking counter-clockwise”). The key goal of conditional human motion synthesis is to generate motions that semantically agree with the con-ditioning while exhibiting diversity for the same condition-ing signal. To facilitate the same, the recent state-of-the-art approaches have widely adopted generative techniques like conditional variational auto-encoders (CV AE) [17, 32, This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 9760 42, 43], normalizing flows [2, 3], as well as GANs [19, 30]. Naturally, each of them has strengths and limitations. GAN-based synthesis methods suffer from mode-collapse, thus resulting in insufficient fidelity of synthesis, especially for less common input conditioning. On the other hand, meth-ods using CV AEs and normalizing flows typically have to deal with the trade-off between synthesis quality and the richness of the latent space ( i.e., diversity) [3, 50]. The seminal works of Sohl-Dickstein et al . [20] and Hoet al. [21] recently demonstrated the ability of Denoising Diffusion Probabilistic Models (DDPM) to learn the under-lying data distribution while also allowing for diverse sam-pling. Recent works [37, 49, 52] exhibited remarkable ca-pabilities in the conditional synthesis of images and audio with high-frequency details while also allowing interactive applications like editing and inpainting. However, it has remained unclear how DDPM could be trained for such a problem with the temporal component as human motion synthesis. Motivated by the recent advances in diffusion models, we propose MoFusion, i.e.,a new approach for human mo-tion synthesis with DDPM. This paper shows that diffu-sion models are highly effective for this task; see Fig. 1 for an overview. Our proposal includes a lightweight 1D U-Net network for reverse diffusion to reduce the rather long inference times. Furthermore, we demonstrate how domain-inspired kinematic losses can be introduced to dif-fusion framework during training, thanks to our time-varying weight schedule, which is our primary contribution. The result is a new versatile framework for human motion synthesis that produces diverse, temporally and kinemati-cally plausible, and semantically accurate results. We analyse DDPM for motion synthesis on two rele-vant sub-tasks: music-conditioned choreography genera-tion and text-conditioned motion synthesis. 
While most existing choreography generation methods produce repeti-tive (loopy) motions, and text-to-motion synthesis methods struggle with left-right disambiguation, directional aware-ness and kinematic implausibility, we show that MoFu-sion barely suffers from these limitations. Finally, formu-lating motion synthesis in a diffusion framework also af-fords us the ability to perform interactive editing of the syn-thesised motion. To that end, we discuss the applications of a pre-trained MoFusion, like motion forecasting and in-betweening.(both are important applications for virtual character animation). We show improvements in both the sub-tasks through quantitative evaluations on AIST++ [29] and HumanML3D [15] datasets as well as a user study. In summary, our core technical contributions are as follows: • The first method for conditional 3D human motion synthesis using denoising diffusion models. Thanks to the proposed time-varying weight schedule, we in-corporate several kinematic losses that make the syn-thesised outputs temporally plausible and semantically accurate with the conditioning signal. • Model conditioning on various signals, i.e.,music and text, which is reflected in our framework’s architec-ture. For a music-to-choreography generation, our re-sults generalise well to new music and do not suffer from degenerate repetitiveness.
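The scheduled weighting itself is part of the paper's contribution and is not reproduced here. Purely to illustrate the general idea, the sketch below down-weights a kinematic term at heavily-noised diffusion steps (where the denoised motion estimate is unreliable) and ramps it up as the signal level grows; the specific schedule shape and loss terms are assumptions, not MoFusion's exact formulation.

```python
import torch

def scheduled_weight(t, alpha_bar):
    """Time-varying weight for auxiliary kinematic losses during DDPM training.

    t:         (B,) sampled diffusion timesteps.
    alpha_bar: (T,) cumulative noise-schedule products; near 1 for lightly-noised
               steps and near 0 for heavily-noised ones.
    """
    return alpha_bar[t]  # weight proportional to signal level (one possible choice)

def training_loss(noise_pred, noise, motion_pred, motion_gt, t, alpha_bar, lam=1.0):
    # standard denoising objective on the predicted noise
    diffusion_term = ((noise_pred - noise) ** 2).flatten(1).mean(dim=1)
    # stand-in for a kinematic loss (e.g. joint positions after forward kinematics)
    kinematic_term = ((motion_pred - motion_gt) ** 2).flatten(1).mean(dim=1)
    w = scheduled_weight(t, alpha_bar)
    return (diffusion_term + lam * w * kinematic_term).mean()
```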
Hou_Mask3D_Pre-Training_2D_Vision_Transformers_by_Learning_Masked_3D_Priors_CVPR_2023
Abstract Current popular backbones in computer vision, such as Vision Transformers (ViT) and ResNets, are trained to perceive the world from 2D images. However, to more effectively understand 3D structural priors in 2D backbones, we propose Mask3D to leverage existing large-scale RGB-D data in a self-supervised pre-training to embed these 3D priors into 2D learned feature representations. In contrast to traditional 3D contrastive learning paradigms requiring 3D reconstructions or multi-view correspondences, our approach is simple: we formulate a pre-text reconstruction task by masking RGB and depth patches in individual RGB-D frames. We demonstrate that Mask3D is particularly effective in embedding 3D priors into the powerful 2D ViT backbone, enabling improved representation learning for various scene understanding tasks, such as semantic segmentation, instance segmentation and object detection. Experiments show that Mask3D notably outperforms existing self-supervised 3D pre-training approaches on ScanNet, NYUv2, and Cityscapes image understanding tasks, with an improvement of +6.5% mIoU against the state-of-the-art Pri3D on ScanNet image semantic segmentation.
1. Introduction Recent years have seen remarkable advances in 2D im-age understanding as well as 3D scene understanding, al-though their representation learning has generally been treated separately. Powerful 2D architectures such as ResNets [21] and Vision Transformers (ViT) [14] have achieved notable success in various 2D recognition and seg-mentation tasks, but focus on learning from 2D image data. Current large-scale RGB-D datasets [1,4,10,34,35] provide an opportunity to learn key geometric and structural priors to provide more informed reasoning about the scale and cir-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 13510 cumvent view-dependent effects, which can provide more efficient representation learning. In 3D, various successful methods have been leveraging the RGB-D datasets for con-strastive point discrimination [6, 24, 39, 44] for downstream 3D tasks, including high-level scene understanding tasks as well as low-level point matching tasks [15, 42, 43]. How-ever, the other direction from 3D to 2D is less explored. We thus aim to embed such 3D priors into 2D back-bones to effectively learn the structural and geometric pri-ors underlying the 3D scenes captured in 2D image pro-jections. Recently, Pri3D [25] adopted similar multi-view and reconstruction-based constraints to induce 3D priors in learned 2D representations. However, this relies on not only acquiring RGB-D frame data but also the robust registration of multiple views to obtain camera pose information for each frame. Instead, we consider how to effectively learn such geometric priors from only single-view RGB-D data in a more broadly applicable setting for 3D-based pre-training. We thus propose Mask3D, which learns effective 3D pri-ors for 2D backbones in a self-supervised fashion by pre-training with single-view RGB-D frame data. We propose a pre-text reconstruction task to reconstruct the depth map by masking different random RGB and depth patches of an in-put frame. These masked input RGB and depth are encoded simultaneously in separate encoding branches and decoded to reconstruct the dense depth map. This imbues 3D priors into the RGB backbone which can then be used for fine-tuning downstream image based scene understanding tasks. In particular, our self-supervised approach to embedding 3D priors from single-view RGB-D data to 2D learned fea-tures is not only more generally applicable, but we also demonstrate that it is particularly effective for pre-training vision transformers. Our experiments demonstrate the ef-fectiveness of Mask3D on a variety of datasets and image understanding tasks. We pre-train on ScanNet [10] with our masked 3D pre-training paradigm and fine-tune for 2D semantic segmentation, instance segmentation, and object detection. This enables notable improvements not only on ScanNet data but also generalizes to NYUv2 [34] and even Cityscapes [8] data. We believe that Mask3D makes an im-portant step to shed light on the paradigm of incorporating 3D representation learning to powerful 2D backbones. In summary, our contributions are: • We introduce a self-supervised pre-training approach to learn masked 3D priors for 2D image understanding tasks based on learning from only single-view RGB-D data, without requiring any camera pose or 3D recon-struction information, and thus enabling more general applicability. 
• We demonstrate that our masked depth reconstruction pre-training is particularly effective for the modern, powerful ViT architecture, across a variety of datasets and image understanding tasks.2. Related Work Pre-training in Visual Transformers. Recently, visual transformers have revolutionized computer vision and at-tracted wide attention. In contrast to popular CNNs that operate in a sliding window fashion, Vision Transformers (ViT) describe the image as patches of 16x16 pixels. The Swin Trasnformer [28] has set new records with its hier-archical transformer formulation on major vision bench-marks. The dominance of visual transformers in many vi-sion tasks has inspired study into how to pre-training such backbones. MoCoV3 [5] first investigated the effects of sev-eral fundamental components for self-supervised ViT train-ing. MAE [19] then proposed an approach inspired by BERT [13], which randomly masks words in sentences and leveraged masked image reconstruction for self-supervised pre-training that achieved state-of-the-art results in ViT. A similar self-supervision has also been proposed by Mask-Feat [37] for self-supervised video pre-training. MaskFeat randomly masks out pixels of the input sequence and then predicts the Oriented Gradients (HOG) of the masked re-gions. However, such ViT pre-training methods focus on image or video data, without exploring how 3D priors can potentially be exploited. MultiMAE [2] on the other hand introduces depth priors. However, it requires depth as input not only in pre-training but also in downstream tasks. In ad-dition to depth, human annotations (e.g., semantics) are also leveraged in the pre-training. To achieve a self-supervised pre-training, we do not use semantics in the pre-training and only use RGB images as input in downstream tasks. RGB-D Scene Understanding. Research in 3D scene un-derstanding have been spurred forward with the introduc-tion of larger-scale, annotated real-world RGB-D datasets [1, 4, 10]. This has enabled data-driven semantic under-standing of 3D reconstructed environments, where we have now seen notable progress, such as for 3D semantic seg-mentation [7,11,17,31,32,36], object detection [29,30,45], instance segmentation [16, 18, 22, 23, 26, 27, 40, 41], and re-cently panoptic segmentation [9]. Such 3D scene under-standing tasks have been analogously defined to 2D im-age understanding, which considers RGB-only input with-out depth information. However, learning from 3D en-ables geometric reasoning without requiring learning view-dependent effects or resolving depth/scale ambiguity that must be learned when considering solely 2D data. We thus take advantage of existing large-scale RGB-D data to ex-plore how to effectively embed 3D priors for better repre-sentation learning for 2D scene understanding tasks. Embedding 3D Priors in 2D Backbones. Learning cross-modality features has been seen in extensive studies of the ties between languages and images. In particular, CLIP [33] learns visual features from language supervi-sion during pre-training, showing promising results in zero-13511 shot learning for image classification. Pri3D [25] explores 3D-based pre-training for image-based tasks by leverag-ing multi-view consistency and 2D-3D correspondence with contrastive learning to embed 3D priors into ResNet back-bones. This results in enhanced features over ImageNet pre-training on 2D scene understanding tasks. 
However, Pri3D requires camera pose registration across RGB-D video se-quences and is specifically designed for CNNs-based archi-tectures. In contrast, we formulate a self-supervised pre-training that operates on only single-view RGB-D frames and leverages masked 3D priors that can effectively pre-train powerful ViT backbones. 3. Method We introduce Mask3D to embed 3D priors into learned 2D representations by self-supervised pre-training from only single-view RGB-D frames. To effectively learn 3D structural priors without requiring any camera pose infor-mation or multi-view constraints, we formulate a pre-text depth reconstruction task to inform the RGB feature extrac-tion to be geometrically aware. Randomly masked color and depth images are used as input to reconstruct the dense depth map, and the RGB backbone can then be used to fine-tune downstream image understanding tasks. In particular, we show in Sec. 4 that this single-frame self-supervision is particularly well-suited for powerful vision transformer (ViT) backbones, even without any multi-view information. 3.1. Learning Masked 3D Priors We propose to learn masked 3D priors to embed to learned 2D backbones by pre-training to reconstruct dense depth from RGB images with the guidance of sparse depth. That is, for an RGB-D frame F= (C, D )with RGB image Cand depth map D, we train to reconstruct Dfrom masked patches of Cguided with sparse masked patches of D. An overview of our approach is shown in Fig. 2. To create masked color and depth McandMdfromC andDas input for reconstruction, a 240x320 RGB image Cis uniformly divided into 300 16x16 patches, from which we randomly keep a percentage pcof patches, masking out the others, to obtain Mc.Mdis created similarly by keep-ing only a percentage pdof patches, such that the resulting depth patches do not coincide with the RGB patches in Mc. We then train color and depth encoders ΨcandΨdto separately encode RGB and depth signals. RGB patches are fed into Ψcand concatenated with a positional embedding, following the ViT architecture, and similarly for depth. The positional embedding used encodes the patch location by a cosine function. Patches and their positional embeddings are then mapped into higher dimensional feature vectors via ΨcandΨd. The encoders ΨcandΨdare built by blocks composed of linear and norm layers. The features from Ψc andΨdare then fused in the bottleneck; since depth patcheswere selected in regions where no RGB patches were se-lected, there are no duplicate patches representing the same patch location. For those regions which do not have any associated RGB or depth patch, we use patches of constant values as mask tokens to create a placeholder in the bottleneck to enable reconstruc
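A minimal sketch of the patch-masking step described above, assuming a 240x320 frame divided into 300 patches of 16x16 pixels: a fraction p_c of RGB patches is kept, a disjoint fraction p_d of depth patches is drawn from the remaining locations, and everything else is masked out. The exact sampling procedure of Mask3D may differ.

```python
import numpy as np

def sample_rgbd_patch_masks(h_patches=15, w_patches=20, p_c=0.4, p_d=0.2, rng=None):
    """Choose which 16x16 patches of a 240x320 RGB-D frame stay visible.

    Returns two boolean (h_patches, w_patches) masks: keep_rgb and keep_depth.
    Depth patches are drawn only from locations where no RGB patch was kept,
    so the two sets never coincide.
    """
    if rng is None:
        rng = np.random.default_rng()
    n = h_patches * w_patches                 # 300 patches for a 240x320 frame
    order = rng.permutation(n)
    n_rgb, n_depth = int(p_c * n), int(p_d * n)
    keep_rgb = np.zeros(n, dtype=bool)
    keep_depth = np.zeros(n, dtype=bool)
    keep_rgb[order[:n_rgb]] = True                       # first slice of the shuffle -> RGB
    keep_depth[order[n_rgb:n_rgb + n_depth]] = True      # disjoint slice -> depth
    return keep_rgb.reshape(h_patches, w_patches), keep_depth.reshape(h_patches, w_patches)
```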
Ishtiak_Exemplar-FreeSOLO_Enhancing_Unsupervised_Instance_Segmentation_With_Exemplars_CVPR_2023
Abstract Instance segmentation seeks to identify and segment each object from images, which often relies on a large number of dense annotations for model training. To alleviate this burden, unsupervised instance segmentation methods have been developed to train class-agnostic instance segmentation models without any annotation. In this paper, we propose a novel unsupervised instance segmentation approach, Exemplar-FreeSOLO, to enhance unsupervised instance segmentation by exploiting a limited number of unannotated and unsegmented exemplars. The proposed framework offers a new perspective on directly perceiving top-down information without annotations. Specifically, Exemplar-FreeSOLO introduces a novel exemplar-knowledge abstraction module to acquire beneficial top-down guidance knowledge for instances using unsupervised exemplar object extraction. Moreover, a new exemplar embedding contrastive module is designed to enhance the discriminative capability of the segmentation model by exploiting the contrastive exemplar-based guidance knowledge in the embedding space. To evaluate the proposed Exemplar-FreeSOLO, we conduct comprehensive experiments and perform in-depth analyses on three image instance segmentation datasets. The experimental results demonstrate that the proposed approach is effective and outperforms the state-of-the-art methods.
1. Introduction Instance segmentation is among the most fundamental and challenging tasks in computer vision, aiming to recog-nize and segment each object in an image. By utilizing a significant amount of densely annotated data to train seg-mentation models, existing techniques have achieved desir-able results [4, 5, 10, 16, 26, 41, 47]. However, acquiring nu-merous pixel-level labels requires substantial labour and fi-nancial resources, limiting the developments and practical applications in the field. To reduce the costly annotation re-Figure 1. An illustration of the proposed idea. The proposed Exemplar-FreeSOLO framework addresses the unsupervised in-stance segmentation problem by excavating information from un-labeled data through an exemplar mechanism, which produces top-down knowledge guidance and enhances the discriminability of the segmentation model. quirement, some solutions have been put forth to investigate ways of using less expensive training labels to complete complex tasks, such as weakly-supervised [18, 22, 29, 42], partially-supervised [20, 51] and semi-supervised instance segmentation [3, 52, 55]. Although some significant advances have been achieved, the labelling conundrum obstacle remains as these methods still require nontrivial dense labels and precise position in-formation. By contrast, unsupervised instance segmenta-tion methods benefit from not requiring any annotated data; they can directly exploit many existing unannotated images while being able to continuously upgrade the effectiveness of the segmentation models with incoming data. Therefore it is important to investigate unsupervised instance segmen-tation, which enables learning class-agnostic instance seg-mentation models without any data annotation. Recently, an unsupervised instance segmentation framework, FreeSOLO [48], has been proposed to extract coarse object masks as pseudo-labels and train an instance segmentation model using a self-supervised method. Although FreeSOLO at-tempts to improve the quality of pseudo labels and predic-tion masks, it can hardly overcome the detrimental effects of the considerable noise in pseudo labels without any guid-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 15424 ance information in the training process. The overall fundamental challenges for unsupervised in-stance segmentation lie in the following two aspects: (1) Unsupervised segmentation models are heavily influenced by the noisy pseudo-labels. When the objects of an image are part of the background relative to the features of interest, models are prone to generating a large number of false pos-itive regions. (2) The unsupervised nature makes it difficult to learn discriminative information. Constructing compar-ison relations directly between different instances for the same type of target tends to lead to fragmentation prob-lems. Meanwhile, as suggested by the generalized context model [36], humans can capture different categories of in-formation through exemplars in their memory. Inspired by this idea, we aim to overcome the unsupervised instance segmentation challenges by developing new segmentation models to integrate an exemplar learning mechanism, which is desirable from both biological and practical perspectives. 
In this paper, we propose a novel approach, Exemplar-FreeSOLO , for performing instance segmentation without any annotation. The core of the framework is an exem-plar mechanism that aims to extract and utilize pertinent information from unlabeled data in order to obtain useful knowledge that can guide model training, as shown in Fig-ure 1. Exemplar-FreeSOLO obtains beneficial top-down guidance for objects through exemplar knowledge extrac-tion, while consequently enhancing the discriminability of instance segmentation models by exploiting the exemplar guidance information in a contrastive manner. Specifically, given randomly selected exemplar images, we design an exemplar knowledge abstraction module (EKA) to acquire top-down guidance knowledge for objects. The exemplar images are roughly cropped and fed into an unsupervised model to extract masked-out images for constructing a pool of exemplar objects, which are then used to produce the ex-emplar guidance knowledge. Next, an exemplar embedding contrastive module (EEC) is devised to capture homoge-neous components of the same type of instances through a contrastive learning paradigm. This is achieved by con-sidering similarities between the embeddings of unlabeled images and the exemplar embeddings and constructing con-trastive relationships among them. This module is expected to enhance the discriminability of the instance segmenta-tion model. Finally, we incorporate the two modules into the FreeSOLO framework to effectively train instance seg-mentation models. The main contributions of our paper are summarized as follows: • We propose a novel Exemplar-FreeSOLO approach to tackle the unsupervised instance segmentation prob-lem by leveraging useful information from the unla-belled data through an exemplar mechanism. • We design an exemplar knowledge abstraction module to acquire beneficial top-down guidance knowledge byextracting exemplar objects in an unsupervised way. • We devise an exemplar embedding contrastive mod-ule to enhance the discriminative capability of instance segmentation models by exploiting contrasting exem-plar guidance knowledge in the embedding space. • Experimental results on three datasets show that the proposed Exemplar-FreeSOLO can substantially out-perform the state-of-the-art unsupervised instance seg-mentation and object detection methods.
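As a rough sketch of how exemplar guidance could act contrastively in the embedding space, the snippet below scores instance embeddings against exemplar prototypes and applies an InfoNCE-style loss with a nearest-prototype pseudo-assignment. This is an illustrative stand-in under stated assumptions, not the exact formulation of the proposed exemplar embedding contrastive module.

```python
import torch
import torch.nn.functional as F

def exemplar_contrastive_loss(embeddings, exemplar_protos, temperature=0.1):
    """InfoNCE-style loss between instance embeddings and exemplar prototypes.

    embeddings:      (N, D) embeddings of candidate instances from unlabeled images.
    exemplar_protos: (K, D) mean embeddings of the K extracted exemplar objects.
    Each embedding is treated as a positive of its nearest exemplar prototype
    (a pseudo-assignment, assumed here) and a negative of the remaining ones.
    """
    z = F.normalize(embeddings, dim=1)
    p = F.normalize(exemplar_protos, dim=1)
    logits = z @ p.t() / temperature        # (N, K) scaled cosine similarities
    pseudo_targets = logits.argmax(dim=1)   # nearest exemplar as pseudo-label
    return F.cross_entropy(logits, pseudo_targets)
```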
Fini_Semi-Supervised_Learning_Made_Simple_With_Self-Supervised_Clustering_CVPR_2023
Abstract Self-supervised learning models have been shown to learn rich visual representations without requiring human annotations. However, in many real-world scenarios, labels are partially available, motivating a recent line of work on semi-supervised methods inspired by self-supervised principles. In this paper, we propose a conceptually simple yet empirically powerful approach to turn clustering-based self-supervised methods such as SwAV or DINO into semi-supervised learners. More precisely, we introduce a multi-task framework merging a supervised objective using ground-truth labels and a self-supervised objective relying on clustering assignments with a single cross-entropy loss. This approach may be interpreted as imposing the cluster centroids to be class prototypes. Despite its simplicity, we provide empirical evidence that our approach is highly effective and achieves state-of-the-art performance on CIFAR100 and ImageNet.
1. Introduction In recent years, self-supervised learning became the dominant paradigm for unsupervised visual representation learning. In particular, much experimental evidence shows that augmentation-based self-supervision [3, 8–10, 14–16, 18, 21, 29, 32, 35, 71] can produce powerful representa-tions of unlabeled data. Such models, although trained without supervision, can be naturally used for supervised downstream tasks via simple fine-tuning. However, the most suitable way to leverage self-supervision is perhaps by multi-tasking the self-supervised objective with a cus-tom (possibly supervised) objective. Based on this idea, the community has worked on re-purposing self-supervised methods in other sub-fields of computer vision, as for instance in domain adaptation [25], novel class discov-ery [31, 77], continual learning [30] and semi-supervised learning [4, 13, 19, 72]. *Enrico Fini and Pietro Astolfi contributed equally. †Univ. Grenoble Alpes, CNRS, Grenoble INP, LJK, 38000 Grenoble, France. Labeled samplesUnlabeled samples Cluster prototypes Class prototypes (a) SwA V / DINO (b) SwA V / DINO + linear (c) SwA V / DINO + fine-tuning (d) Suave / Daino (ours)Mistakes Figure 1. Schematic illustration of the motivation behind the pro-posed semi-supervised framework. (a) Self-supervised clustering methods like SwA V [15] and DINO [16] compute cluster proto-types that are not necessarily well aligned with semantic cate-gories, but they do not require labeled data. (b) Adding a lin-ear classifier provides class prototypes, but the labeled (and unla-beled) samples are not always correctly separated. (c) Fine-tuning can help separating labeled data. (d) Our framework learns clus-ter prototypes that are aligned with class prototypes thus correctly separating both labeled and unlabeled data. One of the areas that potentially benefits from the advancements in unsupervised representation learning is semi-supervised learning. This is mainly due to the fact that in semi-supervised learning it is crucial to efficiently extract the information in the unlabeled set to improve the This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 3187 classification accuracy on the labeled classes. Indeed, sev-eral powerful semi-supervised methods [6, 13, 61, 72] were built upon this idea. In the self-supervised learning landscape, arguably the most successful methods belong to the clustering-based family, such as DeepCluster v2 [14], SwA V [15] and DINO [16]. These methods learn representations by contrast-ing predicted cluster assignments of correlated views of the same image. To avoid collapsed solutions and group samples together, they use simple clustering-based pseudo-labeling algorithms such as k-means and Sinkhorn-Knopp to generate the assignments. A peculiar fact about this fam-ily of methods is the discretization of the feature space, that allows them to use techniques that were originally devel-oped for supervised learning. Indeed, similar to supervised learning, the cross-entropy loss is adopted to compare the assignments, as they represent probability distributions over the set of clusters. 
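For reference, the Sinkhorn-Knopp step mentioned above can be written in a few lines; this is the standard equipartitioned soft-assignment routine popularised by SwAV, shown here only to make the clustering-based pseudo-labeling concrete.

```python
import torch

@torch.no_grad()
def sinkhorn(scores, eps=0.05, n_iters=3):
    """Turn raw sample-to-prototype scores (B, K) into soft cluster assignments
    whose columns (prototypes) are used approximately equally across the batch."""
    Q = torch.exp(scores / eps).t()   # (K, B)
    Q /= Q.sum()
    K, B = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(dim=1, keepdim=True); Q /= K   # normalise prototype marginals
        Q /= Q.sum(dim=0, keepdim=True); Q /= B   # normalise sample marginals
    return (Q * B).t()                # (B, K); each row sums to 1
```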
In this paper, we propose a new approach for semi-supervised learning based on the simple observation that clustering-based methods are amenable to be adapted to a semi-supervised learning setting: the cluster prototypes can be replaced with class prototypes learned with supervision and the same loss function can be used for both labeled and unlabeled data. In practice, semi-supervised learning can be achieved by multi-tasking the self-supervised and super-vised objectives. This encourages the network to cluster unlabeled samples around the centroids of the classes in the feature space. By leveraging on these observations we pro-pose a new framework for semi-supervised methods based on self-supervised clustering. We experiment with two in-stances of that framework: Suave and Daino , the semi-supervised counterparts of SwA V and DINO. These meth-ods have several favorable properties: i) they are efficient at learning representations from unlabeled data since they are based on the top-performing self-supervised methods; ii) they extract relevant information for the semantic cate-gories associated with the data thanks to the supervised con-ditioning; iii) they are easy to implement as they are based on the multi-tasking of two objectives. The motivation be-hind our proposal is also illustrated in Fig. 1. As shown in the figure, our multi-tasking approach enables to compute cluster centers that are aligned with class prototypes thus correctly separating both labeled and unlabeled data. Ourcontributions can be summarized as follows: • We propose a new framework for semi-supervised learning based on the multi-tasking of a supervised ob-jective on the labeled data and a clustering-based self-supervised objective on the unlabeled samples; • We experiment with two representatives of such frame-work: Suave and Daino, semi-supervised extensions of SwA V [15] and DINO [16]. These methods, whilesimple to implement, are powerful and efficient semi-supervised learners; • Our methods outperform state-of-the-art approaches, often relying on multiple ad hoc components, both on common small scale (CIFAR100) and large scale (Im-ageNet) benchmarks, setting a new state-of-the-art on semi-supervised learning.
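A minimal sketch of the multi-task objective just described, assuming `prototypes` is a single linear layer whose weights serve simultaneously as class and cluster prototypes, and reusing the `sinkhorn` routine sketched earlier. The temperature, loss weighting, and augmentation handling are placeholder choices, not the exact recipe of Suave/Daino.

```python
import torch
import torch.nn.functional as F

def semi_supervised_step(encoder, prototypes, x_lab, y_lab, x_u1, x_u2, w_unsup=1.0):
    """One step multi-tasking a supervised CE loss with a SwAV-style
    swapped-prediction loss, sharing the prototype (= classifier) weights."""
    # supervised branch: cluster prototypes double as class prototypes
    logits_lab = prototypes(F.normalize(encoder(x_lab), dim=1))
    loss_sup = F.cross_entropy(logits_lab, y_lab)

    # self-supervised branch: two augmented views of the same unlabeled images
    z1 = F.normalize(encoder(x_u1), dim=1)
    z2 = F.normalize(encoder(x_u2), dim=1)
    p1, p2 = prototypes(z1), prototypes(z2)
    q1, q2 = sinkhorn(p1.detach()), sinkhorn(p2.detach())   # soft pseudo-assignments
    loss_unsup = -0.5 * ((q2 * F.log_softmax(p1 / 0.1, dim=1)).sum(1).mean()
                         + (q1 * F.log_softmax(p2 / 0.1, dim=1)).sum(1).mean())
    return loss_sup + w_unsup * loss_unsup
```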
Gao_Exploring_Data_Geometry_for_Continual_Learning_CVPR_2023
Abstract Continual learning aims to efficiently learn from a non-stationary stream of data while avoiding forgetting the knowledge of old data. In many practical applications, data complies with non-Euclidean geometry. As such, the commonly used Euclidean space cannot gracefully capture non-Euclidean geometric structures of data, leading to inferior results. In this paper, we study continual learning from a novel perspective by exploring data geometry for the non-stationary stream of data. Our method dynamically expands the geometry of the underlying space to match growing geometric structures induced by new data, and prevents forgetting by taking geometric structures of old data into account. In doing so, making use of the mixed-curvature space, we propose an incremental search scheme, through which the growing geometric structures are encoded. Then, we introduce an angular-regularization loss and a neighbor-robustness loss to train the model, capable of penalizing the change of global geometric structures and local geometric structures. Experiments show that our method achieves better performance than baseline methods designed in Euclidean space.
1. Introduction Unlike humans, artificial neural networks perform poorly to learn new knowledge in a continual manner. The tendency to lose the knowledge previously learned, known ascatastrophic forgetting , is due to the fact that important parameters of a neural network for old data are changed to meet the objectives of new data. There have been many continual learning methods [12, 14, 23, 32, 49, 53], and their ∗Corresponding authors: Chen Xu and Yuwei Wu.goal is to remember the knowledge from old data while ef-fectively learning from new data. They have achieved im-pressive performance in alleviating catastrophic forgetting. However, a long-lasting issue with existing methods is that data geometry is rarely studied in continual learning. Existing methods usually assume that data is Euclidean and they use Euclidean geometry to process the data stream. In fact, data in countless applications intrinsically has non-Euclidean geometric structures [2, 6]. Several studies show that non-Euclidean geometric structures can be better cap-tured by particular forms of Riemannian geometry [15, 35]. For example, the hyperbolic geometry has a natural expres-sive ability for the hierarchical structure and is hence used successfully for fine-grained images [22, 29]. The spheri-cal geometry is shown as a suitable choice for face images that have the cyclical structure [27, 46]. In addition to the geometric structures discussed above, natural data may be diverse and irregular in structure, e.g., data exhibits hier-archical forms in some regions and cyclical forms in oth-ers [31, 40]. Overall, distortions produced when using Eu-clidean geometry for non-Euclidean geometric structures are overwhelming, causing the loss of semantic informa-tion, and hence resulting in inferior performance [3]. In this paper, we study how to attain suitable non-Euclidean ge-ometry to capture the intrinsic geometric structures of data during continual learning. To achieve our goal, we have to face two challenges (see Fig. 1). (1)Non-stationary stream of data will inevitably increase the complexity of intrinsic geometric structures. In other words, fixing the geometry of the underlying space cannot always match new and unseen data in continual learning. For example, more and more complex hierarchies in a data stream bring more leaf nodes, requiring a faster growing space volume with the radius, which conflicts with a fixed geometry [17]. (2)Old data is not accessible in con-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 24325 (a) growing geometric structures (b) learning new dataRodentia Castoridae Cricetidae RhizomyidaeMuridaeHystricidaeSquirrel Arctic Squirrel Rock SquirrelTownsend's Vole Cabrera's Vole Tamias Castor fiber Castor canadensis Rhizomys sumatrensis Tachyoryctes splendensXerini Atherurus macrourusHystrix cristataRodentia Castoridae Rhizomyidae Rodentia CastoridaeRhizomyidaeRodentia Castoridae Rhizomyidae Cricetidae Figure 1. Illustrations of the two challenges of exploring data geometry in continual learning. Different color denotes different classes. (a) A fixed geometry cannot handle more and more com-plex hierarchy in a data stream. The geometric structure of leaf nodes is destroyed. (b) Learning from new data (in the blue dash box) will destroy the captured hierarchical structures of old data. 
tinual learning, and learning from new data may destroy the captured geometric structures of old data, resulting in the catastrophic forgetting problem. Since geometric structures are characterized by distances or angles between instances, destroying the captured geometric structures may cause un-desirable data distribution ( e.g., classes are not separable). In this work, we use the mixed-curvature space to em-bed data, which is a product of multiple constant curva-ture spaces acting as submanifolds [19, 42]. The mixed-curvature space has shown to be superior to Euclidean space in some machine learning tasks, owing to its ability to cap-ture non-Euclidean geometric structures [40, 43]. Exam-ples include image classification [16], graph analysis [46], and information retrieval [48]. The geometry of a mixed-curvature space is determined by the number, dimension, and curvature of constant curvature spaces. By changing the geometry, we are able to adjust the mixed-curvature space to match specific geometric structures of data [52]. For example, positive curvatures are suitable for local cyclical structures, and negative curvatures are suitable for hierar-chical structures in a region. Based on the mixed-curvature space, we restate the two challenges: (1)how to identify the suitable geometry of the mixed-curvature space for growing geometric structures, and (2)how to preserve the geometric structures of old data when learning from new data in the mixed-curvature space. We introduce a geometry incremental search scheme to solve the first challenge. We build a submanifold pool by sampling subsets of coordinates from features, where the length of coordinates is the dimension of constant curvaturespaces and features are projected to them using initial curva-tures. Given new data, we select constant curvature spaces that contribute significantly to the current task to expand ge-ometry of the mixed-curvature space. In this case, the grow-ing geometric structures are well encoded. We introduce two loss functions, i.e., an angular-regularization loss and a neighbor-robustness loss, to solve the second challenge. The angular-regularization loss penalizes the change of an-gles between any pair of instances to preserve global struc-tures. The neighbor-robustness loss realizes within-class compactness and between-class separability in a neighbor to preserve the discriminative power of local structures. As a result, our method is capable of efficiently learning from new data and preventing forgetting of old data. Our method is evaluated on multiple continual learning settings, and ex-perimental results show the effectiveness of our method. In summary, our contributions are three-fold. (1) To the best of our knowledge, we are the first to explore data ge-ometry for continual learning. Our method is efficient for learning from a non-stationary stream of data. (2) We intro-duce an incremental search scheme that identifies the suit-able geometry for growing geometric structures of data. (3) We introduce an angle-regularization loss and a neighbor-robustness loss, capable of preserving geometric structures of old data in the mixed-curvature space.
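To make the angular-regularization idea concrete, here is a minimal sketch that penalises changes in the pairwise angular structure of a batch between the frozen old model and the current one. The paper's actual loss operates on mixed-curvature embeddings, so the plain cosine similarities used here are only a simplification.

```python
import torch
import torch.nn.functional as F

def angular_regularization(feat_new, feat_old):
    """Penalise changes of the pairwise angular structure of a batch.

    feat_new: (B, D) embeddings of the batch from the current model.
    feat_old: (B, D) embeddings of the same inputs from the frozen old model.
    """
    cos_new = F.normalize(feat_new, dim=1) @ F.normalize(feat_new, dim=1).t()
    with torch.no_grad():
        cos_old = F.normalize(feat_old, dim=1) @ F.normalize(feat_old, dim=1).t()
    return F.mse_loss(cos_new, cos_old)  # keep pairwise angles close to their old values
```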
Giebenhain_Learning_Neural_Parametric_Head_Models_CVPR_2023
Abstract

We propose a novel 3D morphable model for complete human heads based on hybrid neural fields. At the core of our model lies a neural parametric representation that disentangles identity and expressions in disjoint latent spaces. To this end, we capture a person's identity in a canonical space as a signed distance field (SDF), and model facial expressions with a neural deformation field. In addition, our representation achieves high-fidelity local detail by introducing an ensemble of local fields centered around facial anchor points. To facilitate generalization, we train our model on a newly-captured dataset of over 3700 head scans from 203 different identities using a custom high-end 3D scanning setup. Our dataset significantly exceeds comparable existing datasets, both with respect to quality and completeness of geometry, averaging around 3.5M mesh faces per scan¹. Finally, we demonstrate that our approach outperforms state-of-the-art methods in terms of fitting error and reconstruction quality.

¹We will publicly release our dataset along with a public benchmark for both neural head avatar construction as well as an evaluation on a hidden test set for inference-time fitting.

Website: https://simongiebenhain.github.io/NPHM

1. Introduction

Human faces and heads lie at the core of human visual perception, and hence are key to creating a digital replica of someone's identity, likeness, and appearance. In particular, 3D reconstruction of human heads from sparse inputs, such as point clouds, is central to a wide range of applications in the context of gaming, augmented and virtual reality, and digitization in our modern digital era. One of the most successful lines of research to address this challenging problem are parametric face models, which represent both shape identities and expressions featuring a low-dimensional parametric space. These Blendshape and 3D morphable models (3DMMs) have achieved incredible success, since they can be fitted to sparse inputs, regularize out noise, and provide a compact 3D representation. As a result, many practical settings could be realized, ranging from face tracking and 3D avatar creation to facial-reenactment applications [49].

Traditionally, 3DMMs are based on a low-rank approximation of the underlying 3D mesh geometry. To this end, a template mesh with fixed topology is non-rigidly registered to a series of 3D scans. From this template registration, a 3DMM can be computed using dimensionality reduction methods such as principal component analysis (PCA). The quality of the resulting parametric space depends strongly on the quality of the 3D scans, their registration, and the ability to disentangle identity and expression variations. While these PCA-based models exhibit excellent regularizing properties, their inherent limitation lies in their inability to represent local surface detail and the reliance on a template mesh of fixed topology, which inhibits the representation of diverse hair styles.

In this work, we propose neural parametric head models (NPHM), which represent complete human head geometry in a canonical space using an SDF, and morph the resulting geometry to posed space using a forward deformation field.
By decoupling the human head representation into these two spaces, we are able to learn disentangled latent spaces – one of the core concepts of 3DMMs. Furthermore, we decompose the implicit geometry representation in canonical space into an ensemble of local MLPs. Each part is represented by a small MLP that operates in a local coordinate system centered around face keypoints. Additionally, we exploit face symmetry by sharing network weights of symmetric regions. This decomposition into separate parts imposes a strong geometry prior and helps to improve generalization and provide higher levels of detail.

In order to train our model, we capture a new high-fidelity head dataset with a high-end capture rig, which is composed of over 3700 3D head scans from 203 different people. After rigidly aligning all scans in a canonical coordinate system, we train our identity network on scans in the canonical expression. In order to train the deformation network, we non-rigidly register each scan against a template mesh, which we in turn use as training data for our neural deformation model. At inference time, we can then fit our model to a given input point cloud by optimizing for the latent code parameters for both expression and identity. In a series of experiments, we demonstrate that our neural parametric model outperforms state-of-the-art models and can represent complete heads, including fine details. In sum, our contributions are as follows:

• We introduce a novel 3D dataset captured with a high-end capture rig, including over 3700 3D scans of human heads from 203 different identities.
• We propose a new neural-field-based parametric head representation, which facilitates high-fidelity local details through an ensemble of local implicit models.
• We demonstrate that our neural parametric head model can be robustly fit to range data, regularize out noise, and outperform existing models.

2. Related Work

3D morphable face and head models. The seminal work of Blanz and Vetter [1] was one of the first to introduce a model-based approach to represent variations in human faces using PCA. Since the scans were captured in constrained environments, the expressiveness of the model was relatively limited. As such, improvements in the registration [29], as well as the use of data captured in the wild [3, 4, 31], led to significant advances. Thereafter, more advanced face models were introduced, including multilinear models of identity and expression [2, 6], as well as models that combined linear shape spaces with articulated head parts [18], and localized approaches [23]. With the advent of deep learning, various works focused on extending face and head 3DMMs beyond linear spaces. To this end, convolutional neural network based architectures have been proposed to both regress the model parameters and reconstruct the face [16, 37–39, 42, 43]. At the same time, graph convolutions [5, 14] and attention modules [11] have been proposed to model the head mesh geometry.

Neural field representations. Neural field-based networks have emerged as an efficient way to implicitly represent 3D scenes. In contrast to explicit representations (e.g., meshes or voxel grids), neural fields are well-suited to represent geometries of arbitrary topology. Park et al. [26] proposed to represent a class-specific SDF with an MLP that is conditioned on a latent variable. Similarly, Mescheder et al. [21] implicitly define a surface as the decision boundary of a binary classifier, and Mildenhall et al.
[22] represent a radiance field using an MLP by supervising a photometric loss on the rendered images. Building upon these approaches, a series of works focus on modeling deformations. These methods use a separate network to model the deformations that occur in a sequence (e.g., [27, 28]), and have been successfully applied to animation of human bodies [17, 19] and heads [46]. Following this paradigm, a number of neural parametric models have been proposed for bodies [9, 24, 25], faces [45], and, most closely related to our work, heads [32, 41, 44]. For instance, H3D-Net [32] and MoRF [41] proposed 3D generative models of heads, but do not account for expression-specific deformations. Recently, neural parametric models for human faces [44, 45] and bodies [9, 10, 24, 25] have explored combinations of SDFs and deformation fields to produce complex non-linear deformations, while maintaining the flexibility of an implicit geometry representation. Our work is greatly inspired by these lines of work; however, the key difference is that we tailor our neural field representation specifically to human heads through an ensemble of local MLPs. Thereby, our work is also related to local conditioning methods for neural fields of arbitrary objects [8, 12, 13, 30], human bodies [25, 48], and faces [45]. Compared to ImFace [45], our model is more local, incorporates a symmetry prior, represents a complete head, and models forward instead of backward deformations, which allows much faster animation.

Figure 2. 3D head scans from our newly-captured dataset: for each person (rows), we first capture a neutral pose, followed by several scans in different expressions (columns). Overall, our dataset has more than 3700 3D scans from 203 people.

3. Dataset Acquisition

Our dataset comprises 203 subjects, 29% female, and contains over 3700 3D scans; see Table 1. Our 3D head scans show great levels of detail and completeness, as shown in Fig. 2. Additionally, we do not require participants to wear a bathing cap as in the FaceScape dataset [43], allowing for the capture of natural hair styles to a certain degree. See Fig. 3 for a visual comparison of our new dataset to other 3D face datasets.

Table 1. Statistics of our 3D scanning dataset.
Num. Subjects: 203 (144m / 59f)
Total num. Scans: 3720
Num. Vertices/Scan: ≈1.5M

3.1. Capture Setup

Our setup is composed of two Artec Eva scanners [35], which are rotated 360° around a subject's head using a robotic actuator. Each scan takes only 6 seconds, which is crucial to keep involuntary, non-rigid facial movements to a minimum.
The scanners operate at 16 FPS, are aligned through the scanning sequence, and are fused into a single mesh; each fused scan contains approximately 1.5M vertices and 3.5M triangles. Each participant is asked to perform 23 different expressions, which are adopted from the FACS-coded expressions proposed in FaceWarehouse [7]; see our supplemental for details. Importantly, we capture a neutral expression with the mouth open, which later serves as the canonical expression, as described in Section 4.

Figure 3. Compared to recent multi-view stereo 3D face datasets (FaceScape [43], FaceVerse [42]), our data exhibits sharper details and less noise.

3.2. Registration Pipeline

Registering all head scans against a common template is a key requirement to effectively train our parametric head model. First, we start with a rigid alignment into our canonical coordinate system; second, we non-rigidly register all scans to a common template.

3.2.1 Rigid Alignment

We leverage 2D face landmark detectors to obtain a rigid transformation into the canonical coordinate system of the FLAME model [18]. To this end, we deploy the MediaPipe [20] face mesh detector and back-project a subset of 48 landmarks corresponding to iBUG68 annotations [33] to the 3D scan. Since not all viewing angles of the scanner's trajectories are suited for 2D facial landmark detection, we instead use frontal renderings of the colored meshes, which yields robust detection quality. Note that the initial landmark detection is the only time we use the scanner's color images. We then calculate a similarity transform using [40] to transform the detected landmarks to the average face of FLAME.

Figure 4. Method overview: at the core of our neural parametric head model lies a neural field representation that parameterizes shape and expressions in disentangled latent spaces. Specifically, we propose a local MLP ensemble that is anchored at face keypoints (left). We train this model by leveraging a set of high-fidelity 3D scans from our newly-captured dataset comprising various expressions for each identity (middle). In order to obtain ground-truth deformation samples, we non-rigidly register all scans to a common template (right).

3.2.2 Non-Rigid Registration

As a non-rigid registration prior, we first constrain the non-rigid deformation to the FLAME parameter space, before optimizing an offset for each vertex. Additionally, we back-project 2D hair segmentation masks obtained by FaRL [47] to mask out the respective areas of the scans.

Initialization. Given the 23 expression scans $\{S_j\}_{j=1}^{23}$ of a subject, we jointly estimate identity parameters $z^{\mathrm{id}} \in \mathbb{R}^{100}$, expression parameters $\{z^{\mathrm{ex}}_j\}_{j=1}^{23}$, and jaw poses $\{\theta_j\}_{j=1}^{23}$ of the FLAME model, as well as a shared scale $s \in \mathbb{R}$ and per-scan rotation and translation corrections $\{R_j\}_{j=1}^{23}$ and $\{t_j\}_{j=1}^{23}$. Updating the initial similarity transform is crucial to obtaining a more consistent canonical alignment. Let $\Phi_j$ denote all parameters affecting the $j$-th FLAME model and $V_{\Phi_j}$ its vertices.
We jointly optimize for these parameters by minimizing

$$\arg\min_{\Phi_1,\dots,\Phi_{23}} \;\sum_{j=1}^{23} \Big[ \lambda_l \,\| L_j - \hat{L}_j \|_1 + d(V_{\Phi_j}, S_j) + \mathcal{R}(\Phi_j) \Big], \quad (1)$$

where $L_j \in \mathbb{R}^{68\times 3}$ denotes the back-projected 3D landmarks, $\hat{L}_j$ are the 3D landmarks from $V_{\Phi_j}$, $d(V_{\Phi_j}, S_j)$ is the mean point-to-plane distance from $V_{\Phi_j}$ to its nearest neighbors in scan $S_j$, and $\mathcal{R}(\Phi_j)$ regularizes the FLAME parameters.

Fine-tuning. Once the initial alignment has been obtained, we upsample the mesh resolution by a factor of 16 for the face region, and perform non-rigid registration using ARAP [36] for each scan individually. Let $V$ be the upsampled vertices, which we aim to register to the scan $S$. We seek vertex-specific offsets $\{\delta_v\}_{v\in V}$ and auxiliary, vertex-specific rotations $\{R_v\}_{v\in V}$ from the ARAP term. Therefore, we solve

$$\arg\min_{\{\delta_v\}_{v\in V},\,\{R_v\}_{v\in V}} \;\sum_{v\in V} \Big[ d(\hat{v}, S) + \sum_{u \in N_v} \| R_v (v - u) - (\hat{v} - \hat{u}) \|_2^2 \Big], \quad (2)$$

using the L-BFGS optimizer, where $\hat{v} = v + \delta_v$, $N_v$ denotes all neighboring vertices, and $d(\hat{v}, S)$ is as before. See the supplemental for more details.

4. Neural Parametric Head Models

Our neural parametric head model separately represents geometry in a canonical space and facial expression as forward deformations; see Sections 4.1 and 4.2, respectively.

4.1. Identity Representation

We represent a person's identity-specific geometry implicitly in its canonical space as an SDF. Compared to template-mesh-based approaches, this offers the necessary flexibility that is required to model a complete head with hair. In accordance with related work on human body modeling, e.g., [9, 24, 25], we choose a canonical expression with an open mouth to avoid topological issues. While a canonical coordinate system already reduces the dimensionality of the learning problem at hand, we further tailor our neural identity representation to the domain of human heads, as described below.

4.1.1 Local Decomposition

Instead of globally conditioning the SDF network on a specific identity, we exploit the structure of the human face to impose two important geometric priors. First, we embrace the fixed composition of human faces by decomposing the SDF network into an ensemble of several smaller local MLP-based networks, which are defined around certain facial anchors, as shown in Fig. 4. Thereby, we reduce the learning problem into smaller, more tractable ones. We choose facial anchor points as a trade-off between the relevance of an area and spatial uniformity. Second, we exploit the symmetry of the face by only learning SDFs on the left side of the face, which are shared with the right half after flipping spatial coordinates accordingly.

More specifically, we divide the face into $K = 2K_{\mathrm{symm}} + K_{\mathrm{middle}}$ regions, which are centered at facial anchor points $a \in \mathbb{R}^{K\times 3}$. We use $\mathcal{M}$ to denote the index set of anchors lying on the symmetry axis, and $\mathcal{S}$ and $\mathcal{S}^*$ for symmetric regions on the left and right side, respectively, such that for $k \in \mathcal{S}$ there is a $k^* \in \mathcal{S}^*$ that corresponds to the symmetric anchor point. In addition to a global latent vector $z^{\mathrm{id}}_{\mathrm{glob}} \in \mathbb{R}^{d_{\mathrm{glob}}}$, the $k$-th region is equipped with a local latent vector $z^{\mathrm{id}}_k \in \mathbb{R}^{d_{\mathrm{loc}}}$. Together, the $k$-th region is represented by a small MLP

$$f_k : \mathbb{R}^{d_{\mathrm{glob}} + d_{\mathrm{loc}} + 3} \to \mathbb{R}, \quad (3)$$
$$(x, z^{\mathrm{id}}_{\mathrm{glob}}, z^{\mathrm{id}}_k) \mapsto \mathrm{MLP}_{\theta_k}\big([\,x - a_k,\, z^{\mathrm{id}}_{\mathrm{glob}},\, z^{\mathrm{id}}_k\,]\big), \quad (4)$$

that predicts SDF values for points $x \in \mathbb{R}^3$, where $[\cdot]$ denotes the concatenation operator. In order to exploit face symmetry, we share the network parameters and mirror the coordinates for each pair $(k, k^*)$ of symmetric regions:

$$f_{k^*}(x, z^{\mathrm{id}}_{\mathrm{glob}}, z^{\mathrm{id}}_{k^*}) := f_k\big(\mathrm{flip}(x - a_{k^*}),\, z^{\mathrm{id}}_{\mathrm{glob}},\, z^{\mathrm{id}}_{k^*}\big), \quad (5)$$

where $\mathrm{flip}(\cdot)$ represents a flip of the coordinates along the face symmetry axis.
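As an illustration of the local decomposition with symmetric weight sharing (Eqs. (3)–(5)), the sketch below shows how one local field could be implemented so that a symmetric pair of regions reuses the same MLP after mirroring the local coordinates. The class name, hidden sizes, and the assumption that the first coordinate axis is the symmetry axis are our own choices; the actual NPHM implementation may differ.

```python
import torch
import torch.nn as nn

class LocalSDF(nn.Module):
    """One local field f_k: maps (x - a_k, z_glob, z_k) to an SDF value."""
    def __init__(self, d_glob: int, d_loc: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + d_glob + d_loc, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, a_k, z_glob, z_k, mirrored: bool = False):
        # x: (N, 3) query points; a_k: (3,) anchor; z_glob, z_k: 1-D latent codes.
        x_local = x - a_k                       # express x in the anchor's frame
        if mirrored:
            # symmetric region k*: flip across the (assumed) x-axis symmetry plane
            x_local = x_local * torch.tensor([-1.0, 1.0, 1.0], device=x.device)
        h = torch.cat([x_local,
                       z_glob.expand(x.shape[0], -1),
                       z_k.expand(x.shape[0], -1)], dim=-1)
        return self.net(h)
```

A pair of symmetric regions would then hold one shared `LocalSDF` instance, called once with `mirrored=False` and once with `mirrored=True`, each with its own local latent code.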
4.1.2 Global Blending

In order to facilitate a decomposition that helps generalization, it is crucial that reliable anchor positions $a$ are available. To this end, we train a small MLP, $\mathrm{MLP}_{\mathrm{pos}}$, that predicts $a$ from the global latent $z^{\mathrm{id}}_{\mathrm{glob}}$. Since each local SDF focuses on a specific semantic region of the face, as defined by the anchors $a$, we additionally introduce $f_0(x, z^{\mathrm{id}}_{\mathrm{glob}}, z^{\mathrm{id}}_0) = \mathrm{MLP}_0(x, z^{\mathrm{id}}_{\mathrm{glob}}, z^{\mathrm{id}}_0)$, which operates in the global coordinate system, hence covering all SDF values far away from any anchor in $a$. To clarify the notation, we set $a_0 := 0 \in \mathbb{R}^3$. Finally, we blend all local fields $f_k$ into a global field

$$F_{\mathrm{id}}(x) = \sum_{k=0}^{K} w_k(x, a_k)\, f_k(x, z^{\mathrm{id}}_{\mathrm{glob}}, z^{\mathrm{id}}_k), \quad (6)$$

using Gaussian kernels, similar to [12, 48], where

$$w^*_k(x, a_k) = \begin{cases} e^{-\frac{\|x - a_k\|_2}{2\sigma}}, & \text{if } k > 0 \\ c, & \text{if } k = 0 \end{cases} \quad (7)$$

and

$$w_k(x, a_k) = \frac{w^*_k(x, a_k)}{\sum_{k'} w^*_{k'}(x, a_{k'})}. \quad (8)$$

We use a fixed isotropic kernel with standard deviation $\sigma$ and a constant response $c$ for $f_0$.

4.2. Expression Representation

In contrast to our local geometry representation, we model expressions only with a globally conditioned deformation field, since, e.g., a smile will affect the cheeks, the corners of the mouth, and the eye region. In this context, we define $z^{\mathrm{ex}} \in \mathbb{R}^{d_{\mathrm{ex}}}$ as a latent expression description. Since such a deformation field is defined in the ambient Euclidean space, it is crucial to additionally condition the deformation network with an identity feature. By imposing an information bottleneck on the latent expression description, the deformation network is then forced to learn a disentangled representation of expressions. More formally, we model deformations using an MLP

$$F_{\mathrm{ex}}(x, z^{\mathrm{ex}}, \hat{z}^{\mathrm{id}}) : \mathbb{R}^{3 + d_{\mathrm{ex}} + d_{\text{id-ex}}} \to \mathbb{R}^3. \quad (9)$$

Rather than directly feeding all identity information into $F_{\mathrm{ex}}$, we first project the information to a lower-dimensional representation

$$\hat{Z}^{\mathrm{id}} = W\,[\,z^{\mathrm{id}}_{\mathrm{glob}},\, z^{\mathrm{id}}_0, \dots, z^{\mathrm{id}}_K,\, a_1, \dots, a_K\,], \quad (10)$$

using a single linear layer $W$, where $d_{\text{id-ex}}$ denotes the dimensionality of the interdependence of identity and expression.

4.3. Training Strategy

Our training strategy closely follows NPMs [24] and sequentially trains the identity and expression networks in an auto-decoder fashion.

Identity representation. For the identity space, we jointly train latent codes $Z^{\mathrm{id}}_j := \{z^{\mathrm{id}}_{\mathrm{glob},j}, z^{\mathrm{id}}_{0,j}, \dots, z^{\mathrm{id}}_{K,j}\}$ for each $j$ in the set of training indices $J$, and network parameters $\theta_{\mathrm{pos}}$ and $\theta_0, \dots, \theta_K$, by minimizing

$$\mathcal{L}_{\mathrm{id}} = \sum_{j \in J} \mathcal{L}_{\mathrm{IGR}} + \lambda_a \| \hat{a}_j - a_j \|_2^2 + \lambda_{\mathrm{sy}} \mathcal{L}_{\mathrm{sy}} + \lambda^{\mathrm{id}}_{\mathrm{reg}} \| Z^{\mathrm{id}}_j \|_2^2, \quad (11)$$

where $\mathcal{L}_{\mathrm{IGR}}$ is the loss introduced in [15], which enforces SDF values to be zero on the surface and contains an Eikonal term. This ensures consistency between surface normals and SDF gradients, in similar spirit to [15, 34]. For training, we directly sample points and surface normals from our ground-truth scans. Additionally, we supervise anchor predictions $a_j$ using the corresponding vertices from our registration
Deng_Harmonious_Teacher_for_Cross-Domain_Object_Detection_CVPR_2023
Abstract

Self-training approaches have recently achieved promising results in cross-domain object detection, where people iteratively generate pseudo labels for unlabeled target domain samples with a model, and select high-confidence samples to refine the model. In this work, we reveal that the consistency of classification and localization predictions is crucial to measure the quality of pseudo labels, and propose a new Harmonious Teacher approach to improve self-training for cross-domain object detection. In particular, we first propose to enhance the quality of pseudo labels by regularizing the consistency of the classification and localization scores when training the detection model. The consistency losses are defined for both the labeled source samples and the unlabeled target samples. Then, we further remold the traditional sample selection method into a sample reweighing strategy based on the consistency of classification and localization scores to improve the ranking of predictions. This allows us to fully exploit all instance predictions from the target domain without abandoning valuable hard examples. Without bells and whistles, our method shows superior performance in various cross-domain scenarios compared with the state-of-the-art baselines, which validates the effectiveness of our Harmonious Teacher. Our code will be available at https://github.com/kinredon/Harmonious-Teacher.
1. Introduction

Object detection aims to recognize and localize objects in images simultaneously. As one of the fundamental tasks in computer vision, it plays an important role in many downstream vision tasks, including face recognition [33], person re-identification [42], instance segmentation [11], action recognition [8], and so on. With the development of deep convolutional neural networks (DCNNs) [12, 32], we have witnessed a performance breakthrough in object detection in recent years [1, 11, 27, 28, 36]. One important driving force for such an advance is the availability of large-scale annotated training data. However, collecting and annotating those data are often extremely expensive in both time and funds, which has even become a major challenge for many real-world applications, for example, face authentication [39], autonomous driving [4], etc.

*The corresponding author.

Figure 1. Comparison of pseudo label selection using the classification score and our proposed harmony measure. Object detection models often produce inconsistent predictions, e.g., bounding boxes with a low classification score but high localization IoU (blue box), or with a high classification score but low localization IoU (red box) with respect to the ground-truth box (green box). Existing self-training methods [6, 13, 22] usually adopt classification scores to rank the predictions and are easily biased to low-quality predictions. In contrast, we use the harmony measure to consider the consistency of the classification and localization scores and prefer the accurate bounding box.

Cross-domain Object Detection (CDOD) [4, 6, 7, 13, 17–19, 22, 30] has been proposed to address this problem, where the goal is to adapt an object detector from a labeled source domain to a novel unlabeled target domain. In this way, great efforts can be saved from annotating training data in the target domain. Recently, researchers have reported that the self-training strategy achieves promising results in the CDOD task. Generally, in the self-training framework, people use an existing object detection model to predict the object category labels and bounding boxes for the target domain images, and select confident predictions as pseudo labels to continuously train the object detection model. These two steps are repeated alternately a certain number of times, and the final model is found to perform quite well in the target domain in many scenarios [6, 13, 22, 26, 43], as the information of the target domain is effectively exploited by training the model with pseudo labels.

Nevertheless, existing methods are mainly motivated by self-training classification works, which may have drawbacks for the object detection task. One major issue is that existing methods usually adopt the classification score to select pseudo labels. However, since the classification and localization branches are trained separately, inconsistency between classification and localization scores may occur when predicting the target domain images. For example, bounding boxes with high classification scores could considerably deviate from the ground-truth position (see the example in Fig. 1).
Such noise in pseudo labels inevitably introduces bias into the learnt object detection model, leading to degradation in performance. Another issue is the hard thresholding for selecting confident pseudo-labeled instances. Regardless of how sophisticated the procedure for determining such a threshold is, simply abandoning low-confidence pseudo boxes is undesirable, since valuable hard examples cannot be fully exploited, which is actually crucial for training the object detection model.

In this work, we propose a novel approach called Harmonious Teacher (HT) to improve the self-training framework for the CDOD task. On the one hand, to generate high-quality pseudo labels, we first propose to regularize the consistency of the classification prediction and the localization score when training the detection model. For this purpose, a supervised harmonious loss and an unsupervised harmonious loss are designed for the labeled source domain and the unlabeled target domain, respectively. On the other hand, to alleviate the damage of low-quality predictions while fully exploiting hard examples, we design a harmony measure to estimate the quality of pseudo-labeled samples based on the consistency of the classification prediction and the localization score. Then, we take all predicted instances into consideration and use the harmony measure to reweigh these instances for self-training, thus avoiding simply abandoning those valuable hard examples.

The contributions of this paper are listed as follows:

• We improve the self-training framework for CDOD and reveal that existing methods neglect the inconsistency between classification and localization, which hinders the performance of self-training.
• We propose a simple yet effective approach named Harmonious Teacher (HT). We first propose harmonious model learning to regularize the consistency of the classification and localization predictions for both source and target domains. Then, we design a harmony measure to estimate the quality of predictions and leverage it to reweigh all the predictions in self-training without abandoning valuable hard examples.
• We have conducted extensive experiments on four widely used CDOD benchmarks. The experimental results show that our method clearly outperforms the state-of-the-art baselines by a large margin. For example, our method reaches 50.4% mAP on Cityscapes → FoggyCityscapes, which exceeds the state-of-the-art method OADA [43] by 5% mAP.
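To make the reweighing idea concrete, the following is a hedged sketch of how a harmony measure and the resulting soft reweighing of pseudo-box losses could look. The specific formula (an agreement term times a geometric-mean confidence) is only an illustrative stand-in; the exact harmony measure and loss weighting used by HT may differ, and all names below are ours.

```python
import torch

def harmony_measure(cls_score: torch.Tensor, iou_score: torch.Tensor) -> torch.Tensor:
    """Toy harmony measure: high only when the classification confidence and the
    predicted localization (IoU) score are both high and close to each other."""
    agreement = 1.0 - (cls_score - iou_score).abs()   # consistency term in [0, 1]
    strength = torch.sqrt(cls_score * iou_score)      # joint confidence (geometric mean)
    return agreement * strength

def reweigh_pseudo_loss(per_box_loss, cls_score, iou_score):
    """Reweigh all pseudo-boxes by their harmony instead of hard thresholding,
    so that hard but consistent examples still contribute to training."""
    w = harmony_measure(cls_score, iou_score)
    return (w * per_box_loss).sum() / w.sum().clamp_min(1e-6)
```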
Humayun_SplineCam_Exact_Visualization_and_Characterization_of_Deep_Network_Geometry_and_CVPR_2023
Abstract

Current Deep Network (DN) visualization and interpretability methods rely heavily on data-space visualizations, such as scoring which dimensions of the data are responsible for their associated prediction, or generating new data features or samples that best match a given DN unit or representation. In this paper, we go one step further by developing the first provably exact method for computing the geometry of a DN's mapping – including its decision boundary – over a specified region of the data space. By leveraging the theory of Continuous Piece-Wise Linear (CPWL) spline DNs, SplineCam exactly computes a DN's geometry without resorting to approximations such as sampling or architecture simplification. SplineCam applies to any DN architecture based on CPWL activation nonlinearities, including (leaky) ReLU, absolute value, maxout, and max-pooling, and can also be applied to regression DNs such as implicit neural representations. Beyond decision boundary visualization and characterization, SplineCam enables one to compare architectures, measure generalizability, and sample from the decision boundary on or off the data manifold. Project website: bit.ly/splinecam.
1. Introduction

Deep learning, and in particular Deep Networks (DNs), have redefined the landscape of machine learning and pattern recognition [22]. Although current DNs employ a variety of techniques that improve their performance, their core operation remains unchanged, primarily consisting of sequentially mapping an input vector $x$ to a sequence of $L$ feature maps $z_\ell$, $\ell = 1, \dots, L$, by successively applying simple nonlinear transformations, as in

$$z_\ell = \sigma\big(W_\ell\, z_{\ell-1} + b_\ell\big), \quad \ell = 1, \dots, L, \quad (1)$$

starting with $z_0 = x$. Here $W_\ell$ and $b_\ell$ denote the weight matrix and the bias vector for layer $\ell$, and $\sigma$ is an activation operator that applies an element-wise nonlinear activation function. One popular choice for $\sigma$ is the Rectified Linear Unit (ReLU) [12], which takes the elementwise maximum between its entry and 0. The parametrization of $W_\ell, b_\ell$ controls the type of layer, e.g., a circulant matrix for a convolutional layer.

Figure 1. Exact visualization of the decision boundary and partition geometry of a 3D neural signed distance field (SDF). (Top left) Surface normals obtained from the learned signed distance field, with annotations indicating slices used for visualization. For each of the slices, we can see the spline partition geometry of the learned SDF: each contiguous line represents a neuron, on either side of which it gets activated/deactivated. Neurons from different depths of the network create a partitioning of the input space into 'linear regions'. Here the colored lines represent the decision boundary learned by the SDF. Note that while the final neuron obtains the decision boundary, many neurons place their boundaries close to the ground-truth surface to obtain the final SDF representation.

Interpreting the geometry of a DN is a nontrivial task, since many different sets of parameters can lead to the same input–output mapping. One example is obtained by permuting the rows of $W_\ell, b_\ell$ and the columns of $W_{\ell+1}$ for any two consecutive layers in a DN. Another example is to rescale $W_\ell, b_\ell$ by some constant $\kappa$ and to rescale $W_{\ell+1}$ by $1/\kappa$ for a ReLU-DN [33]; the list of such parameter manipulations preserving the underlying DN's function is an active area of research [32]. Since one cannot trivially use the DN's parameters to describe its mapping, practitioners have relied on different solutions to interpret what has been learned by a model by looking at the activations instead of the weights of the network [19, 41]. Activation-based interpretability methods, however, can be susceptible to feature adversarial attacks, i.e., adversarial attacks that do not cross the decision boundary but change the activations [11]. Some alternative empirical methods for model interpretation therefore rely on sampling the decision boundary, or on finding the point on a model's decision boundary closest to a sample $x$ [36]. Beyond interpretability, such methods find practical use in active learning [23] and adversarial robustness [15]. In this setting, gradient updates are performed from an initial guess for $x$ based on an objective function that reaches its minimum whenever its argument lies on the model's decision boundary. While alternative and more efficient solutions have been developed, most of the progress in this direction has focused on providing more optimized losses and sampling strategies [15, 36].
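As a concrete illustration of this sampling-style strategy, the sketch below pushes an input toward the decision boundary between its top-two classes by minimizing the squared logit gap with gradient descent. The function name and hyperparameters are our own assumptions, and real implementations use more refined objectives and schedules [15, 36].

```python
import torch

def project_to_boundary(model, x, steps: int = 200, lr: float = 1e-2):
    """Move x toward the decision boundary between its top-2 classes by
    minimizing the squared logit gap (an approximate, sampling-style method)."""
    x = x.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        logits = model(x)
        top2 = logits.topk(2, dim=-1).values
        gap = top2[..., 0] - top2[..., 1]   # zero exactly on the decision boundary
        loss = (gap ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x.detach()
```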
In short, there doesn't exist an exact (up to machine precision) method to compute the decision boundary of a DN. In this paper, we focus on DNs employing Continuous Piece-Wise Linear (CPWL) activation functions $\sigma$, such as (leaky-)ReLU, absolute value, and max-pooling. In this setting, the entire DN itself becomes a CPWL operator, i.e., its mapping is affine within regions of a partition of its domain. There have been previous studies dedicated to estimating the partition of such CPWL DNs and bridging empirical findings with interpretability. For example, Raghu et al. [34] show that the partition density provides measures of DN expressivity, Hanin et al. [13] connect the DN partition density with the complexity of the learned function, Jordan et al. [20] approximate the DN partition to provide robustness certificates, Zhang et al. [43] interpret the impact of dropout with respect to DN partitions, Balestriero et al. [3] propose to improve batch-normalization to further adapt DN partitions to the data geometry, Humayun et al. [17, 18] propose methods to control pre-trained generative network output distributions by approximating the DN partition, and Chen et al. [25] propose a neural architecture search method based on partition statistics. Despite being successful, all these methods rely on approximations of the DN partition.

We propose SplineCam, a sampling-free method to compute the exact partition of a DN. Our method computes the partition on two-dimensional domains of the input space, easily scales with the width and depth of DNs, can handle convolutional layers and skip connections, and can be scaled to discover numerous regions, as opposed to previously existing methods. Our method also allows local characterization of the input space based on partition statistics, and enables one to tractably and efficiently sample arbitrarily many points that provably lie on a DN's decision boundary, opening new avenues for visualization and interpretability. We summarize our contributions as follows:

• Development of a scalable enumeration method that, given a bounded 2D domain of a DN's input space, computes the DN's input-space partition (aka linear regions) and decision boundary.
• Development of SplineCam, which leverages our new enumeration method to directly visualize a DN's input-space partition, compute partition statistics, and sample from the decision boundary.
• Quantitative analysis that demonstrates the ability of SplineCam to characterize the DN and compare between architectural choices and training regimes.

The SplineCam library and the code required to reproduce our results are provided in our GitHub repository¹. In Appendix E, we also demonstrate the usage of SplineCam with example code blocks.
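The authors' own example code lives in their repository and Appendix E; as an independent, simplified illustration of what such a partition looks like, the sketch below approximates linear regions on a 2D input slice by hashing ReLU activation patterns over a sampled grid. Unlike SplineCam, this is a sampling-based approximation, not an exact computation; all names and network sizes below are our own.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def activation_pattern_ids(mlp: nn.Sequential, points: torch.Tensor) -> torch.Tensor:
    """Assign every 2D point an integer id of its ReLU activation pattern.
    Points sharing an id lie in the same linear region (up to grid resolution)."""
    h, patterns = points, []
    for layer in mlp:
        h = layer(h)
        if isinstance(layer, nn.ReLU):
            patterns.append((h > 0).to(torch.int8))
    codes = torch.cat(patterns, dim=1)
    # map each distinct binary code to one integer region id
    _, ids = torch.unique(codes, dim=0, return_inverse=True)
    return ids

# Example: a grid over a 2D slice of the input space of a small ReLU MLP.
mlp = nn.Sequential(nn.Linear(2, 32), nn.ReLU(),
                    nn.Linear(32, 32), nn.ReLU(),
                    nn.Linear(32, 1))
xs = torch.linspace(-2, 2, 200)
grid = torch.stack(torch.meshgrid(xs, xs, indexing="ij"), dim=-1).reshape(-1, 2)
region_ids = activation_pattern_ids(mlp, grid)   # reshape to 200x200 and plot to visualize regions
```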
Chai_Recognizability_Embedding_Enhancement_for_Very_Low-Resolution_Face_Recognition_and_Quality_CVPR_2023
Abstract

Very low-resolution face recognition (VLRFR) poses unique challenges, such as tiny regions of interest and poor resolution due to the extreme standoff distance or wide viewing angle of the acquisition devices. In this paper, we study principled approaches to elevate the recognizability of a face in the embedding space instead of the visual quality. We first formulate a robust learning-based face recognizability measure, namely the recognizability index (RI), based on two criteria: (i) the proximity of each face embedding to the unrecognizable faces cluster center, and (ii) the closeness of each face embedding to its positive and negative class prototypes. We then devise an index diversion loss to push hard-to-recognize face embeddings with low RI away from the unrecognizable faces cluster in order to boost the RI, which reflects better recognizability. Additionally, a perceptibility attention mechanism is introduced to attend to the most recognizable face regions, which offers better explanatory and discriminative traits for embedding learning. Our proposed model is trained end-to-end and simultaneously serves recognizability-aware embedding learning and face quality estimation. To address VLRFR, our extensive evaluations on three challenging low-resolution datasets and on face quality assessment demonstrate the superiority of the proposed model over the state-of-the-art methods.
1. Introduction

In real-world face recognition deployment scenarios, the pixel resolution of the detected face images is significantly deflated, due to the extreme long-range distance and broad viewing angle of the acquisition devices, especially in surveillance applications. These tiny regions of interest generally range from 16×16 to 32×32 pixels [60], thereby suffering from poor pixel resolution, in addition to unrestricted noise such as poor illumination conditions, non-frontal poses with awful angles, unconstrained facial expressions, blurriness, and occlusions [45]. It is noteworthy that these contaminated very low-resolution (VLR) face images undermine the overall performance of a face model trained with their high-resolution (HR) counterparts; therefore, there is a lack of generalizability to resolve the VLR face recognition (VLRFR) problem [6]. Apart from that, training a VLRFR model often suffers from very limited representative face examples from which to extract meaningful identity-specific patterns. These issues are further escalated by ambiguous inter-class variations, in particular for heavily distorted face instances with perceptually similar identities [40]. Whilst matching a probe to a gallery set of the same resolution (i.e., VLR to VLR) is still an open challenge, the resolution gap between galleries and probes triggers another problem in cross-resolution matching (typically HR galleries to VLR probes). Hence, the generalization performance of the prevalent deep learning models for VLRFR is still far from satisfactory.

As a whole, most existing works designated for VLRFR improve the face quality of the VLR instances based on an auxiliary set of HR face images [28]. The generic operation modes are either in the image domain (super-resolution, image synthesis) [52, 54, 58], the embedding domain (resolution-invariant features, coupled mappings) [33, 44], or at the classifier level (transfer learning, knowledge distillation) [13, 14, 20, 39]. However, most of these state-of-the-art models require mated HR-VLR pairs of the same subject. This is unrealistic in practice, as the HR-VLR pairs are often unavailable.

As far as face recognition is concerned, face recognizability (also known as face quality [17, 18]) can be deemed a utility measure of how well a face image serves discrimination purposes. In other words, face quality is closely related to face recognition performance. Some works thus focus on predicting a face image's suitability for face recognition, commonly known as Face Image Quality Assessment (FIQA) [1, 18]. FIQA focuses either on (i) creating propositions to label the training data with face image quality scores and solving a regression problem [17, 18, 36], or (ii) linking the face embedding properties to FIQ scores [4, 25, 35, 38, 45]. The second approach shows better quality estimation, a possible reason being that the first approach is prone to mislabeling of the ground-truth quality [35, 45]. However, the second approach may not be optimal, since the FIQ scores are estimated based on the embedding properties rather than through a learning process [2].

Recently, [9] reported an intriguing observation that a deep learning-based face model induces an unrecognizable cluster in the embedding space.
This cluster, known as unrecognizable identities (UIs), is formed by unrecognizable face examples arising from diverse inferior quality factors, including VLR, motion blur, poor illumination, occlusion, etc. Hence, these face examples with varying ground truths tend to lie close to the UIs cluster rather than their respective identity clusters. This observation inspires us to analyze the embedding distribution of VLR face images against the UIs center. Interestingly, the extreme bimodal distribution in Fig. 1 discloses that a significant number of the VLR faces in TinyFace [6], i.e., a realistic VLR face dataset, are hard to recognize from the human perspective and are therefore rendered next to the UIs cluster. We reckon that mining representative patterns from these hard-to-recognize faces is more meaningful for face recognition than defining them as elements of the UIs. Apart from that, a more reliable objective quality metric is needed to better interpret each VLR face example in terms of its embedding recognizability for recognizability-aware embedding learning.

Instead of perceptual quality, this work aims to elevate the recognizability of every VLR face embedding. In a nutshell, we formulate a learning-based recognizability index (RI) with respect to the cosine proximity of each embedding instance to (i) the UIs cluster and (ii) the associated positive and negative prototypes. In the meantime, an index diversion (ID) loss is presented to detach the hard-to-recognize embeddings from the UIs cluster, alongside a perceptibility attention mechanism. We underline that embedding learning in the direction opposing the UIs contributes to higher explanatory power whilst promoting inter-class separation, particularly for hard-to-recognize instances. For clarity, we summarize our contributions as follows:

• A novel approach is proposed to address VLRFR, including VLR-VLR and HR-VLR matching, by leveraging the face recognizability notion in the embedding space to improve the hard-to-recognize instances.
• A robust learning-based face recognizability measure, dubbed RI, is put forward. RI relies on the face embeddings' intrinsic proximity relationship against the UIs cluster and the positive and negative class prototypes.
• An index diversion (ID) loss is devised to enhance the RI of face embeddings. Further, we put forward a perceptibility attention mechanism to guide embedding learning toward the most salient face regions.
• Our proposed model, trained in an end-to-end manner, not only renders a more discriminative embedding space for VLRFR but simultaneously serves recognizability-aware embedding learning and face recognizability estimation.
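For intuition, the following is a hand-crafted proxy that mimics the two criteria behind RI (distance from the UIs center, and relative closeness to the positive versus negative prototypes). The actual RI is learned end-to-end together with the index diversion loss, so this sketch is only an illustrative stand-in; all function and variable names are ours.

```python
import torch
import torch.nn.functional as F

def recognizability_proxy(emb, ui_center, proto_pos, protos_neg):
    """Illustrative RI-style proxy: embeddings far from the unrecognizable-identities
    (UIs) center and close to their positive class prototype, relative to the hardest
    negative prototype, receive a high score.
    emb: (B, D); ui_center, proto_pos: (D,); protos_neg: (C, D)."""
    emb = F.normalize(emb, dim=-1)
    d_ui = 1.0 - F.cosine_similarity(emb, ui_center.expand_as(emb), dim=-1)   # distance to UIs
    s_pos = F.cosine_similarity(emb, proto_pos.expand_as(emb), dim=-1)        # positive proximity
    s_neg = (F.normalize(protos_neg, dim=-1) @ emb.t()).max(dim=0).values     # hardest negative
    margin = (s_pos - s_neg).clamp(min=0.0)
    return d_ui * margin
```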
Deng_SE-ORNet_Self-Ensembling_Orientation-Aware_Network_for_Unsupervised_Point_Cloud_Shape_Correspondence_CVPR_2023
Abstract

Unsupervised point cloud shape correspondence aims to obtain dense point-to-point correspondences between point clouds without manually annotated pairs. However, humans and some animals have bilateral symmetry and various orientations, which lead to severe mispredictions of symmetrical parts. Besides, point cloud noise disrupts consistent representations of point clouds and thus degrades the shape correspondence accuracy. To address the above issues, we propose a Self-Ensembling ORientation-aware Network termed SE-ORNet. The key of our approach is to exploit an orientation estimation module with a domain-adaptive discriminator to align the orientations of point cloud pairs, which significantly alleviates the mispredictions of symmetrical parts. Additionally, we design a self-ensembling framework for unsupervised point cloud shape correspondence. In this framework, the disturbances of point cloud noise are overcome by perturbing the inputs of the student and teacher networks with different data augmentations and constraining the consistency of predictions. Extensive experiments on both human and animal datasets show that our SE-ORNet can surpass state-of-the-art unsupervised point cloud shape correspondence methods.
1. Introduction

With the cost of LiDAR and depth cameras falling, it has become more accessible to obtain 3D point cloud data. For real-world applications, such as articulated motion transfer [5, 26] and non-rigid human body alignment [3], the correspondence between two point clouds is indispensable. However, it is hard to directly obtain the correspondence between two raw point clouds due to various object orientations and ununified coordinate systems.

*Equal Contribution. †Corresponding Author.

Figure 1. Visualization of dense point matching results. Three point cloud pairs have different relative rotation angles. GT denotes ground truth. The correspondence is visualized by transferring colors from source to target according to the matching results. The baseline predicts many false matches, especially for symmetrical, similar parts of the object. Our method achieves accurate matches for these parts with our orientation estimation module.

To accurately find the point-to-point correspondence between two point clouds, spectral-based methods [1, 9, 15, 19, 28] have been proven to be practical shape correspondence methods by computing a functional mapping between the projected features and learning a transformation for the correspondence. Nevertheless, the spectral-based methods suffer from complicated pre-processing steps and the necessity for connectivity information between points. With the rapid development of deep learning, many fully supervised point cloud shape correspondence methods [4, 8, 16] have been proposed, leading to remarkable progress. However, these methods rely on a large amount of carefully annotated point cloud pairs, which are expensive and time-consuming to collect. To relieve the annotation cost of fully supervised methods, unsupervised methods [12, 32] that utilize unlabeled data for model training have attracted more and more attention. CorrNet3D [32] proposes the first unsupervised deep learning framework for building dense correspondence between point clouds in an end-to-end manner. DPC [12] models the local point cloud structure by exploring the proximity of points using DGCNN [29] and designs reconstruction losses to extract continuous point cloud
Therefore, it is necessary to overcome noise dis-turbances. 2) How to solve the mismatching issue of sym-metrical parts in point clouds with different body orienta-tions? As shown in Figure 1, for the pair of bilaterally sym-metrical human point clouds facing the opposite directions, existing methods predict the completely reverse and seri-ously wrong point cloud correspondence due to the similar structure and position. The specific relative rotation angles lead to severe mispredictions of symmetrical parts. To achieve the above goals, we propose a Self-Ensembling Orientation-aware Network (SE-ORNet) for unsupervised point cloud shape correspondence. We inte-grate orientation modeling and consistent point cloud rep-resentations under a unified self-ensembling framework, which consists of a pair of teacher and student models, an orientation estimation module, and an adaptive domain dis-criminator. Firstly, we design a new augmentation scheme to produce augmented samples with rotation and Gaussian noise, and record the rotation angles as rotation angle labels. Then, we formulate soft labels and consistency losses to encourage consensus among ensemble predictions of aug-mented and raw samples, aiming to perceive the difference in body orientation and overcome the point cloud noise dis-turbance to obtain consistent point cloud representations. In addition, we design a plug-and-play lightweight Orien-tation Estimation Module, which aligns the orientations of two point clouds to solve the mismatching issue of symmet-rical parts in point clouds. Without the real label of rela-tive rotation angle between the source and target, we super-vise the training with the rotation angle labels and calculate angle losses. However, there is a noticeable domain gap between the rotation-augmented samples and the real sam-ples. Therefore, we design a discriminator to achieve do-main adaptation and calculate the domain losses. Further-more, the discriminator facilitates the Orientation Estima-tion Module to mine the valuable knowledge in the rotation-augmented samples to compensate for the information loss of the real relative rotation angles. In summary, the main contributions of this work are as follows: (i) We design a plug-and-play lightweight Orien-tation Estimation Module that accurately aligns the orienta-tions of point cloud pairs to achieve correct matching re-sults of symmetrical parts. (ii) We integrate point cloud orientation modeling and consistent point cloud representa-tion learning with the disturbance of point cloud noise into a unified self-ensembling framework. (iii) Our method at-tains state-of-the-art performance on both human and ani-mal benchmarks, and extensive experimental results verify the superiority of our designs.
Cheng_Out-of-Candidate_Rectification_for_Weakly_Supervised_Semantic_Segmentation_CVPR_2023
Abstract

Weakly supervised semantic segmentation is typically inspired by class activation maps, which serve as pseudo masks with class-discriminative regions highlighted. Although tremendous efforts have been made to recall precise and complete locations for each class, existing methods still commonly suffer from unsolicited Out-of-Candidate (OC) error predictions that do not belong to the label candidates, which could be avoidable since the contradiction with the image-level class tags is easy to detect. In this paper, we develop a group ranking-based Out-of-Candidate Rectification (OCR) mechanism in a plug-and-play fashion. Firstly, we adaptively split the semantic categories into In-Candidate (IC) and OC groups for each OC pixel according to their prior annotation correlation and posterior prediction correlation. Then, we derive a differentiable rectification loss to force OC pixels to shift to the IC group. Incorporating OCR with seminal baselines (e.g., AffinityNet, SEAM, MCTformer), we achieve remarkable performance gains on both the Pascal VOC (+3.2%, +3.3%, +0.8% mIoU) and MS COCO (+1.0%, +1.3%, +0.5% mIoU) datasets with negligible extra training overhead, which justifies the effectiveness and generality of OCR.†
1. Introduction

Due to the development of deep learning, significant progress has been made in deep learning-based semantic segmentation [42, 47]. However, its effectiveness requires huge amounts of data with precise pixel-level labels. Collecting precise pixel-level labels is very time-consuming and labor-intensive, thus much research shifts attention to training effective semantic segmentation models with relatively low manual annotation cost, i.e., Weakly Supervised Semantic Segmentation (WSSS).

* Equal contribution. Corresponding Author. † github.com/sennnnn/Out-of-Candidate-Rectification

Figure 1. Motivation of our OCR. We visualize the segmentation results from a baseline method (e.g., SEAM) and the baseline with our proposed OCR. The predictions from baseline methods are easily disturbed by OC pixels, that is, pixels whose semantic categories are in contradiction with the label candidate set (inside the yellow contour). Our proposed OCR can rectify these OC pixels and suppress this unreasonable phenomenon.

There exist various types of weak supervision for semantic segmentation, such as image-level tag labels [1, 2, 26, 61, 77], bounding boxes [14, 30, 34], scribbles [40, 57], and points [5]. In this work, we focus on WSSS based on image-level tag labels, since image-level tags demand the least annotation cost: they only require information on the existence of the target object categories.

Most of the previous WSSS methods follow a standard workflow [2]: 1) generating high-quality Class Activation Maps (CAMs) [61, 70]; 2) generating pseudo labels from CAMs [2, 78]; 3) training segmentation networks from pseudo labels.
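Step 1) of this workflow is classically implemented by weighting the final convolutional feature maps with the classifier weights of a target class, following the original CAM formulation of Zhou et al.; a minimal sketch is shown below, with variable names of our own choosing (the cited WSSS methods build more elaborate variants on top of this idea).

```python
import torch
import torch.nn.functional as F

def class_activation_map(features: torch.Tensor, fc_weight: torch.Tensor, class_idx: int):
    """Classic CAM: weight the last conv feature maps (C x H x W) by the
    fully-connected classifier weights of one class and sum over channels."""
    w = fc_weight[class_idx]                       # (C,)
    cam = torch.einsum("chw,c->hw", features, w)   # (H, W)
    cam = F.relu(cam)
    cam = cam / cam.max().clamp_min(1e-8)          # normalize to [0, 1]
    return cam
```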
These errors can be easily de-tected by checking if the semantic category of pixel is in contradiction with image-level tag labels, which is seldom considered before. For better identifying this phenomenon, we extra name these error pixels as OC pixels and name the illegal categories as OC categories. In contrast, the po-tentially correct categories for OC pixels are defined as In-Candidate (IC) categories. To suppress the occurrence of OC phenomenon, we propose group ranking-based Out-of-Candidate Rectification (OCR) to rectify OC pixels from OC cate-gories to IC categories by solving a group ranking problem (i.e., the prediction score of IC group needs to be larger than the prediction score of OC group). In Fig. 2, OCR is illustrated as three procedures: OC pixels selection, IC/OC categories group split and rectification. Firstly, we find out OC pixels whose classification result is in contradiction with image-level tag labels. Secondly, we adaptively split the classes into IC classes group and OC classes group for each OC pixel by considering prior label correlation information from the image-level tag labels and posterior label correlation information from the network prediction. Finally, rectification loss is used to modulate the distance between OC pixels and class centers of IC group and OC group. It constraints that the OC pixels andOC class centers are pushed away and the OC pixels and IC class centers are pulled closer so that those OC pixels are rectified to correct classes. Out-of-Candidate Rectification (OCR) is designed in a plug-and-play style to provide reasonable supervision sig-nals with trivial training costs and to improve evaluation results with no extra cost for inference. To fairly show the effectiveness and generality, we adopt the same settings of several previous methods (A ffinityNet [2], SEAM [61], MCTformer [70]) and evaluate our proposed OCR on the PASCAL VOC 2012 and MS COCO 2014 datasets. Experi-ments demonstrate that our OCR improves the performance of final segmentation results. Specifically, our module im-proves A ffinityNet, SEAM and MCTformer by 3.2%, 3.3% and 0.8% mIoU on PASCAL VOC 2012 dataset and 1.0%, 1.3% and 0.5% mIoU on MS COCO 2014 dataset.
Jeong_Enhancing_Multiple_Reliability_Measures_via_Nuisance-Extended_Information_Bottleneck_CVPR_2023
Abstract In practical scenarios where training data is limited, many predictive signals in the data can be rather from some biases in data acquisition (i.e., less generalizable), so that one cannot prevent a model from co-adapting on such (so-called) “shortcut” signals: this makes the model fragile in various distribution shifts. To bypass such failure modes, we consider an adversarial threat model under a mutual information constraint to cover a wider class of perturba-tions in training. This motivates us to extend the standard information bottleneck to additionally model the nuisance information . We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training concerning both convolutional-and Transformer-based architectures. Our experimental results show that the proposed scheme improves robustness of learned repre-sentations (remarkably without using any domain-specific knowledge), with respect to multiple challenging reliability measures. For example, our model could advance the state-of-the-art on a recent challenging OBJECTS benchmark in novelty detection by 78.4%→87.2%in AUROC, while simultaneously enjoying improved corruption, background and (certified) adversarial robustness. Code is available at https://github.com/jh-jeong/nuisance_ib .
1. Introduction Despite the recent breakthroughs in computer vision in aid of deep learning, e.g., in image/video recogni-tion [ 9,20,109], synthesis [ 42,55,96,125], and 3D scene rendering [ 85,86,104], deploying deep learning models to the real-world still places a burden on contents providers as it affects the reliability of their services. In many cases, deep neural networks make substantially fragile predictions for out-of-distribution inputs, i.e., samples that are not likely *Work done at KAIST. Figure 1. A high-level illustration of our method, nuisance-extended information bottleneck (NIB). In this paper, we focus on scenarios when the input xcan be corrupted x→ˆxin test-time while preserving its semantics. Unlike the standard cross-entropy training (CE), NIB aims to encode every target-correlated signal in x, some of which can be more reliable under distribution shifts. from the training distribution, even when the inputs are se-mantically close enough to in-distribution samples for hu-mans [ 35,102]. Such a vulnerability can be a significant threat in risk-sensitive systems, such as autonomous driving, medical imaging, and health-care applications, to name a few [ 2]. Overall, the phenomena highlight that deep neural networks tend to extract “shortcut” signals [ 26] from given limited (or potentially biased) training data in practice. To address such concerns, multiple literatures have been independently developed based on different aspects of model reliability. Namely, their methods use different threat mod-elsand benchmarks, depending on how a shift in input dis-tribution happens in test-time, and how to evaluate model performance against the shift. For example, in the con-text of adversarial robustness [11,17,82,128], a typical threat model is to consider the worst-case noise inside a fixedℓp-ball around given test samples. Another example of corruption robustness [34,35,39,116] instead assumes pre-defined types of common corruptions ( e.g., Gaussian noise, fog, etc.) that applies to the test samples. Lastly, novelty detection [36,72,74,76] usually tests whether a model can detect a specific benchmark dataset as out-of-distribution from the (in-distribution) test samples. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 16206 Due to discrepancy between each of “ideal” objectives and its practical threat models, however, the literatures have commonly found that optimizing under a certain threat model often hardly generalizes to other threat ones: e.g., (a) several works [ 16,62,121] have observed that standard adversarial training [ 82] often harms other reliability mea-sures such as corruption robustness or uncertainty estimation, as well as its classification performance; (b) Hendrycks et al. [34] criticize that none of the previous claims on cor-ruption robustness could consistently generalize on a more comprehensive benchmark. This also happens even for threat models targeting the same objective: e.g., (c) Yang et al. [ 122] show that state-of-the-arts in novelty detection are often too sensitive, so that they tend to also detect “near-in-distribution” samples as out-of-distribution and perform poorly on a benchmark regarding this. 
Overall, these obser-vations suggest that one should avoid optimizing reliability measures assuming a specific threat model or benchmark, and motivate to find a new threat model that is generally applicable for diverse scenarios of reliability concerns. Contribution. In this paper, we propose nuisance-extended information bottleneck (NIB), a new training objective tar-geting model reliability without assuming a prior on domain-specific tasks. Our method is motivated by rethinking the information bottleneck (IB) principle [ 107,108] under pres-ence of distribution shifts. Specifically, we argue that a “robust” representation z:=f(x)should always encode ev-erysignal in the input xthat is correlated with the target y, rather than extracting only a few shortcuts ( e.g., Figure 1). This motivates us to consider an adversarial form of threat model on distribution shifts in x, under a constraint on the mutual information I(x,y). To implement this idea, we propose a practical design by incorporating a nuisance rep-resentation znalongside zof the standard IB so that (z,zn) can reconstruct x. This results in a novel synthesis of adver-sarial autoencoder [83] and variational IB [1] into a single framework. For the architectural side, we propose (a) to utilize the internal feature statistics for convolutional net-work based encoders, and (b) to incorporate vector-quantized patch representations for Transformer-based [ 24] encoders to model zn, mainly to efficiently encode the nuisance zn (as well as z) in a scalable manner. We perform an extensive evaluation on the representations learned by our scheme, showing comprehensive improve-ments in modern reliability metrics: including (a) novelty detection, (b) corruption (or natural) robustness, (c) back-ground robustness and (d) certified adversarial robustness. The results are particularly remarkable as the gains are not from assuming a prior on each of specific threat models. For example, we obtain a significant reduction in CIFAR-10-C error rates of the highest severity, i.e., by26.5%→19.5%, without extra domain-specific prior as assumed in recent methods [ 39,40]. Here, we also show that the effective-ness of our method is scalable to larger-scale (ImageNet) datasets. For novelty detection, we could advance AUROCs in recent OBJECTS [ 122] benchmarks by a large margin of 78.4%→87.2%in average upon the state-of-the-art, show-ing that our representations can provide a more semantic information to better discriminate out-of-distribution sam-ples. Finally, we also demonstrate how the representations can further offer enhanced robustness against adversarial examples, by applying randomized smoothing [ 17] on them.
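To make the hybrid discriminative-generative objective concrete, here is a minimal, hedged sketch of the nuisance-extended bottleneck idea described above: a single encoder produces a label-relevant code z and a nuisance code z_n, a classifier is trained on z alone, and a decoder must reconstruct the input from (z, z_n). The MLP sizes, the deterministic codes, and the single reconstruction weight beta are illustrative simplifications; the actual NIB additionally uses a variational IB term on z, adversarial autoencoder training, and the convolutional/Transformer-specific encoder designs mentioned above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NuisanceExtendedIB(nn.Module):
    """Toy nuisance-extended bottleneck: z carries label information, z_n
    carries the remaining (nuisance) information, and (z, z_n) together must
    reconstruct the input."""

    def __init__(self, in_dim=784, z_dim=32, zn_dim=32, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, z_dim + zn_dim))
        self.z_dim = z_dim
        self.classifier = nn.Linear(z_dim, n_classes)
        self.decoder = nn.Sequential(nn.Linear(z_dim + zn_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))

    def forward(self, x, y, beta=1.0):
        h = self.encoder(x)
        z, z_n = h[:, :self.z_dim], h[:, self.z_dim:]
        ce = F.cross_entropy(self.classifier(z), y)              # discriminative term
        recon = F.mse_loss(self.decoder(torch.cat([z, z_n], 1)), x)  # generative term
        return ce + beta * recon
```

The point of the reconstruction path is that information not needed for classification is not discarded but routed into z_n, so z is not pushed to rely on a single shortcut signal.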
Du_Rethinking_the_Approximation_Error_in_3D_Surface_Fitting_for_Point_CVPR_2023
Abstract

Most existing approaches for point cloud normal estimation aim to locally fit a geometric surface and calculate the normal from the fitted surface. Recently, learning-based methods have adopted a routine of predicting point-wise weights to solve the weighted least-squares surface fitting problem. Despite achieving remarkable progress, these methods overlook the approximation error of the fitting problem, resulting in a less accurate fitted surface. In this paper, we first carry out an in-depth analysis of the approximation error in the surface fitting problem. Then, in order to bridge the gap between estimated and precise surface normals, we present two basic design principles: 1) applying the Z-direction Transform to rotate local patches for a better surface fit with a lower approximation error; 2) modeling the error of the normal estimation as a learnable term. We implement these two principles using deep neural networks, and integrate them with state-of-the-art (SOTA) normal estimation methods in a plug-and-play manner. Extensive experiments verify that our approaches bring benefits to point cloud normal estimation and push the frontier of state-of-the-art performance on both synthetic and real-world datasets. The code is available at https://github.com/hikvision-research/3DVision.
1. Introduction

Surface normal estimation on point clouds can offer additional local geometric information for numerous applications, such as denoising [11, 23, 24], segmentation [28–30], registration [9, 17, 27, 33], and surface reconstruction [10, 15, 18, 25, 41]. However, raw-scanned point clouds tend to be incomplete, noisy, and non-uniform, which poses a challenge in accurately estimating surface normals amidst noise, density variations, and missing structures.

Figure 1. The error heatmap of point cloud normal estimation. The first row is produced by three SOTA surface fitting methods, while the second row shows the results of integrating our method with them. The bottom values indicate the corresponding normal angle root mean square error (RMSE). Both quantitative and qualitative results demonstrate that our proposed method provides more precise normal estimation.

Normal estimation on point clouds is a long-standing research topic. The majority of traditional methods [6, 8, 12, 15, 20] aim to fit a local geometric surface (e.g., plane, jet, or spherical) around a specific point and infer the normal from the fitted surface. However, these methods require careful tuning of parameters, such as point neighborhood sizes, and are sensitive to noise and outliers. With the power of deep neural networks, many learning-based approaches [4, 5, 13, 14, 31, 37, 40] have been proposed to regress surface normal vectors directly, achieving promising performance improvements over traditional methods. However, these approaches exhibit limited generalization capability when applied to real-world point clouds.

More recently, several approaches [3, 21, 42] have generalized the truncated Taylor expansion (n-jet) surface model [8] to the learning-based regime, formulating normal estimation as a weighted least-squares problem with learnable weights. In these methods, the point-wise weights of a local surface patch are predicted by a deep neural network, which can control the importance of neighboring points to the fitted surface and alleviate the sensitivity to outliers and noise. The solution of the weighted least-squares fitting problem can then be expressed in closed form, which enables estimating the geometric surface and inferring the surface normal. These methods heavily constrain the solution space and obtain better results for surface normal estimation. Nevertheless, none of them theoretically analyzes the approximation error in surface fitting, leading to suboptimal normal estimation performance. In some sense, a smaller approximation error represents a more precise estimation. Therefore, we aim to study how to reduce the approximation error and fit a more accurate surface for normal estimation.

In this paper, we analyze the approximation error in the n-jet surface model and identify the gap between estimated and accurate normals in previous methods. Specifically, the truncated Taylor expansion polynomial is expected to be equivalent to the height function of the surface, and the accuracy of the remainder term in the Taylor expansion has an impact on the precision of normal estimation.
As pointed out in [8], a feasible way to improve the accuracy is to set up a coordinate system whose z direction is aligned with (has the minimum angle to) the estimated normal. However, we find that previous methods cannot accomplish this objective well, leading to a large estimation error in most cases. Besides, due to the presence of the remainder term and the imperfect data (which inevitably contain outliers and noise), it is impossible to achieve an accurate surface fit without any approximation error. To solve these problems, we propose two basic design principles. First, we apply the z-direction transformation to rotate local patches for a better surface fit with a lower approximation error. Second, the error of normal estimation is modeled as a term that can be learned in a data-driven manner. The proposed principles improve the accuracy of the surface fitting, thereby leading to a more precise estimation of surface normals.

To model the above two principles, we implement them with deep neural networks and introduce two simple yet effective methods: Z-direction Transformation and Normal Error Estimation. More specifically, the z-direction transformation is fulfilled by adding a constraint on the angle between the rotated normal and the z axis, which aims to align the rotated normal with the z direction. To achieve this learning objective, we also design a graph-convolution based alignment transformation network that fully exploits local neighborhood information for learning a better point transformation. The rough estimated normal can then be inferred by any existing polynomial surface fitting method, such as DeepFit [3] and GraphFit [21]. Finally, we design a normal error estimation module that learns a residual term based on the rough estimate and thus improves the precision of normal estimation.

We conduct comprehensive experiments to verify the effectiveness of our methods on point cloud normal estimation. The proposed two basic design principles are implemented with existing polynomial surface fitting methods. The experimental results demonstrate that our design principles benefit these methods with little extra overhead. As shown in Fig. 1, an obvious improvement can be achieved by our proposed methods for normal estimation.

The contributions of this paper are summarized as:
• We provide an in-depth analysis of the approximation error in n-jet surface fitting, and introduce two basic design principles to improve the precision of 3D surface fitting.
• We implement the design principles with neural networks and propose two approaches, i.e., z-direction transformation and normal error estimation, which can be flexibly integrated with current polynomial surface fitting methods for point cloud normal estimation.
• We conduct extensive experiments to show the improvements brought by the proposed methods. The experimental results demonstrate that our methods consistently bring benefits and push the frontier of SOTA performance.
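As a concrete illustration of the weighted least-squares jet fitting that these design principles build on, the sketch below fits a 1-jet (a plane) to a local patch that has already been rotated into a roughly z-aligned frame and reads off the normal. The per-point weights would come from a network in the learning-based methods; higher-order jets and the learned normal-error correction are omitted.

```python
import numpy as np

def weighted_jet_normal(patch, weights):
    """Estimate a surface normal by weighted least-squares fitting of a 1-jet
    height function z = b0 + b1*x + b2*y over a local patch.

    patch:   (N, 3) neighboring points in a local frame whose z-axis is roughly
             aligned with the surface normal (the role of the z-direction
             transformation discussed above).
    weights: (N,) non-negative per-point weights.
    Returns a unit normal in the local frame.
    """
    x, y, z = patch[:, 0], patch[:, 1], patch[:, 2]
    A = np.stack([np.ones_like(x), x, y], axis=1)       # (N, 3) design matrix
    w = np.sqrt(weights)[:, None]
    beta, *_ = np.linalg.lstsq(w * A, w[:, 0] * z, rcond=None)
    # For z = b0 + b1*x + b2*y, a (non-unit) normal is (-b1, -b2, 1).
    n = np.array([-beta[1], -beta[2], 1.0])
    return n / np.linalg.norm(n)
```

Higher-order jets simply add x^2, xy, y^2, ... columns to the design matrix; the normal-error estimation module described above would then predict a residual correction to the returned normal.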
Deitke_Objaverse_A_Universe_of_Annotated_3D_Objects_CVPR_2023
Abstract

Massive data corpora like WebText, Wikipedia, Conceptual Captions, WebImageText, and LAION have propelled recent dramatic progress in AI. Large neural models trained on such datasets produce impressive results and top many of today's benchmarks. A notable omission within this family of large-scale datasets is 3D data. Despite considerable interest and potential applications in 3D vision, datasets of high-fidelity 3D models continue to be mid-sized with limited diversity of object categories. Addressing this gap, we present Objaverse 1.0, a large dataset of objects with 800K+ (and growing) 3D models with descriptive captions, tags, and animations. Objaverse improves upon present-day 3D repositories in terms of scale, number of categories, and the visual diversity of instances within a category. We demonstrate the large potential of Objaverse via four diverse applications: training generative 3D models, improving tail category segmentation on the LVIS benchmark, training open-vocabulary object-navigation models for Embodied AI, and creating a new benchmark for robustness analysis of vision models. Objaverse can open new directions for research and enable new applications across the field of AI. (Correspondence to mattd@allenai.org.)
1. Introduction Massive datasets have enabled and driven rapid progress in AI. Language corpora on the web led to large language models like GPT-3 [4]; paired image and text datasets like Conceptual Captions [63] led to vision-and-language pre-trained models like VilBERT [42]; YouTube video datasets led to video capable models like Merlot-Reserve [82]; and massive multimodal datasets like WebImageText [65] and LAION [61, 62] led to models like CLIP [55] and StableD-iffusion [59]. These leaps in dataset scale and diversity were triggered by moving from manually curated datasets to har-nessing the power of the web and its creative content. In contrast to the datasets described above, the size of This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 13142 the datasets we are feeding to our data-hungry deep learn-ing models in many other areas of research is simply not comparable. For instance, the number of 3D assets used in training generative 3D models is, maximally, on the or-der of thousands [22] and the simulators used to train em-bodied AI models typically have only between a few dozen to a thousand unique scenes [36, 39, 58, 67]. The startling advances brought about by developing large-scale datasets for images, videos, and natural language, demand that an equivalent dataset be built for 3D assets. We present O BJAVERSE 1.0, a large scale corpus of high-quality, richly annotated, 3D objects; see Fig. 1. Objects in our dataset are free to use1and sourced from Sketch-fab, a leading online platform for managing, viewing, and distributing 3D models. In total, O BJAVERSE contains over 800K 3D assets designed by over 150K artists which makes this data large and diversely sourced. Assets not only belong to varied categories like animals, humans, and vehicles, but also include interiors and exteriors of large spaces that can be used, e.g., to train embodied agents. OBJAVERSE is a universe of rich 3D data with detailed metadata that can support many different annotations to en-able new applications. With this remarkable increase in scale, we see an incredible opportunity for O BJAVERSE to impact research progress across domains. In this work, we provide promising results to answer three questions. Can 3D vision benefit from a large-scale dataset? First, as a 3D asset resource, O BJAVERSE can support the exciting field of 3D generative modeling. We use data ex-tracted from O BJAVERSE to train generative models for sin-gle and multiple categories using GET3D [22] and find that we are able to generate high-quality objects. Moreover, we find that our generated objects are found by human anno-tators to be more diverse than those generated by a model trained on ShapeNet objects in 91% of cases. Can the diversity of 3D models help improve classi-cal 2D vision task performance? To answer this ques-tion, we use the diversity of O BJAVERSE to improve the performance of long tail instance segmentation models. In-stance segmentation data can be expensive to obtain ow-ing to the cost of annotating contours around objects. The recent LVIS dataset contains segmentation annotations for 1,230 categories but the task remains very challenging for present day models, particularly on tail categories that have few examples. 
We show that increasing the volume of data by leveraging a simple Copy+Paste augmentation method with O BJAVERSE assets can improve the performance of state-of-the-art segmentation methods. We also use O BJAVERSE to build a benchmark for eval-uating the robustness of state-of-the-art visual classifica-tion models to perspective shifts. We render objects in OBJAVERSE from random orientations, which is how one 1Creative Commons licensemight expect to see them in the real world, and test the ability of CLIP-style visual backbones to correctly classify these images. Our experiments show that current state-of-the-art models’ performance degrades dramatically in this setting when viewing objects from arbitrary views. OBJAVERSE allows us to build benchmarks to test (and po-tentially train) for orientation robustness for a long tail dis-tribution of asset categories. Building such benchmarks is made uniquely possible by the scale and diversity of 3D as-sets in O BJAVERSE . This would simply not be feasible to create in the real world nor can they be generated from ex-isting 2D images. Can a large-scale 3D dataset help us train perfor-mant embodied agents? We use assets in O BJAVERSE to populate procedurally generated simulated environments in ProcTHOR [16] that are used to train Embodied AI agents. This results in an orders of magnitude increase in the num-ber of unique assets available for use in ProcTHOR scenes (previously limited to AI2-THOR’s [36] asset library of a few thousand unique instances each assigned to one of 108 object categories). Using O BJAVERSE populated scenes en-ables open vocabulary object navigation from any text de-scription. In this paper, we provide quantitative results for navigating to 1.1K semantic object categories, roughly a 50x increase. These findings represent just a small fraction of what can be accomplished using O BJAVERSE . We are excited to see how the research community will leverage O BJAVERSE to enable fast and exciting progress in 2D and 3D computer vision applications and beyond.
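As a rough illustration of the Copy+Paste-style augmentation with rendered Objaverse assets mentioned above, the sketch below pastes a pre-rendered RGBA object onto a training image and returns the new instance mask. The rendering itself, placement and scale sampling, and LVIS annotation bookkeeping are abstracted away, and all names here are hypothetical.

```python
import numpy as np

def paste_rendered_object(image, inst_masks, render_rgba, top_left):
    """Paste a pre-rendered object (RGBA, alpha = object mask) onto a training
    image, Copy+Paste style, and return the updated image, the occluded
    existing masks, and the pasted object's binary mask.

    image:       (H, W, 3) uint8 training image.
    inst_masks:  list of (H, W) bool masks of existing instances.
    render_rgba: (h, w, 4) uint8 rendering of a 3D asset on a transparent
                 background.
    top_left:    (row, col) paste position.
    """
    H, W, _ = image.shape
    h, w, _ = render_rgba.shape
    r0, c0 = top_left
    r1, c1 = min(r0 + h, H), min(c0 + w, W)
    crop = render_rgba[: r1 - r0, : c1 - c0]
    alpha = crop[..., 3:4].astype(np.float32) / 255.0

    out = image.astype(np.float32).copy()
    out[r0:r1, c0:c1] = alpha * crop[..., :3] + (1 - alpha) * out[r0:r1, c0:c1]

    new_mask = np.zeros((H, W), dtype=bool)
    new_mask[r0:r1, c0:c1] = alpha[..., 0] > 0.5
    # Existing instances are occluded wherever the new object covers them.
    inst_masks = [m & ~new_mask for m in inst_masks]
    return out.astype(np.uint8), inst_masks, new_mask
```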
Che_Image_Quality-Aware_Diagnosis_via_Meta-Knowledge_Co-Embedding_CVPR_2023
Abstract Medical images usually suffer from image degradation in clinical practice, leading to decreased performance of deep learning-based models. To resolve this problem, most previous works have focused on filtering out degradation-causing low-quality images while ignoring their potential value for models. Through effectively learning and leverag-ing the knowledge of degradations, models can better resist their adverse effects and avoid misdiagnosis. In this pa-per, we raise the problem of image quality-aware diagnosis , which aims to take advantage of low-quality images and im-age quality labels to achieve a more accurate and robust di-agnosis. However, the diversity of degradations and super-ficially unrelated targets between image quality assessment and disease diagnosis makes it still quite challenging to ef-fectively leverage quality labels to assist diagnosis. Thus, to tackle these issues, we propose a novel meta-knowledge co-embedding network , consisting of two subnets: Task Net and Meta Learner. Task Net constructs an explicit quality information utilization mechanism to enhance diagnosis via knowledge co-embedding features, while Meta Learner en-sures the effectiveness and constrains the semantics of these features via meta-learning and joint-encoding masking. Su-perior performance on five datasets with four widely-used medical imaging modalities demonstrates the effectiveness and generalizability of our method.
1. Introduction Medical imaging is one of the most valuable sources of diagnostic information about anatomical structures and pathological characteristics [1]. Advanced deep learning-based methods applied to high-quality (HQ) medical im-ages have shown significant potential in disease analysis and diagnosis [2, 3], achieving favorable results compared with human healthcare professionals [4]. However, in clin-ical practice, obtaining HQ images is not always feasible. Medical images often exhibit significant variations in imag-ing quality due to factors such as patient movements or en-*Corresponding author: Hao Chen, email: jhc@cse.ust.hk. Normal HQ ImageAbnormal HQ ImageNormal LQ ImageFigure 1. Illustration of impact of image degradations on diagno-sis semantics for fundus (top) and OCTA (bottom) images. Top: Degradations obscure parts of the vessel structure and present lesion-like spots. Bottom : Degradations result in a fake enlarge-ment of the foveal avascular zone (central circular area). vironmental conditions [5,6]. For instance, a medical image quality assessment study of 28,780 fundus images revealed that approximately 41.6% of them contained image artifacts and corruption and were considered low-quality (LQ) [7]. Such degradations can increase the uncertainty in patholog-ical observation, leading to misdiagnosis [8, 9]. Medical image degradations can significantly affect di-agnostic semantics, as illustrated in Figure 1. For instance, shadow degradation can obscure anatomical structures cru-cial for diagnosis, while spot artifacts can obfuscate patho-logical signs that typically manifest as circular shapes [10]. Furthermore, image degradations can also affect diagnos-tic measurements, such as the vessel area density in optical coherence tomography angiography (OCTA) images, ren-dering them unreliable [6]. These close relationships raise challenges in distinguishing degradations from actual ab-normalities [10], leading to false knowledge of lesions and undesired misdiagnosis during training and deployment [9]. Aware of the profound influence of image quality on diag-nosis, many previous works have focused on utilizing image quality assessment to select relatively HQ images for train-ing and testing, thereby avoiding the influence of LQ im-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 19819 ages [7, 11–14]. However, discarding LQ images contain-ing diagnostically valuable information results in a waste of precious clinical data [7]. Including LQ images and corre-sponding quality information in training can assist models in recognizing potential false abnormalities, thus achieving more robust and accurate diagnosis [15–17]. In this paper, we reconsider the value of LQ images and corresponding image quality labels, and introduce the prob-lem of image quality-aware diagnosis (IQAD). The goal of IQAD is to enable models to leverage LQ images while si-multaneously learning image quality labels to achieve an accurate and robust diagnosis. However, effectively lever-aging quality labels for diagnosis is non-trivial with a multi-task learning framework. Specifically, image quality assess-ment can be considered as a task “unrelated” to disease di-agnosis [18], since it focuses on capturing image degrada-tions, while diagnosis emphasizes identifying lesions. 
This distinction makes it challenging for models to effectively utilize image quality labels. Further, commonly-used coarse annotations of quality may not sufficiently reflect the diver-sity of image degradation, making it difficult to provide in-formation that could be useful for precise diagnosis. To achieve IQAD, we propose a novel meta-knowledge co-embedding network (MKCNet) consisting of two sub-nets, Task Net and Meta Learner. To enable leveraging potential benefits of quality information, Task Net con-ducts diagnosis predictions by explicitly leveraging knowl-edge co-embedding features with desired knowledge of image quality and disease diagnosis. These features are constructed by learning auxiliary label embeddings from Meta Learner. Further, we employ meta-learning and joint-encoding masking to ensure the effectiveness and seman-tics of auxiliary label embedding and circumvent the bar-rier of obtaining fine-grained image quality labels. Specif-ically, joint-encoding masking selects a part of the Meta Learner output as auxiliary label embedding through com-binations of quality and diagnosis labels. Moreover, Meta Learner learns to provide auxiliary label embedding in a meta-learning manner to assist Task Net optimization, en-couraging it to learn effective knowledge co-embedding features. Our main contributions are highlighted as follows: (1) We tackle a novel problem named IQAD. To the best of our knowledge, this is the first work to discuss and ana-lyze this critical and practical problem. (2) We propose a novel method, MKCNet, to effectively handle the challenges posed by annotation granularity and task focus discrepancy via leveraging quality information explicitly and introducing a meta-learning paradigm. (3) We conduct extensive experiments on five datasets spanning four different yet widely-used medical imaging modalities. Our in-depth analytical study demonstrates the effectiveness and generalizability of MKCNet.2. Related Work Disease diagnosis. Many deep-learning methods have been developed to diagnose diseases [19–25]. For example, He et al. [19] proposed CABNet for learning discrimina-tive features associated with different severities of diabetic retinopathy (DR), and Liu et al. [20] developed a convo-lutional graph networks-based method to explore potential relationships among grades of DR. However, these methods may not perform well when dealing with image degrada-tions as they do not consider image quality issues [9]. By contrast, MKCNet leverages LQ images and quality labels to achieve a more robust and accurate diagnosis. Quality assessment and image enhancement. Aim-ing at avoiding the effect of LQ images, several methods have been proposed to assess image quality to select rel-atively HQ images [7, 11, 14]. For example, Fu et al. [7] selected usable samples and rejected nearly 20% of images as unsuitable for model learning. However, these images would still be diagnosable for physicians. Rejecting diag-nostically valuable LQ images is wasteful, since LQ images are not only useful in model generalization ability improve-ment in training [15,26], but also effective in evaluating the robustness of models in testing [27, 28]. Another solution is to leverage generative models to enhance the quality of LQ images [29–32]. For example, Shen et al. [10] improve the quality of LQ fundus images by generating degradations on images to simulate pseudo-paired samples. 
However, it is costly to train such generative models, which require a large number of images for desirable performance [32]. Additionally, the simulation may only represent a partial distribution of realistic degradation [33], and it is rarely helpful in improving recognition performance [34]. Meanwhile, capturing real image pairs with different qualities proves to be extremely challenging [35]. Instead of incurring such a high cost, MKCNet leverages the multi-task learning framework to tackle the IQAD problem in a less costly and more effective manner.

Multi-task learning. It has been shown that multi-task learning is an effective method for training a generalized model that can simultaneously handle multiple tasks [18]. In the medical domain, many studies have used it to improve the performance of models by exploring internal relationships among diseases [36–40], as well as utilizing auxiliary tasks to assist primary tasks [41, 42]. However, most of these works ignore the potential benefits of quality information for diagnosis, which is critical for the IQAD problem. Among them, Zhou et al. [16] treated quality assessment as an auxiliary task. However, they did not obtain a significant performance boost due to the absence of an explicit mechanism for utilizing quality information. By contrast, MKCNet explicitly models and explores the potential assistance of quality information in diagnosis.

Figure 2. The overview of MKCNet with two subnets (Task Net $M_\theta$ and Meta Learner $M_\phi$). In the first stage, $M_\theta$ learns to construct $f^\omega_\theta$ by leveraging $y^\omega$. Simultaneously, it adopts global attention to learn an informative and generalizable $F_\theta$, while it explicitly utilizes $f^\omega_\theta$ and $f^d_\theta$ to make diagnoses. In the second stage, $M_\phi$ learns to provide $y^\omega$ with the desired knowledge of image quality and disease diagnosis. $M_\phi$ ensures the effectiveness of $f^\omega_\theta$ while constraining its semantics by utilizing the joint-encoding masking and meta-optimization. (The figure also summarizes the two-stage procedure: Stage I updates $\theta$ with the Task Net loss $\mathcal{L}_T$ via Eq. (1) and obtains $\theta'$ by pseudo-updating $\theta$ via Eq. (3); Stage II updates $\phi$ with the Meta Learner loss $\mathcal{L}_M$ via Eq. (4).)

3. Methodology

This section first provides an introduction to IQAD and presents a preliminary experiment, as well as highlighting the challenges involved. We then introduce our proposed solution, MKCNet.
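The two-stage optimization summarized in the Figure 2 caption can be sketched as a generic bi-level update. The code below is an illustrative pattern only: the paper's concrete loss terms (Eqs. 1-4) are abstracted into task_loss_fn / meta_loss_fn, and the module signatures are hypothetical.

```python
import torch
from torch.func import functional_call

def meta_step(task_net, meta_learner, batch, task_loss_fn, meta_loss_fn,
              opt_theta, opt_phi, inner_lr=1e-3):
    """One two-stage update in the spirit of Fig. 2: Stage I updates Task Net
    parameters (theta); Stage II updates the Meta Learner (phi) through a
    one-step pseudo-updated copy of theta."""
    x, y_diag, y_quality = batch

    # Stage I: ordinary update of Task Net given the Meta Learner's embedding.
    aux = meta_learner(x, y_quality, y_diag).detach()
    loss_t = task_loss_fn(task_net(x, aux), y_diag)
    opt_theta.zero_grad()
    loss_t.backward()
    opt_theta.step()

    # Pseudo-update theta -> theta' with gradients kept in the graph, so that
    # phi receives second-order signal through the inner step.
    aux = meta_learner(x, y_quality, y_diag)
    names, params = zip(*task_net.named_parameters())
    loss_inner = task_loss_fn(
        functional_call(task_net, dict(zip(names, params)), (x, aux)), y_diag)
    grads = torch.autograd.grad(loss_inner, params, create_graph=True)
    theta_prime = {n: p - inner_lr * g for n, p, g in zip(names, params, grads)}

    # Stage II: update Meta Learner so that the pseudo-updated Task Net
    # performs well on the diagnosis (and quality) objectives.
    out = functional_call(task_net, theta_prime,
                          (x, meta_learner(x, y_quality, y_diag)))
    loss_m = meta_loss_fn(out, y_diag, y_quality)
    opt_phi.zero_grad()
    loss_m.backward()
    opt_phi.step()
```

The second stage is what lets the quality labels shape the auxiliary embedding so that it actually helps the diagnosis branch rather than acting as an unrelated side task.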
Bao_Object_Discovery_From_Motion-Guided_Tokens_CVPR_2023
Abstract Object discovery – separating objects from the back-ground without manual labels – is a fundamental open chal-lenge in computer vision. Previous methods struggle to go be-yond clustering of low-level cues, whether handcrafted (e.g., color, texture) or learned (e.g., from auto-encoders). In this work, we augment the auto-encoder representation learning framework with two key components: motion-guidance and mid-level feature tokenization. Although both have been separately investigated, we introduce a new transformer de-coder showing that their benefits can compound thanks to motion-guided vector quantization. We show that our ar-chitecture effectively leverages the synergy between motion and tokenization, improving upon the state of the art on both synthetic and real datasets. Our approach enables the emergence of interpretable object-specific mid-level features, demonstrating the benefits of motion-guidance (no labeling) and quantization (interpretability, memory efficiency).
1. Introduction

Objects are central in human and computer vision. In the former, they are a fundamental primitive used to decompose the complexity of the visual world into an actionable representation. This abstraction in turn enables higher-level cognitive abilities, such as causal reasoning and planning [32, 54]. In computer vision, object detection has achieved remarkable progress [7, 45] and is now an essential component in many applications (e.g., driving, robotics). However, these models require a large amount of manual labels from a fixed vocabulary of categories. Consequently, learning unsupervised, object-centric representations is an important step towards scaling up computer vision to the real world.

Figure 1. TRI-PD dataset results: (left) top-10 foreground slot segments produced by our approach; (right) corresponding token representations. Compared to raw images, tokens in our framework present a more structured and compact space for reconstruction.

This topic has received renewed attention recently thanks to structured generative networks with iterative inference over a fixed set of variables [6, 16, 22, 38, 39]. These methods cluster pixels in the feature space of an auto-encoder, exhibiting behavior similar to grouping based on low-level cues, such as color or texture. Hence, they are restricted to toy images with colored geometric shapes on a plain background, and fail on more complex realistic scenes [4].

Two main types of works attempt to address this shortcoming. The first family of methods sets out to simplify the grouping problem by introducing more structure into the output space, e.g., reconstructing optical flow [36] or depth [15]. They, however, require supervision, either in the form of known poses [36] or ground truth bounding boxes [15], veering away from the unsupervised goal of object discovery. In contrast, Bao et al. [4] resolve the object-background ambiguity by explicitly integrating an unsupervised motion segmentation algorithm [11] into the pipeline, showing substantial progress on realistic scenes. The second main direction to improve object discovery focuses on improving the decoder part of auto-encoding architectures [52, 53], replacing convolutional decoders with transformers [43, 60] combined with discrete variational auto-encoders (DVAE) [47] to reduce the memory footprint. These more sophisticated architectures improve performance without additional supervision, including on real-world sequences. However, these methods are evaluated with different protocols (metrics, datasets), and therefore no clear architectural principles have emerged yet for unsupervised object discovery.

In this work, we introduce a novel architecture, Motion-guided Tokens (MoTok), based on the combination of two fundamental structural principles: motion and discretization. We define objects as discrete entities that might have an independent motion. As prior works have shown encouraging results thanks to unsupervised motion guidance and better transformer-based decoders, we propose to leverage motion to guide tokenization, the vector quantization process at the heart of attention mechanisms in transformer architectures (see Figure 1).
In addition, to comprehensively evaluate the contributions of prior works, we ablate key design choices proposed in the past, such as the decoder architecture and reconstruction space, in a unified framework. Our key contributions are as follows. (1) We intro-duce a novel auto-encoder architecture, MoTok , for unsu-pervised video object discovery with a new transformer decoder leveraging unsupervised motion-guided tokeniza-tion. (2) Our results on real and synthetic datasets show that with sufficient capacity in the decoder, motion guid-ance alleviates the need for labels, optical flow, or depth decoding thanks to tokenization, improving upon the state of the art. (3) We show that our motion-guided tokens map to interpretable mid-level features, going beyond typ-ical clusters of low-level features, thus explaining why our method scales to challenging realistic videos. Our code, models, and synthetic data are made available at https://github.com/zpbao/MoTok/ .
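For reference, the tokenization component at the heart of MoTok can be illustrated with a standard vector-quantization layer: nearest-codebook lookup with a straight-through gradient. The codebook size, commitment weight, and initialization below are generic choices, and the motion-guided part of the method (using motion cues to shape what the tokens represent) is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Plain vector-quantization (tokenization) layer: each feature vector is
    snapped to its nearest codebook entry, with a straight-through estimator
    for the encoder gradient."""

    def __init__(self, num_tokens=512, dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_tokens, dim)
        self.codebook.weight.data.uniform_(-1.0 / num_tokens, 1.0 / num_tokens)
        self.beta = beta

    def forward(self, feats):                      # feats: (B, N, D)
        flat = feats.reshape(-1, feats.shape[-1])
        d = (flat.pow(2).sum(1, keepdim=True)
             - 2 * flat @ self.codebook.weight.t()
             + self.codebook.weight.pow(2).sum(1))
        idx = d.argmin(dim=1)                      # token indices
        quant = self.codebook(idx).view_as(feats)
        # Codebook + commitment losses, then straight-through gradient.
        loss = F.mse_loss(quant, feats.detach()) \
             + self.beta * F.mse_loss(feats, quant.detach())
        quant = feats + (quant - feats).detach()
        return quant, idx.view(feats.shape[:-1]), loss
```

At training time the returned loss would be added to the decoder's reconstruction objective, and the token indices are what the transformer decoder attends over.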
Chang_Domain_Generalized_Stereo_Matching_via_Hierarchical_Visual_Transformation_CVPR_2023
Abstract Recently, deep Stereo Matching (SM) networks have shown impressive performance and attracted increasing at-tention in computer vision. However, existing deep SM networks are prone to learn dataset-dependent shortcuts, which fail to generalize well on unseen realistic dataset-s. This paper takes a step towards training robust models for the domain generalized SM task, which mainly focuses on learning shortcut-invariant representation from synthet-ic data to alleviate the domain shifts. Specifically, we pro-pose a Hierarchical Visual Transformation (HVT) network to 1) first transform the training sample hierarchically in-to new domains with diverse distributions from three levels: Global, Local, and Pixel, 2) then maximize the visual dis-crepancy between the source domain and new domains, and minimize the cross-domain feature inconsistency to capture domain-invariant features. In this way, we can prevent the model from exploiting the artifacts of synthetic stereo im-ages as shortcut features, thereby estimating the dispari-ty maps more effectively based on the learned robust and shortcut-invariant representation. We integrate our pro-posed HVT network with SOTA SM networks and evaluate its effectiveness on several public SM benchmark datasets. Extensive experiments clearly show that the HVT network can substantially enhance the performance of existing SM networks in synthetic-to-realistic domain generalization.
1. Introduction

Stereo Matching (SM) [7, 41, 44] aims to find the matching correspondences between a given stereo image pair and then calculate the disparity for depth sensing in many applications, such as robot navigation and autonomous driving [1, 28]. Recently, it has attracted increasing attention in the computer vision community [4, 27, 30, 42].

Figure 1. Comparison of the cross-domain SM generalization. Columns from left to right denote a sample image, ground truth disparities, and the predicted disparities of the pretrained PSMNet model and our HVT-PSMNet model. Both models are trained on the synthetic SceneFlow [19] dataset and evaluated on the realistic datasets Middlebury, ETH3D, KITTI 2012 and KITTI 2015.

With the development of deep learning [6, 18, 34–38], Convolutional Neural Network (CNN) based deep SM networks have shown impressive performance, benefiting from their strong ability of feature representation. However, due to the scarcity of sufficient labeled realistic training data, existing state-of-the-art (SOTA) SM networks are usually trained on synthetic data, e.g., SceneFlow [19], and fail to generalize well to unseen realistic domains, as shown in Fig. 1.

Generally, the generalizability of cross-domain deep SM networks is mainly hindered by a critical issue: SM networks usually learn superficial shortcut features [5] from synthetic data to estimate the disparity. Specifically, such shortcut features mainly include two types of artifacts: consistent local RGB color statistics and overreliance on local chromaticity features, which are domain-sensitive and non-transferable to unseen domains. The semantic and structural features that are truly desirable are ignored by most existing SM networks. Therefore, the key to addressing the challenging cross-domain SM task is how to effectively learn domain-invariant representations of the given stereo image pair for synthetic-to-realistic generalization.

Several attempts [3, 10, 15, 26, 45] have been made to minimize the synthetic-to-realistic domain gap and learn domain-invariant representations for the SM task, by either 1) exploiting labeled target-domain realistic data to fine-tune the SM network trained with synthetic data [3, 10]
Specifically, this paper presents a Hierarchical Visual Transformation (HVT) network to 1) first transform the synthetic training sample hierarchically into new source domains with diverse distributions from three levels: Global, Local, and Pixel, 2) then maximize the image discrepancy between the synthet-ic source domain and new domains for significantly alter-ing the original distribution, and minimize the cross-domain feature inconsistency to capture domain-invariant features. In this way, we are able to prevent the model from ex-ploiting the artifacts of synthetic stereo images as shortcut features, thereby estimating the disparity maps more effec-tively based on the learned shortcut-invariant feature repre-sentation. Our basic idea is to diversify the distribution of training data and thus force the network to overlook the ar-tifacts from synthetic domain. Note that our proposed HVT network is simple and can be plug-and-play. We integrate HVT with SOTA SM networks during training and evalu-ate its effectiveness on several challenging SM benchmark datasets. Extensive experiments clearly show that the HVT network can substantially enhance the performance of ex-isting SM networks in synthetic-to-realistic domain gener-alization without using any auxiliary data or features [17]. Our contributions can be briefly summarized as follows: We devise a simple yet effective domain generalized SM framework. It leverages a hierarchical visual transforma-tion network to effectively diversify the distribution of training data which prevents the model from exploiting the artifacts in synthetic data as shortcuts. We formulate novel learning objectives that force the model to effectively optimize three complementary vi-sual transformations by maximizing domain discrepancy and minimizing feature inconsistency between synthetic domain and new domains, thereby facilitating the learn-ing of domain-invariant feature representation. Extensive experiments on four realistic SM datasets clearly demonstrate the effectiveness and robustness of our HVT network. The out-of-distribution generalizationability of four SOTA SM methods has been significantly boosted, benefiting from our solution.
Jiang_MotionDiffuser_Controllable_Multi-Agent_Motion_Prediction_Using_Diffusion_CVPR_2023
Abstract We present MotionDiffuser, a diffusion based represen-tation for the joint distribution of future trajectories over multiple agents. Such representation has several key ad-vantages: first, our model learns a highly multimodal dis-tribution that captures diverse future outcomes. Second, the simple predictor design requires only a single L2 loss train-ing objective, and does not depend on trajectory anchors. Third, our model is capable of learning the joint distribu-tion for the motion of multiple agents in a permutation-invariant manner. Furthermore, we utilize a compressed trajectory representation via PCA, which improves model performance and allows for efficient computation of the exact sample log probability. Subsequently, we propose a general constrained sampling framework that enables controlled trajectory sampling based on differentiable cost functions. This strategy enables a host of applications such as enforcing rules and physical priors, or creating tai-lored simulation scenarios. MotionDiffuser can be com-bined with existing backbone architectures to achieve top motion forecasting results. We obtain state-of-the-art re-sults for multi-agent motion prediction on the Waymo Open Motion Dataset.1. Introduction Motion prediction is a central yet challenging problem for autonomous vehicles to safely navigate under uncertain-ties. Motion prediction, in the autonomous driving setting, refers to the prediction of the future trajectories of modeled agents, conditioned on the histories of the modeled agents, context agents, road graph and traffic light signals. Several key challenges arise in the motion prediction problem. First, motion prediction is probabilistic and multi-modal in nature where it is important to faithfully predict an unbiased distribution of possible futures. Second, mo-tion prediction requires jointly reasoning about the future distribution for a set of agents that may interact with each other in each such futures. Naively predicting and sampling from the marginal distribution of trajectories for each agent independently leads to unrealistic and often conflicting out-comes. Last but not least, while it is challenging to constrain or bias the predictions of conventional regression-based tra-jectory models, guided sampling of the trajectories is often required. For example, it may be useful to enforce rules or physical priors for creating tailored simulation scenarios. This requires the ability to enforce constraints over the fu-ture time steps, or enforce a specified behavior for one or This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 9644 Figure 2. Overview for multi-agent motion prediction using diffusion models. The input scene containing agent history, traffic lights and road graphs is encoded via a transformer encoder into a set of condition tokens C. During training, a random set of noises are sampled i.i.d. from a normal distribution and added to the ground truth (GT) trajectory. The denoiser, while attending to the condition tokens, predicts the denoised trajectories corresponding to each agent. The entire model can be trained end-to-end using a simple L2 loss between the predicted denoised trajectory and the GT trajectory. 
During inference, a population of trajectories for each agent can first be sampled from pure noise at the highest noise level max, and iteratively denoised by the denoiser to produce a plausible distribution of future trajectories. An optional constraint in the form of an arbitrary differentiable loss function can be injected in the denoising process to enforce constraints. more agents among a set of agents. In light of these challenges, we present MotionDiffuser, a denoising diffusion model-based representation for the joint distribution of future trajectories for a set of agents (see Fig. 2). MotionDiffuser leverages a conditional denoising diffusion model. Denoising diffusion models [16, 23, 33, 43, 44] (henceforth, diffusion models) are a class of generative models that learns a denoising function based on noisy data and samples from a learned data distri-bution via iteratively refining a noisy sample starting from pure Gaussian noise (see Fig. 1). Diffusion models have recently gained immense popularity due to their simplicity, strong capacity to represent complex, high dimensional and multimodal distributions, ability to solve inverse problems [4, 6, 24, 44], and effectiveness across multiple problem do-mains, including image generation [36, 37, 39], video gen-eration [15, 18, 49] and 3D shape generation [35]. Building on top of conditional diffusion models as a basis for trajectory generation, we propose several unique design improvements for the multi-agent motion predic-tion problem. First, we propose a cross-attention-based permutation-invariant denoiser architecture for learning the motion distribution for a set of agents regardless of their ordering. Second, we propose a general and flexible frame-work for performing controlled and guided trajectory sam-pling based on arbitrary differentiable cost functions of the trajectories, which enables several interesting applications such as rules and controls on the trajectories, trajectory in-painting and creating tailored simulation scenarios. Fi-nally, we propose several enhancements to the representa-tion, including PCA-based latent trajectory diffusion and improved trajectory sample clustering to further boost the performance of our model. In summary, the main contributions of this work are: A novel permutation-invariant, multi-agent joint mo-tion distribution representation using conditional dif-fusion models. A general and flexible framework for performing con-trolled and guided trajectory sampling based on ar-bitrary differentiable cost functions of the trajectories with a range of novel applications. Several significant enhancements to the representation, including PCA-based latent trajectory diffusion formu-lation and improved trajectory sample clustering algo-rithm to further boost the model performance. 2. Related Work Denoising diffusion models Denoising diffusion models [16, 33], methodologically highly related to the class of score-based generative models [23, 43, 44], have recently emerged as a powerful class of generative models that demonstrate high sample quality across a wide range of ap-plication domains, including image generation [36, 37, 39], video generation [15, 18, 49] and 3D shape generation [35]. We are among the first to use diffusion models for predict-ing the joint motion of agents. 
9645 Constrained sampling Diffusion models have been shown to be effective at solving inverse problems such as image in-painting, colorization and sparse-view computed tomography by using a controllable sampling process [4– 6, 22, 24, 43, 44]. Concurrent work [53] explores diffusion modeling for controllable traffic generation, which we com-pare to in Sec. 3.4. In diffusion models, the generation pro-cess can be conditioned on information not available during training. The inverse problem can be posed as sampling from the posterior p(x;y)based on a learned unconditional distributionp(x), whereyis an observation of the event x. We defer further technical details to Sec. 3.4. Motion prediction There are two main categories of ap-proaches for motion prediction: supervised learning and generative learning. Supervised learning trains a model with logged trajectories with supervised losses such as L2 loss. One of the challenges is to model inherent multi-modal behavior of the agents. For this, MultiPath [40] uses static anchors, and MultiPath++ [48], Wayformer [31], SceneTransformer [32] use learned anchors, and DenseTNT [13] uses goal-based predictions. Home [9] and GoHome [10] predict future occupancy heatmaps, and then decode trajectories from the samples. MP3 [2] and NMP [50] learn the cost function evaluator of trajectories, and then the output trajectories are heuristically enumerated. Many of these approaches use ensembles for further diversified predictions. The next section covers generative approaches. Generative models for motion prediction Various re-cent works have modeled the motion prediction task as a conditional probability inference problem of the form p(s;c)using generative models, where sdenote the fu-ture trajectories of one or more agents, and cdenote the context or observation. HP-GAN [1] learns a probability density function (PDF) of future human poses conditioned on previous poses using an improved Wasserstein Gener-ative Adversarial Network (GAN). Conditional Variational Auto-Encoders (C-V AEs) [11, 20, 34], Normalizing Flows [8, 28, 29, 41] have also been shown to be effective at learn-ing this conditional PDF of future trajectories for motion prediction. Very recent works have started looking into diffusion models as an alternative to modeling the condi-tional distributions of future sequences such as human mo-tion pose sequences [38, 52] and planning [21]. In a more relevant work, [14] the authors utilize diffusion models to model the uncertainties of pedestrian motion. As far as we are aware, we are the first to utilize diffusion models to model the multi-agent joint motion distribution. Multi-agent motion prediction While much of the mo-tion prediction literature has worked on predicting motions of individual agents independently, there has been somework to model the motion of multiple agents jointly. Scene-Transformer [32] outputs a fixed set of joint motion predic-tions for all the agents in the scene. M2I [45], WIMP [25], PIP [42], and CBP [47] propose a conditional model where the motions of the other agents are predicted by given mo-tions of the controlled agents. There is a set of literature using probabilistic graphical models. DSDNet [51] and MFP [46] use fully connected graphs. JFP [27] supports static graphs such as fully con-nected graphs and autonomous vehicle centered graphs, and dynamic graphs where the edges are constructed between the interacting agents. RAIN [26] learns the dynamic graph of the interaction through separate RL training. 3. 
Method

3.1. Diffusion Model Preliminaries

Preliminaries. Diffusion models [23] provide a learned parameterization of the probability distribution $p_\theta(x)$ through learnable parameters $\theta$. Denote this probability density function, convolved with a Gaussian kernel of standard deviation $\sigma$, by $p_\theta(x; \sigma)$. Instead of directly learning a normalized probability density function $p_\theta(x)$, where the normalization constant is generally intractable [19], diffusion models learn the score function of the distribution, $\nabla_x \log p_\theta(x; \sigma)$, at a range of noise levels $\sigma$. Given the score function $\nabla_x \log p_\theta(x; \sigma)$, one can sample from the distribution by denoising a noise sample. Samples can be drawn from the underlying distribution $x_0 \sim p_\theta(x)$ via the following dynamics:

$$x_0 = x(T) + \int_T^0 -\dot{\sigma}(t)\,\sigma(t)\,\nabla_x \log p_\theta\big(x(t); \sigma(t)\big)\,dt, \qquad x(T) \sim \mathcal{N}(0, \sigma_{\max}^2 I) \tag{1}$$

where the variance $\sigma(t)$ is a monotonic, deterministic function of an auxiliary time parameter $t$. Following [23], we use the linear noise schedule $\sigma(t) = t$. The initial noise sample is drawn i.i.d. from a unit Gaussian scaled to the highest standard deviation, $\sigma(T) = \sigma_{\max}$.

The diffusion model can be trained to approximate a data distribution $p_{\mathcal{D}}(x)$, where $\mathcal{D} = \{x_1, x_2, \cdots, x_{N_d}\}$ denotes the set of training data. The empirical distribution of the data can be viewed as a sum of delta functions around each data point: $p_{\mathcal{D}}(x) = \frac{1}{N_d}\sum_{i=1}^{N_d} \delta(x - x_i)$. Denote the denoiser as $D(x; \sigma)$, a function that recovers the un-noised sample corresponding to the noised sample $x$.
Figure 3. Network architecture for the set denoiser $D(S, C; \sigma)$. The noisy trajectories corresponding to agents $s_1 \ldots s_{N_a}$ are first concatenated with a random-Fourier encoded noise level $\sigma$, before going through repeated blocks of self-attention among the set of trajectories and cross-attention with respect to the condition tokens $c_1 \ldots c_{N_c}$. The self-attention allows the diffusion model to learn a joint distribution across the agents, and the cross-attention allows the model to learn a more accurate scene-conditional distribution. Note that each agent cross-attends to its own condition tokens from the agent-centric scene encoding (not shown for simplicity). The [learnable components] are marked with brackets.

Conditional diffusion models In this work, we are interested in the conditional setting of learning $p(x \mid c)$, where $x$ denotes the future trajectories of a set of agents and $c$ is the scene context. A simple modification is to augment both the denoiser $D(x, c; \sigma)$ and the score function $\nabla_x \log p(x \mid c; \sigma)$ by the condition $c$. Given a dataset $\mathcal{X}_c$ augmented by conditions, $\mathcal{X}_c = \{(x_1, c_1), \ldots, (x_{N_d}, c_{N_d})\}$, the conditional denoiser can be learned with a conditional denoising score-matching objective that minimizes

$\mathbb{E}_{(x, c) \sim \mathcal{X}_c}\, \mathbb{E}_{\sigma \sim q(\sigma)}\, \mathbb{E}_{n \sim \mathcal{N}(0, \sigma^2 I)} \big\| D(x + n, c; \sigma) - x \big\|_2^2$  (4)

which leads to the learned conditional score function

$\nabla_x \log p(x \mid c; \sigma) = \big(D(x, c; \sigma) - x\big)/\sigma^2$  (5)

Preconditioning and training Directly training the model with the denoising score-matching objective (Eqn. 4) has various drawbacks. First, the input to the denoiser has non-unit variance: $\mathrm{Var}(x + n) = \mathrm{Var}(x) + \mathrm{Var}(n) = \sigma_{\text{data}}^2 + \sigma^2$, with $\sigma \in [0, \sigma_{\max}]$. Second, at small noise levels $\sigma$, it is much easier for the model to predict the residual noise than to predict the clean signal. Following [23], we adopt a preconditioned form of the denoiser:

$D(x, c; \sigma) = c_{\text{skip}}(\sigma)\, x + c_{\text{out}}(\sigma)\, F_\theta\big(c_{\text{in}}(\sigma)\, x,\; c;\; c_{\text{noise}}(\sigma)\big)$  (6)

$F_\theta$ is the neural network to train; $c_{\text{skip}}, c_{\text{in}}, c_{\text{out}}, c_{\text{noise}}$ respectively scale the skip connection to the noisy $x$, the input to the network, the output from the network, and the noise input to the network. We do not additionally scale $c$ since it is the output of an encoder network, assumed to have modulated scales.

Sampling We follow the ODE dynamics in Eqn. 1 when sampling the predictions. We utilize Heun's 2nd-order method for solving the corresponding ODE, using the default parameters and 32 sampling steps.
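The sampling procedure above can be sketched as follows, assuming a trained conditional denoiser `D(x, c, sigma)` and the linear schedule $\sigma(t) = t$, under which the ODE of Eq. (1) becomes $dx/dt = (x - D(x, c; t))/t$ via Eq. (5). This is a simplified stand-in (uniform time grid, no stochastic churn), not the exact discretization used in the paper.

```python
import torch

@torch.no_grad()
def heun_sample(denoiser, c, shape, sigma_max, num_steps=32, device="cpu"):
    """Probability-flow ODE sampler using Heun's 2nd-order method."""
    ts = torch.linspace(sigma_max, 0.0, num_steps + 1, device=device)
    x = torch.randn(shape, device=device) * sigma_max      # x(T) ~ N(0, sigma_max^2 I)
    for i in range(num_steps):
        t_cur, t_next = ts[i], ts[i + 1]
        d_cur = (x - denoiser(x, c, t_cur)) / t_cur         # dx/dt at t_cur
        x_next = x + (t_next - t_cur) * d_cur               # Euler predictor
        if t_next > 0:                                      # Heun corrector
            d_next = (x_next - denoiser(x_next, c, t_next)) / t_next
            x_next = x + (t_next - t_cur) * 0.5 * (d_cur + d_next)
        x = x_next
    return x
```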
3.2. Diffusion Model for Multi-Agent Trajectories

One of the main contributions of this work is to propose a framework for modeling the joint distribution of multi-agent trajectories using diffusion models. Denote the future trajectory of agent $i$ as $s_i \in \mathbb{R}^{N_t \times N_f}$, where $N_t$ is the number of future time steps and $N_f$ is the number of features per time step, such as longitudinal and lateral positions, heading directions, etc. Denote by $c_i$ the learned ego-centric context encoding of the scene, including the road graph, traffic lights, histories of modeled and context agents, as well as interactions within these scene elements, centered around agent $i$. For generality, $c$ could be of arbitrary dimensions, either as a single condition vector or as a set of context tokens. Denote the set of agent future trajectories as $S \in \mathbb{R}^{N_a \times N_t \times N_f}$ and the set of ego-centric context encodings as $C$, where $|S| = |C| = N_a$ is the number of modeled agents. We append each agent's position and heading (relative to the ego vehicle) to its corresponding context vectors.

Denote the $j$-th permutation of the agents in the two sets as $S^j, C^j$, sharing a consistent ordering of the agents. We seek to model the set probability distribution of agent trajectories using diffusion models: $p(S^j \mid C^j)$. Since the agent ordering in the scene is arbitrary, learning a permutation-invariant set probability distribution is essential, i.e.,

$p(S \mid C) = p(S^j \mid C^j), \quad \forall j \in [1, N_a!]$  (7)

To learn a permutation-invariant set probability distribution, we seek to learn a permutation-equivariant denoiser, i.e., when the order of the agents in the denoiser input permutes, the denoiser output follows the same permutation:

$D(S^j, C^j; \sigma) = \big(D(S, C; \sigma)\big)^j, \quad \forall j \in [1, N_a!]$  (8)

Another major consideration for the denoiser architecture is the ability to effectively attend to the condition tensor $c$ and the noise level. Both of these motivations prompt us to utilize the transformer as the main denoiser architecture. We utilize the scene encoder architecture from the state-of-the-art Wayformer [31] model to encode scene elements such as the road graph, agent histories, and traffic light states into a set of latent embeddings. The denoiser takes as input the GT trajectory corresponding to each agent, perturbed with a random noise level $\sigma \sim q(\sigma)$, and the noise level $\sigma$. During the denoising process, the noisy input undergoes repeated blocks of self-attention between the agents and cross-attention to the set of context tokens per agent, and finally the results are projected to the same feature dimensionality as the inputs. Since we do not apply positional encoding along the agent dimension, transformers naturally preserve the equivariance among the tokens (agents), leading to the permutation-equivariance of the denoiser model. See Fig. 3 for a more detailed design of the transformer-based denoiser architecture.
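To make the architecture concrete, here is a minimal sketch of a permutation-equivariant set denoiser in the spirit of Fig. 3: no positional encoding along the agent axis, self-attention among agents, and per-agent cross-attention to context tokens. Layer sizes, the Fourier embedding width, and the assumption that context tokens are already projected to the model width (e.g., by a Wayformer-style scene encoder) are placeholders, not the paper's exact configuration.

```python
import math
import torch
import torch.nn as nn

class DenoiserBlock(nn.Module):
    """Self-attention among agents, then cross-attention to per-agent context."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.n1, self.n2, self.n3 = nn.LayerNorm(dim), nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, h, ctx):
        # h: [B, Na, dim] agent tokens; ctx: [B, Na, Nc, dim] per-agent context tokens
        B, Na, d = h.shape
        x = self.n1(h)
        h = h + self.self_attn(x, x, x, need_weights=False)[0]   # joint reasoning across agents
        q = self.n2(h).reshape(B * Na, 1, d)                     # each agent attends to its own context
        kv = ctx.reshape(B * Na, -1, d)
        h = h + self.cross_attn(q, kv, kv, need_weights=False)[0].reshape(B, Na, d)
        return h + self.mlp(self.n3(h))

class SetDenoiser(nn.Module):
    """Permutation-equivariant denoiser: permuting the agents permutes the output."""
    def __init__(self, traj_dim, dim=256, depth=4, n_freqs=16):
        super().__init__()
        self.register_buffer("freqs", torch.randn(n_freqs))      # random-Fourier noise embedding
        self.in_proj = nn.Linear(traj_dim + 2 * n_freqs, dim)
        self.blocks = nn.ModuleList([DenoiserBlock(dim) for _ in range(depth)])
        self.out_proj = nn.Linear(dim, traj_dim)                 # back to the input dimensionality

    def forward(self, noisy_traj, ctx, sigma):
        # noisy_traj: [B, Na, Nt*Nf] flattened noisy trajectories; sigma: [B]
        B, Na, _ = noisy_traj.shape
        ang = 2 * math.pi * sigma.view(B, 1, 1) * self.freqs
        emb = torch.cat([ang.sin(), ang.cos()], dim=-1).expand(B, Na, -1)
        h = self.in_proj(torch.cat([noisy_traj, emb], dim=-1))
        for blk in self.blocks:
            h = blk(h, ctx)
        return self.out_proj(h)
```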
3.3. Exact Log Probability Inference

With our model, we can infer the exact log probability of the generated samples with the following method. First, the change of log density over time follows a second differential equation, called the instantaneous change of variables formula [3]:

$\frac{\partial \log p(x(t))}{\partial t} = -\mathrm{Tr}\Big(\frac{\partial f}{\partial x(t)}\Big), \quad \text{where } f = \partial x / \partial t$  (9)

In the diffusion model, the flow function $f$ follows

$f(x(t), t) = \frac{\partial x(t)}{\partial t} = -\dot{\sigma}(t)\,\sigma(t)\,\nabla_x \log p\big(x(t); \sigma(t)\big)$  (10)

The log probability of the sample can then be calculated by integrating over time:

$\log p(x(0)) = \log p(x(T)) - \int_T^0 \mathrm{Tr}\Big(\frac{\partial f}{\partial x(t)}\Big)\, dt$  (11)

The computation of the trace of the Jacobian takes $O(n^2)$, where $n$ is the dimensionality of $x$. When we use PCA as in Sec. 3.5, $n$ is much smaller than the dimensionality of the original data. We can also use Hutchinson's trace estimator as in FFJORD [12], which takes $O(n)$. The log probability can be used for filtering higher-probability predictions. In Fig. 4, for example, higher-probability samples plotted with lighter colors are more likely.

Figure 4. Inferred exact log probability of 64 sampled trajectories per agent. Higher-probability samples are plotted with lighter colors. The orange agent represents the AV (autonomous vehicle).

3.4. Constraining Trajectory Samples

Constrained trajectory sampling has a range of applications. One situation where controllability of the sampled trajectories is required is to inject physical rules and constraints. For example, agent trajectories should avoid collision with static objects and other road users. Another application is to perform trajectory in-painting: to solve the inverse problem of completing the trajectory prediction given one or more control points. This is a useful tool for creating custom traffic scenarios for autonomous vehicle development and simulation.

More formally, we seek to sample from the joint conditional distribution $p(S \mid C)\, q(S \mid C)$, where $p(S \mid C)$ is the learned future distribution for trajectories and $q(S \mid C)$ is a secondary distribution representing the constraint manifold for $S$. The score of this joint distribution is $\nabla_S \log\big(p(S \mid C)\, q(S \mid C)\big) = \nabla_S \log p(S \mid C) + \nabla_S \log q(S \mid C)$. In order to sample this joint distribution, we need the joint score function at all noise levels $\sigma$:

$\nabla_S \log p(S \mid C; \sigma) + \nabla_S \log q(S \mid C; \sigma)$  (12)

The first term directly corresponds to the conditional score function in Eqn. 5. The second term accounts for gradient guidance based on the constraint, which resembles classifier-based guidance [17] in class-conditional image generation tasks, where a specialty neural network is trained to estimate this guidance term under a range of noise levels. We refer to this as the constraint gradient score. However, since our goal is to approximate the constraint gradient score with an arbitrary differentiable cost function of the trajectory, how is this a function of the noise parameter $\sigma$? The key insight is to exploit the duality between any intermediate noisy trajectory $S$ and the denoised trajectory at that noise level, $D(S, C; \sigma)$. While $S$ is clearly off the data manifold and not a physical trajectory, $D(S, C; \sigma)$ usually closely resembles a physical trajectory that is on the data manifold, since it is trained to regress for the ground truth (Eqn. 4), even at a high $\sigma$ value. The denoised event and the noisy event converge in the limit $\sigma \to 0$. In this light, we approximate the constraint gradient score as

$\nabla_S \log q(S \mid C; \sigma) \approx -\lambda\, \frac{\partial}{\partial S}\, \mathcal{L}\big(D(S, C; \sigma)\big)$  (13)

where $\mathcal{L}: \mathbb{R}^{N_a \times N_t \times N_f} \mapsto \mathbb{R}$ is an arbitrary cost function for the set of sampled trajectories, and $\lambda$ is a hyperparameter controlling the weight of this constraint. In this work, we introduce two simple cost functions for trajectory control: an attractor and a repeller. Attractors encourage the predicted trajectory at certain timesteps to arrive at certain locations. Repellers discourage interacting agents from getting too close to each other and mitigate collisions. We define the costs as follows.

Attractor cost

$\mathcal{L}_{\text{attract}}\big(D(S, C; \sigma)\big) = \dfrac{\sum \big| \big(D(S, C; \sigma) - S_{\text{target}}\big) \odot M_{\text{target}} \big|}{\sum |M_{\text{target}}| + \text{eps}}$  (14)

where $S_{\text{target}} \in \mathbb{R}^{N_a \times N_t \times N_f}$ is the target location tensor and $M_{\text{target}}$ is a binary mask tensor indicating which locations in $S_{\text{target}}$ to enforce. $\odot$ denotes the elementwise product and $\text{eps}$ denotes an infinitesimal value to prevent underflow.

Repeller cost

$A = \max\Big( \big(1 - \tfrac{1}{r}\, \Delta\big(D(S, C; \sigma)\big)\big) \odot (1 - I),\; 0 \Big)$  (15)

$\mathcal{L}_{\text{repell}}\big(D(S, C; \sigma)\big) = \dfrac{\sum A}{\sum \mathbb{1}(A > 0) + \text{eps}}$  (16)

where $A$ is the per-time-step repeller cost. We denote the pairwise $L_2$ distance function between all pairs of denoised agents at all time steps as $\Delta\big(D(S, C; \sigma)\big) \in \mathbb{R}^{N_a \times N_a \times N_t}$, the identity tensor broadcast to all $N_t$ time steps as $I \in \mathbb{R}^{N_a \times N_a \times N_t}$, and the repeller radius as $r$.

Constraint score thresholding To further increase the stability of the constrained sampling process, we propose a simple and effective strategy: constraint score thresholding (ST). From Eqn. 2, we make the observation that

$\nabla_x \log p(x; \sigma) = \big(D(x; \sigma) - x\big)/\sigma^2 = n/\sigma, \quad n \sim \mathcal{N}(0, I)$  (17)

Therefore, we adjust the constraint score in Eqn. 13 via an elementwise clipping function:

$\nabla_S \log q(S \mid C; \sigma) := \mathrm{clip}\big(\nabla_S \log q(S \mid C; \sigma)\,\sigma,\; \pm 1\big)/\sigma$  (18)

We ablate this design choice in Table 2.
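Putting Eqs. (13)–(18) together, constrained sampling only needs the gradient of a differentiable cost evaluated on the denoised trajectories, which autograd provides directly. The sketch below is an illustration under the sign convention of Eq. (13) above, not the authors' implementation; `denoiser` and `cost_fn` are placeholders, and the returned constraint score would simply be added to the conditional score of Eq. (5) at each sampling step.

```python
import torch

def constraint_score(denoiser, S, C, sigma, cost_fn, weight=1.0, threshold=True):
    """Constraint gradient score of Eq. (13), with the thresholding of Eq. (18)."""
    S = S.detach().requires_grad_(True)
    cost = cost_fn(denoiser(S, C, sigma))            # L(D(S, C; sigma)), a scalar
    grad = torch.autograd.grad(cost, S)[0]           # d cost / d S, through the denoiser
    score = -weight * grad                           # guide samples toward lower cost
    if threshold:                                    # cap each entry at the ~1/sigma scale
        score = torch.clamp(score * sigma, -1.0, 1.0) / sigma
    return score

def attractor_cost(denoised, target, mask, eps=1e-6):
    """Attractor cost of Eq. (14): mean absolute error at the masked control points."""
    return ((denoised - target).abs() * mask).sum() / (mask.sum() + eps)
```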
3.5. Trajectory Representation Enhancements

Sample clustering While MotionDiffuser learns an entire distribution of possible joint future trajectories from which we can draw an arbitrary number of samples, it is often necessary to extract a more limited number of representative modes from the output distribution. The Interaction Prediction challenge in the Waymo Open Motion Dataset, for instance, computes metrics based on a set of 6 predicted joint futures across modeled agents. Thus, we need to generate a representative set from the larger set of sampled trajectories. To this end, we follow the trajectory aggregation method defined in [48], which performs iterative greedy clustering to maximize the probability of trajectory samples falling within a fixed distance threshold of an output cluster. We refer readers to [48] for details on the clustering algorithm. In the joint agent prediction setting, we modify the clustering algorithm such that, for each joint prediction sample, we maximize the probability that all agent predictions fall within a distance threshold of an output cluster.

PCA latent diffusion Inspired by the recent success of latent diffusion mod
Huang_QuantArt_Quantizing_Image_Style_Transfer_Towards_High_Visual_Fidelity_CVPR_2023
Abstract The mechanism of existing style transfer algorithms is by minimizing a hybrid loss function to push the generated image toward high similarities in both content and style. However, this type of approach cannot guarantee visual fi-delity, i.e., the generated artworks should be indistinguish-able from real ones. In this paper, we devise a new style transfer framework called QuantArt for high visual-fidelity stylization. QuantArt pushes the latent representation of the generated artwork toward the centroids of the real artwork distribution with vector quantization. By fusing the quan-tized and continuous latent representations, QuantArt al-lows flexible control over the generated artworks in terms of content preservation, style similarity, and visual fidelity. Experiments on various style transfer settings show that our QuantArt framework achieves significantly higher visual fi-delity compared with the existing style transfer methods.
1. Introduction

Image style transfer aims at transferring the artistic style of a reference image to a content image, where the output image should have the style (e.g., colors, textures, strokes, and tones) of the reference and the content information of the content image. Great advances [4, 5, 13, 14, 39, 40, 63] have been made in the area of image style transfer, where arbitrary style transfer (AST) has become one of the main research focuses. Given a trained model, AST algorithms [8, 26, 33] can perform style transfer on arbitrary unseen content-style pairs in a zero-shot manner, which enables more practical applications.¹

¹The codes of this paper are available at https://github.com/siyuhuang/QuantArt

Existing AST algorithms, including the statistics-based methods [1, 26, 35, 60] and the patch-based methods [6, 47], deliver remarkable style transfer results by matching the artistic style information of the stylized image and the style reference. However, taking high-fidelity artwork generation as the ultimate goal of image style transfer, all existing methods can still be improved, since there are few mechanisms to guarantee a high artistic fidelity of the stylized image. A few existing works [3, 59] accommodate the adversarial loss [15] into the style transfer framework to enhance the image quality. However, the performance improvement is hindered by the heterogeneous optimization objectives of high image quality and faithful image stylization.

In this work, we introduce visual fidelity as a new evaluation dimension of style transfer. It is formulated as the similarity between the stylized image and the real artwork dataset, and it is orthogonal to the two widely studied evaluation dimensions of style similarity and content preservation. Motivated by the vector-quantized image representation [11, 45, 52], if the latent feature of a generation is closer to one of the cluster centers of the real distribution, it is harder for humans to distinguish it from real images, i.e., it has better visual fidelity. We propose to learn an artwork codebook, i.e., a global dictionary, to store the discrete cluster centers of all artworks. The continuous representations of images are converted to discrete encodings in the artwork codebook via vector quantization, ensuring that a representation is not only close to the given style reference but also close to one of the learned cluster centers of the real distribution.

We further propose a framework called Quantizing Artistic Style Transfer (QuantArt) to achieve flexible control of the three evaluation dimensions mentioned above. QuantArt first extracts content and style features using separate encoders. Next, it applies vector quantization to both content and style features to fetch discrete codes from the learned codebooks. Then, the content and style codes are transferred to the stylized feature with a specially designed feature style transfer module called Style-Guided Attention. Before being fed into the decoder, the stylized feature is quantized again with the artwork codebook, ensuring high visual-fidelity stylization by approaching the cluster centers of the real artwork distribution.
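Concretely, the quantization step just described amounts to a nearest-neighbor lookup into the learned artwork codebook. The following is a minimal sketch of such a vector-quantization layer (with a straight-through gradient estimator, as is common for VQ models); it illustrates the general mechanism rather than the paper's exact module, and the codebook size and feature dimension are placeholders.

```python
import torch
import torch.nn as nn

class ArtworkCodebook(nn.Module):
    """Nearest-neighbor vector quantization against a learned codebook."""
    def __init__(self, num_codes=1024, dim=256):
        super().__init__()
        self.codes = nn.Embedding(num_codes, dim)   # cluster centers of the artwork distribution

    def forward(self, z):
        # z: [B, N, dim] continuous stylized features
        book = self.codes.weight[None].expand(z.size(0), -1, -1)
        idx = torch.cdist(z, book).argmin(dim=-1)   # index of the closest artwork code
        z_q = self.codes(idx)                       # quantized (discrete) representation
        return z + (z_q - z).detach(), idx          # straight-through gradient estimator
```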
By fusing the con-tinuous and quantized stylized features with the content fea-tures before the decoder, QuantArt allows users to arbitrar-ily trade off between the style similarity, visual fidelity, and content reservation of the style transfer results. In the ex-periments, the proposed method significantly increases the visual fidelity of generations in various image style transfer settings including photo-to-art, art-to-art, photo-to-photo, and art-to-photo (see Fig. 1). The contribution of the pro-posed method can be summarized as follows: • We define visual fidelity as a new evaluation dimension of style transfer and propose a high visual-fidelity style transfer algorithm based on vector quantization. • We design a framework based on both discrete and continuous style transfer architectures, which allow users to flexibly control style similarity, content preser-vation, and visual fidelity of the stylization result. • The extensive experiments demonstrate that our method achieves higher visual fidelity and compara-ble style similarity with respect to the state-of-the-art style transfer methods.
Feng_Evolved_Part_Masking_for_Self-Supervised_Learning_CVPR_2023
Abstract Existing Masked Image Modeling methods apply fixed mask patterns to guide the self-supervised training. As those patterns resort to different criteria to mask local re-gions, sticking to a fixed pattern leads to limited vision cues modeling capability. This paper proposes an evolved part-based masking to pursue more general visual cues model-ing in self-supervised learning. Our method is based on an adaptive part partition module, which leverages the vi-sion model being trained to construct a part graph, and partitions parts with graph cut. The accuracy of parti-tioned parts is on par with the capability of the pre-trained model, leading to evolved mask patterns at different training stages. It generates simple patterns at the initial training stage to learn low-level visual cues, which hence evolves to eliminate accurate object parts to reinforce the learn-ing of object semantics and contexts. Our method does not require extra pre-trained models or annotations, and effectively ensures the training efficiency by evolving the training difficulty. Experiment results show that it substan-tially boosts the performance on various tasks including im-age classification, object detection, and semantic segmenta-tion. For example, it outperforms the recent MAE by 0.69% on imageNet-1K classification and 1.61% on ADE20K seg-mentation with the same training epochs.
1. Introduction

Recent years have witnessed a boom in the continuously growing representation learning capability and data demands of deep neural networks like CNNs [21, 37] and vision transformers [14, 27, 33]. To tackle the increasing demand for labelled data, Masked Language Modeling (MLM) [3, 13] has been adopted to train natural language processing models through self-supervised learning on large-scale data. Inspired by the success of MLM, many works propose Masked Image Modeling (MIM) to pre-train vision models on unlabeled images for a series of downstream tasks [2, 18, 38]. MLM masks several words in the input sentences and supervises the network to recover the masked words according to the semantics provided by the remaining words. MIM follows a similar idea: it masks a portion of regions in input images, then trains the vision model to recover the masked contents from the visible regions. As images are not structured representations like sentences, different MIM works have to resort to different criteria to generate mask patterns.

Figure 1. (a), (b), and (c) are three basic mask patterns (grid, random, and block) adopted in existing MIM methods. (d) illustrates the proposed evolved part masking, where the generated mask patterns evolve with the capability of the vision model being trained (early, median, and final training stages).

Mask patterns in existing works can be divided into three categories according to their masked image cues. Some works like MAE [18] and SimMIM [38] do not differentiate visual cues in images, and randomly mask local regions or patches. Another line of work, such as MST [24], proposes to preserve crucial cues in the image to enhance the learning of local context. A third line of work, such as AttnMask [22] and SemMAE [23], proposes to completely mask cues like object regions in images to pose a more challenging pretext task. A more detailed review of existing works is presented in Sec. 2.

Figure 2. Illustration of the effects of different mask patterns on downstream tasks in (a), and on learned parameters in (b). In (a), the random pattern and the block pattern perform best in image classification and semantic segmentation, respectively. (b) shows the mean attention distance across images at different layers of the pre-trained model. The results indicate that different mask patterns are suited to different tasks.

These mask patterns lead to different visual-cue modeling tasks with varied difficulties. To study the impact of mask patterns on self-supervised pre-training, we apply the three basic masking methods in Fig. 1 (a)-(c) to different vision tasks. Fig. 2 (a) explores their effects on two vision tasks. It can be observed that the random pattern and the block pattern perform best in image classification and semantic segmentation, respectively. It is also clear that more training epochs do not boost the performance of the grid pattern and the random pattern in segmentation. Fig. 2 (b) further visualizes the average attention length of neurons at each layer of the pre-trained model. It indicates that neurons trained with the grid mask mostly focus on nearby regions with shorter attention distances.
As longer attention distance benefits the learning of contextual cues, block pattern is more preferred by dense prediction tasks like semantic segmentation. Fig. 2 indicates that, the criteria for generating mask pat-terns largely determines visual cues that the network could learn in the pre-training phase. For instance, masking the complete object regions is more beneficial for learning se-mantics and contexts than grid mask. Masking grid pattern makes the network neuron pay more attention to nearby re-gions, and favors the initial training stage in classification, by posting an easier learning task. Therefore, different mask patterns are suited to different down-stream tasks. This find-ing leads to one fundamental challenge to self-supervised learning: the pre-training procedure have no clue which task it will be applied to. Instead of sticking to a fixed mask pattern, we propose the evolved part masking to pursue more general visual cues modeling capability in self-supervised learning. The evolved part masking is expected to model visual cues at different scales, accelerate the training convergence. To this end, we generate masks by partitioning object parts in train-ing images. An adaptive part partition module is adopted to leverage the vision model being trained to construct a part graph, and partition parts with graph cut. The accuracy of partitioned parts is on par with the capability of vision model, leading to evolved mask patterns at different train-ing stages, as illustrated in Fig. 1(d). In other words, the initial training stage generates simple patterns to learn low-level visual cues, which hence evolves to mask different ob-ject parts to reinforce the learning of object semantics and contexts. The adaptive part partition module generates parts ac-cording to the relationship among image patches inferred by the vision model. The relevance among patches learned by the vision transformers are encoded in the attention map. Our method hence constructs a patch association graph based on attention maps, and tackle the unlabeled part parti-tion as a classic graph cut problem. It implements the graph cut with an efficient Expectation-Maximization (EM) algo-rithm [1, 6, 30]. The generated masks embed extra contex-tual cues among image patches to supervise the training of vision model. The updated model in-turn boosts the accu-racy of part partition. Iteratively conducting mask genera-tion and model training results in a loop that trains vision models on the unlabeled dataset. The mask patterns thus could evolve to present different visual cues learning tasks. We test the effectiveness of the proposed method on three popular MIM architectures, i.e.,MAE [18], BEiT [2] and SimMIM [38]. Our method brings significant perfor-mance enhances for those three architectures, especially on the semantic segmentation task, e.g., boosts the mIoU by2%. When compared with recent self-supervised learn-ing methods, our method achieves comparable performance with fewer pre-training epochs, and superior performance with similar training epochs. To the best of our knowl-edge, this is an original effort on evolved part masking for self-supervised learning. Our method does not require extra pre-trained models or annotations. It effectively ensures the training efficiency, and enhances the generalization ability of trained model by evolving the mask patterns, thus shows potentials to boost the performance of pre-trained vision models.
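As a rough illustration of the adaptive part-partition idea described above, the sketch below builds a patch-affinity graph from a ViT attention map and splits it with a spectral (normalized-cut style) bipartition. The paper's actual module performs an EM-based graph cut into multiple parts, so this is only a simplified stand-in, and the attention tensor shape is an assumption.

```python
import torch

def bipartition_patches(attn):
    """Split image patches into two groups from a ViT attention map.

    attn: [heads, N, N] patch-to-patch attention of one image (CLS token removed).
    Returns a boolean mask of length N assigning each patch to one of two parts.
    """
    W = attn.mean(0)                                   # average heads -> affinity matrix
    W = 0.5 * (W + W.T)                                # symmetrize the patch association graph
    d = W.sum(-1)
    D_inv_sqrt = torch.diag(d.clamp_min(1e-8).rsqrt())
    L = torch.eye(W.size(0), device=W.device) - D_inv_sqrt @ W @ D_inv_sqrt
    evals, evecs = torch.linalg.eigh(L)                # normalized graph Laplacian spectrum
    fiedler = evecs[:, 1]                              # 2nd-smallest eigenvector ~ normalized cut
    return fiedler > fiedler.median()                  # two-way partition of the patches
```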
Chen_Learning_a_Sparse_Transformer_Network_for_Effective_Image_Deraining_CVPR_2023
Abstract Transformers-based methods have achieved significant performance in image deraining as they can model the non-local information which is vital for high-quality im-age reconstruction. In this paper, we find that most ex-isting Transformers usually use all similarities of the to-kens from the query-key pairs for the feature aggrega-tion. However, if the tokens from the query are differ-ent from those of the key, the self-attention values esti-mated from these tokens also involve in feature aggregation, which accordingly interferes with the clear image restora-tion. To overcome this problem, we propose an effective DeRaining network, Sparse Trans former (DRSformer) that can adaptively keep the most useful self-attention values for feature aggregation so that the aggregated features bet-ter facilitate high-quality image reconstruction. Specifi-cally, we develop a learnable top-k selection operator to adaptively retain the most crucial attention scores from the keys for each query for better feature aggregation. Si-multaneously, as the naive feed-forward network in Trans-formers does not model the multi-scale information that is important for latent clear image restoration, we develop an effective mixed-scale feed-forward network to gener-ate better features for image deraining. To learn an en-riched set of hybrid features, which combines local con-text from CNN operators, we equip our model with mix-ture of experts feature compensator to present a coop-eration refinement deraining scheme. Extensive experi-mental results on the commonly used benchmarks demon-strate that the proposed method achieves favorable perfor-mance against state-of-the-art approaches. The source code and trained models are available at https://github. com/cschenxiang/DRSformer .
1. Introduction

Single image deraining is a typical low-level vision problem that has emerged in the last decade. It aims to recover the clean image from the observed rainy one. As the clear image and rain streaks are unknown, it is an ill-posed inverse problem. To solve this problem, early approaches [20, 24, 60] usually impose various priors based on statistical properties of rain streaks and clear images. In fact, these handcrafted priors are not robust to complex and varying rainy scenarios, which limits the deraining performance.

*Corresponding author.

Figure 1. Image deraining results of our method and recent Transformer-based methods [48, 50, 58]: (a) rainy input, (b) Uformer [48], (c) Restormer [58], (d) IDT [50], (e) ours, (f) ground truth. Our method can generate a high-quality image with more accurate detail and texture recovery.

Recently, numerous learning-based methods [4, 19, 23, 36, 52, 53, 56] have resorted to diverse CNN architectures as a preferable choice compared to traditional algorithms. However, the intrinsic characteristics of the convolution operation, i.e., local receptive fields and independence of input content, hinder the model's capacity to eliminate long-range rain degradation perturbations. To alleviate such limitations, Transformers [2, 26, 35, 50] have been applied to image deraining and have achieved decent performance, as they can better model the non-local information for high-quality image reconstruction. Nevertheless, the image details, which are local features of images, are not modeled well by these approaches when restoring clear images, as shown in Figure 1. One main reason is that the self-attention in Transformers does not model the local invariant properties that CNNs do well. Since rain streaks tend to be confused with background details in local regions, recent studies [5, 18, 57] try to mitigate such drawbacks by combining CNN operations and Transformers for boosting image deraining, where the Transformers are based on the standard formulations.

We note that standard Transformers [40] usually use all attention relations based on the query-key pairs for feature aggregation. As the tokens from the key are not always relevant to those from the query, using the self-attention values estimated from these tokens in the feature aggregation interferes with the subsequent latent clear image restoration. The root cause of this deficiency is that the native dense calculation pattern of self-attention amplifies relatively small similarity weights, making the feature interaction and aggregation process susceptible to implicit noise. This also naturally leads to redundant or irrelevant representations still being taken into consideration when modeling global feature dependencies [44, 64]. Thus, these findings motivate us to explore the most useful self-attention values so that we can make full use of the features for better image restoration. To this end, we develop an effective sparse Transformer network for image deraining, named DRSformer.
Specif-ically, the key component of the proposed framework is the sparse Transformer block (STB) which contains a top-ksparse attention (TKSA) that keeps the most useful self-attention values for feature aggregation and a mixed-scale feed-forward network (MSFN) that explores the multi-scale features for better image deraining. First, we design the top-kattention mechanism to replace the vanilla self-attention [40]. The TKSA keeps the largest Ksimilarity scores be-tween the queries and the keys for the self-attention comput-ing, thereby facilitating better feature aggregation. Further-more, the developed MSFN further explores the multi-scale information to better improve the aggregated features. Fi-nally, based on the observation that rain distribution reveals the degradation location and degree, we also introduce mix-ture of experts feature compensator (MEFC) to provide col-laborative refinement for STB. With the above-mentioned designs, our proposed method offers three-fold advantages: (1) it can enjoy natural robustness in terms of less sensi-tivity to useless feature interference, (2) it can not only en-rich the locality but also empower the capability of global feature exploitation, and (3) it can co-explore data (embod-ied in MEFC) and content (embodied in STB) sparsity for achieving deraining performance gains. The main contributions are summarized as follows: • We propose a sparse Transformer architecture to help generate high-quality deraining results with more ac-curate detail and texture recovery. • We develop a simple yet effective learnable top-kse-lection operator to adaptively maintain the most useful self-attention values for better feature aggregation.• We design an effective feed-forward network based on mixed-scale fusion strategy to explore multi-scale rep-resentations for better facilitating image deraining. • Extensive experimental results on various benchmarks demonstrate that our method achieves favorable per-formance against state-of-the-art (SOTA) approaches.
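The top-k selection at the heart of TKSA can be illustrated in a few lines: compute the usual query-key similarities, keep only the K largest entries per query, and mask out the rest before the softmax. This is a simplified sketch of the general mechanism written from the description above (in the paper the selection is learnable and adaptive, whereas here K is fixed); shapes and the masking value are assumptions.

```python
import torch
import torch.nn.functional as F

def topk_attention(q, k, v, topk=8):
    """Self-attention that keeps only the top-k similarity scores per query.

    q, k, v: [B, N, d]. Entries outside the top-k are masked before the softmax,
    so irrelevant tokens do not take part in feature aggregation.
    """
    scale = q.size(-1) ** -0.5
    sim = q @ k.transpose(-2, -1) * scale            # [B, N, N] dense similarities
    kth = sim.topk(topk, dim=-1).values[..., -1:]    # k-th largest score per query
    sim = sim.masked_fill(sim < kth, float("-inf"))  # drop everything below it
    return F.softmax(sim, dim=-1) @ v                # aggregate only over retained keys
```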
Hao_Learning_Attention_As_Disentangler_for_Compositional_Zero-Shot_Learning_CVPR_2023
Abstract Compositional zero-shot learning (CZSL) aims at learn-ing visual concepts ( i.e., attributes and objects) from seen compositions and combining concept knowledge into un-seen compositions. The key to CZSL is learning the disen-tanglement of the attribute-object composition. To this end, we propose to exploit cross-attentions as compositional dis-entanglers to learn disentangled concept embeddings. For example, if we want to recognize an unseen composition “yellow flower”, we can learn the attribute concept “yellow” and object concept “flower” from different yellow objects and different flowers respectively. To further constrain the disentanglers to learn the concept of interest, we employ a regularization at the attention level. Specifically, we adapt the earth mover’s distance (EMD) as a feature similarity met-ric in the cross-attention module. Moreover, benefiting from concept disentanglement, we improve the inference process and tune the prediction score by combining multiple concept probabilities. Comprehensive experiments on three CZSL benchmark datasets demonstrate that our method signifi-cantly outperforms previous works in both closed-and open-world settings, establishing a new state-of-the-art. Project page: https://haoosz.github.io/ade-czsl/
1. Introduction

Suppose we have never seen white bears (i.e., polar bears) before. Can we picture what one would look like? This is not difficult, because we have seen many white animals in daily life (e.g., white dogs and white rabbits) and different bears with various visual attributes in the zoo (e.g., brown bears and black bears). Humans have no difficulty in disentangling "white" and "bear" from seen instances and combining them into the unseen composition. Inspired by this property of human intelligence, researchers attempt to make machines learn compositions of concepts as well. Compositional zero-shot learning (CZSL) is a specific problem studying visual compositionality, aiming to learn visual concepts from seen compositions of attributes and objects and generalize concept knowledge to unseen compositions.

*Corresponding author

Figure 1. Motivation illustration. Given images from seen attribute-object compositions, humans can disentangle the attribute "yellow" from "yellow bird" and "yellow pear", and the object "flower" from "purple flower" and "red flower". After learning the visual properties of the concepts "yellow" and "flower", humans can then recognize images from the unseen composition "yellow flower".

Learning attribute-object compositions demands prior knowledge about attributes and objects. However, visual concepts of attributes and objects never appear alone in a natural image. To learn exclusive concepts for compositionality learning, we need to disentangle the attribute concept and the object concept. As illustrated in Fig. 1, if we want to recognize an image of "yellow flower", it is necessary to learn the "yellow" concept and the "flower" concept, i.e., disentangle visual concepts, from images of seen compositions. Previous works [22, 24, 25, 28–30, 36, 46] tackle CZSL by composing attribute and object word embeddings and projecting word and visual embeddings into a joint space. They fail to disentangle visual concepts. Recently, some works [21, 40, 41, 50] consider visual disentanglement but still have limitations despite their good performance. SCEN [21] learns concept-constant samples contrastively without constructing concept embedding prototypes to avoid learning irrelevant concepts shared by positive samples. IVR [50] disentangles visual features into ideal concept-invariant domains; this ideal domain generalization setting requires a small discrepancy of attribute and object sets and would degenerate on vaster and more complex concepts. ProtoProp [40] and OADis [41] learn local attribute and object prototypes from spatial features on convolutional feature maps. However, spatial disentanglement is sometimes infeasible because attribute and object concepts are highly entangled in spatial features. Taking an image of "yellow flower" as an example, the spatial positions related to the attribute "yellow" and the object "flower" completely overlap, which hinders effective attribute-object disentanglement.

To overcome the above limitations, we propose a simple visual disentangling framework exploiting Attention as DisEntangler (ADE) on top of vision transformers [6].
We notice that vision transformers (ViT) have access to more sub-space global information across multi-head attentions than CNNs [38]. Therefore, with the expressivity of different subspace representations, token attentions of ViT may pro-vide a more effective way for disentangling visual features, compared to using traditional spatial attentions across local positions on convolutional features [40, 41]. Specifically, it is difficult to disentangle the attribute-object composition “yellow flower” by spatial positions, but it is possible for ViT multi-head attentions to project attribute concept “yel-low” and object concept “flower” onto different subspaces. Inspired by this property, we propose to learn cross-attention between two inputs that share the same concept, e.g., “yel-low bird” and “yellow pear” share the same attribute concept “yellow”. In this way, we can derive attribute-and object-exclusive visual representations by cross-attention disentan-glement. To ensure that the concept disentangler is exclusive to the specific concept, we also need to constrain the disen-tanglers to learn the concept of interest instead of the other concept. For example, given attribute-sharing images, the attribute attention should output similar attribute-exclusive features while the object attention should not. To achieve this goal, we apply a regularization term adapted from the earth mover’s distance (EMD) [13] at the attention level. This reg-ularization term forces cross-attention to learn the concept of interest by leveraging the feature similarity captured from all tokens. Mancini et al. [24] propose an open-world evaluation setting, which is neglected by most previous works. We con-sider both closed-world and the open-world settings in our experiments, demonstrating that our method is coherently efficient in both settings. The contributions of this paper are summarized below: •We propose a new CZSL approach, named ADE, us-ing cross-attentions to disentangle attribute-and object-exclusive features from paired concept-sharing inputs. •We force attention disentanglers to learn the concept of interest with a regularization term adapted from EMD, ensuring valid attribute-object disentanglement. •We comprehensively evaluate our method in both closed-world and open-world settings on three CZSL datasets, achieving consistent state-of-the-art.2. Related work Visual attribute has been widely studied to understand how visual properties can be learned from objects. The pio-neering work by Ferrari and Zisserman [10] learned visual attributes using a probabilistic generative model. The suc-cessive work by Lampert et al. [20] used visual attributes to detect unseen objects with an attribute-based multi-label classification. Similarly, Patterson et al. [35] proposed Eco-nomic Labeling Algorithm (ELA) to discover multi-label at-tributes for objects. Different from multi-label classification, other works [4,8,15,23] learned attribute-object relationship to generalize attribute feature across all object categories based on probabilistic models. Visual attributes also ben-efit downstream tasks, e.g., object recognition [7, 16, 31], action recognition [1, 9, 26], image captioning [19, 33], and semi-supervised learning [42]. Compositional zero-shot learning (CZSL) is a special case of zero-shot learning (ZSL) [34, 39, 43, 47, 48], aims at recognizing unseen attribute-object compositions learn-ing from seen compositions. Misra et al. 
[28] first termed and studied CZSL by projecting composed primitives and visual features into a joint embedding space. Nagarajan et al. [30] formulated attributes as matrix operators applied to object vectors. Purushwalkam et al. [36] introduced a task-driven modular architecture to learn unseen compositions by re-weighting a set of sub-tasks. Wei et al. [46] generated attribute-object compositions with a GAN [11] to match visual features. Li et al. [22] proposed a symmetry principle of attribute-object transformation under the supervision of group axioms. Naeem et al. [29] and Mancini et al. [25] used graph convolutional networks to extract attribute-object representations. Recently, some works have shifted their interest from word composing to visual disentanglement. Atzmon et al. [2] solved CZSL from a causal perspective to learn disentangled representations. Ruis et al. [40] proposed to learn prototypical representations of objects and attributes. Li et al. [21] disentangled visual features into a Siamese contrastive space and entangled them with a generative model. Saini et al. [41] extracted visual similarity from spatial features to disentangle attributes and objects. Zhang et al. [50] treated CZSL as a domain generalization task, learning attribute-and object-invariant domains. A more realistic open-world CZSL setting was studied in [17, 24, 25], which considered all possible compositions in testing. Very recently, an inspiring work by Nayak et al. [32] introduced compositional soft prompts to CLIP [37] to tackle the CZSL problem.

Attention mechanism has been well studied by non-local neural networks [45] in computer vision and transformers [44] in machine translation. Dosovitskiy et al. [6] adapted the transformer architecture to the computer vision field, proving its comparable efficiency over traditional CNNs. Inspired by the multi-head self-attention implemented by transformers, our work exploits efficient attention as a disentangler for CZSL.

Figure 2. Method overview. Left (our framework ADE): Given one target image of "red bus", we sample two auxiliary images of the same attribute ("red wall") and of the same object ("blue bus"). We feed the three images into a frozen ViT initialized with DINO [3]. We then input all encoded tokens (i.e., [CLS] and patch tokens) to three attention modules: (1) attribute cross-attention taking paired attribute-sharing tokens as inputs; (2) object cross-attention taking paired object-sharing tokens as inputs; (3) composition self-attention taking tokens of the single target image as input. We then project the [CLS] tokens of the attention outputs with three MLP embedders π_a, π_c, and π_o. We finally compute cro
Boutros_CR-FIQA_Face_Image_Quality_Assessment_by_Learning_Sample_Relative_Classifiability_CVPR_2023
Abstract Face image quality assessment (FIQA) estimates the util-ity of the captured image in achieving reliable and accurate recognition performance. This work proposes a novel FIQA method, CR-FIQA, that estimates the face image quality of a sample by learning to predict its relative classifiability. This classifiability is measured based on the allocation of the training sample feature representation in angular space with respect to its class center and the nearest negative class center. We experimentally illustrate the correlation between the face image quality and the sample relative classifiability. As such property is only observable for the training dataset, we propose to learn this property by probing internal net-work observations during the training process and utilizing it to predict the quality of unseen samples. Through exten-sive evaluation experiments on eight benchmarks and four face recognition models, we demonstrate the superiority of our proposed CR-FIQA over state-of-the-art (SOTA) FIQA algorithms.1
1. Introduction Face image utility indicates the utility (value) of an im-age to face recognition (FR) algorithms [1, 19]. This util-ity is measured with a scalar, namely the face image qual-ity (FIQ) score, following the definition in ISO/IEC 2382-37 [20] and the FR Vendor Test (FRVT) for FIQA [10]. As FIQA measures the face utility to FR algorithm, it does not necessary reflects, and does not aim at measuring, the perceived image quality, e.g. a profile face image can be of high perceived quality but of low utility to FR algo-rithm [35]. Assessing this perceived image quality has been addressed in the literature by general image quality assess-ment (IQA) methods [26, 29, 30] and is different than as-sessing the utility of an the image for FR . This is reflected by FIQA methods [28, 32, 36] significantly outperforming IQA methods [26, 29, 30] in measuring the utility [19] of face images in FR, as demonstrated in [8, 28, 36]. 1https://github.com/fdbtrs/CR-FIQASOTA FIQA methods focused either on creating con-cepts to label the training data with FIQ scores and then learn a regression problem [14, 15, 32], or on developing a link between face embedding properties under certain scenarios and the FIQ [28, 34, 36]. Generally, the sec-ond approach led to better FIQA performances with most works mentioning the error-prone labeling of the ground truth quality in the first research direction as a possible rea-son [28, 36]. However, in the second category, transferring the information in network embeddings into an FIQ score is not a learnable process, but rather a form of statistical analysis, which might not be optimal. This paper proposes a novel learning paradigm to assess FIQ, namely the CR-FIQA. Our concept is based on learn-ing to predict the classifiability of FR training samples by probing internal network observations that point to the rel-ative proximity of these samples to their class centers and negative class centers. This regression is learned simulta-neously with a conventional FR training process that min-imizes the distance between the training samples and their class centers. Linking the properties that cause high/low classifiability of a training sample to the properties leading to high/low FIQ, we can use our CR-FIQA to predict the FIQ of any given sample. We empirically prove the theo-rized link between classifiability (Section 3.3) and FIQ and conduct thorough ablation studies on key aspects of our CR-FIQA design (Section 5). The proposed CR-FIQA is eval-uated on eight benchmarks along with SOTA FIQAs. The reported results on four FR models demonstrate the supe-riority of our proposed CR-FIQA over SOTA methods and the stability of its performance across different FR models. An overview of the proposed CR-FIQA is presented in Fig-ure 1 and will be clarified in detail in this paper.
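As a rough illustration of the classifiability signal described above, the sketch below compares a training sample's angular proximity to its own class center against its proximity to the nearest negative class center. It is written from the description in the text, not from the authors' released code; the function names and the exact form of the ratio are assumptions.

```python
import torch
import torch.nn.functional as F

def relative_classifiability(feat, centers, label):
    """Proxy for sample classifiability in angular space.

    feat:    [d]    feature of one training sample
    centers: [C, d] class-center weights (e.g., margin-softmax classifier weights)
    label:   int    ground-truth class index
    Returns the cosine similarity to the genuine class center relative to the
    similarity to the nearest negative class center (higher = easier sample).
    """
    cos = F.cosine_similarity(feat[None], centers, dim=-1)   # [C]
    ccs = cos[label]                                         # genuine-center similarity
    neg = torch.cat([cos[:label], cos[label + 1:]])          # drop the genuine class
    nnccs = neg.max()                                        # nearest negative class center
    return ccs / (nnccs + 1.0 + 1e-9)                        # shifted to keep the denominator positive
```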
Fu_Neural_Transformation_Fields_for_Arbitrary-Styled_Font_Generation_CVPR_2023
Abstract Few-shot font generation (FFG), aiming at generating font images with a few samples, is an emerging topic in re-cent years due to the academic and commercial values. Typ-ically, the FFG approaches follow the style-content disen-tanglement paradigm, which transfers the target font styles to characters by combining the content representations of source characters and the style codes of reference samples. Most existing methods attempt to increase font generation ability via exploring powerful style representations, which may be a sub-optimal solution for the FFG task due to the lack of modeling spatial transformation in transferring font styles. In this paper, we model font generation as a continu-ous transformation process from the source character image to the target font image via the creation and dissipation of font pixels, and embed the corresponding transformations into a neural transformation field. With the estimated trans-formation path, the neural transformation field generates a set of intermediate transformation results via the sampling process, and a font rendering formula is developed to ac-cumulate them into the target font image. Extensive exper-iments show that our method achieves state-of-the-art per-formance on few-shot font generation task, which demon-strates the effectiveness of our proposed model. Our imple-mentation is available at: https://github.com/fubinfb/NTF .
1. Introduction

Generating a new stylized font with a few reference samples, referred to as the few-shot font generation (FFG) task, has received considerable attention due to its academic, commercial, and artistic value, especially for glyph-rich scripts such as Chinese and Korean.

Corresponding author: Yu Qiao

Figure 1. The motivation of this paper: (a) The differences between font styles mainly come from the shape deformation and transformation of the source font, such as the thickness of strokes and the writing pattern of glyphs. (b) We regard font generation as a continuous transformation process from the source font to the target font via the creation and dissipation of font pixels.

In recent years, the style-content disentanglement paradigm has become the most popular solution for the FFG task, which decouples the stylized font images into font-specific style codes and character-specific content features. Therefore, the target stylized font is generated from a carefully designed decoder via the combination of the style codes from the reference samples and the content embeddings from the standard glyphs. Based on the style representations, existing approaches can be roughly divided into two categories. Early approaches mainly model font style information as global statistical features, and thus utilize universal style representations to embed such information. Witnessing the fine-grained structure variations and local correlations (such as stroke and component) in font styles, recent approaches further develop component-wise or fine-grained localized style representations to boost FFG performance. However, as shown in Fig. 1, the differences between font styles mainly come from shape deformation and transformation of the source glyph. Based on this observation, previous approaches may be sub-optimal solutions for the FFG task due to the lack of modeling spatial transformation in the font generation process.

Figure 2. (a) The methodology of NeRF. (b) We embed the desired transformations of font generation into the neural transformation field. The style estimator E predicts the locations of each font style, and the font generation process can be viewed as a transformation process of font pixels from the original point to this location. (c) As each NeRF only corresponds to a specific scene, it is impracticable for the FFG task. Thus we generalize our NTF to model the transformations for all characters by introducing the structure embedding (extracted by a structure encoder E_c) of characters. (d) Finally, considering the localized characteristic of font style, we further generalize our NTF into the localized style representation.

Inspired by recent advances in Neural Radiance Fields (NeRF) [23] for 3D view synthesis, we attempt to embed the desired spatial transformations in a neural transformation field, so that the font generation process can be reformulated as the accumulation of a set of intermediate transformation results along a specific path. The methodology of NeRF is presented in Fig. 2 (a). NeRF constructs a neural radiance field to represent a specific 3D scene as a 5D function, whose inputs are the location and view direction and whose outputs are the emitted color together with the volume density.
An MLP network is utilized to approximate this function, where the scene information is embedded into the parameters via the optimization process. To generate a novel view, the color of each pixel is rendered along the ray passing through the scene via the volume rendering technique [22].

Motivated by the above method, instead of directly predicting pixel-level deformation offsets, we model font generation as a continuous transformation process via the creation intensity $\varphi$ and dissipation rate of font pixels, and embed such transformations into a neural transformation field. To make the description clear, we use the universal-representation-based font generation to introduce our method. As shown in Fig. 2 (c), the neural transformation field (NTF) is constructed to model the font transformation process based on the structure embeddings of source characters. Each location in the NTF represents a specific structure-related transformation, and the path from the original point to this location corresponds to the transformation process from the source font to the target font. Each font style has a specific location relating to the desired transformations for generating font images in the NTF, and we utilize an estimator to estimate this location. With the estimated location and the corresponding transformation path, the NTF generates a set of intermediate transformations via the sampling process, and a font rendering formula is developed to accumulate them into the target font image. Since font styles contain many fine-grained structures and local correlations, the localized style representation shows significant advantages over the universal style representation. Therefore, as shown in Fig. 2 (d), we generalize our NTF to localized style representations and conduct extensive experiments to evaluate our model. Experimental results show that our model achieves new state-of-the-art performance in few-shot font generation tasks, both on seen fonts with unseen contents and on unseen fonts with unseen contents, which demonstrates the superior generation performance of our proposed method.

In summary, our contribution is threefold in this paper:

1). We regard font generation as a continuous transformation process via the creation and dissipation of font pixels along the transformation path, and embed such transformations into the neural transformation field (NTF).

2). A differentiable font rendering procedure is developed to accumulate the intermediate transformations into the target font image.

3). Experimental results show that our method outperforms the state-of-the-art methods in the few-shot font generation task, which demonstrates the effectiveness of our proposed method.
Fu_Learning_Semantic_Relationship_Among_Instances_for_Image-Text_Matching_CVPR_2023
Abstract Image-text matching, a bridge connecting image and language, is an important task, which generally learns a holistic cross-modal embedding to achieve a high-quality semantic alignment between the two modalities. How-ever, previous studies only focus on capturing fragment-level relation within a sample from a particular modal-ity, e.g., salient regions in an image or text words in a sentence, where they usually pay less attention to captur-ing instance-level interactions among samples and modal-ities, e.g., multiple images and texts. In this paper, we argue that sample relations could help learn subtle dif-ferences for hard negative instances, and thus transfer shared knowledge for infrequent samples should be promis-ing in obtaining better holistic embeddings. Therefore, we propose a novel hierarchical relation modeling frame-work (HREM), which explicitly capture both fragment-and instance-level relations to learn discriminative and ro-bust cross-modal embeddings. Extensive experiments on Flickr30K and MS-COCO show our proposed method out-performs the state-of-the-art ones by 4%-10% in terms of rSum. Our code is available at https://github.com/ CrossmodalGroup/HREM .
1. Introduction Image-text matching bridges the semantic gap between visual and textual modalities and is a fundamental task for various multi-modal learning applications, such as cross-modal retrieval [22] and text-to-image synthesis [17]. The critical challenge is accurately and efficiently learning cross-modal embeddings and their similarities for images and texts, to achieve a high-quality semantic alignment. In general, existing image-text matching methods can be classified into two paradigms. The first embedding-based matching [4, 10, 20, 35] separately encodes the whole im-ages and texts into a holistic embedding space, then globally *Corresponding author. (c) Semantic scarcity (b) Semantic ambiguityA surfer is holding ona surfboard to stare out the wave. A surfer is squating ona surfboard to break outthe wave.A surfer is riding on a surfboard to wipe out the wave.A man is playing the ice hockey with sticks to hitaball..A man is playing the cricket with bat to hitaball.A man is playing the polo with sticks to hitaball.Holistic embedding space with our frameworkImage -text pairs Surfer with surfboard Man with ball Hockey Cricket PoloStage -one Stage -two Stage -threeVisual local features AggregationFragment -level Interaction Instance -level Interaction (a)Pipeline of embedding -based methodLoss Previous workOur workTextual local featuresAggregationFragment -level InteractionLoss with our frameworkFigure 1. Illustration of our motivation. Sample relation modeling improves the holistic representation of cross-modal learning. Col-ors and shapes indicate different modalities and image-text pairs, respectively. Orange elements mark effective interactions: (a) The pipeline of the previous and our work, we add the cross-modal re-lation interaction between samples. (b) For the identical theme of “surfer with surfboard”, specific behaviors exist subtle differences, like “hold/squat/ride on the surfboard” and “stare/break/wipe out the wave”. Our method distinguishes these hard negative samples from semantic ambiguities. (c) For similar themes under “man play a ball”, corresponding behaviors usually are semantic simi-lar, like ”play the hockey/cricket/polo” all need to “hit the ball” with “sticks/bats”. Our method improves learning embeddings on these infrequent samples with semantic scarcities for themselves. measures the semantic similarity of the two modalities. The second score-based matching [3,7,19,27] applies the cross-modal interaction between visual and textual local features, then learns a cumulative similarity score. Recently, embedding-based methods have served as the mainstream solution owing to both accuracy and efficiency in image-text matching, which contains two steps as shown in Fig. 1 (a): (1) Capturing the intra-modal relation be-tween visual fragments ( e.g., regional features) or textual This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 15159 fragments ( e.g., word features) independently, then enhanc-ing the semantic representation of local features. (2) Ag-gregating relation-enhanced local features of two modalities into the holistic embedding space. For the first step, most use the graph neural network [5, 20, 21] or attention mod-ule [35, 45, 46] to capture semantic relations and enhance the local features of two modalities, respectively. 
Some work further exploits the spatial relation [45] for visual re-gions or grammatical relation [28] for textual words. For the second step, they design pooling functions [4] or se-quence encoders [21] to aggregate local features and get holistic embeddings. Existing embedding-based methods follow the principle of separately encoding images and texts by two branches as Fig. 1 (a). In each branch, these meth-ods only focus on the fragment-level relation modeling and local features interaction within one sample, e.g., the region features inside one image (or the word features inside one text). In this way, the instance-level relation modeling and global embeddings interaction among different samples and modalities, e.g., holistic embeddings of multiple images and texts, are entirely overlooked. Consequently, existing embedding-based methods di-rectly use global embeddings to compute loss function, e.g., hard negative triplet loss [10] on random mini-batch, which is insufficient to exploit the manifold structure of holistic embedding space [39]. First, they fail to learn subtle seman-tic discrepancies among different samples ( e.g., similar be-haviors with an identical theme as shown in Fig. 1 (b)), then can not distinguish hard negative samples with semantic ambiguities because of the heterogeneity of visual and tex-tual semantics. Second, they are unable to transfer shared knowledge from diverse samples ( e.g., different samples that contain similar behaviors with similar themes as shown in Fig. 1 (c)), then can not effectively learn on these infre-quent samples with semantic scarcities. Therefore, it is ex-pected that a framework should precisely capture the sample relationship to learn better cross-modal embeddings, while does not break the principle of embedding-based methods, i.e., independently encodes embeddings without modality interaction at the inference stage. In doing so, we propose a Hierarchical RElation Modeling framework (HREM) that, for the first time to our knowledge, explicitly captures both fragment-level and instance-level relations to learn holistic embeddings jointly. Therefore, HREM learns not only contextual semantics among intra-modal fragments to enhance local features, but also the associated semantics among inter-modal instances to distinguish hard negative samples and improve learning on infrequent samples. As illustrated in Fig. 1 (a) and Fig. 2, we propose a novel step ( i.e., the “stage-three”) to exactly capture the semantic relationship of cross-modal samples. First, we propose a novel cross-embedding as-sociation graph, which explicitly identifies the connectionrelation and learns the relevance relation between batch samples with fragment-level semantic matching. Next, we propose two relation interaction mechanisms, which ex-plore inter-modal and intra-modal relations synchronously or asynchronously with our improved attention modules to obtain enhanced embeddings. Consequently, HREM only needs to capture the instance-level relation for training, then encode multi-modal embeddings independently at the in-ference stage, to achieve high accuracy and efficiency for image-text matching. To summarize, the major contributions are as follows: (1) We propose a hierarchical relation modeling framework (HREM) for image-text matching. To the best of our knowl-edge, this is the first work that explicitly captures both fragment-level relations within modality and instance-level relations across modalities. 
(2) We propose a novel cross-embedding association graph by identifying the connection relation and learning the relevance relation. (3) We propose two relation interaction mechanisms to learn the relation-enhanced embeddings. (4) HREM outperforms all state-of-the-art methods for image-text retrieval on two widely used benchmarks, Flickr30K and MS-COCO, by 4%-10% rSum.
Feng_AeDet_Azimuth-Invariant_Multi-View_3D_Object_Detection_CVPR_2023
Abstract Recent LSS-based multi-view 3D object detection has made tremendous progress, by processing the features in Brid-Eye-View (BEV) via the convolutional detector. How-ever, the typical convolution ignores the radial symmetry of the BEV features and increases the difficulty of the de-tector optimization. To preserve the inherent property of the BEV features and ease the optimization, we propose an azimuth-equivariant convolution (AeConv) and an azimuth-equivariant anchor. The sampling grid of AeConv is always in the radial direction, thus it can learn azimuth-invariant BEV features. The proposed anchor enables the detection head to learn predicting azimuth-irrelevant targets. In ad-dition, we introduce a camera-decoupled virtual depth to unify the depth prediction for the images with different cam-era intrinsic parameters. The resultant detector is dubbed Azimuth-equivariant Detector (AeDet). Extensive experi-ments are conducted on nuScenes, and AeDet achieves a 62.0% NDS, surpassing the recent multi-view 3D object de-tectors such as PETRv2 and BEVDepth by a large mar-gin. Project page: https://fcjian.github.io/ aedet .
1. Introduction In the field of autonomous driving, multi-view 3D ob-ject detection has been one of the most widely researched problems and received a lot of attention due to its low as-sembly cost and high efficiency. In the recent literature, such vision-based 3D object detector has made tremendous progress, especially for the Lift-Splat-Shoot (LSS) based methods [11,15]. They first transfer the image features from the image-view to Bird-Eye-View (BEV), and then process the BEV features via the convolutional backbone and detec-tion head similar to 2D object detection [6, 7, 28]. However, the BEV features in multi-view 3D object de-tection are significantly different from the image features in 2D object detection: (1) the BEV features from the same camera are naturally endowed with radial symmetry; (2) the cameras have different orientations in BEV , and the BEV FeaturePredictions Figure 1. Illustration of the BEV features and predictions from the typical LSS-based detector BEVDepth [15] (top row) and our AeDet (bottom row). The BEV features are the outputs of the BEV backbone. Assume the six cameras capture the same imag-ing, and the detector takes the same imaging as the input of the six views. BEVDepth generates different features and predictions for the same bus in different azimuths, while AeDet yields almost the same feature and prediction for the same bus in different azimuths. features from the different cameras are also approximately with radial symmetry. Simply using the typical 2D back-bone and detection head to perform BEV perception (such as [11, 15]) ignores the inherent property of the BEV fea-tures and suffers from two limitations discussed as follows: First, the BEV representation of the same imaging in different azimuths is inconsistent. The typical convolu-tion shares the kernel weights and adopts the same regular sampling grid at each location of the features. Such de-sign may destroy the radial symmetry of the BEV features, and thus is unfriendly to representation learning. To be spe-cific, assume the six cameras capture the same imaging of the object, and the detector takes the same imaging as the input of the six views. These images are then transferred to be the rotation-equivalent features in different azimuths from BEV . However, the sampling grid of the convolution is This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 21580 translation-invariant and samples the inconsistent BEV fea-tures of the object in different azimuths (see Figure 3a for detailed demonstration). Consequently, as demonstrated in the ‘BEV feature’ column in Figure 1, the convolution of BEVDepth learns different features in different azimuths, increasing the difficulty of the representation learning. Second, the prediction targets of the same imaging in different azimuths are inconsistent. The typical detec-tion head predicts the object orientation and velocity along the Cartesian coordinates, which requires the detection head to predict different targets for the same imaging in differ-ent azimuths. Concretely, assume the different-view cam-eras capture the same imaging of the object at different mo-ments. After mapping the imaging from the image to BEV , the object would have different orientations and velocities along the Cartesian coordinates in different azimuths (see Figure 3b for detailed demonstration). 
As a result, the de-tection head is required to predict different targets even for the same imaging in different azimuths, and inevitably in-creases the difficulty of the predictions. To address the two limitations, we propose an Azimuth-equivariant Detector (AeDet) that aims to perform an azimuth-invariant BEV perception by modeling the prop-erty of radial symmetry to the network: (1) Azimuth-equivariant convolution. In contrast to the typical convo-lution that uses the same regular sampling grid at each loca-tion, we design an Azimuth-equivariant Convolution (Ae-Conv) to rotate the sampling grid according to the azimuth at each location. AeConv enables the sampling grid equiv-ariant to the azimuth and always in the radial direction of the camera. This allows the convolution to preserve the ra-dial symmetry of the BEV features and unify the represen-tation in different azimuths. (2) Azimuth-equivariant an-chor. To unify the prediction targets in different azimuths, we propose an azimuth-equivariant anchor. Specifically, different from the typical anchor (anchor point or anchor box) defined along the Cartesian coordinates, the azimuth-equivariant anchor is defined along the radial direction and equivariant to the azimuth. We predict both the bounding box and velocity along the new anchor and its orthogo-nal directions, yielding the same prediction target for the same imaging of the object in different azimuths. Thus the azimuth-equivariant anchor enables the detection head to learn predicting azimuth-irrelevant targets. Notably, Ae-Conv and the azimuth-equivariant anchor can work collabo-ratively to improve the consistency between the representa-tion learning and predictions, as shown in the ‘BEV feature’ and ‘Predictions’ columns of AeDet in Figure 1. In addition, we introduce a camera-decoupled virtual depth to improve the depth prediction, and ease the opti-mization of the depth network. In specific, we decouple the camera’s intrinsic parameters from the depth network, en-abling the depth network to model the relationship betweenthe image feature and the virtual depth. In this way, the depth network only needs to learn to predict a universal vir-tual depth, regardless of the intrinsic parameters of different cameras. Finally, we map the virtual depth to the real depth according to the classic camera model. To summarize, we make the following contributions: (1) We design an azimuth-equivariant convolution to unify the representation learning in different azimuths, and extract the azimuth-invariant BEV features. (2) We propose a new azimuth-equivariant anchor to redefine the anchor along the radial direction and unify the prediction targets in differ-ent azimuths. (3) We introduce a camera-decoupled virtual depth to unify the depth prediction for the images captured by different cameras. (4) We conducted extensive experi-ments on nuScenes [1], where our AeDet significantly im-proves the accuracy of the object orientation (by 5.2%) and velocity (by 6.6%). AeDet achieves 62.0% NDS on the nuScenes testset, surpassing the recent multi-view object detectors such as BEVDepth [15] by a large margin.
Black_BEDLAM_A_Synthetic_Dataset_of_Bodies_Exhibiting_Detailed_Lifelike_Animated_CVPR_2023
Abstract We show, for the first time, that neural networks trained only on synthetic data achieve state-of-the-art accuracy on the problem of 3D human pose and shape (HPS) estima-tion from real images. Previous synthetic datasets have been small, unrealistic, or lacked realistic clothing. Achiev-ing sufficient realism is non-trivial and we show how to do this for full bodies in motion. Specifically, our BED-LAM dataset contains monocular RGB videos with ground-truth 3D bodies in SMPL-X format. It includes a diver-sity of body shapes, motions, skin tones, hair, and cloth-ing. The clothing is realistically simulated on the moving bodies using commercial clothing physics simulation. We render varying numbers of people in realistic scenes with varied lighting and camera motions. We then train vari-ous HPS regressors using BEDLAM and achieve state-of-the-art accuracy on real-image benchmarks despite train-ing with synthetic data. We use BEDLAM to gain insights *The authors contributed equally and are listed alphabetically. †This work was performed when JL was at MPI-IS.into what model design choices are important for accu-racy. With good synthetic training data, we find that a basic method like HMR approaches the accuracy of the current SOTA method (CLIFF). BEDLAM is useful for a variety of tasks and all images, ground truth bodies, 3D clothing, support code, and more are available for research purposes. Additionally, we provide detailed information about our synthetic data generation pipeline, enabling oth-ers to generate their own datasets. See the project page: https://bedlam.is.tue.mpg.de/ .
1. Introduction The estimation of 3D human pose and shape (HPS) from images has progressed rapidly since the introduc-tion of HMR [32], which uses a neural network to regress SMPL [45] pose and shape parameters from an image. A steady stream of new methods have improved the accuracy of the estimated 3D bodies [21, 33, 35, 38, 41, 73, 93]. The progress, however, entangles two things: improvements to the architecture and improvements to the training data. This makes it difficult to know which matters most. To answer This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 8726 this, we need a dataset with real ground truth 3D bodies and not simply 2D joint locations or pseudo ground truth. To that end, we introduce a new, realistic, synthetic dataset called BEDLAM (Bodies Exhibiting Detailed Lifelike An-imated Motion) and use it to analyze the current state of the art (SOTA). Fig. 1 shows example images from BEDLAM along with the ground-truth SMPL-X [57] bodies. Theoretically, synthetic data has many benefits. The ground truth is “perfect” by construction, compared with existing image datasets. We can ensure diversity of the training data across skin tones, body shapes, ages, etc., so that HPS methods are inclusive. The data can also be easily repurposed to new cameras, scenes, and sensors. Conse-quently, there have been many attempts to create synthetic datasets to train HPS methods. While prior work has shown synthetic data is useful, it has not been sufficient so far. This is likely due to the lack of realism and diversity in existing synthetic datasets. In contrast, BEDLAM provides the realism necessary to test whether “synthetic data is all you need”. Using BED-LAM, we evaluate different network architectures, back-bones, and training data and find that training only using synthetic data produces methods that generalize to real im-age benchmarks, obtaining SOTA accuracy on both 3D hu-man pose and 3D body shape estimation. Surprisingly, we find that even basic methods like HMR [32] achieve SOTA performance on real images when trained on BEDLAM. Dataset. BEDLAM contains monocular RGB videos to-gether with ground truth 3D bodies in SMPL-X format. To create diverse data, we use 271 body shapes (109 men and 162 women), with 100 skin textures from Meshcapade [3] covering a wide range of skin tones. In contrast to previous work, we add 27 different types of hair (Reallusion [1]) to the head of SMPL-X. To dress the body, we hired a pro-fessional 3D clothing designer to make 111 outfits, which we drape and simulate on the body using CLO3D [2]. We also texture the clothing using 1691 artist-designed textures. The bodies are animated using 2311 motions sampled from AMASS [47]. Because AMASS does not include hand mo-tions, we replace the static hands with hand motions sam-pled from the GRAB dataset [74]. We render single people as well as groups of people (varying from 3-10) moving in a variety of 3D scenes (8) and HDRI panoramas (95). We use a simple method to place multiple people in the scenes so that they do not collide and use simulated camera motions with various focal lengths. The synthetic image sequences are rendered using Unreal Engine 5 [5] at 30 fps with mo-tion blur. In total, BEDLAM contains around 380K unique image frames with 1-10 people per image, for a total of 1M unique bounding boxes with people. 
We divide BEDLAM into training, validation, and test sets with 75%, 20% and 5% of the total bounding boxes respectively. While we make all the image data available,we withhold the SMPL-X ground truth from the test set and provide an automated evaluation server. For the training and validation sets, we provide all the SMPL-X animations, the 3D clothing, skin textures, and all freely available assets. Where we have used commercial assets, we provide infor-mation about how to obtain the data and replicate our re-sults. We also provide the details necessary for researchers to create their own data. Evaluation. With sufficient high-quality training data, fairly simple neural-network architectures often produce SOTA results on many vision tasks. Is this true for HPS regression? To tackle this question, we train two different baseline methods (HMR [32] and CLIFF [38]) on varying amounts of data and with different backbones; HMR repre-sents the most basic method and CLIFF the recent SOTA. Since BEDLAM provides paired images with SMPL-X pa-rameters, we train methods to directly regress these parame-ters; this simplifies the training compared with methods that use 2D training data. We evaluate on natural-image datasets including 3DPW [79] and RICH [26], a laboratory dataset (Human3.6M [27]), as well as two datasets that evaluate body shape accuracy (SSP-3D [66] and HBW [16]). Surprisingly, despite its age, we find that training HMR on synthetic data produces results on 3DPW that are bet-ter than many recently published results and are close to CLIFF. We find that the backbone has a large impact on accuracy, and pre-training on COCO is significantly better than pre-training on ImageNet or from scratch. We perform a large number of experiments in which we train with just synthetic data, just real data, or synthetic data followed by fine tuning on real data. We find that there is a significant benefit to training on synthetic data over real data and that fine tuning with real data offers only a small benefit. A key property of BEDLAM is that it contains realisti-cally dressed people with ground truth body shape. Con-sequently, we compare the performance of methods trained on BEDLAM with two SOTA methods for body shape re-gression: SHAPY [16] and Sengupta et al. [67] using both the HBW and SSP-3D datasets. CLIFF trained with BED-LAM does well on both datasets, achieving the best overall of all methods tested. This illustrates how methods trained on BEDLAM generalize across tasks and datasets. Summary. We propose a large synthetic dataset of re-alistic moving 3D humans. We show that training on syn-thetic dataset alone, even with a basic network architecture, produces accurate 3D human pose and shape estimates on real data. BEDLAM enables us to perform an extensive meta-ablation study that illuminates which design decisions are most important. While we focus on HPS, the dataset has many other uses in learning 3D clothing models and action recognition. BEDLAM is available for research purposes together with an evaluation server and the assets needed to generate new datasets. 8727
Grosche_Image_Super-Resolution_Using_T-Tetromino_Pixels_CVPR_2023
Abstract For modern high-resolution imaging sensors, pixel bin-ning is performed in low-lighting conditions and in case high frame rates are required. To recover the original spa-tial resolution, single-image super-resolution techniques can be applied for upscaling. To achieve a higher image quality after upscaling, we propose a novel binning con-cept using tetromino-shaped pixels. It is embedded into the field of compressed sensing and the coherence is cal-culated to motivate the sensor layouts used. Next, we inves-tigate the reconstruction quality using tetromino pixels for the first time in literature. Instead of using different types of tetrominoes as proposed elsewhere, we show that using a small repeating cell consisting of only four T-tetrominoes is sufficient. For reconstruction, we use a locally fully con-nected reconstruction (LFCR) network as well as two clas-sical reconstruction methods from the field of compressed sensing. Using the LFCR network in combination with the proposed tetromino layout, we achieve superior image qual-ity in terms of PSNR, SSIM, and visually compared to con-ventional single-image super-resolution using the very deep super-resolution (VDSR) network. For PSNR, a gain of up to+1.92 dB is achieved.
1. Introduction Conventional imaging sensors acquire image data using square pixels that are regularly placed on the sensor sur-face. Due to the everlasting pursuit for higher resolution, smaller and smaller pixels are packed onto the sensor sur-face. Such sensors can acquire single images of extremely high-resolution in case of good lighting conditions. How-ever, there are two major issues with decreasing the pixel size. Firstly, fewer photons arrive at each pixel such that photometric limits are approaching [5,35] resulting in worse signal to noise ratios. Secondly, the required bandwidth in-creases with the number of measured pixels such that enor-Binned BIC, VDSRUpscaling Reference Target sensor binningLow-resolution sensor binningTetromino binningConventional binningTetromino L-JSDE, LFCRReconstruction Figure 1. Illustration of a conventional low-resolution binning pro-cess and the proposed tetromino binning. In both cases, less noise and higher frame rates are possible. mous amounts of data need to be processed and stored, es-pecially when recording high frame rate raw videos. In or-der to achieve higher signal-to-noise ratios and higher frame rates, pixel binning is typically applied on the hardware level [14, 24] to the disadvantage of spatial resolution. This is shown in Figure 1. To increase the spatial resolution of a low-resolution im-age, single-image super-resolution can be applied in post-processing. In this field, a vast amount of research has been investigating classical upscaling algorithms, e.g., [15,33,44, 45,47], and recently neural networks, e.g., [8,25,48]. More generally speaking, the aim is to achieve the best possible image quality for a limited number of measurements. Single-image super-resolution, however, is intrinsically limited by the regularity of the underlying measurement process which introduces aliasing whenever too high fre-quencies are present in the scene. One solution to circum-vent artifacts from aliasing is to employ a non-regular place-ment of pixels [1, 7, 21, 28, 31, 36]. This allows for higher image quality after reconstructing the image on a higher res-olution grid without increasing the number of pixels com-pared to a (binned) low-resolution sensor. Unfortunately, implementations of non-regular sampling such as quarter This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 9989 (a) Low-resolution sensor(c)4×4T-tetromino sensor (prop.) (b) Tetromino sensor from [16] Figure 2. Illustration of different sensor layouts, pixel shapes, and cell sizes. Blue lines indicate the boundaries of the pixels. Dark gray color is used for a single complete cell. The coherence is given for an image of size M×N= 30×30pixels. sampling [34] and three-quarter sampling [37] suffer from having a lower fill factor leading to stronger noise in case of low light scenarios. To achieve a fill factor of 100% and at the same time keep the non-regularity of the sampling process, the square shape of the conventionally used pixels must be reconsid-ered. For this reason, Ben-Ezra et al. suggest the usage of Penrose pixels to tile the sensor area [3]. Such Pen-rose tiling is aperiodic, which can be understood as non-regularity on a larger scale. On the other hand, this aperiod-icity also means that hardware manufacturing and readout strategies are expected to be highly complicated. 
Another possibility to achieve a 100% fill factor with non-square pixels is the usage of hexagonal pixels [22, 42] or triangu-lar and rectangular shaped pixels [38, 39]. While all these pixel shapes have potential at their own, they cannot be used for the previously described binning process within a higher resolution sensor. Another promising possibility are tetromino pixels as proposed in a patent application by Galdo et al. [16]. In their work, T-, L-and Z-shaped tetromino pixels are used to tile the sensor area. Though it is proposed to directly manufacture the tetromino pixels in hardware, the tetromino shapes could also be used during the binning process of higher resolution pixels in case less noise or higher frame rate is desired. Optimally, the tetromino pixels are stacked together without leaving any vacancies such that a complete tiling of the sensor area is formed. Galdo et al. highlight an exemplarily tiling consisting of a 6×6pixel cell in their work which is then repeated periodically. With respect to hardware implementation and wiring, they provide initial solutions to manufacture, connect, and read out the indi-vidual pixels. In the scope of this paper, such pixel lay-outs could be used for the binning of four high-resolution pixels. As for conventional square binning, this would al-low for a higher signal-to-noise ratio and a higher frame rate. The resulting noise is identical for all binned sensor layouts because the light-active area is the same. Regard-ing the reconstruction, Galdo et al. suggest that techniques (a) Low-resolution sensor 1 1 1 10 0 0 0 0 0 0 00 0 0 0... ...0 0 0 00 0 0 00 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0... ...0 0 0 00 0 0 00 0 0 0 0 0 0 0 0 0 0 0 ...1 1 1 100 0 0 0 0 0 00 0 0 0 0 0 0 00 0 0 00 0 0 0 0 0 0 0 0 0 0 0 0 0 0 001 1 0 0 0 0 00 0 0 0 0 0 0 00 0 0 01 1 0 0 0 0 0 0 0 0 0 0 ...... ... ......(c)4×4T-tetromino sensor (prop.)(b) Tetromino sensor from [16] A0αβ A1αβA0αβ A1αβA0αβ A1αβ 1 1 1 100 0 0 0 0 0 00 0 0 0 0 0 0 00 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0... ... 0 0 0 001 1 1 0 1 0 00 0 0 0 0 0 0 00 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0... ... ... Figure 3. Two slices ( i= 0 andi= 1) through the measurement matrices for the (a) low-resolution sensor, (b) Tetromino sensor from [16], and (c) the proposed 4×4T-tetromino sensor. from compressed sensing could be used. However, no re-construction results are given and no further analysis is pro-vided in [16] or elsewhere. This raises the question whether a better image quality can really be achieved. In this work, we propose a novel sensor layout based on a small tetromino cell consisting of only four T-tetromino pix-els. This layout could be used for binning four higher res-olution pixels as shown in the bottom row of Figure 1. For the proposed sensor layout as well as for other sensor lay-outs, we perform image reconstruction with suitable classi-cal and data-driven algorithms. For the best of our knowl-edge, it is the first time in literature that image reconstruc-tion is performed for tetromino sensor layouts. We show that the proposed sensor layout outperforms the more com-plicated sensor layout from [16] as well as another larger T-tetromino cell in terms of reconstruction quality. More-over, our tetromino sensor layout significantly outperforms the reconstruction quality of a (binned) low-resolution sen-sor in combination with single-image super-resolution us-ing the very deep super-resolution (VDSR) network [25]. 
At the same time, we are able to achieve a faster recon-struction while still outperforming VDSR. This paper is organized as follows: In Section 2, we re-view the different sensor layouts in a compressed sensing framework. In Section 3, the proposed T-tetromino sensor layout as well as a larger T-tetromino sensor layout are pre-sented. In Section 3, the used reconstruction algorithms are presented. In Section 5, we evaluate the performance of the different sensor layouts and reconstruction algorithms. Fi-nally, Section 6 concludes the paper. 9990
Hu_GFIE_A_Dataset_and_Baseline_for_Gaze-Following_From_2D_to_CVPR_2023
Abstract Gaze-following is a kind of research that requires locat-ing where the person in the scene is looking automaticallyunder the topic of gaze estimation. It is an important clue for understanding human intention, such as identifying ob-jects or regions of interest to humans. However , a survey of datasets used for gaze-following tasks reveals defects in the way they collect gaze point labels. Manual labeling may introduce subjective bias and is labor-intensive, while auto-matic labeling with an eye-tracking device would alter the person’s appearance. In this work, we introduce GFIE, a novel dataset recorded by a gaze data collection system we developed. The system is constructed with two devices, an Azure Kinect and a laser rangefinder , which generate the laser spot to steer the subject’s attention as they perfor-m in front of the camera. And an algorithm is developed to locate laser spots in images for annotating 2D/3D gaze targets and removing ground truth introduced by the spot-s. The whole procedure of collecting gaze behavior allows us to obtain unbiased labels in unconstrained environments semi-automatically. We also propose a baseline method with stereo field-of-view (F oV) perception for establishing a 2D/3D gaze-following benchmark on the GFIE dataset. Project page: https://sites.google.com/view/ gfie .
1. Introduction Gaze-following is a human skill that emerges in infan-cy [ 36] to learn about visual focus of other people, which helps to understand their personal thoughts and intention-s[32]. For these reasons, detecting gaze targets automati-cally as humans do has great potential in some applications, /enc-12Corresponding author * This work is supported in part by National Key Research and De-velopment Project of China under Grant 2019YFB1310604, in part by Na-tional Natural Science Foundation of China under Grant 62173189. CZ CXCY ddo Laser Spot Recording System Azure Kinect Laser Rangefinder a) Manual annotation b)Automatic annotation with eye-tracking device c) Our system for recording gaze behavior to build GFIE dataset Figure 1. The way of collecting gaze data in the existing gaze-following dataset and our proposed scheme. a) is a sample fromthe GazeFollow [ 28] dataset, the blue dots indicate the gaze targets annotated by the different annotators. b) indicate the case whereannotations are collected with an eye-tracking device. c) is thesystem designed in this paper. such as locating items of interest to a person in the retail en-vironment [ 35] and judging the risk of driving by detecting whether the driver is distracted [ 9,16]. In addition, gaze target detection can assist in action recognition [ 40], so-cial relationship analysis [ 11,41], autism diagnosis [ 7] and human-aware robot navigation [ 25]. As a device for monitoring gaze behavior, a wear-able eye-tracking device was explored for gaze-following [23,30]. [14,23,27] designed a custom system for track-ing gaze. These methods are only applicable to constrained scenes due to extra burdens they bring, such as complex calibration and additional expense. This challenge has also attracted the attention of researchers in the computer vision community, and recent works [ 7,22,28,38] have made an effort to establish datasets for inferring a person’s gaze tar-get from third-view image based on deep-learning methods. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 8907 However, our survey of these datasets, which play an im-portant role in this task, reveal deficiency in the way they gather gaze data. Most datasets are manually annotated, but the subjectivity of annotators may cause annotations to de-viate from the actual gaze target. This is demonstrated by the sample in Figure 1a) where each annotator has a differ-ent opinion on the gaze target of the same person. In addi-tion, labor-intensive is another drawback. The eye-tracking device in Figure 1b) can capture annotations automatically but alter subjects’ appearance in the dataset, which brings the gap with the gaze-related behavior in the natural envi-ronment. To address these problems, as shown in Figure 1c), we propose a novel system for establishing our GFIE dataset that provides accurate annotations and clean training data recorded in natural environments. The system consists ofa laser rangefinder and an RGB-D camera Azure Kinec-t, which allows us to manipulate the laser rangefinder to guide the subject’s gaze target through the laser spot while recording their activities with the RGB-D camera. After detecting the laser spot in the image by our proposed al-gorithm, the gaze target of the person in the image can be located. 
Based on the distance to the laser spot measured by the laser rangefinder, the 3D gaze target can also be recon-structed. Considering that the laser spot introduces ground truth to the image, we employ an image inpainting algorith-m to eliminate it for constructing the final dataset. Most of the processes are automated, alleviating the need for human resources. Our proposed GFIE dataset comprises rich ac-tivity clips with different subjects and diverse scenes. They are key to ensuring the diversity of gaze behaviors. Along with RGB-D images and 2D/3D gaze targets, we also pro-vide camera parameters, head bounding boxes and 2D/3D eye locations. Accompanying our proposed GFIE dataset, we design a novel baseline method that takes the stereo field of view (FoV) to estimate gaze targets into account. In this paper, FoV is defined as the extend to which a person can observe in 3D space. It is perceived based on the predicted gaze di-rection and transformed into a heatmap. Then the heatmap combined with scene saliency, helps the entire model lo-calize 2D and 3D gaze targets more efficiently. State-of-the-art methods are introduced to establish 2D/3D gaze-following benchmarks on both GFIE and CAD-120 [ 20] datasets. Experiment results show that the GFIE dataset isreliable and the proposed baseline method achieves excel-lent performance in 2D images and 3D scenes. In summary, our main contributions are as follows: •We develop a system consisting of a laser rangefinder and RGB-D camera to guide and localize gaze target while recording gaze behavior. •We release a new GFIE dataset for 2D/3D gaze-following that contains reliable annotations and di-Table 1. Comparison of GFIE with existing gaze-following datasts DatasetRGB/ RGB-DSizeGaze Target Localized by3D Annot.Data Source GazeFollow [ 28] RGB122,143 frames, 130,339 peopleAnnotator MS COCO, SUN, PASCAL, etc VideoAttentionTarget [ 7] RGB1331 tracks,164,541 frame-level anno.Annotator  Y ouTube GazeFollow360 [ 21] RGB65 videos, 10,058 framesAnnotator  Y ouTube VideoGaze [ 29] RGB140 movies, 166,721 anno.Annotator  MovieQA VideoCoAtt [ 10] RGB380 videos, 492,100 framesAnnotator  TV shows DL Gaze [ 22] RGB95,000 frames, 16 subjectsAnnotator  Recorded by iPhone TIA [ 38] RGB-D330,000 frames, 14 subjectsEye-tracking glasses Recorded by Kinect V2 GFIE (ours) RGB-D71799 frames, 61 subjectsLaser spot Recorded by Azure Kinect verse human activities in indoor environments. •We introduce a stereo field of view (FoV) in the pro-posed baseline method for improving gaze-following.
Fang_Efficient_Robust_Principal_Component_Analysis_via_Block_Krylov_Iteration_and_CVPR_2023
Abstract Robust principal component analysis (RPCA) is widely studied in computer vision. Recently an adaptive rank es-timate based RPCA has achieved top performance in low-level vision tasks without the prior rank, but both the rank estimate and RPCA optimization algorithm involve singular value decomposition, which requires extremely huge com-putational resource for large-scale matrices. To address these issues, an efficient RPCA (eRPCA) algorithm is pro-posed based on block Krylov iteration and CUR decomposi-tion in this paper. Specifically, the Krylov iteration method is employed to approximate the eigenvalue decomposition in the rank estimation, which requires O(ndrq +n(rq)2) for an (n×d)input matrix, in which qis a parameter with a small value, ris the target rank. Based on the estimated rank, CUR decomposition is adopted to replace SVD in up-dating low-rank matrix component, whose complexity re-duces from O(rnd)toO(r2n)per iteration. Experimen-tal results verify the efficiency and effectiveness of the pro-posed eRPCA over the state-of-the-art methods in various low-level vision applications.
1. Introduction Robust principal component analysis (RPCA) aims to re-cover a low-rank matrix Land a sparse matrix Sfrom the corrupted observation matrix D∈Rn×d:D=L+S. The RPCA can be formulated as the following optimization problem [1]: min L,Srank(L) +λ∥S∥0s.t.D=L+S (1) *Shiqian Wu is corresponding author.where ℓ0-norm is the number of nonzero elements in the matrix, the paramter λ >0provides the trade-off between the rankness and sparsity. RPCA has been widely studied and applied in computer vision. For example, “background” in a video clip captured by a static camera has a low-rank property, which can be considered in background modeling [2]. The requirement of detecting sparse outliers from the observed imagery data leads to RPCA applications in image or video processing [3]. In industry, RPCA is also applicable to point cloud filtering [4], surface defects detection [5], shock sensing [6], etc. It is noted that the optimization problem (1) is NP-hard. The convex relaxation of RPCA has been studied to achieve an exact recovery in [1, 7–12], where the RPCA problem was relaxed as the sum of the nuclear norm and ℓ1-norm. But the convex methods always have a rate of sublinear con-vergence and high computation in practice [13]. It is neces-sary to exploit the structure of the underlying data and de-velop more efficient algorithms for RPCA. Zhou et al. [14] extended the equality constraint of RPCA to inequality in order to deal with noisy data, thus the RPCA is reformu-lated as the following constrained non-convex optimization problem: min L,S∥D−L−S∥2 F s.t.rank(L)≤rand∥S∥0≤s(2) with the target rank rand target sparse number s. The key idea of Eq. (2) is to formulate a constrained non-convex optimization problem. Among the nonconvex opti-mization mthods of RPCA, Zhou et al. [15] substituted the hard thresholding of Eq. (2) with a soft ℓ1regularization to reduce the complexity of S. Netrapalli et al. [16] added the deterministic sparsity assumption and devoted themselves This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 1348 to recovering a low-rank matrix from the sparse corruptions that were of unknown value and support. The RPCA algo-rithm via gradient descent on the factorized space in [17] achieved linear convergence with proper initialization and step size, whose sparse estimator is to guarantee that the fraction of nonzero entries in each column and row of S is bounded above. Inspired by these methods, GoDec+ [18] made the model become an ordinary low-rank projec-tion problem based on the correntropy of noise and half-quadratic optimization theory. In [19], RPCA problem is considered for the first time without heuristics, such as loss functions, convex and surrogate constraints, which provides a new direction for potential research on online algorithms. In addition, Ornhag et al. [20] used second-order methods to convert the original objectives to differentiable equivalents, benefitting from faster convergence. Meanwhile, the efficient algorithms for RPCA have been widely investigated for large-scale matrices. In [16], the de-veloped algorithm involved alternating projections between a set of low-rank matrices and a set of sparse matrices, whose projection idea was also widely studied. 
Inspired by [16], a proximal block coordinate descent method was proposed in [21] to find an ϵ-stationary solution in O(1/ϵ2) iterations. Furthermore, an accelerated alternating projec-tion strategy was studied in [22] for RPCA, which project a matrix onto a specific low-dimensional subspace before ob-taining a new estimate of the low-rank matrix via truncated SVD. Most of RPCA and its variants contain SVD, which re-quires significant computational cost for large-scale matri-ces. Hinterm ¨uller [23] directly considered a least-squares problem subject to rank and cardinality constraints based on matrix manifolds, which favorably avoids singular value decompositions in full dimension. Phan et al. [24] pro-posed an accelerated algorithm with iteratively reweighted nuclear norm to ensure that every limit point is a critical point, which leads to small singular values and obtains fast result. Some different computing strategies were employed to substitute SVD so that the computation of RPCA was sig-nificantly reduced. Cai et al. [25] introduced CUR decom-position at each iteration, which only required O(r2n)flops per iteration and preserved more information than SVD. Generally, the aforementioned algorithms achieved ef-ficient and effective performance in different computer vi-sion tasks. It is highlighted that these methods require the rank of a low-rank matrix to be known a prior, which is inappropriate in most practical applications. To bridge this gap, Xu et al. [26] proposed a rank estimation method based on Gerschgorin disk theorem (GDE), whose computational complexity is O((d−1)3)at each iteration. Furthermore, an adaptive weighting RPCA was developed based on iter-atively estimated rank, which outperforms the state-of-the-art RPCA methods in various computer vision applications.More specifically, the adaptive weighting RPCA in [26] was formulated as the following optimization problem: min L,S∥L∥W+λ∥S∥1s.t.D=L+S (3) where ∥L∥W= Σ iωiσi(L)andωiare non-negative weights. Based on the estimated rank of L, the weights can be updated iteratively. Although the adaptive weighting RPCA [26] achieves top performance in recovering a low-rank matrix and a sparse matrix, both the rank estimation and RPCA opti-mization algorithm contain SVD, which takes significant computational costs for large-scale matrices. Recently, ran-domized block Krylov iteration [27] was introduced to ap-proximate the singular value decomposition in fewer iter-ations with better accuracy guarantees. This motivates us to use the block Krylov iteration method to accelerate the GDE-based rank estimation in this work. The computa-tional complexity of the proposed Krylov GDE (KGDE)-based rank estimation is reduced to O(ndrq +n(rq)2), where qis a parameter with a small value. On the other hand, CUR decomposition is also adopted to replace SVD in the updates of a low-rank matrix, thus the computational complexity of RPCA is significantly reduced. Since the rank of a matrix is required to be known for CUR decompo-sition, it is natural that the proposed KGDE is used to adap-tively estimate the rank of the low-rank matrix within the it-erative RPCA computing. Furthermore, a new non-convex low-rank regularized term is used to replace the weighted nuclear norm in (3) which can improve the low-rank matrix approximation. Compared with the state-of-the-art RPCA approaches, the proposed efficient RPCA (eRPCA) algo-rithm are fast with better accuracy guarantees. 
The main contributions of this paper are as follows: 1) An efficient rank estimation method based on Ger-schgorin disks with block Krylov iteration is proposed to accelerate the rank estimate of low-rank matrices. 2) CUR decomposition is adopted to reduce the compu-tation of SVD on a large-scale matrix in the update of the low-rank matrix. 3) An efficient non-convex RPCA method with a non-convex weighted regularizer is proposed to achieve better recovery of the low-rank matrix and the sparse matrix. 4) The proposed eRPCA algorithm has been applied to various computer vision scenarios and outperforms the state-of-the-art methods on large-scale data. In the following section, the improved eRPCA method, which consists of efficient rank estimation and new weight is presented in Section 2. The experimental results are demonstrated in Section 3, and the conclusions are drawn in Section 4. 1349
Fruhstuck_VIVE3D_Viewpoint-Independent_Video_Editing_Using_3D-Aware_GANs_CVPR_2023
Abstract We introduce VIVE3D, a novel approach that extends the capabilities of image-based 3D GANs to video editing and is able to represent the input video in an identity-preserving and temporally consistent way. We propose two new build-ing blocks. First, we introduce a novel GAN inversion tech-nique specifically tailored to 3D GANs by jointly embedding multiple frames and optimizing for the camera parameters. Second, besides traditional semantic face edits ( e.g. for age and expression), we are the first to demonstrate edits that show novel views of the head enabled by the inherent prop-erties of 3D GANs and our optical flow-guided compositing technique to combine the head with the background video. Our experiments demonstrate that VIVE3D generates high-fidelity face edits at consistent quality from a range of cam-era viewpoints which are composited with the original video in a temporally and spatially consistent manner. *This work was conducted during an internship at Meta RL Research.1. Introduction Semantic image editing has been an active research topic for the past few years. Previous work [21] uses Generative Adversarial Networks (GANs) to produce high-fidelity re-sults in the image space. The most popular backbone is StyleGAN [26–29] as it generates high-resolution domain-specific images while providing a disentangled latent space that can be utilized for editing operations. To edit real pho-tographs, there are typically two steps: The first step maps the input image to the latent space of a pre-trained gener-ator. This is usually accomplished either through encoder-based embedding or through optimization, such that gen-erator can accurately reconstruct the image from the latent code [53]. The second step is semantic image manipula-tion, where one latent input representation is mapped to an-other to obtain a certain attribute edit, ( e.g. changing age, facial expression, glasses, or hairstyle). While existing ap-proaches produce impressive results on single images, ex-tending them to videos is far from straightforward. Among This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 4446 the challenges that arise are: (1) people tend to move their heads freely in videos (instead of assuming frontal image inputs), (2) the inversion of multiple frames should be coor-dinated, (3) the inverted face and edits need to be temporally consistent and (4) the compositing of the edited face with the original frame must maintain boundary consistency. A recent set of approaches has focused on 3D-aware GANs where a 2D face generator is combined with a neural renderer. Given a latent code, a 2D image and the under-lying 3D geometry are generated, thus allowing for some camera movement while rendering the head of the person. In this paper, we tackle the problem of viewpoint-independent face editing in videos. The edited face is ren-dered from novel views in a temporally-consistent man-ner. Specifically, we use a 3D-aware GAN in the tempo-ral domain and apply facial image editing techniques per frame that are temporally smooth regardless of the ren-dered view. Compared with other GAN-based video edit-ing approaches [5, 48], our method is the first to perform viewpoint-independent video editing while showing the full upper body of the person in the video with high fidelity. 
VIVE3D takes a video of a person captured from a monocular camera as input. The captured person can move freely across time, talk, and make facial expressions while their body can be visible. Unlike all prior work that learns a generator and performs edits on the exact same video, we disentangle these steps. Hence the output of our approach can be a different video of the same person or the same video. In both cases, the face has undergone one or more attribute edits and is rendered from a novel view. To ac-complish this challenging task, we introduce several novel components, each addressing one challenge of the problem at hand. Specifically, we first propose a simple yet effec-tive technique to create a personalized generator by invert-ing multiple frames at the same time. The simultaneous inversion of Nframes exposes the generator to a variety of facial poses and expressions, which results in a larger ca-pacity that we can then utilize. Our generator can generalize to new unseen videos of the same identity where the per-son might be wearing a different shirt, a result not demon-strated in the literature so far. In addition, we propose to optimize the camera pose of the 3D-aware GAN during in-version to obtain an accurate estimate which angle the face was captured from. Finally, we introduce an optical flow-based compositing method to properly place the novel view of the edited face back into the original frame while ensur-ing that the end result is temporally and spatially consistent. Our experimental work provides a wide range of qualita-tive and quantitative results to demonstrate that VIVE3D accomplishes semantic video editing with changing camera poses in a faithful way. In summary, our contributions are: • A new 3D GAN inversion technique that jointly embeds multiple images while optimizing for their camera poses.• A complete attribute editing framework and an optical flow-based compositing technique to replace the edited face in the original video. • VIVE3D is the first 3D GAN-based video editing method and the first that can change the camera pose of the face. 2. Related Work GAN Inversion. GANs are a powerful tool for seman-tic editing. Most editing techniques are tailored to Style-GAN, the state-of-the-art of 2D GANs [26–29]. Several editing techniques [10, 18, 19, 25, 34] build upon Style-GAN as it uses an intermediate disentangled latent space, usually referred to as w-space. Before editing, a latent space representation of the input image has to be recov-ered using a process typically referred to as Inversion or Projection [1, 2, 12, 50]. Refer to [52] for a survey of in-version techniques. In contrast to optimization-based in-version techniques, learning-based approaches attempt to obtain faster latent space correspondences by training en-coders [4, 36, 47]. In order to retain the generalization abil-ity of the w-space while providing a high-quality inversion, Pivotal Tuning [37] has successfully shown that trained gen-erators can overfit to target images while still maintaining a navigable latent space. Recent works study 3D GAN inver-sion [30, 31], attempting to infer a 3D representation for a reference image. GAN-based Latent Space Editing. Once an appropriate latent space representation of an input image has been re-covered, semantic edits can be applied by navigating the latent space manifold surrounding the inverted latent code. Unsupervised techniques attempt to find interesting edits without labeled data [22, 24, 41, 49]. 
InterfaceGAN [39, 40] is a simple and robust supervised technique that is highly recommended for practical applications, and as such we also employ it in our work. While there is a plethora of other techniques [3, 11, 44, 45, 55, 59] the development of related latent space manipulations itself is not the focus of our work. Another line of work is text-based editing which gained immense popularity during the last year [20, 35]. 3D-aware GANs. Recent GAN papers attempt to discover 3D information from large collections of 2D images us-ing Neural Radiance Fields (NeRFs) as shape representa-tions [8, 9, 14, 23, 33, 38, 58]. While most of these papers share similar architectural ideas, EG3D [9] has emerged as a popular basis for follow-up work ( e.g. integration of a seg-mentation branch [43]). We chose to build upon EG3D, but our work is also applicable to other generators with a simi-lar latent space. For more information on 3D GAN architec-tures, we refer the interested reader to a recent survey [51]. Video Synthesis and Editing. One branch of work at-tempts to leverage 2D GANs to generate video sequences [17, 42, 46, 57]. These ideas can be extended to create 3D videos [6], which also rely on 3D NeRFs. 4447 + wn wID Joint 3D GAN InversionFace AlignmentGenerator Fine-tuning Attribute Editing wfedit = wID+αwdir+of wglasses wbeard Regularized Inversion wf = wID+of Flow-based View Adjustment ddom= max(||dhist||) × median(argmax(hist/gid00898d)) Camera Parameter Update yawf , pitchf , FOVf +cdiff Edit Compositing with Source Frame optimize segmentation boundary region edited output video PERSONALIZED GENERATOR FRAME -BY-FRAME VIDEO SAMPLING ddom adjust face crop in frame space Figure 2. VIVE3D Pipeline. To create an edited video, we first need to create a personalized generator by jointly inverting selected faces and fine-tuning a pre-trained generator. We then invert the cropped face regions from a source video (which could be the same or a different video) into our personalized generator and recover the latent codes and camera poses for each target frame. We are able to perform semantic editing on the inverted stack of latent codes using previously discovered latent space directions and we can freely change the camera path around the face region. In order to composite the face with the source frame in a consistent fashion, we use optical flow to correct the position of the inset within the
GAN-based Video Editing. GAN-based video editing is the core topic of this paper. Duong et al. [15] employ deep reinforcement learning for automatic face aging. Latent Transformer [55] encodes frames into the StyleGAN latent space using an encoder. They train a transformer to do attribute editing on single frames and blend the result with Poisson blending. The main competitor to our work is Stitch it in Time (StiiT) [48], which crops the faces from a video, edits them with 2D GAN techniques, and merges the edited result back into the video with some blending. However, StiiT does not learn a 3D model of the human head, overfits to a particular video, and is unable to provide edits to the viewpoint of the human head. Recently, VideoEditGAN (VEG) [54] attempted to improve the temporal consistency of StiiT by running a two-step optimization approach focused on localized temporal coherence. Alaluf et al. [5] use StyleGAN3 for video editing, to leverage its inherent alignment capabilities and reduce texture sticking artifacts. Since this is an active area of research, all these techniques are concurrent work to our method, yet we do provide comparisons to showcase the benefits of our proposed approach.

3. Method

In this section, we introduce the key components of VIVE3D to perform frame-by-frame video editing while allowing for rendering the edited face from new views. We leverage a 3D-aware generator that infers 3D geometry and camera positions while being trained solely on 2D images. We build a personalized 3D-aware generator by performing joint inversion on multiple frames and then use it to perform attribute editing, apply camera viewpoint changes, and finally composite the edited face rendered from a new view back into the original frame. An overview of our proposed approach is depicted in Fig. 2, while the personalized generator architecture is shown in Fig. 3.

3.1. Personalized 3D-Aware Generator

Face Selection and Cropping. To create a personalized 3D-aware GAN model, we start by processing a short range from the input video where N frames are selected such that they cover a range of orientations and facial expressions of the target person. We detect the facial keypoints within these frames using an off-the-shelf facial keypoint detector [7] and use them to determine the face bounding box within the frame. This is achieved by calculating a rigid transformation from the facial keypoints in the frame to the facial keypoints in a generated example image, thereby aligning the keypoints at the center of the crop in the same way as the generator's original training data. We pick a specific field of view for cropping the faces and optimizing the generator, but the field of view remains a flexible parameter that can be adapted during any later stage in the pipeline.

Simultaneous Inversion. We propose to perform multiple inversions simultaneously. EG3D has two major components in its generator. The first component uses a mapping network to map random vectors into a semantically meaningful space, called w-space. Vectors in this space control a 3D neural field that defines a 3D representation that is rendered using volumetric rendering. The second component is a 2D upsampler that performs a 4× super-resolution on the original output. We invert all selected faces simultaneously into the w-space following a strategy similar to [37], which we discuss in detail below.
In order to find a representation in w-space, we define a "global" latent w_ID aiming at capturing the global identity features of the target person, and a "local" offset vector o_n for each input expression F_n that encodes the differences of each individual facial expression and position from the default w_ID.

Figure 3. Personalized Generator. First, we run a joint inversion on N selected target faces, where we optimize a shared target person latent w_ID and an offset o_n for each face. This ensures the inversions share information about the target. Simultaneously, we jointly optimize for the camera pose c_n. We then fine-tune the generator to ensure it captures the fine details of the target identity. Note that the "default" latent (left column) implicitly captures the identity of the target person without being explicitly optimized.

The length of each o_n is regularized using an L2 loss, aiming to keep the difference as small as possible, and capturing all similarities between the input images within the default person latent w_ID. We use a combination of a perceptual loss L_LPIPS and a pixel loss L_L1 for the inversion. Note that during this stage, we calculate these losses on the raw output G_raw(w_ID + o_n) of the EG3D neural renderer at 128×128 resolution because we observed that it yields sharper result quality than evaluating the loss at the output of the super-resolution network. We downsample our target images to the same resolution, D_128(F_n), to compare. To ensure that we can faithfully capture the target person's identity and expression, we use BiSeNet [56] to obtain a segmentation S_exp(F_n) of the facial regions encoding the expression (eyes, mouth, eyebrows, and nose) and add an additional feature loss on this area to encourage consistent facial expressions (e.g. closed eyes). To obtain the inversion, in each optimization step, we sum up the losses for each face image F_n, therefore jointly optimizing all targets simultaneously, yielding a total loss

\mathcal{L}_{inv} = \sum_{n=0}^{N} \Big[ \lambda_{LPIPS}\,\mathcal{L}_{LPIPS}\big(G_{raw}(w_{ID}+o_n),\, D_{128}(F_n)\big) + \lambda_{L1}\,\mathcal{L}_{L1}\big(G_{raw}(w_{ID}+o_n),\, D_{128}(F_n)\big) + \lambda_{seg}\,\mathcal{L}_{LPIPS}\big(S_{exp}(G(w_{ID}+o_n)),\, S_{exp}(F_n)\big) + \lambda_{reg}\,\mathcal{L}_{L2}(o_n) \Big].

Due to the 3D awareness of the EG3D generator, the quality of the inversion into the latent space is highly sensitive to the camera parameter settings. Hence, in addition to optimizing for w_ID and o_n, we propose to also allow the inversion to optimize for the camera parameters c_n (yaw_n and pitch_n) for each input expression F_n, which reliably estimates the camera position that the face is captured from. A key advantage of this joint optimization is that the facial characteristics of the person preserve their high fidelity even when seen from novel views. When inverting a single image of a side-facing person into the EG3D latent space, exploring other viewpoints of the inverted latent can lead to significant distortions. Often, unseen features (e.g. hidden ears) can be blurry or distorted, and the identity no longer resembles the input from a different viewpoint.
The joint inversion, however, ensures that the different views are embedded closely enough in latent space such that even unseen views yield consistently identity-preserving outputs.

Generator Fine-tuning. We propose a variant of Pivotal Tuning [37] to jointly fine-tune the weights of the generator G_EG3D on all input faces F_n, while keeping the detected w_ID, o_n, and camera poses c_n fixed. Here, we do not allow the weights of the upsampler of the generator to be updated, as we want to preserve the generalization capabilities of the super-resolution network and prevent it from overfitting to our target images. During this fine-tuning stage, we employ perceptual and pixel losses as follows:

\mathcal{L}_{tune} = \sum_{n=0}^{N} \Big[ \lambda_{LPIPS}\,\mathcal{L}_{LPIPS}\big(G_{ID}(w_{ID}+o_n),\, F_n\big) + \lambda_{L1}\,\mathcal{L}_{L1}\big(G_{ID}(w_{ID}+o_n),\, F_n\big) \Big].

Finally, we obtain a personalized EG3D generator G_ID, fine-tuned to a set of facial expressions of the target person. We verify that the fine-tuned generator indeed provides a good generalized latent space for the target person, even though it was inverted and tuned based on a low number of frames, by exploring the person created by the "global" latent code, which was not explicitly fine-tuned for, as well as through a latent space walk in the fine-tuned latent space.

3.2. Frame-by-frame Video Inversion

With the personalized 3D-aware generator in hand, we are now given a video of the same person as input, which can be different from the one the generator was trained on. To process our new target video, we extract the facial keypoints from each frame f to determine the location of the box that indicates the face crop within the frame. In order to stabilize the crop over time, which supports the temporal coherence of the inversion, we perform a Gaussian smoothing on the extracted facial keypoints along the temporal axis after extraction. However, it is important to not over-smooth, because fast motions in the video would yield distorted keypoint locations, deteriorating the inversion quality.

Figure 4. InterfaceGAN edits. We show InterfaceGAN editing directions discovered in the latent space by applying them on our personalized generator. The attribute edits are consistent in 3D.

We then perform a frame-by-frame inversion of the extracted face regions F_f into the space of the fine-tuned generator G_ID. Like before, we optimize for an offset o_f for each input frame F_f, as well as regularizing the offset length. After inverting the first frame, each consecutive frame F_{f+1} is inverted starting from the previous frame's inverted latent as initialization.
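To make the joint objective above concrete, the following is a minimal PyTorch-style sketch of one optimization step. The helpers eg3d_raw_render (the 128×128 neural-renderer output, optionally with super-resolution), lpips_loss, and seg_expression (a BiSeNet-style expression mask), as well as the loss weights, are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def joint_inversion_loss(w_id, offsets, cams, targets,
                         eg3d_raw_render, lpips_loss, seg_expression,
                         lam_lpips=1.0, lam_l1=1.0, lam_seg=1.0, lam_reg=1e-3):
    # w_id:    (1, L, 512) shared identity latent (requires_grad=True)
    # offsets: (N, L, 512) per-frame offsets o_n  (requires_grad=True)
    # cams:    (N, 2)      per-frame (yaw, pitch) (requires_grad=True)
    # targets: (N, 3, H, W) cropped target faces F_n
    targets_128 = F.interpolate(targets, size=(128, 128), mode='bilinear',
                                align_corners=False)          # D_128(F_n)
    total = 0.0
    for n in range(targets.shape[0]):
        w_n = w_id + offsets[n:n + 1]
        raw = eg3d_raw_render(w_n, cams[n:n + 1])              # G_raw(w_ID + o_n)
        total = total + lam_lpips * lpips_loss(raw, targets_128[n:n + 1])
        total = total + lam_l1 * (raw - targets_128[n:n + 1]).abs().mean()
        # expression-region term on the super-resolved output (eyes, mouth, brows, nose)
        full = eg3d_raw_render(w_n, cams[n:n + 1], superres=True)
        total = total + lam_seg * lpips_loss(seg_expression(full),
                                             seg_expression(targets[n:n + 1]))
        # keep offsets small so shared identity information stays in w_ID
        total = total + lam_reg * offsets[n].pow(2).sum()
    return total
```

In practice, w_id, offsets, and cams would be optimized jointly with Adam, and the fine-tuning stage would reuse essentially the same loop with the generator weights (minus the upsampler) unfrozen.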
Chen_Unsupervised_Sampling_Promoting_for_Stochastic_Human_Trajectory_Prediction_CVPR_2023
Abstract

The indeterminate nature of human motion requires trajectory prediction systems to use a probabilistic model to formulate the multi-modality phenomenon and infer a finite set of future trajectories. However, the inference processes of most existing methods rely on Monte Carlo random sampling, which is insufficient to cover the realistic paths with finite samples, due to the long tail effect of the predicted distribution. To promote the sampling process of stochastic prediction, we propose a novel method, called BOsampler, to adaptively mine potential paths with Bayesian optimization in an unsupervised manner, as a sequential design strategy in which a new prediction depends on the previously drawn samples. Specifically, we model the trajectory sampling as a Gaussian process and construct an acquisition function to measure the potential sampling value. This acquisition function applies the original distribution as prior and encourages exploring paths in the long-tail region. This sampling method can be integrated with existing stochastic predictive models without retraining. Experimental results on various baseline methods demonstrate the effectiveness of our method. The source code is released at this link.
1. Introduction

Humans usually behave indeterminately due to intrinsic intention changes or external surrounding influences. This requires human trajectory forecasting systems to formulate humans' multimodal nature and infer not a single future state but the full range of plausible ones [16, 32]. Facing this challenge, many prior methods formulate stochastic human trajectory prediction as a generative problem, in which a latent random variable is used to represent multimodality. A typical category of methods [10, 18, 46, 66] is based on generative adversarial networks (GANs), which generate possible future trajectories from a noise sampled from the multi-modal distribution. Another category exploits the variational auto-encoder (VAE) [21, 26, 30, 41, 50] that uses the observed history trajectories as a condition to learn the latent variable. Beyond these two mainstream categories, other generative models are also employed for trajectory prediction, such as the diffusion model [16], normalized flow [39], and even a simple Gaussian model [35, 43].

*Authors contributed equally and are listed alphabetically by first name.

Figure 1. The comparison of different sampling methods. Monte Carlo (MC) sampling generates trajectories by directly sampling from a prior distribution of the latent variable z. Quasi-Monte Carlo (QMC) sampling uses a transformation from low-discrepancy sequences to the prior distribution [36] to sample more uniformly than MC. Different from MC and QMC, BOsampler formulates the sampling process as a Gaussian Process and calculates the Gaussian posterior with existing samples to sample the next one, where sampling and posterior updating are iterative.

Instead of a single prediction, the inference process of these stochastic prediction methods produces a finite set of plausible future trajectories by Monte Carlo (MC) random sampling. However, the distributions are always uneven and biased, where common choices like "go straight" have high probability. In contrast, many other choices such as "turn left/right" and "U-turn" have low probability. Due to the long tail effect of the predicted distribution, finite samples are insufficient to cover the realistic paths. For example, as shown in Figure 1, MC sampling tends to generate redundant trajectories with high probability but ignores the potential low-probability choices. To solve this problem, some methods [4, 31] trained the model using an objective term to increase the diversity of samples, e.g., maximizing the distance among the predicted samples. Though improving the sampling diversity, these methods need to re-train the model by adding the loss term. This is time-consuming and may fail when only the model is given (the source data is inaccessible).

In this paper, we propose an unsupervised method to promote the sampling process of stochastic prediction without accessing the source data. It is named BOsampler, which refines the sampling for more exploration via Bayesian optimization (BO). Specifically, we first formulate the sampling process as a Gaussian Process (GP), where the posterior is conditioned on previously sampled trajectories.
Then, we define an acquisition function to measure the value of potential samples, where samples that fit the trained distribution well or lie away from existing samples obtain high values. With this acquisition function, we can encourage the model to explore paths in the long-tail region and achieve a trade-off between accuracy and diversity. As shown in Figure 1, we compare BOsampler with MC and another sampling method, QMC [4], which first generates a set of latent variables from a uniform space and then transfers them to the prior distribution for trajectory sampling. Compared with them, BOsampler can adaptively update the Gaussian posterior based on existing samples, which is more flexible. We highlight that BOsampler serves as a plug-and-play module that can be integrated with existing multi-modal stochastic predictive models to promote the sampling process without retraining. In the experiments, we apply BOsampler to many popular baseline methods, including Social GAN [18], PECNet [33], Trajectron++ [41], and Social-STGCNN [35], and evaluate them on the ETH-UCY datasets. The main contributions of this paper are summarized as follows:
• We present an unsupervised sampling promoting method for stochastic trajectory prediction, which mines potentially plausible paths with Bayesian optimization adaptively and sequentially.
• The proposed method can be integrated with existing stochastic predictors without retraining.
• We evaluate the method with multiple baseline methods and show significant improvements.
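The sequential sample-then-update loop can be sketched as follows. The RBF kernel, the UCB-style acquisition, and the score_fn used to rate already-drawn latents are illustrative assumptions rather than the paper's exact choices.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    # squared-exponential kernel between two sets of latent codes
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def bo_sample(prior_logpdf, score_fn, dim, n_samples=20, n_cand=512, beta=2.0, noise=1e-4):
    # prior_logpdf: log-density of the predictor's latent prior p(z)
    # score_fn:     rates a drawn latent, e.g. by the quality of its decoded trajectory
    Z, y = [], []
    for _ in range(n_samples):
        cand = np.random.randn(n_cand, dim)              # candidate latents from the prior
        if not Z:
            z = cand[0]
        else:
            Zs, ys = np.stack(Z), np.array(y)
            K_inv = np.linalg.inv(rbf(Zs, Zs) + noise * np.eye(len(Z)))
            Ks = rbf(cand, Zs)
            mu = Ks @ K_inv @ ys                          # GP posterior mean
            var = 1.0 - np.einsum('ij,jk,ik->i', Ks, K_inv, Ks)
            acq = mu + beta * np.sqrt(np.maximum(var, 0.0)) + prior_logpdf(cand)
            z = cand[np.argmax(acq)]                      # prefer uncertain / long-tail regions
        Z.append(z)
        y.append(score_fn(z))                             # condition the posterior on the new sample
    return np.stack(Z)
```

Each selected z would then be decoded by the frozen trajectory predictor, so no retraining of the base model is required.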
Chen_Semantic_Prompt_for_Few-Shot_Image_Recognition_CVPR_2023
Abstract

Few-shot learning is a challenging problem since only a few examples are provided to recognize a new class. Several recent studies exploit additional semantic information, e.g. text embeddings of class names, to address the issue of rare samples by combining semantic prototypes with visual prototypes. However, these methods still suffer from the spurious visual features learned from the rare support samples, resulting in limited benefits. In this paper, we propose a novel Semantic Prompt (SP) approach for few-shot learning. Instead of the naive exploitation of semantic information for remedying classifiers, we explore leveraging semantic information as prompts to tune the visual feature extraction network adaptively. Specifically, we design two complementary mechanisms to insert semantic prompts into the feature extractor: one enables the interaction between semantic prompts and patch embeddings along the spatial dimension via self-attention, and the other supplements visual features with the transformed semantic prompts along the channel dimension. By combining these two mechanisms, the feature extractor presents a better ability to attend to the class-specific features and obtains more generalized image representations with merely a few support samples. Through extensive experiments on four datasets, the proposed approach achieves promising results, improving the 1-shot learning accuracy by 3.67% on average.
1. Introduction

Few-shot learning (FSL) [21] is a fundamental and challenging task and remains largely unsolved, as it aims to predict a new class with rare samples. To address this problem, most effective FSL approaches leverage the prior knowledge learned from a large labeled base dataset, and encode the prior knowledge as a set of initial network parameters [12, 37, 42] or a fixed embedding function shared by all classes [16, 45, 46, 49].

*Equal contribution

Figure 1. Given only one image about a new class 'unicycle', the feature extractor is easily confused by the spurious features, such as the rider on the unicycle, and fails to obtain generalized image representations about the new class. In this paper, we propose Semantic Prompt, a new method to condition the feature extraction on rich semantic prior knowledge, such that the feature extractor captures the intrinsic class-specific features about the novel class.

As the labeled images of novel classes are scarce, a straightforward alternative is to use auxiliary information from other modalities, e.g. natural language, to assist in learning new concepts, which has been extensively studied in zero-shot learning [13, 26, 40, 43]. These methods usually directly use textual embeddings as the image classifiers for novel classes. Following this idea, a recent FSL study [52] proposes to infer textual prototypes from class names and combine them with the visual prototypes (i.e., classifiers) extracted from the rare support images. Others [32, 53] improve this work by introducing more sophisticated textual prototype predictors (e.g. a Graph Convolutional Network) or producing more accurate textual prototypes through leveraging the benefits of large-scale pre-trained language models.

In spite of their success, most of the above methods for directly inferring class prototypes from textual features ignore the information gap between textual and visual features. Specifically, the textual features may contain the semantic relationship between a novel class and known classes. However, they fail to provide the exact discriminative visual features of the new class because of lacking interaction with the underlying visual representations. As a result, the rich semantic information has derived limited benefit for recognizing novel classes when directly injected into classifiers. Moreover, with only limited support images, the learned visual features still suffer from spurious features, such as background clutters, and struggle to produce an accurate class prototype. For example, as illustrated in Figure 1, given one support image of a novel class 'unicycle', the feature extractor may capture image features containing both unicycles and other distractors, like riders and tile roofs, and fail to recognize the unicycle in other environments. Actually, the human perception system has a unique visual perceptual mechanism, called cognitive penetrability [30], which uses linguistic prior knowledge to tune ongoing visual perceptual processing to category-relevant stimulus features, promoting the learning of novel objects.
Hence, it is necessary to develop a new architecture for effectively leveraging textual information to remedy the defective representation caused by rare samples.

In this paper, we propose Semantic Prompt, a novel approach that leverages textual information of class names to significantly improve the representation ability of visual features for few-shot learning. Instead of directly inferring prototypes from textual features, we explore leveraging the textual features as semantic prompts to adaptively tune the feature extraction network for the rare support samples. As shown in Figure 1, with the guidance of semantic prompts, the feature extractor is expected to capture the intrinsic class-specific features for the novel class rather than other background clutters. Moreover, the advent of large-scale training has produced a cornucopia of powerful Natural Language Processing (NLP) models, such as BERT [9] and GPT [36], which bootstrap extracting rich textual information from class names. Through the interaction between semantic prompts and visual features, such semantically rich representations have powerful potential to provide the feature extractor with additional discriminative visual features about the new class, and subsequently produce more generalized class prototypes.

To condition the visual feature extraction on semantic prompts, we propose two complementary mechanisms to inject semantic information into the feature extractor, which allow the interaction between semantic prompts and visual features on the spatial and the channel dimensions, respectively. Specifically, to facilitate the interaction on the spatial dimension, we extend the image patch sequence with semantic prompts and feed them into a Transformer encoder. Through self-attention layers, the semantic prompts can inform the feature extractor to attend to the class-specific features while suppressing other distractors. For the interaction on the channel dimension, we first concatenate the semantic prompts with the visual context extracted from all patches, and then feed them into an MLP module. The extracted feature vector is added to each patch token to modulate and augment the visual features channel-by-channel. By combining the two interaction mechanisms, the proposed Semantic Prompt approach (SP) can effectively leverage the textual information in class names to boost FSL. Through comprehensive experiments on four benchmarks, the proposed SP presents consistent performance improvements with different types of text encoders and architecture designs, demonstrating its strong generality for the FSL problem. In summary, our contributions are three-fold:
• We propose a novel Semantic Prompt approach to leveraging textual information in class names for few-shot image recognition, which is inspired by the top-down cognitive penetrability effect in human perception and aims to adaptively tune the feature extraction to class-specific features according to the semantic prompts.
• To condition visual feature extraction on semantic prompts, we propose two complementary mechanisms to inject semantic prompts into the visual feature extractor, which allow the interaction on the spatial and the channel dimensions, respectively.
• The proposed method achieves remarkable performance on four FSL benchmarks, improving the FSL accuracy by 3.67% on average under the challenging 1-shot setting.
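The two injection mechanisms can be summarized in a short PyTorch-style sketch; the layer placement, dimensions, and the use of a single fused module here are illustrative assumptions rather than the exact SP architecture.

```python
import torch
import torch.nn as nn

class SemanticPromptLayer(nn.Module):
    """Sketch of the spatial (self-attention) and channel (MLP) prompt interactions."""
    def __init__(self, dim=384, text_dim=512, heads=6):
        super().__init__()
        self.proj = nn.Linear(text_dim, dim)          # map the class-name embedding to token dim
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, patches, text_emb):
        # patches: (B, N, dim) patch embeddings; text_emb: (B, text_dim) semantic prompt
        prompt = self.proj(text_emb).unsqueeze(1)     # (B, 1, dim)
        # (1) spatial interaction: prepend the prompt token and run self-attention
        tokens = torch.cat([prompt, patches], dim=1)
        tokens, _ = self.attn(tokens, tokens, tokens)
        patches = tokens[:, 1:]                       # drop the prompt token afterwards
        # (2) channel interaction: fuse the prompt with the global visual context
        context = patches.mean(dim=1)                 # (B, dim)
        shift = self.mlp(torch.cat([context, prompt.squeeze(1)], dim=-1))
        return patches + shift.unsqueeze(1)           # modulate every patch channel-wise
```

In an actual backbone, the spatial interaction would reuse the Transformer's own attention blocks rather than a separate attention layer.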
Jeanneret_Adversarial_Counterfactual_Visual_Explanations_CVPR_2023
Abstract

Counterfactual explanations and adversarial attacks have a related goal: flipping output labels with minimal perturbations regardless of their characteristics. Yet, adversarial attacks cannot be used directly in a counterfactual explanation perspective, as such perturbations are perceived as noise and not as actionable and understandable image modifications. Building on the robust learning literature, this paper proposes an elegant method to turn adversarial attacks into semantically meaningful perturbations, without modifying the classifiers to be explained. The proposed approach hypothesizes that Denoising Diffusion Probabilistic Models are excellent regularizers for avoiding high-frequency and out-of-distribution perturbations when generating adversarial attacks. The paper's key idea is to build attacks through a diffusion model to polish them. This allows studying the target model regardless of its robustification level. Extensive experimentation shows the advantages of our counterfactual explanation approach over the current state of the art in multiple testbeds.
1. Introduction

The research branch of explainable artificial intelligence has yielded remarkable results, gradually opening the machine learning black boxes. The production of counterfactual explanations (CE) has become one of the promising pipelines for explainability, especially in computer vision [25, 28, 49, 54]. As a matter of fact, CE are an intuitive way to expose how an input instance can be minimally modified to steer the desired change in the model's output. More precisely, CE answer the following: what does X have to change to alter the prediction from Y to Y′? From a user perspective, these explanations are easy to understand since they are concise and illustrated by examples. Henceforth, companies have adopted CE as an interpretation methodology to legally justify the decision-making of machine learning models [60]. To better appreciate the potential of CE, one may consider the following scenario: a client goes to a photo booth to take some ID photos, and the system claims the photos are invalid for such usage. Instead of performing random attempts to abide by the administration criteria, an approach based on CE could provide visual indications of what the client should fix.

The main objective of CE is to add minimalistic semantic changes to the image to flip the original model's prediction. Yet, these generated explanations must accomplish several objectives [28, 49, 60]. A CE must be valid, meaning that the CE has to change the prediction of the model. Secondly, the modifications have to be sparse and proximal to the input data, aiming to provide simple and concise explanations. In addition, the CE method should be able to generate diverse explanations. If a trait is the most important for a certain class among other features, diverse explanations should change this attribute most frequently. Finally, the semantic changes must be realistic. When the CE method inserts out-of-distribution artifacts in the input image, it is difficult to interpret whether the flipping decision was because of the inserted object or because of the shifting of the distribution, making the explanation unclear.

Adversarial attacks share a common goal with CE: flipping the classifier's prediction. For traditional, non-robust visual classifiers, generating these attacks on input instances creates imperceptible noise. Even though it has been shown that this noise contains meaningful changes [24] and that adversarial noise and counterfactual perturbations are related [13, 23], adversarial attacks have lesser value. Indeed, the modifications present in the adversaries are unnoticeable by the user and leave the user with no real feedback. Contrary to the previous observations, many papers (e.g., [47]) have shown that adversarial attacks toward robust classifiers generate semantic changes in the input images. This has led works [51, 70] to explore robust models to produce data using adversarial attacks. In the context of counterfactual explanations, this is advantageous [5, 52] because the optimization will produce semantic changes to induce the flipping of the label.

Then two challenges arise when employing adversarial attacks for counterfactual explanations. On the one hand, when studying a classifier, we must be able to explain its behavior regardless of its characteristics.
So, a naive application of adversarial attacks is impractical for non-robust models. On the other hand, according to [57], robustifying the classifier yields an implicit trade-off by lowering the clean accuracy, as it is referred to by the adversarial robustness community [10], a particularly crucial trait for high-stakes areas such as the medical field [40].

The previous remarks motivate our endeavor to mix the best of both worlds. Hence, in this paper, we propose robustifying brittle classifiers without modifying their weights to generate CE. This robustification, obtained through a filtering preprocessing step leveraging diffusion models [19], allows us to keep the performance of the classifier untouched and unlocks the production of CE through adversarial attacks. We summarize the novelty of our paper as follows: (i) We propose Adversarial Counterfactual Explanations, ACE in short, a novel methodology based on adversarial attacks to generate semantically coherent counterfactual explanations. (ii) ACE performs competitively with respect to the other methods, beating previous state-of-the-art methods on multiple measurements across multiple datasets. (iii) Finally, we point out some defects of current evaluation metrics and propose ways to remedy their shortcomings. (iv) To show a use case of ACE, we study ACE's meaningful and plausible explanations to comprehend the mechanisms of classifiers. We experiment with ACE findings producing actionable modifications in real-world scenarios to flip the classifier decision. Our code and models are available on GitHub.
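The core idea, attacking the classifier through a diffusion-based filter so that the perturbation stays on the image manifold, can be sketched as follows. The ddpm_denoise helper (a differentiable re-denoising pass that maps a partially noised image back to a clean one), the simplified noising formula, and the loss weights are placeholders for illustration, not ACE's exact formulation.

```python
import torch
import torch.nn.functional as F

def diffusion_guided_counterfactual(x, target, classifier, ddpm_denoise,
                                    steps=50, lr=0.01, alpha_bar=0.7, dist_w=0.1):
    # x: (B, 3, H, W) input in [0, 1]; target: (B,) desired class indices
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    x_cf = x
    for _ in range(steps):
        x_adv = (x + delta).clamp(0, 1)
        # simplified forward diffusion to an intermediate noise level ...
        noise = torch.randn_like(x_adv)
        x_noisy = alpha_bar ** 0.5 * x_adv + (1 - alpha_bar) ** 0.5 * noise
        # ... then project back onto the image manifold with the denoiser
        x_cf = ddpm_denoise(x_noisy, alpha_bar)
        loss = F.cross_entropy(classifier(x_cf), target) + dist_w * delta.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x_cf.detach()
```

Because the classifier only ever sees the denoised image, its weights stay untouched, while gradients flowing through the denoiser suppress high-frequency, out-of-distribution noise.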
Asnani_MaLP_Manipulation_Localization_Using_a_Proactive_Scheme_CVPR_2023
Abstract

Advancements in the generation quality of various Generative Models (GMs) have made it necessary to not only perform binary manipulation detection but also to localize the modified pixels in an image. However, prior works for manipulation localization, termed passive, exhibit poor generalization performance over unseen GMs and attribute modifications. To combat this issue, we propose a proactive scheme for manipulation localization, termed MaLP. We encrypt the real images by adding a learned template. If the image is manipulated by any GM, this added protection from the template not only aids binary detection but also helps in identifying the pixels modified by the GM. The template is learned by leveraging local and global-level features estimated by a two-branch architecture. We show that MaLP performs better than prior passive works. We also show the generalizability of MaLP by testing on 22 different GMs, providing a benchmark for future research on manipulation localization. Finally, we show that MaLP can be used as a discriminator for improving the generation quality of GMs. Our models/codes are available at www.github.com/vishal3477/pro_loc.
1. Introduction

We witness numerous Generative Models (GMs) [8, 9, 15, 17, 23–25, 28, 34, 39, 44, 50, 52, 60] being proposed to generate realistic-looking images. These GMs can not only generate an entirely new image [23, 24], but also perform partial manipulation of an input image [9, 28, 60]. The proliferation of these GMs has made it easier to manipulate personal media for malicious use. Prior methods to combat manipulated media focus on binary detection [1, 2, 5, 11, 14, 30, 45, 48, 55, 56], using mouth movement, model parsing, hand-crafted features, etc.

*All data sourcing, modeling codes, and experiments were developed at Michigan State University. Meta did not obtain the data/codes or conduct any experiments in this work.

Figure 1. (a) High-level idea of MaLP. We encrypt the image by adding a learnable template, which helps to estimate the fakeness map. (b) The cosine similarity (CS) between ground-truth and predicted fakeness maps for 22 unseen GMs. The performance is better for almost all GMs when using our proactive approach.

Recent works go one step further than detection, i.e. manipulation localization, which is defined as follows: given an image partially manipulated by a GM (e.g. STGAN [28] modifying hair colors of a face image), the goal is to identify which pixels are modified by estimating a fakeness map [21]. Identifying modified pixels helps to determine the severity of the fakeness in the image and aids media forensics [11, 21]. Also, manipulation localization provides an understanding of the attacker's intent for modification, which may further benefit identifying the attack toolchains used [13].

Recent methods for manipulation localization [27, 37, 49] focus on estimating the manipulation mask of face-swapped images. They localize modified facial attributes by leveraging attention mechanisms [11], patch-based classifiers [4], and face parsing [21]. The main drawback of these methods is that they do not generalize well to GMs unseen in training. That is, when the test images and training images are modified by different GMs, which will likely happen given the vast number of existing GMs. Thus, our work aims for a localization method generalizable to unseen GMs.

All aforementioned methods are based on a passive scheme, as the method receives an image as is for estimation. Recently, proactive methods are gaining success for deepfake tasks such as detection [1], disruption [46, 57], and tagging [51]. These methods are considered proactive as they add different types of signals, known as templates, for encrypting the image before it is manipulated by a GM. This template can be a one-hot encoding [51], an adversarial perturbation [46], or a learnable noise [1], and is optimized to improve the performance of the defined tasks.

Motivated by [1], we propose a Proactive scheme for MAnipulation Localization, termed MaLP, in order to improve generalization. Specifically, MaLP learns an optimized template which, when added to real images, would improve manipulation localization, should they get manipulated. This manipulation can be done by an unseen GM trained on either in-domain or out-of-domain datasets. Furthermore, face manipulation may involve modifying facial attributes unseen in training (e.g.,
train on hair color modification yet test on gender modification). MaLP incorporates three modules that focus on encryption, detection, and localization. The encryption module selects and adds the template from the template set to the real images. These encrypted images are further processed by the localization and detection modules to perform the respective tasks.

Designing a proactive manipulation localization approach comes with several challenges. First, it is not straightforward to formulate constraints for learning the template in an unsupervised manner. Second, calculating a fakeness map at the same resolution as the input image is computationally expensive if the decision for each pixel has to be made. Prior works [4, 11] either down-sample the images or use a patch-wise approach, both of which result in inaccurate low-resolution fakeness maps. Lastly, the templates should be generalizable enough to localize modified regions from unseen GMs.

We design a two-branch architecture consisting of a shallow CNN network and a transformer to optimize the template during training. While the former leverages local-level features due to its shallow depth, the latter focuses on global-level features to better capture the affinity of far-apart regions. The joint training of both networks enables MaLP to learn a better template, having embedded the information of both levels. During inference, the CNN network alone is sufficient to estimate the fakeness map with a higher inference efficiency. Compared to prior passive works [11, 21], MaLP improves the generalization performance on unseen GMs. We also demonstrate that MaLP can be used as a discriminator for fine-tuning conventional GMs to improve the quality of GM-generated images.

In summary, we make the following contributions.
• We are the first to propose a proactive scheme for image manipulation localization, applicable to both face and generic images.
• Our novel two-branch architecture uses both local-level and global-level features to learn a set of templates in an unsupervised manner. The framework is guided by constraints based on template recovery, fakeness map classification, and high cosine similarity between predicted and ground-truth fakeness maps.

Table 1. Comparison of our approach with prior works on manipulation localization and proactive schemes. We show the generalization ability of all works across different facial attribute modifications, unseen GMs trained on datasets with the same domain (in-domain) and different domains (out-domain). [Keys: Attr.: Attributes, Imp.: Improving, L.: Localization, D.: Detection; Attr., In-domain, and Out-domain fall under Generalization.]

Work   Scheme      Task     Template   Attr.  In-domain  Out-domain  Imp. GM
[51]   Proactive   Tag      Fix        ✔      ✔          ✖           ✖
[47]   Proactive   Disrupt  Learn      ✔      ✖          ✖           ✖
[46]   Proactive   Disrupt  Learn      ✔      ✔          ✖           ✖
[57]   Proactive   Disrupt  Learn      ✔      ✖          ✖           ✖
[1]    Proactive   D.       Learn      ✖      ✔          ✔           ✖
[37]   Passive     L.+D.    -          ✖      ✖          ✖           ✖
[49]   Passive     L.+D.    -          ✖      ✖          ✖           ✖
[27]   Passive     L.+D.    -          ✖      ✔          ✖           ✖
[11]   Passive     L.+D.    -          ✔      ✔          ✖           ✖
[4]    Passive     L.+D.    -          ✖      ✔          ✖           ✖
[21]   Passive     L.+D.    -          ✔      ✔          ✖           ✖
MaLP   Proactive   L.+D.    Learn      ✔      ✔          ✔           ✔

• MaLP can be used as a plug-and-play discriminator module to fine-tune the generative model to improve the quality of the generated images.
• Our method outperforms State-of-The-Art (SoTA) methods in manipulation localization and detection. Furthermore, our method generalizes well to GMs and modified attributes unseen in training.
To facilitate the research of localization, we develop a benchmark for evaluating the generalization of manipulation localization on images where the train and test GMs are different.
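To give a feel for the proactive encrypt-then-localize setup, here is a minimal sketch: a learnable template is added to real images before any manipulation, and a shallow CNN branch predicts the fakeness map, trained with a cosine-similarity term. The architecture, template strength, and single-template simplification are illustrative assumptions; MaLP additionally uses a transformer branch, a template set, and recovery/classification constraints.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProactiveLocalizerSketch(nn.Module):
    """Learnable template for encryption plus a shallow CNN that estimates the fakeness map."""
    def __init__(self, img_size=128, strength=0.03):
        super().__init__()
        self.template = nn.Parameter(torch.randn(1, 3, img_size, img_size) * 0.01)
        self.strength = strength
        self.localizer = nn.Sequential(              # shallow CNN branch used at inference
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())

    def encrypt(self, real):
        # add the protective template to real images before they can be manipulated
        return (real + self.strength * self.template).clamp(0, 1)

    def forward(self, maybe_manipulated):
        return self.localizer(maybe_manipulated)     # predicted fakeness map

def localization_loss(pred_map, gt_map):
    # encourage high cosine similarity with the ground-truth fakeness map
    cs = F.cosine_similarity(pred_map.flatten(1), gt_map.flatten(1), dim=1)
    return (1 - cs).mean()
```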
Cao_HexPlane_A_Fast_Representation_for_Dynamic_Scenes_CVPR_2023
Abstract

Modeling and re-rendering dynamic 3D scenes is a challenging task in 3D vision. Prior approaches build on NeRF and rely on implicit representations. This is slow since it requires many MLP evaluations, constraining real-world applications. We show that dynamic 3D scenes can be explicitly represented by six planes of learned features, leading to an elegant solution we call HexPlane. A HexPlane computes features for points in spacetime by fusing vectors extracted from each plane, which is highly efficient. Pairing a HexPlane with a tiny MLP to regress output colors and training via volume rendering gives impressive results for novel view synthesis on dynamic scenes, matching the image quality of prior work but reducing training time by more than 100×. Extensive ablations confirm our HexPlane design and show that it is robust to different feature fusion mechanisms, coordinate systems, and decoding mechanisms. HexPlane is a simple and effective solution for representing 4D volumes, and we hope it can broadly contribute to modeling spacetime for dynamic 3D scenes.
1. Introduction

Reconstructing and re-rendering 3D scenes from a set of 2D images is a core vision problem which can enable many AR/VR applications. The last few years have seen tremendous progress in reconstructing static scenes, but this static-scene assumption is restrictive: the real world is dynamic, and in complex scenes motion is the norm, not the exception.

Many current approaches for representing dynamic 3D scenes rely on implicit representations, building on NeRF [42]. They train a large multi-layer perceptron (MLP) that inputs the position of a point in space and time, and outputs either the color of the point [28, 29] or a deformation to a canonical static scene [16, 49, 50, 54]. In either case, rendering images from novel views is expensive since each generated pixel requires many MLP evaluations. Training is similarly slow, requiring up to days of GPU time to model a single dynamic scene; this computational bottleneck prevents these methods from being widely applied.

1 Project page: https://caoang327.github.io/HexPlane .

Figure 1. HexPlane for Dynamic 3D Scenes. Instead of regressing colors and opacities from a deep MLP, we explicitly compute features for points in spacetime via HexPlane. Pairing with a tiny MLP, it allows above 100× speedups with matching quality.

Several recent methods for modeling static scenes have demonstrated tremendous speedups over NeRF through the use of explicit and hybrid methods [7, 43, 66, 81]. These methods use an explicit spatial data structure that stores explicit scene data [14, 81] or features that are decoded by a tiny MLP [7, 43, 66]. This decouples a model's capacity from its speed, and allows high-quality images to be rendered in realtime [43]. While effective, these methods have thus far been applied only to static scenes.

In this paper, we aim to design an explicit representation of dynamic 3D scenes, building on similar advances for static scenes. To this end, we design a spatial-temporal data structure that stores scene data. It must overcome two key technical challenges. First is memory usage. We must model all points in both space and time; naïvely storing data in a dense 4D grid would scale with the fourth power of grid resolution, which is infeasible for large scenes or long durations. Second is sparse observations. Moving a single camera through a static scene can give views that densely cover the scene; in contrast, moving a camera through a dynamic scene gives just one view per timestep. Treating timesteps independently may give insufficient scene coverage for high-quality reconstruction, so we must instead share information across timesteps.

We overcome these challenges with our novel HexPlane architecture. Inspired by factored representations for static scenes [5, 7, 51], a HexPlane decomposes a 4D spacetime grid into six feature planes spanning each pair of coordinate axes (e.g. XY, ZT). A HexPlane computes a feature vector for a 4D point in spacetime by projecting the point onto each feature plane, then aggregating the six resulting feature vectors. The fused feature vector is then passed to a tiny MLP which predicts the color of the point; novel views can then be rendered via volume rendering [42].
Despite its simplicity, a HexPlane provides an elegant solution to the challenges identified above. Due to its factored representation, a HexPlane's memory footprint only scales quadratically with scene resolution. Furthermore, each plane's resolution can be tuned independently to account for scenes requiring variable capacity in space and time. Since some planes rely only on spatial coordinates (e.g. XY), by construction a HexPlane encourages sharing information across disjoint timesteps.

Our experiments demonstrate that HexPlane is an effective and highly efficient method for novel view synthesis in dynamic scenes. On the challenging Plenoptic Video dataset [28] we match the image quality of prior work but improve training time by >100×; we also outperform prior approaches on a monocular video dataset [54]. Extensive ablations validate our HexPlane design and demonstrate that it is robust to different feature fusion mechanisms, coordinate systems (rectangular vs. spherical), and decoding mechanisms (spherical harmonics vs. MLP).

HexPlane is a simple, explicit, and general representation for dynamic scenes. It makes minimal assumptions about the underlying scene, and does not rely on deformation fields or category-specific priors. Besides improving and accelerating view synthesis, we hope HexPlane will be useful for a broad range of research in dynamic scenes [61].
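A minimal sketch of the six-plane lookup is given below. The plane resolutions and the fusion rule (multiplying each spatial plane with its complementary spatio-temporal plane, then concatenating) correspond to one of the variants the paper ablates, and the names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HexPlaneSketch(nn.Module):
    """Six learned feature planes over (x, y, z, t); features are bilinearly sampled and fused."""
    def __init__(self, feat_dim=16, spatial=128, temporal=32):
        super().__init__()
        def plane(h, w):
            return nn.Parameter(0.1 * torch.randn(1, feat_dim, h, w))
        self.xy = plane(spatial, spatial)
        self.xz = plane(spatial, spatial)
        self.yz = plane(spatial, spatial)
        self.zt = plane(spatial, temporal)   # height: z, width: t
        self.yt = plane(spatial, temporal)   # height: y, width: t
        self.xt = plane(spatial, temporal)   # height: x, width: t

    @staticmethod
    def sample(plane, u, v):
        # u, v in [-1, 1]; u indexes the plane width, v the height; returns (P, C) features
        grid = torch.stack([u, v], dim=-1).view(1, -1, 1, 2)
        out = F.grid_sample(plane, grid, align_corners=True)      # (1, C, P, 1)
        return out[0, :, :, 0].t()

    def forward(self, xyzt):
        # xyzt: (P, 4) points normalized to [-1, 1] in space and time
        x, y, z, t = xyzt.unbind(-1)
        f1 = self.sample(self.xy, x, y) * self.sample(self.zt, t, z)
        f2 = self.sample(self.xz, x, z) * self.sample(self.yt, t, y)
        f3 = self.sample(self.yz, y, z) * self.sample(self.xt, t, x)
        return torch.cat([f1, f2, f3], dim=-1)   # fused feature, fed to a tiny MLP
```

The returned feature would then be decoded by a tiny MLP into color and density for volume rendering.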
Chen_Boosting_Semi-Supervised_Learning_by_Exploiting_All_Unlabeled_Data_CVPR_2023
Abstract

Semi-supervised learning (SSL) has attracted enormous attention due to its vast potential for mitigating the dependence on large labeled datasets. The latest methods (e.g., FixMatch) use a combination of consistency regularization and pseudo-labeling to achieve remarkable successes. However, these methods all suffer from wasting complicated examples, since all pseudo-labels have to be selected by a high threshold to filter out noisy ones. Hence, the examples with ambiguous predictions will not contribute to the training phase. To better leverage all unlabeled examples, we propose two novel techniques: Entropy Meaning Loss (EML) and Adaptive Negative Learning (ANL). EML incorporates the prediction distribution of non-target classes into the optimization objective to avoid competition with the target class, thus generating more high-confidence predictions for selecting pseudo-labels. ANL introduces an additional negative pseudo-label for all unlabeled data to leverage low-confidence examples. It adaptively allocates this label by dynamically evaluating the top-k performance of the model. EML and ANL do not introduce any additional parameters or hyperparameters. We integrate these techniques with FixMatch, and develop a simple yet powerful framework called FullMatch. Extensive experiments on several common SSL benchmarks (CIFAR-10/100, SVHN, STL-10 and ImageNet) demonstrate that FullMatch exceeds FixMatch by a large margin. Integrated with FlexMatch (an advanced FixMatch-based framework), we achieve state-of-the-art performance. Source code is available at https://github.com/megvii-research/FullMatch.

1. Introduction

Semi-supervised learning (SSL) is proposed to leverage an abundance of unlabeled data to enhance the model's performance when labeled data is limited [44].
1. Introduction Semi-supervised learning (SSL) is proposed to leverage an abundance of unlabeled data to enhance the model’s per-formance when labeled data is limited [44]. Consistency *Corresponding author. He is sponsored by Shanghai Sailing Program (23YF1410500). 0 200k 400k 600k 800k 1000k Iter0.550.600.650.700.750.800.850.90Ratio w/ EML w/o EML(a) 0 200k 400k 600k 800k 1000k Iter010203040506070Amount amount of NPL accuracy of NPL 0.900.920.940.960.981.00 Accuracy (b) Figure 1. Visualization of the experimental results on CIFAR-100 with 10000 labeled images. Evaluations are done every 1K itera-tions. (a) The increasing proportion of examples with pseudo-label when applying EML to FixMatch. (b) The number of negative pseudo-labels per sample and accuracy during the whole training process. “NPL” denotes negative pseudo-labels. regularization [1, 3, 23] and pseudo labeling [10, 12, 25] have shown significant ability for leveraging unlabeled data and thus they are widely used in SSL frameworks. Re-cently, FixMatch-based methods [21, 26, 31, 38] that com-bine the two technologies in a unified framework have achieved noticeable successes. Specifically, they apply weak-augmentation (e.g., only random flip and shift) to un-labeled data and obtain their predictions, and then corre-sponding one-hot pseudo-label is generated if the largest prediction confidence is beyond the predefined threshold (e.g., 0.95), finally it is used as a training target when in-putting the strongly-augmented examples (e.g., RandAug-ment [6], Cutout [8]). However, the FixMatch-based methods still have a sig-nificant drawback that they rely on an extremely high threshold to produce accurate pseudo-labels, which results in ignoring a large number of unlabeled examples with am-biguous predictions, especially on the early and middle training stages. We can easily observe this phenomenon according to the blue curve in Fig. 1(a), which visualizes the proportion of samples with pseudo-labels at different training iterations when applying FixMatch [26] to CIFAR-100 [16] with 10,000 labels. It shows that the ratio of se-lected examples with pseudo-label is around 58% after 200k iterations and merely reaches 84% in the end. This moti-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 7548 vates us to exploit more unlabeled data to boost the overall performance. One intuitive solution is to assign pseudo-label for po-tential examples (i.e., the maximum confidence is close to the predefined threshold). We argue the competition between partial classes leads to failure to produce high-confidence prediction, while the unsupervised loss of Fix-Match (i.e, cross-entropy) only focus on the target class when training the examples with pseudo-label. Therefore, we propose a novel scheme to enhance confidence on tar-get class, namely Entropy Meaning Loss (EML). For exam-ples with pseudo-label, EML imposes additional supervi-sion on all non-target classes (i.e., classes which specify the absence of a specific label) to push their prediction close to a uniform distribution, thus preventing any class com-petition with the target class. Fig. 1(a) illustrates that Fix-Match equipped with EML can select more examples with the pseudo-label during the whole training process. 
Since EML attempts to yield more low-entropy predictions to select more examples with pseudo-labels rather than tuning the threshold, it can also be applied to any dynamic-threshold methods (e.g., FlexMatch [38], Dash [34]).

Nevertheless, it is still impossible to leverage all unlabeled data by generating pseudo-labels with a threshold strategy. This motivates us to further consider how to utilize the low-confidence unlabeled examples without pseudo-labels (i.e., those whose maximum confidence is far from the predefined threshold). Intuitively, the prediction may get confused among the top classes, but it will be confident that the input does not belong to the categories ranked after these classes. Fig. 2 shows an inference result of FixMatch. The ground truth is "cat"; FixMatch is confused among several top classes (e.g., "dog", "frog") and makes a low-confidence prediction, but it is highly confident that some low-ranked classes (e.g., "airplane", "horse") are not the ground-truth class, so we can safely assign negative pseudo-labels to these classes. Based on this insight, we propose a novel method named Adaptive Negative Learning (ANL). Specifically, ANL first calculates k adaptively based on the prediction consistency, such that the top-k accuracy is close to 1, and then regards the classes ranked after k as negative pseudo-labels. Furthermore, if an example is assigned a pseudo-label, ANL will shrink the range of non-target classes (i.e., EML only needs to constrain the top-k classes except the target class). Note that ANL is a threshold-independent scheme and thus can be applied to all unlabeled data. As shown in Fig. 1(b), the number of negative pseudo-labels rendered by ANL increases as the model is optimized while maintaining high accuracy. In summary, our method makes full use of the unlabeled dataset, which is hardly ever seen in modern SSL algorithms.

Figure 2. An example of an inference result of FixMatch. It can be concluded that the input does not belong to low-ranked classes such as airplane and horse.

To demonstrate the effectiveness of the proposed EML and ANL, we simply integrate EML and ANL into FixMatch and obtain a new framework named FullMatch. We conduct various experiments on CIFAR-10/100 [16], SVHN [22], STL-10 [5] and ImageNet [7]. The results demonstrate that FullMatch surpasses the performance of FixMatch by a large margin while the training cost remains similar to FixMatch. Moreover, our method can be easily adapted to other FixMatch-based algorithms and obtain further improvements. For example, by combining it with FlexMatch [38], we achieve state-of-the-art performance. To summarize, our key contributions include:
1) We introduce an additional supervision, namely Entropy Meaning Loss (EML), when training examples with pseudo-labels, which enforces a uniform distribution over non-target classes to prevent them from competing with the target class, thus producing more high-confidence predictions.
2) We propose Adaptive Negative Learning (ANL), a dynamic negative pseudo-label allocation scheme, which renders negative pseudo-labels with very limited extra computational overhead for all unlabeled data, including the low-confidence ones.
3) We design a simple yet effective framework named FullMatch by simply integrating FixMatch with the proposed EML and ANL, which leverages all unlabeled data and thus achieves remarkable gains on five benchmarks. Furthermore, our method is shown to be orthogonal to other FixMatch-based frameworks. Specifically, FlexMatch combined with our method achieves state-of-the-art results.
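The gist of the two losses can be sketched as follows. The adaptive estimation of k from prediction consistency, the restriction of EML to the top-k non-target classes, and the loss weights are omitted or simplified here, so treat this as an illustrative approximation rather than the exact FullMatch objective.

```python
import torch
import torch.nn.functional as F

def fullmatch_unsup_loss(logits_s, logits_w, k, tau=0.95, eps=1e-8):
    # logits_w / logits_s: logits of the weakly / strongly augmented views of unlabeled data
    probs_w = logits_w.detach().softmax(dim=-1)
    conf, pseudo = probs_w.max(dim=-1)
    probs_s = logits_s.softmax(dim=-1)

    # FixMatch term: cross-entropy on confidently pseudo-labeled examples
    mask = (conf >= tau).float()
    ce = (F.cross_entropy(logits_s, pseudo, reduction='none') * mask).mean()

    # EML: push the renormalized non-target distribution toward uniform (maximize its entropy)
    non_target = probs_s.scatter(1, pseudo.unsqueeze(1), 0.0)
    non_target = non_target / (non_target.sum(dim=1, keepdim=True) + eps)
    eml = ((non_target * (non_target + eps).log()).sum(dim=1) * mask).mean()

    # ANL: classes ranked after the weak view's top-k receive negative pseudo-labels
    topk_idx = probs_w.topk(k, dim=-1).indices
    neg_mask = torch.ones_like(probs_s).scatter(1, topk_idx, 0.0)
    anl = -(neg_mask * (1.0 - probs_s + eps).log()).sum(dim=1).mean()

    return ce + eml + anl
```

Here the EML term minimizes the negative entropy of the non-target distribution (pushing it toward uniform), while the ANL term applies a standard negative-learning loss to every class outside the top-k, so even low-confidence examples contribute a training signal.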
Chen_Novel-View_Acoustic_Synthesis_CVPR_2023
Abstract

We introduce the novel-view acoustic synthesis (NVAS) task: given the sight and sound observed at a source viewpoint, can we synthesize the sound of that scene from an unseen target viewpoint? We propose a neural rendering approach: a Visually-Guided Acoustic Synthesis (ViGAS) network that learns to synthesize the sound of an arbitrary point in space by analyzing the input audio-visual cues. To benchmark this task, we collect two first-of-their-kind large-scale multi-view audio-visual datasets, one synthetic and one real. We show that our model successfully reasons about the spatial cues and synthesizes faithful audio on both datasets. To our knowledge, this work represents the very first formulation, dataset, and approach to solve the novel-view acoustic synthesis task, which has exciting potential applications ranging from AR/VR to art and design. Unlocked by this work, we believe that the future of novel-view synthesis is in multi-modal learning from videos.
1. Introduction

Replaying a video recording from a new viewpoint¹ has many applications in cinematography, video enhancement, and virtual reality. For example, it can be used to edit a video, simulate a virtual camera, or, given a video of a personal memory, even enable users to experience a treasured moment again, not just on a 2D screen but in 3D in a virtual or augmented reality, thus 'reliving' the moment.

While the applications are exciting, there are still many unsolved technical challenges. Recent advances in 3D reconstruction and novel-view synthesis (NVS) address the problem of synthesizing new images of a given scene [32, 34, 44]. However, thus far, the view synthesis problem is concerned with creating visuals alone; the output is silent or at best naively adopts the sounds of the original video (from the "wrong" viewpoint). Without sound, the emotional and cognitive significance of the replay is severely diminished.

In this work, we address this gap and introduce the new task of novel-view acoustic synthesis (NVAS). The goal of NVAS is to synthesize the sound of the scene at an unseen target viewpoint, given the sight and sound observed at a source viewpoint.

¹We use "viewpoint" to mean a camera or microphone pose.

(Figure: the input consists of the audio and visual observations at the source viewpoint, and the prediction is the audio at the target viewpoint; the target-viewpoint visual is shown only for reference.)
sha1_base64="gmuaOXNBHdGGiasD0JLjq1fYTJ0=">AAACMXicbVBNSwMxFMz6WdevqkcvwSLUg2W3B/VY8dKTVLS10C0lm762wWyyJNlKWfqXvPhPxEsPinj1T5itFbQ6EJjMvOHxJow508bzJs7C4tLyympuzV3f2Nzazu/sNrRMFIU6lVyqZkg0cCagbpjh0IwVkCjkcBveXWT+7RCUZlLcmFEM7Yj0BesxSoyVOvlqEEKfiZSCMKDGrpBD4MdDBvdBgAmViTaMYsv1SJgBaKazT/GycX595AYgut/JTr7glbwp8F/iz0gBzVDr5J+CrqRJZOOUE61bvhebdkqUXchh7AaJhpjQO9KHlqWCRKDb6fTiMT60Shf3pLJPGDxVfyZSEmk9ikI7GREz0PNeJv7ntRLTO2unTMSJAUG/FvUSjo3EWX24yxRQw0eWEKpYVg4dEEWo7UC7tgR//uS/pFEu+Sel8lW5UCnO6sihfXSAishHp6iCqqiG6oiiB/SMXtCr8+hMnDfn/Wt0wZll9tAvOB+fFfSqAg==</latexit>novel-viewacousticsynthesis(NVAS) <latexit sha1_base64="M/oXgzzvTTQZBpW0tUZ/skQulpQ=">AAACHXicbVDLSgMxFM34rONr1KWbYBG6KjNFVFwV3LisYB/QlpJJb9vQTDIkmUoZ+iNu/BU3LhRx4Ub8G9N2BG09EDiccy4394QxZ9r4/pezsrq2vrGZ23K3d3b39r2Dw5qWiaJQpZJL1QiJBs4EVA0zHBqxAhKFHOrh8Hrq10egNJPizoxjaEekL1iPUWKs1PHOWiH0mUgpCANq4o6YTgi/wvMFeMTgPpZMGLcFovuT6nh5v+jPgJdJkJE8ylDpeB+trqRJZMcpJ1o3Az827ZQowyiHidtKNMSEDkkfmpYKEoFup7PrJvjUKl3ck8o+YfBM/T2RkkjrcRTaZETMQC96U/E/r5mY3mU7ZSJODAg6X9RLODYST6vCXaaAGj62hFDF7F8xHRBFqO1Au7aEYPHkZVIrFYPzYum2lC8Xsjpy6BidoAIK0AUqoxtUQVVE0QN6Qi/o1Xl0np03530eXXGymSP0B87nNyKCoxw=</latexit>visual: source viewpoint<latexit sha1_base64="9UpGCzo0YEBsWgEaPdo5vXYYl9I=">AAACI3icbVDLSsNAFJ34Nr6qLt0MFqGrknSh0lXBjUsFawtNKJPJbR2cTMLMTaWE/osbf8WNC0XcuPBfnD4EbT0wcDjnXO7cE2VSGPS8T2dpeWV1bX1j093a3tndK+0f3Jo01xyaPJWpbkfMgBQKmihQQjvTwJJIQiu6vxj7rQFoI1J1g8MMwoT1legJztBK3VI9iKAvVMFBIeiROxAmZ7IeBC4y3QekQUCtCA9ZKhS6Aaj4J9stlb2qNwFdJP6MlMkMV93SexCnPE/sOJfMmI7vZRgWTKPgEkZukBvIGL9nfehYqlgCJiwmN47oiVVi2ku1fQrpRP09UbDEmGES2WTC8M7Me2PxP6+TY+88LITKcgTFp4t6uaSY0nFhNBYaOMqhJYxrYf9K+R3TjNsOjGtL8OdPXiS3tap/Wq1d18qNyqyODXJEjkmF+OSMNMgluSJNwskjeSav5M15cl6cd+djGl1yZjOH5A+cr2+pi6TS</latexit>visual:targetviewpoint <latexit sha1_base64="VvksTeSlNE+VBbTxSJ14XZHmE4M=">AAACHHicbVC7SgNBFJ31GddX1NJmMAixCbsR1DJgYxnBPCAbwuzs3WTI7MwyMyuEJR9i46/YWChiYyH4N04egiae6nDOPdx7T5hypo3nfTkrq2vrG5uFLXd7Z3dvv3hw2NQyUxQaVHKp2iHRwJmAhmGGQztVQJKQQyscXk/81j0ozaS4M6MUugnpCxYzSoyVesXzIIQ+EzkFYUCN3bIUfISDwI2lwgpiUCAonLkBiOhnqFcseRVvCrxM/DkpoTnqveJHEEmaJTZOOdG643up6eZEGUY5jN0g05ASOiR96FgqSAK6m0+fG+NTq0R4ck4shcFT9XciJ4nWoyS0kwkxA73oTcT/vE5m4qtuzkSaGfvjbFGccWwknjSFI6aAGttGxAhVzN6K6YAoQm0H2rUl+IsvL5NmteJfVKq31VKtPK+jgI7RCSojH12iGrpBddRAFD2gJ/SCXp1H59l5c95noyvOPHOE/sD5/AaTR6GZ</latexit>(onlyfor reference) <latexit sha1_base64="6lJBr/qgL1rvMh3XYJx3cXo7qa0=">AAACHXicbVDLSgMxFM3UVx1foy7dBIvQVZkpoi4LblxWsA/olJJJb9vQTDIkmUoZ+iNu/BU3LhRx4Ub8G9OHoK0HAodzziX3nijhTBvf/3Jya+sbm1v5bXdnd2//wDs8qmuZKgo1KrlUzYho4ExAzTDDoZkoIHHEoRENr6d+YwRKMynuzDiBdkz6gvUYJcZKHe88jKDPREZBGFAT1xDVB4NHDO4TyYTBYegmUoMbguj+pDpewS/5M+BVEixIAS1Q7XgfYVfSNLbjlBOtW4GfmHZGlGGUw8QNUw0JoUPSh5alg
Huang_Robust_Generalization_Against_Photon-Limited_Corruptions_via_Worst-Case_Sharpness_Minimization_CVPR_2023
Abstract Robust generalization aims to tackle the most challenging data distributions, which are rare in the training set and contain severe noise, i.e., photon-limited corruptions. Common solutions such as distributionally robust optimization (DRO) focus on the worst-case empirical risk to ensure low training error on the uncommon noisy distributions. However, due to the over-parameterized model being optimized on scarce worst-case data, DRO fails to produce a smooth loss landscape and thus struggles to generalize well to the test set. Therefore, instead of focusing on worst-case risk minimization, we propose SharpDRO, which penalizes the sharpness of the worst-case distribution, measuring the loss change in the neighborhood of the learned parameters. Through worst-case sharpness minimization, the proposed method successfully produces a flat loss curve on the corrupted distributions, thus achieving robust generalization. Moreover, by considering whether the distribution annotation is available, we apply SharpDRO to two problem settings and design a worst-case selection process for robust generalization. Theoretically, we show that SharpDRO has a strong convergence guarantee. Experimentally, we simulate photon-limited corruptions using the CIFAR10/100 and ImageNet30 datasets and show that SharpDRO exhibits strong generalization against severe corruptions and exceeds well-known baseline methods with large performance gains.
1. Introduction Learning against corruptions has been a vital challenge in the practical deployment of computer vision models, as learning models are much more fragile to subtle noise than human perception systems [12, 15, 31]. During training, the encountered corruptions are essentially perceived as distribution shift, which significantly hinders prediction results [29, 34, 51, 58-60]. Therefore, to mitigate the performance degradation, enhancing generalization to corrupted data distributions has drawn much attention [1, 45].
(†Equal contribution. 1Sydney AI Centre, The University of Sydney; 2National Engineering Research Center for Multimedia Software, Institute of Artificial Intelligence, School of Computer Science and Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University; 3JD Explore Academy; 4Department of Automation, University of Science and Technology of China; 5Key Laboratory of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, School of Computer Science and Engineering, Nanjing University of Science and Technology; 6Department of Computer Science, Hong Kong Baptist University. Correspondence to Jun Yu (harryjun@ustc.edu.cn).)
Figure 1. Illustration of photon-limited corruptions.
In the real world, noise corruptions are commonly known as photon-limited imaging problems [21, 26, 35, 50], which arise because numerous small corruption photons arrive at an image sensor. Consequently, different numbers of captured photons form different levels of corruption severity, further producing multiple data distributions and imposing varied impacts on learning models [14]. Specifically, the encountered photon-limited corruption E is a composition of multiple noise photons u, each triggered by some discrete factor with a certain probability during a time interval. For example, a photon u can be triggered by platform changes, re-distribution, transmission, etc. The more photons are captured, the more severe the corruption applied to the image. Therefore, the severity s of the photon-limited corruption E can be modeled by a Poisson distribution, i.e., $s \sim P(s;\lambda) = \frac{e^{-\lambda}\lambda^{s}}{s!}$, which is illustrated in Figure 1. As a result, the real-world training set is not completely composed of clean data, but contains corrupted data whose proportion shrinks as the severity grows. Dealing with such a realistic problem by vanilla empirical risk minimization can achieve satisfactory averaged accuracy on the whole training set. However, due to the extremely limited number of severely corrupted data, the learning model would produce large training errors on the corrupted distributions, further hindering robust performance under challenging real-world situations.
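As a concrete illustration of the severity model above, the sketch below draws a per-image corruption severity from a Poisson distribution and applies a matching amount of shot noise. The rate lam, the photon-budget mapping, and the helper names are our own illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sample_severity(num_images: int, lam: float = 1.0, s_max: int = 5) -> np.ndarray:
    """Draw a Poisson-distributed corruption severity s ~ P(s; lam) per image,
    clipped to the range [0, s_max] used in the paper's figures."""
    return np.clip(np.random.poisson(lam=lam, size=num_images), 0, s_max)

def apply_photon_limited_noise(image: np.ndarray, severity: int) -> np.ndarray:
    """Simulate photon-limited (shot) noise: fewer effective photons at higher severity.
    `image` is a float array in [0, 1]; the photon budget is an assumed mapping."""
    if severity == 0:
        return image
    photons_per_unit = 200.0 / (2 ** severity)        # illustrative photon budget
    noisy = np.random.poisson(image * photons_per_unit) / photons_per_unit
    return np.clip(noisy, 0.0, 1.0)

if __name__ == "__main__":
    images = np.random.rand(8, 32, 32, 3)             # stand-in for CIFAR-like images
    severities = sample_severity(len(images), lam=1.0)
    corrupted = [apply_photon_limited_noise(im, int(s)) for im, s in zip(images, severities)]
    print("sampled severities:", severities)
```

Because the Poisson mass decays quickly with s, such a training set contains progressively fewer images at the higher severities, which is exactly the imbalance the paper targets.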
Figure 2. Illustration of our motivation. (a) Loss surface visualization of GroupDRO and the proposed SharpDRO. The columns from left to right stand for corrupted distributions with severity s = 0 to 5. (b) Illustration of why a sharp loss surface hinders generalization to test data.
A popular approach to achieve low error on the scarce corrupted data is distributionally robust optimization (DRO) [39, 42, 45, 54, 55, 65], which commonly optimizes the model parameter θ by solving
$$\min_{\theta\in\Theta}\ \sup_{Q\in\mathcal{Q}}\ \mathbb{E}_{(x,y)\sim Q}\big[\mathcal{L}(\theta;(x,y))\big], \qquad (1)$$
where $\mathcal{Q}$ denotes the uncertainty set that is utilized to estimate the possible test distribution. Intuitively, DRO assumes that $\mathcal{Q}$ consists of multiple sub-distributions, among which exists a worst-case distribution Q. By concentrating on risk minimization for the worst-case distribution, DRO aims to train a robust model that can deal with potential distribution shift during the test phase. However, existing DRO methods usually leverage over-parameterized models to focus on a small portion of worst-case training data. Therefore, the worst-case data contaminated with severe corruption is highly likely to get stuck in sharp minima. As shown in the upper row of Figure 2 (a), stronger corruption causes the existing method to learn a sharper loss surface. Consequently, optimization via DRO fails to produce a flat loss landscape over the corrupted distributions, which leads to a large generalization gap between the training and test sets [6, 22]. To remedy this defect, in this paper we propose the SharpDRO method, which focuses on learning a flat loss landscape on the worst-case data and can largely mitigate the training-test generalization gap of DRO. Specifically, we adopt the largest loss difference formed by applying weight perturbation [11, 57] to measure the sharpness of the loss function. Intuitively, a sharp loss landscape is sensitive to noise and cannot generalize well to the test set. On the contrary, a flat loss landscape produces consistent loss values and is robust against perturbations (Figure 2 (b)). By minimizing the sharpness, we can effectively enhance generalization performance [6, 22]. However, directly applying sharpness minimization on multiple distributions would yield poor results [4], as the computed sharpness could be dominated by the largest data distribution and thus fails to generalize well to the small corrupted distributions. Therefore, we focus only on worst-case sharpness minimization. In this way, as the lower row of Figure 2 (a) shows, SharpDRO successfully produces a flat loss surface, thus achieving robust generalization on the severely corrupted distributions. In addition, identification of the worst-case distribution requires expensive annotations, which are not always practically feasible [30]. In this paper, we apply SharpDRO to two problem settings: 1) distribution-aware robust generalization, which assumes that distribution indexes are accessible, and 2) distribution-agnostic robust generalization, where the distributions are no longer identifiable, making the worst-case data hard to find. Existing approaches such as Just Train Twice (JTT) require two-stage training, which is rather inconvenient. To tackle this challenge, we propose a simple out-of-distribution (OOD) detection [15, 19, 20, 29, 32, 52, 53] process to detect the worst-case data, which can be further leveraged to enable worst-case sharpness minimization. By constructing training sets according to the Poisson-distributed corruption severities using CIFAR10/100 and ImageNet30, we show that SharpDRO achieves robust generalization results in both problem settings, surpassing well-known baseline methods by a large margin.
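To make the worst-case sharpness idea concrete, here is a minimal PyTorch-style sketch of one training step: pick the group (corruption severity) with the highest current loss, perturb the weights in the direction that most increases that group's loss (as in sharpness-aware minimization), and apply the gradient computed at the perturbed point. The function names, the perturbation radius rho, and the group bookkeeping are illustrative assumptions, not the authors' released code.

```python
import torch

def sharpdro_style_step(model, loss_fn, batches_by_group, optimizer, rho=0.05):
    """One illustrative worst-case sharpness-minimization step.
    batches_by_group: dict mapping a group id (e.g. corruption severity) to an (x, y) batch."""
    # 1) Find the worst-case group: the one with the highest current loss.
    group_losses = {}
    for g, (x, y) in batches_by_group.items():
        with torch.no_grad():
            group_losses[g] = loss_fn(model(x), y).item()
    worst = max(group_losses, key=group_losses.get)
    x_w, y_w = batches_by_group[worst]
    params = [p for p in model.parameters() if p.requires_grad]

    # 2) Ascent step: perturb weights toward higher worst-case loss (SAM-style).
    loss_w = loss_fn(model(x_w), y_w)
    grads = torch.autograd.grad(loss_w, params)
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads])) + 1e-12
    eps = [rho * g / grad_norm for g in grads]
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.add_(e)

    # 3) Descent step: gradient of the worst-case loss at the perturbed weights,
    #    applied after restoring the original weights.
    optimizer.zero_grad()
    loss_fn(model(x_w), y_w).backward()
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)
    optimizer.step()
    return worst, group_losses[worst]
```

In the distribution-agnostic setting described above, the group ids would not be given; an OOD-detection score would have to decide which samples form the worst-case batch before a step like this can be taken.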
To sum up, our main contributions are three-fold:
• We propose a sharpness-based DRO method that overcomes the poor worst-case generalization performance of distributionally robust optimization.
• We apply the proposed SharpDRO to both distribution-aware and distribution-agnostic settings, which makes our method practical.
• Theoretically, we show that SharpDRO has a convergence rate of $O\!\left(\kappa^{2}/\sqrt{MT}\right)$. Empirically, we conduct extensive analysis to validate its generalization capability on photon-limited corruptions.
In the following, we first introduce the background in Section 2. Then, we specify SharpDRO for the two problem settings in Section 3 and give a detailed optimization process and analysis in Section 3.3. Experiments validating SharpDRO are presented in Section 4. Finally, we conclude the paper in Section 5.
Hu_Point2Pix_Photo-Realistic_Point_Cloud_Rendering_via_Neural_Radiance_Fields_CVPR_2023
Abstract Synthesizing photo-realistic images from a point cloud is challenging because of the sparsity of the point cloud representation. Recent Neural Radiance Fields and their extensions have been proposed to synthesize realistic images from 2D input. In this paper, we present Point2Pix, a novel point renderer that links sparse 3D point clouds with dense 2D image pixels. Taking advantage of the point cloud 3D prior and the NeRF rendering pipeline, our method can synthesize high-quality images from colored point clouds, generally for novel indoor scenes. To improve the efficiency of ray sampling, we propose point-guided sampling, which focuses on valid samples. Also, we present Point Encoding to build Multi-scale Radiance Fields that provide discriminative 3D point features. Finally, we propose Fusion Encoding to efficiently synthesize high-quality images. Extensive experiments on the ScanNet and ArkitScenes datasets demonstrate the effectiveness and generalization.
1. Introduction Point cloud rendering aims to synthesize images from point clouds at given camera parameters, and has been frequently utilized in 3D visualization, navigation, and augmented reality. There are many advantages to the point cloud representation, such as flexible shape and a general 3D prior. However, since point clouds are generally produced by 3D scanners (RGBD or LiDAR) [3, 9, 14] or by Multi-View Stereo (MVS) from images [4, 15, 52], the points are usually sparsely distributed in 3D scenes. Although traditional graphics-based renderers [38, 42, 50, 58] can render point clouds to images without training or finetuning, the quality is not satisfying, with hole artifacts and missing details [10]. Recently, Neural Radiance Fields (NeRF) [29] were proposed for 3D representation and high-fidelity novel view synthesis. NeRF employs an implicit function to directly map each point's spatial information (location and direction) to attributes (color and density). However, most NeRF-based methods [23, 49, 54, 55] are scene-specific, thus taking much time to train from scratch on abundant multi-view images for novel scenes, which limits practical applications. In this work, we bridge the gap between point clouds and NeRF by proposing a novel point cloud renderer, called Point2Pix, to synthesize photo-realistic images from colored point clouds. Compared with most NeRF-based methods [23, 29, 53, 54], ours does not necessarily require multi-view images or fine-tuning procedures for indoor scenes. First, point clouds are treated as underlying anchors of NeRF. The training process of NeRF is to learn 3D point attributes from given locations. Because there is no mapping ground truth for multi-view images, NeRF-based methods [23, 29] indirectly train their networks with a pixel reconstruction loss. Note that point clouds are exactly made up of points with locations and attributes, and thus can provide training pairs for the mapping function to conduct supervised learning and improve performance. Then, point clouds can also improve the efficiency of ray sampling. NeRF-based methods [23, 29, 54] learn the 3D shape and structure from multi-view images, which does not involve a geometric prior in novel scenes. Thus, dense uniform sampling [5, 46] or coarse-to-fine sampling [29, 54] were used to synthesize high-quality images. These strategies are inefficient because most of the locations in 3D scenes are empty [23, 31]. Since point clouds represent a relatively fine shape of 3D scenes, the area around existing points deserves more attention. Based on this observation, we propose a point-guided sampling strategy that mainly focuses on the local area around points in the point cloud. It can significantly reduce the number of required samples while maintaining decent synthesis accuracy. Further, point-based networks can provide a 3D feature prior for subsequent applications, general for novel scenes. Although many methods [7, 18, 36, 37] have been proposed for various point cloud understanding tasks, they are usually designed for existing points. In this work, we propose Multi-scale Radiance Fields, including Point Encoder and MLP networks, to extract multi-scale features for any location in the scene. These 3D point features are discriminative and general, ensuring finetuning-free rendering. Also, inspired by recent NeRF-based image generators [19, 22, 57], we render the 3D point features as multi-scale feature maps. Our fusion decoder gradually synthesizes high-resolution images. It can not only fill possible holes but also improve the quality of rendered images. Our main contributions are summarized as follows.
• We propose Point2Pix to link point clouds with image space, which renders point clouds into photo-realistic images.
• We present an efficient ray sampling strategy and fusion decoder to greatly decrease the number of samples in each ray and the total number of rays, thus accelerating the rendering process.
• We propose Multi-scale Radiance Fields, which extract a discriminative 3D prior for arbitrary 3D locations.
• Extensive experiments and ablation studies on indoor datasets demonstrate the effectiveness and generalization of the proposed method.
(*Corresponding author.)
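The point-guided sampling idea described above can be sketched as follows: instead of sampling uniformly along each ray, keep only candidate locations that fall within a small radius of some point in the input cloud. The KD-tree query, the radius value, and the function names are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np
from scipy.spatial import cKDTree

def point_guided_samples(ray_o, ray_d, cloud_tree, near=0.1, far=8.0,
                         n_candidates=256, radius=0.05, n_keep=32):
    """Return sample distances along one ray that lie close to the point cloud.
    ray_o, ray_d: (3,) origin and unit direction; cloud_tree: cKDTree over (N, 3) points."""
    t_candidates = np.linspace(near, far, n_candidates)            # dense uniform candidates
    positions = ray_o[None, :] + t_candidates[:, None] * ray_d[None, :]
    dist_to_cloud, _ = cloud_tree.query(positions, k=1)            # nearest cloud point per sample
    valid = t_candidates[dist_to_cloud < radius]                   # keep samples near the geometry
    if valid.size == 0:                                            # fall back to coarse uniform samples
        return np.linspace(near, far, n_keep)
    idx = np.linspace(0, valid.size - 1, n_keep).round().astype(int)
    return valid[idx]                                              # fixed budget of n_keep samples

# Usage sketch: build the tree once per scene, then query per ray.
# tree = cKDTree(scene_points); t_vals = point_guided_samples(o, d, tree)
```

In practice the tree is built once per scene and queried for whole batches of rays, which is what makes the sample budget per ray so much smaller than with dense uniform sampling.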
Feng_Neural_Dependencies_Emerging_From_Learning_Massive_Categories_CVPR_2023
Abstract This work presents two astonishing findings on neural networks learned for large-scale image classification. 1) Given a well-trained model, the logits predicted for some category can be directly obtained by linearly combining the predictions of a few other categories, which we call neural dependency. 2) Neural dependencies exist not only within a single model, but even between two independently learned models, regardless of their architectures. Towards a theoretical analysis of such phenomena, we demonstrate that identifying neural dependencies is equivalent to solving the Covariance Lasso (CovLasso) regression problem proposed in this paper. Through investigating the properties of the problem solution, we confirm that neural dependency is guaranteed by a redundant logit covariance matrix, a condition easily met given massive categories, and that neural dependency is highly sparse, implying that one category correlates to only a few others. We further empirically show the potential of neural dependencies in understanding internal data correlations, generalizing models to unseen categories, and improving model robustness with a dependency-derived regularizer. Code to reproduce the results in this paper is available at https://github.com/RuiLiFeng/Neural-Dependencies.
1. Introduction Despite the tremendous success of deep neural networks in recognizing massive categories of objects [8-10, 12, 14-16, 23, 27, 29], how they manage to organize and relate different categories remains less explored. A proper analysis of this problem is beneficial to understanding network behavior, which further facilitates better utilization of this powerful tool. In this work, we reveal that a deep model tends to make its own way of data exploration, which sometimes contrasts sharply with human intuition. We reveal underlying connections between the predictions of a well-learned image classification model, which appear as one category depending highly on a few others. In the example given in Fig. 1a, we can directly replace the logits predicted for "macaw" with a linear combination of the logits for "ostrich", "bittern", etc. (without tuning the network parameters) and achieve similar performance. We call this phenomenon neural dependency, which automatically emerges from learning massive categories. A more surprising finding is that neural dependencies exist not only within a single model, but also between two independently learned models, as shown in Fig. 1b. It is noteworthy that these two models can even have different architectures (e.g., one with a convolutional neural network [12] and the other with a transformer [10, 16]) and different training strategies. Towards figuring out what brings neural dependencies and whether they happen accidentally, we make a theoretical investigation and confirm that identifying neural dependencies is equivalent to solving a carefully designed convex optimization problem, the Covariance Lasso (CovLasso) regression problem proposed in this paper. Such a problem owns a smooth solution path when varying its hyper-parameters [21], which has two appealing properties. First, the solution is guaranteed by a redundant covariance matrix of the category-wise logits. This condition is easily met when the model is trained on a sufficiently large number of categories [11]. Second, the solution admits elegant sparsity. It implies that a category involved in neural dependencies only relates to several instead of numerous other categories. We further study the potential utilities of neural dependencies, as a support to our theoretical contributions. One straightforward application is to help interpret internal data correlations, such as which categories are more likely to link to each other (Sec. 3.1). Another application is to investigate how we can generalize a well-learned model to unseen categories with the help of neural dependencies (Sec. 3.2). We also propose a regularizer to test whether discouraging the neural dependencies could assist the model in learning a more robust representation (Sec. 3.3). We believe the findings in this paper will deepen our understanding of the working mechanism of deep neural networks, and also shed light on some common rules in knowledge learning with visual intelligence systems.
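The dependency test described above can be prototyped with an off-the-shelf sparse regression: regress one category's logit (collected over many samples) on the logits of all other categories with an L1 penalty, then inspect how few non-zero weights are needed. This uses scikit-learn's Lasso as a stand-in for the CovLasso formulation, so the regularization path and weighting differ from the authors' method; the alpha value and function name are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

def find_neural_dependency(logits: np.ndarray, target: int, alpha: float = 0.01):
    """logits: (num_samples, num_classes) pre-softmax outputs of a trained classifier.
    Returns sparse weights over the other classes approximating the target class logit."""
    y = logits[:, target]
    X = np.delete(logits, target, axis=1)          # all other categories' logits
    model = Lasso(alpha=alpha, fit_intercept=True).fit(X, y)
    other_ids = [c for c in range(logits.shape[1]) if c != target]
    deps = {other_ids[i]: w for i, w in enumerate(model.coef_) if abs(w) > 1e-4}
    return deps, model.intercept_

if __name__ == "__main__":
    fake_logits = np.random.randn(2048, 100)       # stand-in for logits on a validation set
    deps, bias = find_neural_dependency(fake_logits, target=42)
    print(f"class 42 approximated by {len(deps)} other classes:", deps)
```

On logits from a real classifier trained over many categories, the returned dictionary is expected to be small, which is the sparsity property the paper proves for the CovLasso solution.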
Fan_ARCTIC_A_Dataset_for_Dexterous_Bimanual_Hand-Object_Manipulation_CVPR_2023
Abstract Humans intuitively understand that inanimate objects do not move by themselves, but that state changes are typically caused by human manipulation (e.g., the opening of a book). This is not yet the case for machines. In part this is because there exist no datasets with ground-truth 3D annotations for the study of physically consistent and synchronised motion of hands and articulated objects. To this end, we introduce ARCTIC – a dataset of two hands that dexterously manipulate objects, containing 2.1M video frames paired with accurate 3D hand and object meshes and detailed, dynamic contact information. It contains bimanual articulation of objects such as scissors or laptops, where hand poses and object states evolve jointly in time. We propose two novel articulated hand-object interaction tasks: (1) Consistent motion reconstruction: given a monocular video, the goal is to reconstruct two hands and articulated objects in 3D, so that their motions are spatio-temporally consistent. (2) Interaction field estimation: dense relative hand-object distances must be estimated from images. We introduce two baselines, ArcticNet and InterField, respectively, and evaluate them qualitatively and quantitatively on ARCTIC. Our code and data are available at https://arctic.is.tue.mpg.de.
1. Introduction Humans constantly manipulate complex objects: we open our laptop's cover to work, we apply spray to clean, we carefully control our fingers to cut with scissors – rigid and articulated parts of objects move together with our hands. Inanimate objects only move or deform if external forces are applied to them. The study of the physically consistent dynamics of hands and objects during manipulation has so far been under-researched in the hand pose estimation literature. This is partly because existing hand-object datasets [8, 18, 19, 21, 30, 34] are mostly limited to grasping of rigid objects and contain few if any examples of rich and dexterous manipulation of articulated objects. To enable the study of dexterous articulated hand-object manipulation, we collect a novel dataset called ARCTIC (ARticulated objeCTs in InteraCtion). ARCTIC consists of video sequences of multi-view RGB frames, and each frame is paired with accurate 3D hand and object meshes. ARCTIC contains data from 10 subjects interacting with 11 articulated objects, resulting in a total of 2.1M RGB images. Images are captured from multiple synchronized and calibrated views, including 8 static allocentric views and 1 moving egocentric view. To capture accurate 3D meshes during manipulation, we synchronize color cameras with 54 high-resolution Vicon MoCap cameras [66]. These allow the use of small MoCap markers that do not interfere with hand-object interaction and are barely visible in the images. We then fit pre-scanned human and object meshes to the observed markers [35, 56]. The objects consist of two rigid parts that rotate about a shared axis, such as the flip phone in Fig. 1 (for all objects, see SupMat). Our dataset enables two novel tasks: (1) consistent motion reconstruction, (2) interaction field estimation. For consistent motion reconstruction, given a monocular video, the task is to reconstruct the 3D motion of two hands and an articulated object.
In particular, the reconstructed hand-object meshes should have spatio-temporally consis-tent hand-object contact, object articulation, and smooth motion during interaction. This task has several chal-lenges: (1) Spatio-temporal consistency requires precise hand-object 3D alignment for all frames; (2) This precision is hard to achieve due to depth ambiguity and severe occlu-sions during dexterous manipulation; (3) The unconstrained interaction causes more variations in hand pose and contact than in existing datasets [8, 18, 19, 34] (see Fig. 2). As an initial step towards addressing these challenges, and to provide baselines for future work, we introduce ArcticNet to reconstruct the motions of two hands and an articulated object from a video. ArcticNet uses an encoder-decoder architecture to estimate parameters of the MANO hand model [45] for the two hands, and our artic-ulated object model. We experiment with two variations of ArcticNet: a single-frame model and a temporal model with a recurrent architecture inspired by [28]. We provide quali-tative and quantitative results for future comparison. When studying hand-object interaction, contact is impor-tant [17, 67]. Some approaches [22, 67] explore the task of binary contact estimation from a single RGB image. In the two-handed manipulation setting, hands can be near the ob-ject but not in contact. To understand the dynamic, relative spatial configuration between hands and objects in more de-tail, even when not in contact, we propose the general task ofinteraction field estimation from RGB images. The goal is to estimate, for each hand vertex, the shortest distance to the object mesh and vice versa (see Fig. 6 for a visualiza-tion). We introduce a baseline, InterField, for this task and benchmark both a single-frame and a recurrent version of InterField on ARCTIC for future comparison. In summary, our contributions are as follows: (1) We present ARCTIC, the first large-scale dataset of two hands thatdexterously manipulate articulated objects, with multi-view RGB images paired with accurate 3D meshes; (2) We introduce two novel tasks of consistent motion reconstruc-tion and interaction field estimation to study the physically consistent motion of hands and articulated objects; (3) We provide baselines for both tasks on ARCTIC.2. Related Work Human-object datasets: Several datasets [1, 7, 38, 53, 61, 64] contain images of human-object interaction, but here we focus on large-scale data [3, 15, 18, 21, 23, 47, 78] that facilitates machine learning. There are three categories. (1) Human body with rigid objects: Bhatnagar et al. [3] and Huang et al. [23] introduce image datasets for human body interaction with big objects. Compared to ours, [3] do not capture the hands. Huang et al. [23] capture hands and body using a multi-view RGB-D setup while ours is captured us-ing a MoCap setup for more accurate 3D data. Compared to both, we have dexterous bimanual manipulation, dynamic hand-object contact, and articulated objects. GRAB [56] contains detailed human-object interaction but no images, while BEDLAM [4] contains videos with ground-truth hu-mans but no object interaction. (2) Single hand with rigid objects: Most hand-object datasets [6,8,15,18,21,34] con-sist of single-hand grasping interaction. However, hand poses in grasping interaction are mostly static, with rela-tively little pose variation over time. Hampali et al. [18] use a multi-RGB-D system and fit both MANO and YCB object meshes with sequence-level fitting and contact con-straints. 
(3) Two hands with rigid objects: Kwon et al. [30] and Hampali et al. [19] present two-hand datasets interacting with rigid objects. Compared to (2) and (3), our dataset has 3D annotations of the full human body, both hands, and articulated objects. We go beyond grasping and focus on less constrained dexterous bimanual manipulation. We discuss the comparison between ours (ARCTIC) and existing hand-object datasets [8, 18, 19, 30, 34] in Sec. 3.1. Estimating 3D hands and objects from RGB images: Monocular RGB 3D hand reconstruction has a long history since Rehg and Kanade [43]. Most work in the literature focuses on hand-only reconstruction [5, 13, 21, 24, 31, 36, 37, 49-52, 62, 70, 73, 76, 77]. Zimmermann et al. [77] use a deep convolutional network for 3D hand pose estimation via a multi-stage approach. Spurr et al. [51] introduce biomechanical constraints to regularize hand pose prediction. Ziani et al. [76] use a self-supervised time-contrastive formulation to improve smoothness for hand motion reconstruction. Recently, there has been increased interest in hand-object reconstruction from RGB images [12, 17, 20, 21, 33, 57, 67, 75]. Tekin et al. [57] infer 3D control points for both the hand and the object in videos, using a temporal model to propagate information across time. Hasson et al. [21] render synthetic images and train a neural network to regress a static grasp of a 3D hand and a rigid object, using full supervision together with contact losses. Corona et al. [12] estimate MANO grasps for objects from an image, by first inferring the object shape and a rough hand pose, which is refined via contact constraints and an adversarial prior. Liu et al. [33] use a transformer-based contextual-reasoning module that encodes the synergy between hand and object features, and has higher responses at contact regions. Zhou et al. [74] learn an interaction motion prior to denoise motion predicted from an off-the-shelf single-frame hand-object reconstruction method. None of these methods deal with articulated objects, which result in complex hand-object interactions. Human-object contact detection: Contact has been shown important for: pose taxonomies [2, 14, 25], pose estimation [17, 18, 21, 53, 60, 64, 67], in-hand scanning [63, 72], and grasp synthesis [17, 27, 56, 67]. Many methods [17, 18, 53, 60, 64] use the proximity between the 3D hand/body and object meshes to estimate contacts and regularize pose estimation based on these. Three main categories of contact estimation exist: 1) directly from meshes; 2) on the image pixel space from RGB images; 3) binary contact in 3D space from RGB images. Grady et al. [17] take off-the-shelf regressors to estimate grasping hand and object meshes, use these meshes to predict contacts on the objects provided by [6], and leverage contacts to refine the grasp. Their recent dataset [16] contains both contact and pressure between a hand and a flat sensor surface. Tripathi et al. [59] infer pressure from body-scene contact. Narasimhaswamy et al. [39] and Shan et al. [48] infer bounding boxes for hands in contact on the input RGB image. Chen et al. [9] infer human-scene contact on pixels. Rogez et al. [44] learn to infer contacts from the image using synthetic data, while Pham et al. [41] use real contact data captured with instrumented objects. Unlike others, [44] and [41] estimate 3D binary contact from RGB images, but the former does not generalize well to real images and the latter uses a classical approach due to the limited amount of data. BSTRO estimates contact on the 3D body from an image but does not estimate 3D hand or object pose [22]. Hi4D [68] provides ground-truth contact for close human interaction. In contrast, our task of interaction field estimation goes beyond binary contact to model the dense relative distances between hands and objects. Thanks to our dexterous manipulation, ARCTIC contains fast-changing hand-object contact.
Table 1. Comparison of our ARCTIC dataset with existing datasets. The keyword "single/multi-manual" denotes whether a single view or multiple views are used to annotate manually.
| dataset | real img | # images | # views | egocentric | image resol. | articulated objects | both hands | human body | dexterous manipulation | annot. type |
| FreiHand [78] | ✓ | 37k | 8 | ✗ | 224×224 | ✗ | ✗ | ✗ | ✗ | semi-auto |
| ObMan [21] | ✗ | 154k | 1 | ✗ | 256×256 | ✗ | ✗ | ✗ | ✗ | synthetic |
| FHPA [15] | ✓ | 105k | 1 | ✓ | 1920×1080 | ✗ | ✗ | ✗ | ✗ | magnetic |
| HO3D [18] | ✓ | 78k | 1-5 | ✗ | 640×480 | ✗ | ✗ | ✗ | ✗ | multi-kinect |
| ContactPose [6] | ✓ | 2.9M | 3 | ✗ | 960×540 | ✗ | ✗ | ✗ | ✗ | multi-kinect |
| GRAB [56] | - | - | - | - | - | ✗ | ✓ | ✓ | ✗ | mocap |
| DexYCB [8] | ✓ | 582k | 8 | ✗ | 640×480 | ✗ | ✗ | ✗ | ✗ | multi-manual |
| H2O [30] | ✓ | 571k | 5 | ✓ | 1280×720 | ✗ | ✓ | ✗ | ✗ | multi-kinect |
| H2O-3D [19] | ✓ | 76k | 5 | ✗ | 640×480 | ✗ | ✓ | ✗ | ✗ | multi-kinect |
| HOI4D [34] | ✓ | 2.4M | 1 | ✓ | 1280×800 | ✓ | ✗ | ✗ | ✗ | single-manual |
| ARCTIC (Ours) | ✓ | 2.1M | 9 | ✓ | 2800×2000 | ✓ | ✓ | ✓ | ✓ | mocap |
3. ARCTIC Dataset Overview: To allow the study of object articulation with hands in motion, we construct ARCTIC, a video dataset with accurate 3D annotation for hands and articulated objects. ARCTIC contains 339 sequences of dexterous manipulation of 11 articulated objects by 10 subjects (5 females, 5 males). The dataset consists of 2.1M RGB images from 8 static views and 1 egocentric view, paired with 3D hand and object meshes. To capture different interaction modes, we ask our subjects to either "use" (1.7M images) or "grasp" (457K images) the objects. Depth images of the two hands, the human body, and objects can be rendered from ARCTIC (see SupMat). 3.1. Data Characteristics Dataset features comparison: Table 1 compares ARCTIC with existing hand-object datasets. ARCTIC is the only dataset that contains both hands, the full human body (in SMPL-X [40]) and articulated objects. ARCTIC provides calibrated cameras (8 allocentric and 1 egocentric) with high-resolution images, enabling the study of monocular, multi-view and egocentric reconstruction settings.
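As a reference point for the interaction field task introduced earlier (dense relative hand-object distances) and for the proximity-based contact labels used in this section, the snippet below computes, for every hand vertex, the distance to the closest object vertex, and vice versa, from ground-truth meshes; an image-based model such as InterField would be trained to regress these values. Using vertex-to-vertex rather than point-to-surface distance, the 5 mm threshold, and the cKDTree helper are simplifications we assume here.

```python
import numpy as np
from scipy.spatial import cKDTree

def interaction_fields(hand_verts: np.ndarray, obj_verts: np.ndarray):
    """hand_verts: (H, 3), obj_verts: (O, 3) mesh vertices of one frame, in the same space.
    Returns per-vertex shortest distances: hand->object (H,) and object->hand (O,)."""
    hand_to_obj, _ = cKDTree(obj_verts).query(hand_verts, k=1)
    obj_to_hand, _ = cKDTree(hand_verts).query(obj_verts, k=1)
    return hand_to_obj, obj_to_hand

def binary_contact(dists: np.ndarray, thresh: float = 0.005) -> np.ndarray:
    """Proximity-based contact labels (5 mm threshold is an assumed value)."""
    return dists < thresh
```

Integrating such binary labels over all frames of all sequences is also how per-vertex contact heatmaps of the kind discussed below can be accumulated.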
Importantly, ARCTIC is a motion dataset that focuses on bimanual dexterous manipulation, meaning that subjects can freely interact with objects using both hands. In contrast, existing hand-object datasets focus on single-hand grasping [8, 18, 21] and the movement is often controlled [19, 30]. GRAB [56] has fast motion by using a similar MoCap setup, but captures only rigid objects and does not have images. HOI4D [34] is the only hand-object dataset that contains articulated objects, but it contains only a single view, does not capture the full human body, has a single hand, and mainly focuses on grasping. Crucially, their hand data is captured from only a single egocentric view, which introduces ambiguity for the occluded fingers.
Figure 2. Hand pose and contact variations in datasets. (a) T-SNE clustering of hand poses in different datasets. The plot shows that ARCTIC has a significantly larger range of poses than all existing datasets. (b) Frequently contacted regions for hands in HO-3D [18], GRAB [56], and ARCTIC. As seen with the broader heatmap spread on the hands, ARCTIC has higher contact diversity. (c) Frequently contacted areas on our objects.
Capture setup comparison: Capturing dexterous manipulation while maintaining the quality of 3D annotation is extremely challenging due to fast motion and heavy occlusion during the interaction. In particular, the joints of a hand often have significant self-occlusion. The occlusion is even more severe when a hand interacts with objects and when there are multiple hands [36]. Existing hand-object datasets [8, 18, 19, 30, 34] are captured with 1-8 commodity RGB-D cameras, which is insufficient to eliminate occlusion. As a result, their hand-object motion is often slow and they mainly focus on grasping interaction. To reduce occlusion and to enable the capture of dexterous manipulation, we construct our dataset using an accurate Vicon MoCap setup with 54 high-end infrared Vantage-16 cameras [66]. To show our dexterous motion, and to compare 3D annotation quality between datasets, see our project page video. Hand pose and contact variations: Figure 2a compares different hand-object datasets [8, 18, 19, 34] in terms of hand pose variations by showing a T-SNE clustering [65] of 3D hand joints. The plot reveals that our dataset (shown in blue) has a significantly larger hand pose diversity than others. This is due to the unconstrained nature of ARCTIC, in which the subjects dexterously and dynamically manipulate the object (see project page video). The figure also shows frequently in-contact regions on hands (b) and objects (c) in the ARCTIC dataset. We generate the contact heatmaps following GRAB's [56] approach, by integrating per-frame binary contact labels for vertices over all sequences. "Hotter" regions denote a higher chance of being in contact while "cooler" regions denote a lower chance of contact. Similar to HO-3D [18] and GRAB [56], finger tips in our dataset are most likely to be in contact with objects. However, thanks to the dexterous manipulation it contains, ARCTIC has a higher contact likelihood in the palm region than other datasets, hence the heatmaps appear more "spread out". For regular-sized everyday objects, such as the ketchup bottle, the contact regions "agree" with our usual interaction with them. For smaller toy objects like the waffle iron, subjects are likely to pick up the object and support it with one hand, leading to "hot" regions on the bottom of the object.
Figure 3. Our camera views. We capture high resolution images in 8 static allocentric and 1 moving egocentric views. Here we show zoomed-in crops and the original images.
3.2. Acquisition Setup We detail our motion capture (MoCap) setup to acquire 3D surfaces of strongly interacting hands and articulated objects. We synchronize a MoCap system with a multi-view RGB system. See SupMat for the marker sets. With the latter we capture RGB videos from 8 static allocentric views and 1 moving egocentric view at 30 FPS (see Fig. 3). The capture pipeline has five steps: (1) obtaining the 3D template geometry of the subjects and objects, (2) estimating the rotation axis for articulated objects, shown in SupMat, (3) capturing interaction using marker-based MoCap together with calibrated and synchronized video, (4) solving for the poses of the body, hands, and objects from MoCap markers following [35, 56], and (5) computing hand-object contact based on proximity, shown in SupMat. Obtaining canonical geometry: We obtain the ground-truth (GT) hand and body shape of each subject in a canonical T-pose using 3D scans from a 3dMD [58] scanner. We register SMPL-X [40] to 3D scans at different time steps in varying poses and construct a personalized 3D template of each subject. See the SupMat for details of the template creation. To obtain object geometries, we scan each object using an Artec 3D hand-held scanner in a pre-defined pose. We separate each scanned object mesh into two articulated parts in Blender. See SupMat for all 11 articulated objects. Capturing human-object interaction: To ensure accuracy, we perform full-body, hand and object tracking using a Vicon MoCap system with 54 infrared Vantage-16 cameras [66] to minimize the issues with occlusion. To capture usable RGB images alongside the MoCap data, we balance the trade-off between accuracy and marker intrusiveness by using small hemispherical markers with 1.5 mm radius on the hands and objects. The markers are placed on the dorsal side of the hand to not encumber participants during natural hand-object interaction, similar to GRAB [56]. While our focus is on hands, we retrieve full-body pose estimates as they provide more reliable global rotations and translations for each hand. Therefore, we fit SMPL-X [40] to the observed markers to attain realistic wrist articulations, as MANO contains no wrist articulation. Obtaining surfaces from MoCap: Following [35, 56], we associate MoCap marker positions with their corresponding subject/object vertices i
Ding_Hidden_Gems_4D_Radar_Scene_Flow_Learning_Using_Cross-Modal_Supervision_CVPR_2023
Abstract This work proposes a novel approach to 4D radar-based scene flow estimation via cross-modal learning. Our approach is motivated by the co-located sensing redundancy in modern autonomous vehicles. Such redundancy implicitly provides various forms of supervision cues to radar scene flow estimation. Specifically, we introduce a multi-task model architecture for the identified cross-modal learning problem and propose loss functions to opportunistically engage scene flow estimation using multiple cross-modal constraints for effective model training. Extensive experiments show the state-of-the-art performance of our method and demonstrate the effectiveness of cross-modal supervised learning to infer more accurate 4D radar scene flow. We also show its usefulness for two subtasks: motion segmentation and ego-motion estimation. Our source code will be available at https://github.com/Toytiny/CMFlow.
(*Corresponding author: Chris Xiaoxuan Lu (xiaoxuan.lu@ed.ac.uk))
Figure 1. Cross-modal supervision cues are retrieved from co-located odometer, LiDAR and camera sensors to benefit 4D radar scene flow learning. The source point cloud (red) is warped with our estimated scene flow and gets closer to the target one (blue).
1. Introduction Scene flow estimation aims to obtain a 3D motion vector field of the static and dynamic environment relative to an ego-agent. In the context of self-driving, scene flow is a key enabler of navigational safety in dynamic environments by providing holistic motion cues to multiple subtasks, such as ego-motion estimation, motion segmentation, point cloud accumulation, multi-object tracking, etc. Driven by the recent successes of deep neural networks in point cloud processing [22, 35, 36, 40, 51, 54], predominant approaches to scene flow estimation from point clouds adopt either fully-supervised [12, 27, 34, 49, 52] or weakly-supervised [10, 11] learning, or rely only on self-supervised signals [2, 9, 21, 24, 31]. For supervised methods, the acquisition of scene flow annotations is costly and requires tedious and intensive human labour. In contrast, self-supervised learning methods require no annotations and can exploit the inherent spatio-temporal relationships and constraints in the input data to bootstrap scene flow learning. Nevertheless, due to the implicit supervision signals, self-supervised learning performance is often secondary to supervised methods [10, 23, 49], and they fail to provide sufficiently reliable results for safety-critical autonomous driving scenarios. These challenges become more prominent when it comes to 4D radar scene flow learning. 4D automotive radars have received increasing attention recently due to their robustness against adverse weather and poor lighting conditions (vs. camera), availability of object velocity measurements (vs. camera and LiDAR) and relatively low cost (vs. LiDAR) [5, 30, 32, 41]. However, 4D radar point clouds are significantly sparser than LiDARs' and suffer from non-negligible multi-path noise. Such low-fidelity data significantly complicates point-level scene flow annotation for supervised learning and makes it difficult to rely exclusively on self-supervised training-based methods [9] for performance and safety reasons. To find a more effective framework for 4D radar scene flow learning, this work aims to exploit cross-modal supervision signals in autonomous vehicles. Our motivation is based on the fact that autonomous vehicles today are equipped with multiple heterogeneous sensors, e.g., LiDARs, cameras and GPS/INS, which can provide complementary sensing and redundant perception results for each other, jointly safeguarding the vehicle operation in complex urban traffic. This co-located perception redundancy can be leveraged to provision multiple supervision cues that bootstrap radar scene flow learning. For example, the static points identified by radar scene flow can be used to estimate the vehicle odometry.
The consistency between this estimated odometry and the observed odometry from the co-located GPS/INS on the vehicle forms a natural constraint and can be used to supervise scene flow estimation. Similar consistency constraints can also be found between the optical flow estimated from co-located cameras and the radar scene flow projected onto the image plane. While the aforementioned examples are intuitive, retrieving accurate supervision signals from co-located sensors is non-trivial. Taking the same example of optical and scene flow consistency, minimizing flow errors on the image plane suffers from the depth-unaware perspective projection, potentially incurring weaker constraints on the scene flow of far points. This motivates the following research question: how to retrieve cross-modal supervision signals from co-located sensors on a vehicle and apply them collectively to bootstrap radar scene flow learning. Towards answering it, in this work we consider opportunistically exploiting useful supervision signals from three sensors that commonly co-exist with the 4D radar on a vehicle: odometer (GPS/INS), LiDAR, and RGB camera (see Fig. 1). This cross-modal supervision is expected to help us realize radar scene flow learning without human annotation. In our setting, the multi-modal data are only available in the training phase, and only 4D radar is used during the inference stage. Our contributions can be summarized as follows:
• Our work is the first 4D radar scene flow learning approach using cross-modal supervision from co-located heterogeneous sensors on an autonomous vehicle.
• We introduce a multi-task model architecture for the identified cross-modal learning problem and propose loss functions to effectively engage scene flow estimation using multiple cross-modal constraints for model training.
• We demonstrate the state-of-the-art performance of the proposed CMFlow method on a public dataset and show its effectiveness in downstream tasks as well.
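One of the consistency cues above, odometry supervision from static points, can be sketched as follows: estimate a rigid transform from the predicted flow of points labelled static (here with a least-squares Kabsch fit) and penalize its disagreement with the GPS/INS ego-motion. The static-mask input, the residual definition, and the function names are our own illustrative choices, not CMFlow's exact losses.

```python
import numpy as np

def rigid_fit(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (Kabsch): find R, t with dst ~= src @ R.T + t."""
    c_src, c_dst = src.mean(0), dst.mean(0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

def ego_motion_consistency_loss(points, pred_flow, static_mask, R_odo, t_odo):
    """Compare the ego-motion implied by predicted flow on static points with GPS/INS odometry.
    points: (N, 3) radar points at time t; pred_flow: (N, 3); static_mask: (N,) bool."""
    src = points[static_mask]
    dst = src + pred_flow[static_mask]
    R_est, t_est = rigid_fit(src, dst)
    # Residual between the two transforms, measured on the static points themselves.
    warped_est = src @ R_est.T + t_est
    warped_odo = src @ R_odo.T + t_odo
    return float(np.mean(np.linalg.norm(warped_est - warped_odo, axis=1)))
```

An analogous term for the camera cue would project the radar flow into the image plane and compare it against the estimated optical flow, with the depth-related weakness for far points that the text points out.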
Girdhar_OmniMAE_Single_Model_Masked_Pretraining_on_Images_and_Videos_CVPR_2023
Abstract Transformer-based architectures have become competitive across a variety of visual domains, most notably images and videos. While prior work studies these modalities in isolation, having a common architecture suggests that one can train a single unified model for multiple visual modalities. Prior attempts at unified modeling typically use architectures tailored for vision tasks, or obtain worse performance compared to single-modality models. In this work, we show that masked autoencoding can be used to train a simple Vision Transformer on images and videos, without requiring any labeled data. This single model learns visual representations that are comparable to or better than single-modality representations on both image and video benchmarks, while using a much simpler architecture. Furthermore, this model can be learned by dropping 90% of the image and 95% of the video patches, enabling extremely fast training of huge model architectures. In particular, we show that our single ViT-Huge model can be finetuned to achieve 86.6% on ImageNet and 75.5% on the challenging Something Something-v2 video benchmark, setting a new state-of-the-art. (∗Equal technical contribution.)
1. Introduction The Transformer architecture [78] is rapidly becoming competitive across the different visual modalities in computer vision, from images [24, 27, 55, 77], to 3D [57, 60, 89] and videos [2, 9, 27, 31, 32, 56]. This convergence toward a unified architecture naturally suggests that we should be able to train a single model that works across different visual modalities. However, recent attempts to train unified models either lead to worse performance compared to single-modality models [53], or require the use of an alternative architecture [33], namely the Swin Transformer [55], with inductive biases tailored towards vision tasks. While specialized Transformer architectures for vision [27, 55, 56, 81] can offer better performance for visual modalities, they lose the generality and flexibility of the vanilla Transformer, making it harder to later model different domains like text, speech, 3D, etc. in multi-modal architectures. In this work, we train a single vanilla Transformer that works for both images and videos, as illustrated in Figure 1.
(3)We show that our joint training using both images and videos enables us to use much higher mask-ing ratios than any prior work for training MAE. Since ViT can processes only the non-masked input, we train Omn-iMAE models with only 10% of image and 5% of video patches. This enables us to train large (650M parameter) models with a ∼7×and∼11×reduction in compute and memory on images and videos. (4)Finally, we propose im-provements to the MAE training. We show that repeating samples in a mini-batch reduces dataloading (and thus train-ing) time without loss in final transfer performance. Sample replication is particularly useful for masked pretraining as the unmasked patches are different across sample replicas. We also show that using a shallow shared decoder for videos and images leads to better performance while reducing the number of parameters by 2−4×.2. Related Work Our work builds upon research in self-supervised learning, masked pretraining and unified modeling in computer vision. Self-supervised learning. In recent years, self-supervised approaches have been dominated by joint embedding meth-ods which can rely on different objectives including con-trastive [17,39,41,51,61,65,75,83], non-contrastive [7,18,26, 87], clustering [3,11,12,85] or self-distillation [5,13,38,92]. Such methods are trained to learn invariance to a set of pre-defined transformations which results in image descriptors with a strong linear probing and KNN performance. How-ever, such methods can be challenging to scale since they can suffer from instabilities [19]. Additionally, the strongest performance is typically obtained with the help of augmenta-tions like multi-crop [12, 13, 92] which can be hard to apply at scale due their compute and memory overhead. Masked pretraining. We build upon masked prediction methods where the representation is learned by predicting masked parts of the input. Such methods have recently gained popularity given their immense success in NLP. In particular, BERT [23] showed that masked language mod-eling by predicting a subset of masked words is a powerful pre-training objective and leads to impressive finetuning performance on various downstream tasks. In computer vi-sion, input reconstruction methods have a rich history with non-linear PCA [49], sparse reconstruction [64], autoen-coders [10, 30, 42, 50], RBMs [71] etc. Masked prediction methods can be viewed as a special case of denoising au-toencoders [30, 79] where the input ‘noise’ is a masking function. An example of such a method that uses masking as noise is context encoders [68]. With the recent and rapid rise of Vision Transformers [24], masked prediction was revisited by multiple efforts. SiT [4] replaces variable sized patches in the image with random noise and trains a ViT model for reconstruction, among other objectives. BEiT [6] moves a step closer to BERT with replacing full patches with mask tokens and training a ViT encoder to predict the dis-crete visual words of masked patches using a cross-entropy loss. Masked prediction has also shown impressive perfor-mance for specialized vision transformer architectures such as MViT [27, 52, 82] and Swin transformer [54, 55, 84]. Sim-MIM [84] and MaskedFeat [82] predict pixel values and HOG features of the masked patches using a Swin-v2 [54] and MViT-v2 [52] backbones respectively. Finally, Split-Mask [25] studied the interesting properties of masked pre-diction methods in terms of high sample efficiency. 
Of particular interest to OmniMAE, masked autoencoders (MAE) [40] demonstrated impressive scaling properties by utilizing a patch dropping strategy for masked patches accompanied by a high masking ratio of 75%. Under this setting, the encoder only processes a small subset of the image patches, followed by a relatively low capacity decoder which reconstructs the image pixels. This property is even more crucial for video representation learning given the large number of patches, and a concurrent work, ST-MAE [76], shows that MAE pretraining with an even higher masking ratio of 90% works well and obtains strong finetuning performance on downstream video recognition benchmarks. Notably, the efficiency gained by patch dropping is specific to vanilla ViTs, and multiscale architectures such as MViT and Swin are unable to benefit due to their design. Hence, we use the simple ViT as our architecture and show that it can be trained efficiently and jointly for images and videos using extremely high masking ratios (90-95% on both modalities), and yet perform competitively with specialized architectures like MViT [27] and Swin [55]. Unified modeling and multi-modal learning. Multi-modal learning in computer vision has a long history that includes training using images and text [15, 34, 47, 58, 59], video and optical flow [29, 72], and video and audio [1, 62, 63, 66]. The majority of such methods rely on training a separate backbone for each modality as well as the availability of alignment across modalities. More recently, Omnivore [33] was proposed for joint training of multiple modalities like images, videos and single-view 3D, attaining strong performance on each of the modality-specific benchmarks with a single shared trunk. PolyViT [53] co-trains a shared transformer encoder using images, videos and audio data and provides competitive performance on various downstream tasks for each modality. The aforementioned methods differ from OmniMAE in that they are trained with supervised learning and require human annotations. BEVT [80] tackles BERT pretraining for videos and proposes that jointly pretraining using static images improves the finetuning performance on video recognition benchmarks. Unlike OmniMAE, BEVT uses the specialized Swin transformer architecture with separate decoder heads for images and videos. Thus, BEVT cannot drop patches while training, which can limit its scalability. Furthermore, it relies on a tokenizer which must be trained a priori, and the tokenizer training itself can affect the model's performance. 3. OmniMAE Our goal is to pretrain a single unified model for images and videos. Rather than use specialized architectures tailored for a visual modality, we build upon the vanilla Vision Transformer (ViT) [24] architecture that has limited inductive biases for vision. For pretraining, we extend the simple self-supervised masked auto-encoding (MAE) approach [40]. The original architecture and pretraining method are tested only on images, and we show simple design decisions for a unified model. 3.1. Training OmniMAE jointly on images and videos We illustrate our method in Figure 1. For pretraining, we use an encoder-decoder architecture where the encoder only operates on a 'non-masked' subset of the input. The decoder predicts the pixel values for the entire input, i.e., masked and non-masked pixels. The model is trained to minimize the reconstruction error for the masked (unseen) part of the input.
After pretraining, we evaluate the encoder by transfer learning (the decoder is discarded). Next, we describe the pretraining details. Images and videos as spatio-temporal patches. The input image or video can be represented as a 4D tensor of shape T×H×W×3, where T is the temporal dimension, H, W are the spatial dimensions, and 3 represents the color channels. We treat images as being single-frame videos with T = 1. The input is split into N spatio-temporal patches, each of size t×h×w×3 [33]. Omnivorous visual encoder. We use an omnivorous [33] visual encoder that processes both images and video using the same parameters. The encoder operates on the N spatio-temporal patches from the images and videos. The encoder can naturally handle a variable number N of patches from images and videos as it uses the Transformer architecture [78]. The encoder shares the same parameters for both image and video input.
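As a rough illustration of this input pipeline (not the authors' code), the sketch below treats an image as a single-frame video, splits it into spatio-temporal patches, and keeps only a small random subset for the encoder; the patch sizes, the 10%/5% keep ratios, and the helper names are illustrative choices based on the text above.

```python
import torch

def patchify(x, t=2, h=16, w=16):
    """Split a video tensor (B, T, H, W, 3) into spatio-temporal patches.
    Images are treated as single-frame videos (T == 1), so the temporal
    patch size is clamped to 1 in that case."""
    B, T, H, W, C = x.shape
    t = min(t, T)
    x = x.reshape(B, T // t, t, H // h, h, W // w, w, C)
    x = x.permute(0, 1, 3, 5, 2, 4, 6, 7)            # (B, nT, nH, nW, t, h, w, C)
    return x.reshape(B, -1, t * h * w * C)           # (B, N, patch_dim)

def random_masking(patches, keep_ratio=0.05):
    """Keep only a small random subset of patches; the encoder sees just
    these visible patches, which is where the compute/memory savings come from."""
    B, N, D = patches.shape
    n_keep = max(1, int(N * keep_ratio))
    ids_shuffle = torch.rand(B, N).argsort(dim=1)
    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    mask = torch.ones(B, N)
    mask.scatter_(1, ids_keep, 0.0)                  # 0 = visible, 1 = masked
    return visible, mask, ids_keep

# Example: a batch of images (T = 1) and a batch of short clips share the code path.
images = torch.rand(4, 1, 224, 224, 3)
videos = torch.rand(2, 16, 224, 224, 3)
for x in (images, videos):
    p = patchify(x)
    vis, mask, _ = random_masking(p, keep_ratio=0.1 if x.shape[1] == 1 else 0.05)
    print(p.shape, vis.shape)
```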
Cao_Real-Time_Neural_Light_Field_on_Mobile_Devices_CVPR_2023
Abstract Recent efforts in Neural Radiance Fields (NeRF) have shown impressive results on novel view synthesis by utilizing implicit neural representation to represent 3D scenes. Due to the process of volumetric rendering, the inference speed for NeRF is extremely slow, limiting the application scenarios of utilizing NeRF on resource-constrained hardware, such as mobile devices. Many works have been conducted to reduce the latency of running NeRF models. However, most of them still require high-end GPU for acceleration or extra storage memory, which is all unavailable on mobile devices. Another emerging direction utilizes the neural light field (NeLF) for speedup, as only one forward pass is performed on a ray to predict the pixel color. Nevertheless, to reach a similar rendering quality as NeRF, the network in NeLF is designed with intensive computation, which is not mobile-friendly. In this work, we propose an efficient network that runs in real-time on mobile devices for neural rendering. We follow the setting of NeLF to train our network. Unlike existing works, we introduce a novel network architecture that runs efficiently on mobile devices with low latency and small size, i.e., saving 15×∼24× storage compared with MobileNeRF. Our model achieves high-resolution generation while maintaining real-time inference for both synthetic and real-world scenes on mobile devices, e.g., 18.04 ms (iPhone 13) for rendering one 1008×756 image of real 3D scenes. Additionally, we achieve similar image quality as NeRF and better quality than MobileNeRF (PSNR 26.15 vs. 25.91 on the real-world forward-facing dataset).
1. Introduction Remarkable progress seen in the domain of neural rendering [33] promises to democratize asset creation and rendering, where no mesh, texture, or material is required – only a neural network that learns a representation of an object or a scene from multi-view observations. The trained model can be queried at arbitrary viewpoints to generate novel views (more demo examples in our Webpage). Figure 1. Examples of deploying our approach on mobile devices for real-time interaction with users. Due to the small model size (8.3MB) and fast inference speed (18∼26 ms per image on iPhone 13), we can build neural rendering applications where users interact with 3D objects on their devices, enabling various applications such as virtual try-on. We use publicly available software to make the on-device application for visualization [1, 3]. To be made widely available, this exciting application requires such methods to run on resource-constrained devices, such as mobile phones, conforming to their limitations in computing, wireless connectivity, and hard drive capacity. Unfortunately, the impressive image quality and capabilities of NeRF [33] come with a price of slow rendering speed. To return the color of the queried pixel, hundreds of points need to be sampled along the ray that ends up in that pixel, which is then integrated to get the radiance. To enable real-time applications, many works have been proposed [12, 34, 37, 45], yet, they still require high-end GPUs for rendering and hence are not available for resource-constrained applications on mobile or edge devices. An attempt is made to trade rendering speed with storage in MobileNeRF [10]. While showing promising acceleration results, their method requires storage for texture saving. For example, for a single real-world scene from the forward-facing dataset [33], MobileNeRF requires 201.5MB of storage. Clearly, downloading and storing tens, hundreds, or even thousands of such scenes in MobileNeRF format on a device is prohibitively expensive. A different approach is taken in Neural Light Fields (NeLF) that directly maps a ray to the RGB color of the pixel by performing only one forward pass per ray, resulting in faster rendering speed [5, 25, 28, 41]. Training NeLF is challenging and hence requires increased network capacity. For example, Wang et al. [41] propose an 88-layer fully-connected network with residual connections to distill a pre-trained radiance model effectively. While their approach achieves better rendering results than vanilla NeRF at 30× speedup, running it on mobile devices is still not possible, as it takes three seconds to render one 200×200 image on iPhone 13, as shown in our experiments. In this work, we propose MobileR2L, a real-time neural rendering model built with mobile devices in mind. Our training follows a similar distillation procedure introduced in R2L [41]. Differently, instead of using an MLP, a backbone network used by most neural representations, we show that a well-designed convolutional network can achieve real-time speed with the rendering quality similar to MLP. In particular, we revisit the network design choices made in R2L and propose to use the 1×1 Conv layer in the backbone.
A further challenge with running a NeRF or NeLF on mobile devices is an excessive requirement of RAM. For example, to render an 800×800 image, one needs to sample 640,000 rays that need to be stored, causing out-of-memory issues. In 3D-aware generative models [9, 15, 20], this issue is alleviated by rendering a radiance feature volume and upsampling it with a convolutional network to obtain a higher resolution. Inspired by this, we render a light-field volume that is upsampled to the required resolution. Our MobileR2L features several major advantages over existing works: • MobileR2L achieves real-time inference speed on mobile devices (Tab. 3) with better rendering quality, e.g., PSNR, than MobileNeRF on the synthetic and real-world datasets (Tab. 1). • MobileR2L requires an order of magnitude less storage, reducing the model size to 8.3MB, which is 15.2×∼24.3× less than MobileNeRF. Due to these contributions, MobileR2L can unlock wide adoption of neural rendering in real-world applications on mobile devices, such as a virtual try-on, where the real-time interaction between devices and users is achieved (Fig. 1).
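To make the design concrete, here is a minimal sketch of the two ideas described above: a backbone of 1×1 convolutions acting per ray (so it behaves like an MLP but runs as a mobile-friendly conv net) and a convolutional upsampler that turns a low-resolution light-field volume into the full-resolution image. The layer counts, channel widths, and ray parameterization are assumptions for illustration, not the MobileR2L architecture.

```python
import torch
import torch.nn as nn

class TinyNeLF(nn.Module):
    """Sketch: 1x1-conv backbone over a low-resolution ray volume, then
    conv upsampling to full resolution. Depth/width values are made up."""
    def __init__(self, ray_dim=6, width=64, n_blocks=8, up_factors=(2, 2, 2)):
        super().__init__()
        layers = [nn.Conv2d(ray_dim, width, 1), nn.GELU()]
        for _ in range(n_blocks):                       # 1x1 convs act per ray, like an MLP,
            layers += [nn.Conv2d(width, width, 1), nn.GELU()]
        self.backbone = nn.Sequential(*layers)
        ups = []
        for f in up_factors:                            # only a low-res ray grid is stored;
            ups += [nn.Upsample(scale_factor=f, mode="nearest"),  # the upsampler restores resolution
                    nn.Conv2d(width, width, 3, padding=1), nn.GELU()]
        self.upsampler = nn.Sequential(*ups)
        self.to_rgb = nn.Conv2d(width, 3, 1)

    def forward(self, rays):                            # rays: (B, ray_dim, H/8, W/8)
        return torch.sigmoid(self.to_rgb(self.upsampler(self.backbone(rays))))

rays = torch.rand(1, 6, 100, 126)                       # low-res ray grid
img = TinyNeLF()(rays)                                  # (1, 3, 800, 1008)
print(img.shape)
```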
Huang_End-to-End_Video_Matting_With_Trimap_Propagation_CVPR_2023
Abstract The research of video matting mainly focuses on temporal coherence and has gained significant improvement via neural networks. However, matting usually relies on user-annotated trimaps to estimate alpha values, which is a labor-intensive issue. Although recent studies exploit video object segmentation methods to propagate the given trimaps, they suffer inconsistent results. Here we present a more robust and faster end-to-end video matting model equipped with trimap propagation called FTP-VM (Fast Trimap Propagation -Video Matting). The FTP-VM combines trimap propagation and video matting in one model, where the additional backbone in memory matching is replaced with the proposed lightweight trimap fusion module. The segmentation consistency loss is adopted from automotive segmentation to fit trimap segmentation with the collaboration of RNN (Recurrent Neural Network) to improve the temporal coherence. The experimental results demonstrate that the FTP-VM performs competitively both in composited and real videos only with few given trimaps. The efficiency is eight times higher than the state-of-the-art methods, which confirms its robustness and applicability in real-time scenarios. The code is available at https: //github.com/csvt32745/FTP-VM .
1. Introduction Image matting aims to estimate the alpha value of each pixel for a target in the input image. Unlike general segmentation, which generates binary values, matting outputs values between 0 and 1, meaning the degree of transparency. The output alpha mattes can describe semi-transparent objects with precise details. As shown in Eq. (1), each pixel color C of an image is composited by the foreground color F, background color B, and an alpha value α, where the background can be substituted to generate a matting dataset. C = αF + (1 − α)B (1) Given a frame like Fig. 1a, a trimap Fig. 1b is the common requirement for image matting, which divides the pixels into three regions: foreground, background and unknown. Figure 1. An example of an in-street interview video. (a) Memory frame, (b) memory trimap, (c) query frame, (d) automatic matting (the result of RVM [32]), (e) trimap-propagation-based method (the result of OTVM [42]), (f) the proposed method (the result of FTP-VM, ours). Matting methods adopt such information to solve alpha values of unknown (gray) regions. Video matting extracts an alpha matte of each frame of the given video. The resulting alpha mattes can be used for background replacement, which is decisive for video applications such as video conferencing and visual effects. It is intuitive to perform image matting on each frame of a video. However, severe flickering artifacts would occur in the resultant image sequence. In order to improve the robustness, considering spatial and temporal coherence is the main challenge for video matting as the temporal information helps infer the matting target from the previous frames. Another challenge is to provide a trimap for each frame, which is expected to be a massive cost to most users. To tackle the above two issues, automatic matting and trimap propagation are addressed in this paper. Automatic matting captures specific targets without trimaps, but the input videos with ambiguous scenarios highly affect the results. Fig. 1 shows an example of an in-street interview video where an interviewee and passengers occasionally appear simultaneously. While the human matting method [32] attempts to capture all the people, the ambiguity caused by inconspicuous passengers results in unsatisfactory output Fig. 1d. Thus we opt for trimap propagation to select targets more stably as shown in Fig. 1f. While performing trimap propagation, the user is required to provide a pair of so-called memory frame Fig. 1a and memory trimap Fig. 1b, and this information is utilized to propagate throughout the video. Since we are predicting a sequence of three-class segmentation masks, the trimap propagation can be treated as video object segmentation. Recent studies [42, 47, 55] leverage STM (Space-Time Memory network) [35], an emerging video object segmentation model, to produce the trimaps successfully. However, the resultant trimaps usually contain flickers and lead to unsatisfactory results. As STM lacks temporal coherence, we append ConvGRUs [2] to the model to improve stability. Moreover, the previous approaches containing two full models make them unsuitable for interactive applications. We thus combine the two models into an end-to-end model to enhance the speed and performance.
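Eq. (1) is the standard compositing model, and it is also how matting datasets are typically synthesized: a foreground and its ground-truth alpha are pasted over arbitrary backgrounds. A minimal sketch (array shapes and value ranges assumed):

```python
import numpy as np

def composite(fg, bg, alpha):
    """Eq. (1): C = alpha * F + (1 - alpha) * B, applied per pixel.
    fg, bg: (H, W, 3) float arrays in [0, 1]; alpha: (H, W) matte in [0, 1]."""
    a = alpha[..., None]                     # broadcast alpha over the RGB channels
    return a * fg + (1.0 - a) * bg

# Example: generate a training sample by replacing the background.
H, W = 480, 640
fg = np.random.rand(H, W, 3)
alpha = np.clip(np.random.rand(H, W), 0, 1)  # ground-truth matte (soft, semi-transparent edges)
new_bg = np.random.rand(H, W, 3)
sample = composite(fg, new_bg, alpha)        # composited frame used as network input
```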
The main contributions of this work are summarized as follows. • A novel end-to-end video matting model equipped with trimap propagation, called FTP-VM (Fast Trimap Propagation -Video Matting), is proposed. FTP-VM is faster than the previous two-model methods by a large margin while preserving competitive performance in different scenarios. While the frame rate of the previous methods is 5 FPS, the proposed method reaches 40 FPS on an NVIDIA RTX 2080Ti GPU. • A lightweight trimap fusion module is designed to replace an additional encoder in the STM-based model to make FTP-VM efficient and more powerful. • Motivated by [38], the segmentation consistency loss from automotive segmentation is adapted to trimap segmentation. The final setting reaching the more satisfactory performance is determined by conducting comprehensive experiments.
Ji_Are_Binary_Annotations_Sufficient_Video_Moment_Retrieval_via_Hierarchical_Uncertainty-Based_CVPR_2023
Abstract Recent research on video moment retrieval has mostly focused on enhancing the performance of accuracy, effi-ciency, and robustness, all of which largely rely on the abun-dance of high-quality annotations. While the precise frame-level annotations are time-consuming and cost-expensive, few attentions have been paid to the labeling process. In this work, we explore a new interactive manner to stimu-late the process of human-in-the-loop annotation in video moment retrieval task. The key challenge is to select “am-biguous” frames and videos for binary annotations to fa-cilitate the network training. To be specific, we propose a new hierarchical uncertainty-based modeling that explicitly considers modeling the uncertainty of each frame within the entire video sequence corresponding to the query descrip-tion, and selecting the frame with the highest uncertainty. Only selected frame will be annotated by the human ex-perts, which can largely reduce the workload. After ob-taining a small number of labels provided by the expert, we show that it is sufficient to learn a competitive video mo-ment retrieval model in such a harsh environment. More-over, we treat the uncertainty score of frames in a video as a whole, and estimate the difficulty of each video, which can further relieve the burden of video selection. In gen-eral, our active learning strategy for video moment retrieval works not only at the frame level but also at the sequence level. Experiments on two public datasets validate the ef-fectiveness of our proposed method. Our code is released athttps://github.com/renjie-liang/HUAL .
1. Introduction Video Moment Retrieval (VMR) aims to localize the temporal region of an untrimmed video corresponding to a query description, which is a fundamental task in the video understanding area, and can benefit a lot of downstream tasks, such as video question answering [19, 40, 53], dense video captioning [4, 6], video relation detection [14, 34], video dialog [29], etc. Figure 1. We propose a new interactive method named HUAL which only requires binary annotations to reduce the annotation cost. (a) In each round, the user (student) selects a frame with the largest uncertainty, and the expert (teacher) returns the binary label of this frame as feedback. (b) With more labels provided, the VMR model is retrained and the whole process can be treated in a human-in-the-loop manner. Recent methods on VMR mainly focus on modeling the cross-modal context in the temporal domain, and have achieved significant performance gains in public datasets, which heavily rely on the well-annotated datasets, such as Charades [9], ActivityNet [35], etc. However, the precise labels on VMR datasets are time-consuming and cost-expensive, and relying on the well-annotated dataset will restrict the generalization ability of the current models. Hence, we revisit the process of annotation and propose a new active learning-based method to relieve the heavy burden of annotations in the VMR task. We have two assumptions: 1) not each frame should be considered equally, as the frame with the higher uncertainty is more valuable than the rest; and 2) not each video can be treated as a hard sample, annotating complex video and query pairs first benefits more than annotating simple ones. The whole process of our active learning-based method is shown in Figure 1. In each round, for each video in the training set, the user (student) first selects one frame with the highest uncertainty regarding the consistency of the video and query in this video, then the expert (teacher) returns the label of this frame (positive or negative). After that, the user takes the label of a single frame as supervision and trains the model, and the uncertainty score of each video will be updated. In the next round, the user can select another frame with the highest uncertainty. The whole process can be described as: “A student asks a hard question first, then the teacher returns the answer as feedback, the student then digests what they have learned, and the process can be repeated.” Besides, the amount of videos that need to be annotated can be further compressed. The uncertainty of each video can also be utilized at the sequence level. A certain percentage of videos with high uncertainty can be treated as hard samples with more benefits when annotating. Hence, the whole process of interactive annotating can be treated as Human-in-the-Loop. The key techniques rely on the computation of uncertainty and the selection of the video frame or the whole video sequence as hard samples.
To be specific, we consider the uncertainty in two aspects: the classification confidence of each frame, and the distance of the current frame from the known labels (e.g., the start and end boundaries of each video are negative samples, and the annotated frames in previous rounds). For the classification confidence of each frame, we choose a weakly-supervised VMR model (such as CPL [50]) to obtain the initial classification result and confidence score of each frame, and the frame with a low confidence score can be treated with high uncertainty. By adding the distance score and confidence score together, the frame with the highest uncertainty is selected. We then utilize a fully-supervised VMR model (such as SeqPAN [45]) to train with the labels provided by the expert in each round. Besides considering the frame-level uncertainty, we also seek the reduction of annotated videos at the sequence level via accumulating the frame-level uncertainty. Hence, the annotation cost can be further reduced with a minor performance drop. Our main contributions are summarized as follows: • We propose a new interactive framework named HUAL to reduce the annotation cost, which only requires binary annotations. To verify the feasibility, we stimulate the process of annotation in the video moment retrieval task, which is model-agnostic and can be treated in a Human-in-the-Loop manner. • Specifically, we consider the hierarchical design, which is frame-level and sequence-level uncertainty estimation, to select hard samples and fully take advantage of limited binary annotations by the expert. This annotation method can greatly reduce the annotation cost while achieving comparable performance compared with the fully supervised setting. • Extensive experimental results on two public datasets indicate that binary annotations are sufficient for video moment retrieval. The proposed method can achieve competitive performance with much fewer annotations, which shows the effectiveness of our proposed methods.
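A minimal sketch of the frame-selection rule described above: per-frame uncertainty combines low classification confidence with distance from already-labeled frames, and summing it over a video gives a sequence-level score. The weighting and normalization below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def frame_uncertainty(confidence, labeled_idx, n_frames, w_dist=0.5):
    """Per-frame uncertainty = low confidence + distance from known labels.
    confidence: (n_frames,) scores in [0, 1] from a weakly-supervised VMR model.
    labeled_idx: indices already annotated (video boundaries count as known negatives)."""
    frames = np.arange(n_frames)
    dist = np.min(np.abs(frames[:, None] - np.array(labeled_idx)[None, :]), axis=1)
    dist = dist / max(dist.max(), 1)                       # normalise to [0, 1]
    return (1.0 - np.asarray(confidence)) + w_dist * dist

def select_frame(confidence, labeled_idx):
    """Return the frame to query next and an accumulated video-level uncertainty."""
    u = frame_uncertainty(confidence, labeled_idx, len(confidence))
    u[labeled_idx] = -np.inf                               # never re-query a labeled frame
    return int(np.argmax(u)), float(np.sum(np.maximum(u, 0)))

conf = np.random.rand(128)                                 # one video, 128 frames
frame_to_ask, video_uncertainty = select_frame(conf, labeled_idx=[0, 127])
```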
Gao_Implicit_Diffusion_Models_for_Continuous_Super-Resolution_CVPR_2023
Abstract Image super-resolution (SR) has attracted increasing atten-tion due to its widespread applications. However, current SR methods generally suffer from over-smoothing and artifacts, and most work only with fixed magnifications. This paper in-troduces an Implicit Diffusion Model (IDM) for high-fidelity continuous image super-resolution. IDM integrates an im-plicit neural representation and a denoising diffusion model in a unified end-to-end framework, where the implicit neu-ral representation is adopted in the decoding process to learn continuous-resolution representation. Furthermore, we design a scale-adaptive conditioning mechanism that consists of a low-resolution (LR) conditioning network and a scaling fac-tor. The scaling factor regulates the resolution and accordingly modulates the proportion of the LR information and generated features in the final output, which enables the model to accom-modate the continuous-resolution requirement. Extensive ex-periments validate the effectiveness of our IDM and demon-strate its superior performance over prior arts. The source code will be available at https://github.com/Ree1s/ IDM .
1. Introduction Image super-resolution (SR) refers to the task of generating high-resolution (HR) images from given low-resolution (LR) images. It has attracted increasing attention due to its far-reaching applications, such as video restoration, photography, and accelerating data transmission. While significant progress has been achieved recently, existing SR models predominantly suffer from suboptimal quality and the requirement for fixed-resolution outputs, leading to undesirable restrictions in practice. Regression-based methods [21, 23] offer an intuitive way to establish a mapping from LR to HR images. LIIF [6] specifically achieves resolution-continuous outputs through implicit neural representation. However, these methods often fail to generate high-fidelity details needed for high magnifications (see Fig. 1(a) and (b)) since their regression losses tend to calculate the averaged results of possible SR predictions. Figure 1. Visual comparison, where training is on 8× SR and testing on 2×, 8×, and 10×. (a) EDSR [23] and (b) LIIF [6] are regression-based models; (c) SR3 [35] and (d) IDM (ours) are generative models. Among them, LIIF and IDM employ the implicit neural representation. Deep generative models, including autoregressive [30, 43], GAN-based [15, 16, 18, 26], flow-based [8, 24] and variational autoencoders (VAEs) [17, 42], have emerged as solutions that enrich detailed textures. Still, they often exhibit artifacts and only apply to pre-defined fixed magnifications. Despite the ability to generate realistic images with high perceptual quality with the help of extra priors, GAN-based models are subject to mode collapse and struggle to capture complex data distributions, yielding unnatural textures. Recently, Diffusion Probabilistic Models (DMs) [12, 39] have been used in image synthesis to improve the fidelity of SR images and have shown impressive performance. Nonetheless, DM-based methods are still limited to fixed magnifications, which would result in corrupted output once the magnification changes (see Fig. 1(c)). Therefore, they turn to a complicated cascaded structure [13] or two-stage training strategies [10, 33, 34] to achieve multiple combined magnifications, or retrain the model for a specific resolution [35], which brings extra training cost. To address these issues, this paper presents a novel Implicit Diffusion Model (IDM) for high-fidelity image SR across a continuous range of resolutions.
We take the merit of diffu-sion models in synthesizing fine image details to improve the fi-delity of SR results and introduce the implicit image function to handle the fixed-resolution limitation. In particular, we formu-late continuous image super-resolution as a denoising diffusion process. We leverage the appealing property of implicit neural representations by encoding an image as a function into a con-tinuous space. When incorporated into the diffusion model, it is parameterized by a coordinate-based Multi-Layer Perceptron (MLP) to capture the resolution-continuous representations of images better. At a high level, IDM iteratively leverages the denoising dif-fusion model and the implicit image function, which is im-plemented in the upsampling layers of the U-Net architecture. Fig. 1(d) illustrates that IDM achieves continuously modu-lated results within a wide range of resolutions. Accordingly, we develop a scale-adaptive conditioning mechanism consist-ing of an LR conditioning network and a scaling factor. The LR conditioning network can encode LR images without pri-ors and provide multi-resolution features for the iterative de-noising steps. The scaling factor is introduced for controlling the output resolution continuously and works through the adap-tive MLP to adjust how much the encoded LR and generated features are expressed. It is worth noting that, unlike previ-ous methods with two-stage synthesis pipelines [9, 13, 33] or additional priors [4, 26, 44], IDM enjoys an elegant end-to-end training framework without extra priors. As shown in Fig. 2, we can observe that IDM outperforms other previous works in synthesizing photographic image details. The main contributions of this paper are summarized as fol-lows:• We develop an Implicit Diffusion Model (IDM) for continuous image super-resolution to reconstruct photo-realistic images in an end-to-end manner. Iterative im-plicit denoising diffusion is performed to learn resolution-continuous representations that enhance the high-fidelity details of SR images. • We design a scale-adaptive conditioning mechanism to dynamically adjust the ratio of the realistic information from LR features and the generated fine details in the dif-fusion process. This is achieved through an adaptive MLP when size-varied SR outputs are needed. • We conduct extensive experiments on key benchmarks for natural and facial image SR tasks. IDM exhibits state-of-the-art qualitative and quantitative results compared to the previous works and yields high-fidelity resolution-continuous outputs.
Jain_VGFlow_Visibility_Guided_Flow_Network_for_Human_Reposing_CVPR_2023
Abstract The task of human reposing involves generating a realis-tic image of a person standing in an arbitrary conceivable pose. There are multiple difficulties in generating percep-tually accurate images, and existing methods suffer from limitations in preserving texture, maintaining pattern co-herence, respecting cloth boundaries, handling occlusions, manipulating skin generation, etc. These difficulties are fur-ther exacerbated by the fact that the possible space of pose orientation for humans is large and variable, the nature of clothing items is highly non-rigid, and the diversity in body shape differs largely among the population. To alle-viate these difficulties and synthesize perceptually accurate images, we propose VGFlow. Our model uses a visibility-guided flow module to disentangle the flow into visible and invisible parts of the target for simultaneous texture preser-vation and style manipulation. Furthermore, to tackle dis-tinct body shapes and avoid network artifacts, we also in-corporate a self-supervised patch-wise ”realness” loss to improve the output. VGFlow achieves state-of-the-art re-sults as observed qualitatively and quantitatively on differ-ent image quality metrics (SSIM, LPIPS, FID). Results can be downloaded from Project Webpage
1. Introduction People are frequently featured in creative content like display advertisements and films. As a result, the ability to easily edit various aspects of humans in digital visual media is critical for rapidly producing such content. Changing the pose of humans in images, for example, enables several applications, such as automatically generating movies of people in action and e-commerce merchandising. This paper presents a new deep-learning-based framework for reposing humans guided by a target pose, resulting in high-quality and realistic output. Figure 1. Human reposing involves changing the orientation of a source image to a desired target pose. To get accurate results, we learn to preserve the region visible (green) in the source image and transfer the appropriate style to the invisible region (red). Recent approaches for human-image reposing based on deep-learning neural networks, such as [19, 26, 39], require a person image, their current pose, represented as a sequence of key-points or a 2D projection of a 3D body-pose map, and the target pose represented similarly. These methods fail to reproduce accurate clothing patterns, textures, or realistic reposed human images. This mainly happens when either the target pose differs significantly from the current (source) pose, there are heavy bodily occlusions, or the garments are to be warped in a non-rigid manner to the target pose. Many of these failures can be attributed to the inability of these networks to discern regions of the source image that would be visible in the target pose from those that would be invisible. This is an important signal to determine which output pixels must be reproduced from the input directly and which must be predicted from the context. We present VGFlow, a framework for human image reposing that employs a novel visibility-aware detail extraction mechanism to effectively use the visibility input for preserving details present in the input image. VGFlow consists of two stages: encoding the changes
This appearance-modulated pose to image decoding provides the final reposed output, which is then subjected to multiple perceptual and reconstruction losses during training. The vast majority of existing methods [5, 26, 27, 39] are trained using paired source and target images. However, in terms of output realism, we observe various artifacts and a lack of generalization in these methods to unpaired in-puts, especially when the source image differs significantly in body shape or size [20]. To that end, VGFlow is trained with a self-supervised patch-wise adversarial loss on un-paired images alongside the pairwise supervised loss to en-sure a high level of realism in the final output. In sum-mary, this paper proposes a new human reposing network VGFlow, based on: • A novel visibility-aware appearance flow prediction module to disentangle visible and invisible regions of the person image in the target pose. • An image decoder employing multi-scale texture mod-ulated pose encoding. • And, a patch-wise adversarial objective to improve the realism of the produced images leading to fewer output artifacts. Our method achieves state-of-the-art on image quality metrics for the human reposing task. We present extensive qualitative and quantitative analysis with previous base-lines, as well as ablation studies. Next, we discuss work related to the proposed method.
Dancette_Improving_Selective_Visual_Question_Answering_by_Learning_From_Your_Peers_CVPR_2023
Abstract Despite advances in Visual Question Answering (VQA), the ability of models to assess their own correctness remains under-explored. Recent work has shown that VQA models, out-of-the-box, can have difficulties abstaining from answering when they are wrong. The option to abstain, also called Selective Prediction, is highly relevant when deploying systems to users who must trust the system's output (e.g., VQA assistants for users with visual impairments). For such scenarios, abstention can be especially important as users may provide out-of-distribution (OOD) or adversarial inputs that make incorrect answers more likely. In this work, we explore Selective VQA in both in-distribution (ID) and OOD scenarios, where models are presented with mixtures of ID and OOD data. The goal is to maximize the number of questions answered while minimizing the risk of error on those questions. We propose a simple yet effective Learning from Your Peers (LYP) approach for training multimodal selection functions for making abstention decisions. Our approach uses predictions from models trained on distinct subsets of the training data as targets for optimizing a Selective VQA model. It does not require additional manual labels or held-out data and provides a signal for identifying examples that are easy/difficult to generalize to. In our extensive evaluations, we show this benefits a number of models across different architectures and scales. Overall, for ID, we reach 32.92% in the selective prediction metric coverage at 1% risk of error (C@1%) which doubles the previous best coverage of 15.79% on this task. For mixed ID/OOD, using models' softmax confidences for abstention decisions performs very poorly, answering <5% of questions at 1% risk of error even when faced with only 10% OOD examples, but a learned selection function with LYP can increase that to 25.38% C@1%. Code: https://github.com/facebookresearch/selective-vqa_ood Figure 1. VQA Models are able to answer straightforward ID questions, as in the top example where a SotA model [62] with and without our Learning from Your Peers (LYP) approach answers correctly. However, difficult OOD examples can arise, like the bottom example. With LYP, the model is able to abstain from answering to avoid outputting the incorrect answer, whereas the existing model is overconfident and outputs the answer anyways.
1. Introduction Recent successes of deep learning models for multi-modal tasks have created the potential for many exciting real-world applications that require a large degree of reliability, such as providing assistance to users with visual impairments [23, 51]. However, with these novel, high-stakes applications come responsibilities towards the users, as well as the need to revise problem setups and the general approach to evaluating model performance. One particularly important consideration when developing models for real-world applications is reliability, i.e., the ability of the model to avoid making errors when facing uncertainty. One way to approach reliability is to frame the problem as a selective prediction task [9, 14, 63]. In selective prediction, models are able to either output an answer or abstain from answering (i.e., effectively saying “I don't know”) based on the model's confidence/uncertainty in order to avoid making incorrect predictions. A prevalent cause of such incorrect predictions in real-world settings is distribution shifts [13, 20, 42], where the test environment may differ from the training environment and models could encounter a wide variety of input examples at test time that may not satisfy the independent and identically distributed assumption often made by practitioners when developing models. This is especially true in open-ended tasks like Visual Question Answering (VQA) where models may receive adversarial, out-of-distribution (OOD) inputs that are difficult to answer correctly. For example, in Fig. 1, a model is asked a question that requires background knowledge that it simply does not possess. While the ability to answer open-ended questions has been a point of focus in VQA, having a model perfectly answer all questions, ID and OOD, is likely unattainable [19, 29]. Therefore, framing this problem as a selective prediction task provides an avenue to handle such OOD examples more gracefully as the model can abstain from answering on many of these inputs, while still attempting to answer as many questions as possible. Doing this requires models to recognize OOD examples for abstention decisions (OOD detection) and generalize to OOD examples (OOD generalization) in order to make predictions on examples that the model will get right. However, previous evaluations for selective prediction in VQA [63] have been done on ID data, where the questions and images all come from the VQA v2 dataset [21]. In NLP, there are efforts on selective prediction with OOD examples [29, 59], although they tend to not address some practical considerations, such as assuming access to OOD data or threshold generalization. More broadly, selective prediction and OOD generalization have largely been studied as independent problems in the literature [58]. In this work, we explore selective prediction for VQA with distribution shifts, where we present models with mixtures of both ID and OOD examples, and measure the ability of different approaches to optimize answering as many questions as possible while maintaining a low risk of error (or high accuracy) on those questions. We perform extensive experiments on VQA v2 [21] as our ID data and AdVQA [50] as our adversarial, OOD data.
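The evaluation referred to throughout (e.g., C@1%) can be computed as the largest fraction of questions a model may answer, in decreasing order of confidence, while keeping the error rate on the answered set under the target risk. A small sketch of this metric, with placeholder inputs:

```python
import numpy as np

def coverage_at_risk(confidences, correct, max_risk=0.01):
    """C@1%-style metric: answer the most-confident questions first and report
    the largest fraction that can be answered while the error rate on the
    answered set stays <= max_risk. `correct` holds per-question accuracies."""
    order = np.argsort(-np.asarray(confidences))
    correct = np.asarray(correct, dtype=float)[order]
    n = len(correct)
    cum_acc = np.cumsum(correct) / np.arange(1, n + 1)   # accuracy if we answer the top-k
    risk = 1.0 - cum_acc
    ok = np.where(risk <= max_risk)[0]
    return 0.0 if len(ok) == 0 else (ok[-1] + 1) / n     # best achievable coverage

conf = np.random.rand(1000)                              # toy confidences
acc = (np.random.rand(1000) > 0.3).astype(float)         # toy per-question accuracies
print(coverage_at_risk(conf, acc, max_risk=0.01))
```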
We evaluate a number of state-of-the-art approaches to this problem and find that existing models' softmax probabilities are generally poor confidence estimates for abstention decisions on OOD data, leading models to answer <5% of questions to achieve 1% risk of error in some settings. Further, we show that training a selection function [63] improves performance ID and OOD, but integrating features from OOD detection methods as well as augmenting with known-OOD data (i.e., OOD data different from the unknown target distribution) do not improve beyond simply training this selection function on ID data. However, we observe that existing methods for training multimodal selection functions can require a held-out dataset in order to be most effective. Therefore, we propose a Learning from Your Peers (LYP) approach that alleviates the need for held-out data while also allowing both the VQA model and selection function to learn from the additional data that would have been withheld. LYP works by breaking the training data into N subsets and training different VQA models on distinct combinations of N−1 subsets, leaving one subset out at a time. Our approach then uses these trained models to predict answers on their respective Nth left-out subsets. We recombine this data into an updated training set that has predictions from the different models. We utilize these predictions and the associated accuracies as labels to train a multimodal selection function, which learns to predict the accuracies. By using predictions on the training data from models that have not seen these examples, our approach provides a signal for which examples in the training data can be generalized to for a given model class, and which are too hard and should be abstained on. Overall, our contributions are: We present an evaluation benchmark for Selective VQA on both ID and OOD data. We show that model and data scaling are important factors for selective prediction and evaluate multiple baselines from prior works. Finally, we propose LYP and demonstrate that it can benefit performance over standard selection function training in both ID and mixed ID/OOD settings.
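A minimal sketch of the LYP data pipeline described above: the training set is split into N folds, one VQA model is trained per held-out fold on the remaining data, and its accuracies on that held-out fold become targets for the selection function. `train_vqa` and `predict_accuracy` are placeholders for the real training and scoring code.

```python
import random

def learning_from_your_peers(train_set, n_splits, train_vqa, predict_accuracy):
    """train_vqa(data) -> model; predict_accuracy(model, example) -> VQA accuracy
    of the model's answer on that example (both are placeholders).
    Returns training data for the selection function: (example, peer_accuracy)."""
    random.shuffle(train_set)
    folds = [train_set[i::n_splits] for i in range(n_splits)]
    selector_data = []
    for i, held_out in enumerate(folds):
        peers = [ex for j, f in enumerate(folds) if j != i for ex in f]
        model = train_vqa(peers)                  # a model that never saw `held_out`
        for ex in held_out:
            acc = predict_accuracy(model, ex)     # easy-to-generalise examples get high accuracy
            selector_data.append((ex, acc))       # accuracy becomes the selector's target
    return selector_data
```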
Bai_GLeaD_Improving_GANs_With_a_Generator-Leading_Task_CVPR_2023
Abstract Generative adversarial network (GAN) is formulated as a two-player game between a generator (G) and a discriminator (D), where D is asked to differentiate whether an image comes from real data or is produced by G. Under such a formulation, D plays as the rule maker and hence tends to dominate the competition. Towards a fairer game in GANs, we propose a new paradigm for adversarial training, which makes G assign a task to D as well. Specifically, given an image, we expect D to extract representative features that can be adequately decoded by G to reconstruct the input. That way, instead of learning freely, D is urged to align with the view of G for domain classification. Experimental results on various datasets demonstrate the substantial superiority of our approach over the baselines. For instance, we improve the FID of StyleGAN2 from 4.30 to 2.55 on LSUN Bedroom and from 4.04 to 2.82 on LSUN Church. We believe that the pioneering attempt present in this work could inspire the community with better designed generator-leading tasks for GAN improvement. Project page is athttps://ezioby.github.io/glead/ .
1. Introduction Generative adversarial networks (GANs) [18] have significantly advanced image synthesis, which is typically formulated as a two-player game. The generator (G) aims at synthesizing realistic data to fool the discriminator (D), while D pours attention on distinguishing the synthesized samples from the real ones. Ideally, it would come to an optimal solution where G can recover the real data distribution, and D can hardly tell the source of images anymore [18]. However, the competition between G and D seems to be unfair. Figure 1. Concept diagram of our proposed generator-leading task (bottom), as complementary to the discriminator-leading task in the original formulation of GANs (upper). D is required to extract representative features that can be adequately decoded by G to reconstruct the input. Specifically, on the one hand, D acts as a player in this adversarial game by measuring the discrepancy between the real and synthesized samples. But on the other hand, the learning signals (i.e., gradients) of G are only derived from D, making the latter naturally become a referee in the competition. Such a formulation easily allows D to rule the game. Massive experimental results could serve as supporting evidence for the theoretical analysis. For instance, in practice, D can successfully distinguish real and fake samples from a pretty early stage of training and is able to maintain its advantage in the entire training process [63]. Accordingly, the capability of the discriminator usually determines the generation performance more or less. For instance, a discriminator that has over-fitted the whole training set always results in synthesis with limited diversity and poor visual quality [33]. Following this philosophy, many attempts [28, 29, 38, 40, 56, 70] have been made for discriminator improvement. This work offers a different perspective on GAN improvement. In particular, we propose a new adversarial paradigm where G is assigned a new role, i.e., playing as the referee as well to guide D. Recall that producing realistic images usually requires G to generate all-level concepts adequately. Nevertheless, due to the asymmetrical status of G and D, D is able to tell apart the real and synthesized data merely from limited discriminative regions [63]. We, therefore, would like to encourage D to extract as much information from an image as possible, such that the features learned by D could be rendered back to the input with a frozen G, as in Fig. 1. That is, D is enforced to align with the view of G (i.e., focusing on the entire image region) instead of learning freely for domain classification. Our method is termed as GLeaD because we propose to assign D a generator-leading task. In particular, given a real or synthesized image, the discriminator would deliver extra spatial representations and latent representations that are then fed into a frozen generator to reproduce the original image.
Reconstruction loss (perceptual loss is adopted in practice) penalizes the difference between the input image and the reconstructed image and derives gradients for updating the parameters of the discriminator. Moreover, comprehensive experiments are then conducted on various datasets, demonstrating the effectiveness of the proposed method. Particularly, our method improves Fréchet Inception Distance (FID) [23] from 4.30 to 2.55 on LSUN Bedroom and 4.04 to 2.82 on LSUN Church. We also manage to improve Recall [39] largely (56%) from 0.25 to 0.39 on LSUN Bedroom. In addition, thorough ablation studies also suggest that applying generator-leading tasks to require D to reconstruct only real or fake images could boost synthesis quality, while a larger improvement would be gained if both real and synthesized images were incorporated. Last but not least, experimental results in Sec. 4 reveal that our method can indeed boost the fairness between G and D as well as improve the spatial attention of D.
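A rough sketch of the generator-leading term (interfaces assumed, not the released code): D is asked to return extra latent and spatial codes, a frozen copy of G decodes them back to an image, and a perceptual reconstruction loss is added to D's objective so that only D's parameters are updated.

```python
import torch

def glead_d_loss(D, G_frozen, real, fake, percep, weight=1.0):
    """Generator-leading term added to D's usual adversarial loss (a sketch).
    D(img) is assumed to return (realness_logit, latent_code, spatial_code);
    G_frozen re-renders the image from those codes. G_frozen's parameters should
    have requires_grad=False elsewhere, so gradients flow through G only to
    update D."""
    loss = 0.0
    for img in (real, fake.detach()):              # reconstruct both real and synthesized images
        _, latent, spatial = D(img)
        recon = G_frozen(latent, spatial)
        loss = loss + percep(recon, img).mean()    # perceptual distance, e.g. VGG/LPIPS features
    return weight * loss
```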
Alloulah_Look_Radiate_and_Learn_Self-Supervised_Localisation_via_Radio-Visual_Correspondence_CVPR_2023
Abstract Next generation cellular networks will implement ra-dio sensing functions alongside customary communications, thereby enabling unprecedented worldwide sensing coverage outdoors. Deep learning has revolutionised computer vision but has had limited application to radio perception tasks, in part due to lack of systematic datasets and benchmarks dedicated to the study of the performance and promise of radio sensing. To address this gap, we present MaxRay: a synthetic radio-visual dataset and benchmark that facil-itate precise target localisation in radio. We further pro-pose to learn to localise targets in radio without supervision by extracting self-coordinates from radio-visual correspon-dence. We use such self-supervised coordinates to train a radio localiser network. We characterise our performance against a number of state-of-the-art baselines. Our results indicate that accurate radio target localisation can be au-tomatically learned from paired radio-visual data without labels, which is important for empirical data. This opens the door for vast data scalability and may prove key to re-alising the promise of robust radio sensing atop a unified communication-perception cellular infrastructure. Dataset will be hosted on IEEE DataPort.
1. Introduction Sixth-generation (6G) wireless networks are being designed from the ground up to support sensing at the physical layer [79]. Such a brand new capability in 6G networks marks a departure from communication-only functions, and aims to supply applications with sensing primitives atop a unified communication-perception infrastructure. Concretely, dense cellular deployments in urban settings (e.g., per lamppost) would allow for unprecedented radio coverage, enabling a multitude of challenging perception tasks. Examples include around-the-corner obstacle detection in support of autonomous driving and pedestrian and drone localisation, to name a few [2]. Figure 1. We train a radio localisation network by using commonalities with vision to drive spatial attention. Without laborious manual annotations, we learn to suppress clutter and localise targets in radio heatmaps. Training perception models for radio signals is a key challenge for network infrastructure vendors. Unlike vision and audio, radio signals are hard to label manually because they are not human interpretable. Typically, sparse radio signals have been paired with a groundtruth vision modality for reliable semantic and qualitative filtration via a cross-modal annotation flow [34, 56, 77, 85]. Recently, this radio-visual pairing has been shown to work in a self-supervised fashion [10], building on a wave of progress in vision self-supervised learning (SSL) [19, 22, 26, 40–42, 45, 61, 62, 82, 84]. Computer vision has traditionally benefited from synthetic datasets for: (a) content augmentation for enhanced generalisability [54, 81], or (b) closing the learning loop on out-of-distribution failure modes [68], e.g., in the context of autonomous driving [1]. Extrapolating from vision, it is also likely that synthetic data will play an important role towards realising robust radio sensing. However, radio perception tasks have yet to benefit from such publicly available datasets. In this work we aim to support next-gen 6G perception tasks, while championing a self-supervised radio-visual learning approach. Concretely, Fig. 1 captures the crux of our new machine learning proposition for radio sensing. We demonstrate how to automatically extract radio self-labels through cross-modal learning with vision. We then use such self-labels to train a downstream localiser network. We show that our self-supervised localiser net enhances estimation in the radio domain compared to state-of-the-art. Our contributions are: • A synthetic dataset: We curate and synthesise radio-visual data for a new learning task designed for target detection and localisation in radio. • A cross-modal SSL algorithm: We formulate a contrastive radio-visual objective for label-free radio localisation. • Evaluation: We conduct numerous characterisations on synthetic and empirical data in order to validate our SSL algorithm and expose its superior performance compared to state-of-the-art. We discuss our dataset and algorithmic findings to galvanise machine learners' interest in radio-visual learning research.
We hope to both facilitate and inform future research on this new cross-modal learning paradigm.
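The contrastive radio-visual objective mentioned in the contributions can be sketched as a symmetric InfoNCE loss over a batch of co-captured radio/vision pairs; the encoders producing the embeddings and the temperature value are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def radio_visual_infonce(radio_emb, vision_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of co-captured (radio, image) pairs.
    radio_emb, vision_emb: (B, D) embeddings from two modality-specific encoders;
    the matching pair in each row is the positive, all other rows are negatives."""
    r = F.normalize(radio_emb, dim=-1)
    v = F.normalize(vision_emb, dim=-1)
    logits = r @ v.t() / temperature                    # (B, B) similarity matrix
    targets = torch.arange(r.size(0), device=r.device)  # positives on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

loss = radio_visual_infonce(torch.randn(32, 128), torch.randn(32, 128))
```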
Dou_Multiplicative_Fourier_Level_of_Detail_CVPR_2023
Abstract We develop a simple yet surprisingly effective implicit representing scheme called Multiplicative Fourier Level of Detail (MFLOD) motivated by the recent success of mul-tiplicative filter network. Built on multi-resolution feature grid/volume ( e.g., the sparse voxel octree), each level’s fea-ture is first modulated by a sinusoidal function and then element-wisely multiplied by a linear transformation of pre-vious layer’s representation in a layer-to-layer recursive manner, yielding the scale-aggregated encodings for a sub-sequent simple linear forward to get final output. In con-trast to previous hybrid representations relying on inter-leaved multilevel fusion and nonlinear activation-based de-coding, MFLOD could be elegantly characterized as a lin-ear combination of sine basis functions with varying am-plitude, frequency, and phase upon the learned multilevel features, thus offering great feasibility in Fourier analysis. Comprehensive experimental results on implicit neural rep-resentation learning tasks including image fitting, 3D shape representation, and neural radiance fields well demonstrate the superior quality and generalizability achieved by the proposed MFLOD scheme.
1. Introduction Classical geometric modeling techniques in computer graphics represent signals by storing discrete samples in array- or grid-based formats. It is nontrivial to adapt them to learning-based framework due to the lack of differentiability. Recently, neural implicit functions have emerged as an attractive alternative, which parameterize the continuous mapping between low dimensional coordinates and image/object domain signals using neural network, for example, as the representation of 3D shapes [5, 23, 29, 42] and radiance fields [16, 25]. Prior works commonly use a large multi-layer perceptron (MLP) to parameterize the learning function. To circumvent the well-known low frequency spectral bias of neural networks [31], frequency encoding [15, 25, 33, 34, 40] is usually adopted to map input coordinates to a higher dimensional space. Figure 1. Implicit Neural Representations from Spectral Perspective. (a) Global support methods: most implicit neural representations can be categorized into global support-based methods, where a large network is required to enrich the basis coefficients for representing high-frequency details. (b) Our multiscale support method (Fourier Level of Detail): our method can also be well-characterized like the global method, enabling explicit bandwidth control for each LOD. This allows for representing high-fidelity details at fine levels and smooth overall shape at coarse levels. A useful property of these pure MLP methods is that the entire function can be theoretically characterized as a linear combination of Fourier bases [6, 49], facilitating the design of neural representation from the spectral point of view, such as explicitly manipulating the bandwidth [15], representing overall signal and details separately [46], balancing representation generalization and spectrum coverage [40]. However, solely relying on the Fourier bases to express local high-frequency details is inefficient since these bases normally have (infinite) global support [49], resulting in the requirement of employing over-sized MLP to accommodate a large set of basis coefficients. Most recently, hybrid representations [27, 38, 39] emerge for their efficiency and high-fidelity. They employ a multi-level feature grid/volume to capture local details, and thus allow the use of a much smaller MLP as decoder. Yet, the interleaved multilevel fusion and nonlinear activation-based decoding make the entire function hard to characterize and not amenable to Fourier analysis like pure MLP methods. Specifically, little is known of how multilevel features are combined to get the final output. We address the above limitations with our proposed implicit feature representation framework named Multiplicative Fourier Level of Detail (MFLOD). Within a multi-resolution feature grid framework, MFLOD inherits the efficiency merits of hybrid methods, and in the meantime, as multilevel local features are modulated with a rich set of frequency basis functions, the resulting representation is feasible in Fourier analysis.
More concretely, the feature of each query point within a multi-resolution feature grid/volume (e.g., the sparse voxel octree) is interpolated according to its spatial distances to the grid points, modulated by a sinusoidal filter, and then element-wise multiplied by a linear transformation of the previous layer's representation in a layer-to-layer recursive manner. After that, a simple linear layer suffices to decode these multilevel encodings into the final implicit function value. To enable meaningful levels of detail, we explicitly manipulate the spectral bandwidth of each level, which allows high-fidelity details to be represented at fine levels and a smooth overall shape at coarse levels, as shown in Fig. 1. In addition to introducing MFLOD, we conduct an in-depth theoretical study from the spectral and neural tangent kernel (NTK) [10] approximation perspectives, showing that the proposed method has better spectrum coverage and generalization. Comprehensive experimental results on implicit neural representation learning tasks, including image fitting, 3D shape representation, and neural radiance fields, demonstrate the superior quality and generalizability achieved by the proposed MFLOD scheme.
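To make the construction tangible, below is a minimal PyTorch sketch of an MFLOD-style forward pass for 2D image fitting. All names (MFLODSketch, feat_dim, the per-level frequency-initialization ranges) are our illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of an MFLOD-style forward pass for 2D image fitting.
# Level-k grid features are bilinearly interpolated, passed through a sinusoidal
# filter, and multiplied element-wise with a linear transform of the previous
# layer's representation; a single linear layer decodes the result.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class MFLODSketch(nn.Module):
    def __init__(self, num_levels=4, base_res=16, feat_dim=16, hidden=64, out_dim=3):
        super().__init__()
        # Multi-resolution 2D feature grids (coarse -> fine), resolution doubling per level.
        self.grids = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(1, feat_dim, base_res * 2 ** k, base_res * 2 ** k))
             for k in range(num_levels)]
        )
        # Per-level sinusoidal filters; a wider weight-initialization range at finer
        # levels is one simple (assumed) way to give each level a larger bandwidth.
        self.filters = nn.ModuleList([nn.Linear(feat_dim, hidden) for _ in range(num_levels)])
        for k, filt in enumerate(self.filters):
            nn.init.uniform_(filt.weight, -(2.0 ** k) * math.pi, (2.0 ** k) * math.pi)
        # Linear transforms mixing in the previous layer's representation.
        self.mixers = nn.ModuleList([nn.Linear(hidden, hidden) for _ in range(num_levels - 1)])
        self.decoder = nn.Linear(hidden, out_dim)  # single linear decode, no nonlinearity

    def sample(self, level, coords):
        # coords: (N, 2) in [-1, 1]; returns bilinearly interpolated features (N, feat_dim).
        grid = coords.view(1, -1, 1, 2)
        feats = F.grid_sample(self.grids[level], grid, mode="bilinear", align_corners=True)
        return feats.view(feats.shape[1], -1).t()

    def forward(self, coords):
        z = torch.sin(self.filters[0](self.sample(0, coords)))
        for k in range(1, len(self.grids)):
            z = self.mixers[k - 1](z) * torch.sin(self.filters[k](self.sample(k, coords)))
        return self.decoder(z)


if __name__ == "__main__":
    model = MFLODSketch()
    xy = torch.rand(1024, 2) * 2 - 1   # random query coordinates in [-1, 1]^2
    print(model(xy).shape)             # -> torch.Size([1024, 3])
```

The widening frequency-initialization range per level is one plausible realization of the explicit bandwidth control described above; the paper may implement it differently.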
Feng_Shape-Erased_Feature_Learning_for_Visible-Infrared_Person_Re-Identification_CVPR_2023
Abstract Due to the modality gap between visible and infrared images and their high visual ambiguity, learning diverse modality-shared semantic concepts for visible-infrared person re-identification (VI-ReID) remains a challenging problem. Body shape is a significant modality-shared cue for VI-ReID. To mine more diverse modality-shared cues, we expect that erasing body-shape-related semantic concepts from the learned features will force the ReID model to extract additional modality-shared features for identification. To this end, we propose a shape-erased feature learning paradigm that decorrelates modality-shared features across two orthogonal subspaces. Jointly learning a shape-related feature in one subspace and a shape-erased feature in its orthogonal complement achieves a conditional mutual information maximization between the shape-erased feature and identity, conditioned on (i.e., discarding) body shape information, thus explicitly enhancing the diversity of the learned representation. Extensive experiments on the SYSU-MM01, RegDB, and HITSZ-VCM datasets demonstrate the effectiveness of our method.
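In loose symbols (our shorthand, not the paper's notation), with identity y, body-shape variable s, shape-related feature z_r, and shape-erased feature z_e, the paradigm can be read as encouraging

```latex
% Loose sketch of the stated objective (notation ours):
\max_{\theta}\; I\big(z_e ;\, y \mid s\big)
\qquad \text{with} \qquad z_r \in \mathcal{S}, \;\; z_e \in \mathcal{S}^{\perp},
```

i.e., the shape-erased feature is pushed to carry identity information that body shape does not already explain, while the orthogonal split keeps the two features decorrelated.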
1. Introduction Recently, person re-identification (ReID) for pedestrian matching across non-overlapping camera views has developed rapidly. However, ReID remains challenging when people appear both in the daytime and in low-light situations where only infrared cameras can clearly capture their appearance, which raises the task of visible-infrared ReID (VI-ReID). Many remarkable works [4, 5, 14, 15, 27, 30] have appeared in the field of VI-ReID. In realistic scenarios, discovering rich and diverse modality-shared semantic concepts usually improves the effectiveness of VI-ReID [31, 39]; so far, diverse modality-shared feature learning remains challenging. Figure 1. An illustration of our motivation for VI-ReID. It is assumed that body shape information and identity-related modality-shared information (presented in the dashed box) partially overlap with each other. To make the extracted features more diverse, we propose a shape-erased feature learning paradigm that decomposes the representation into a shape-related feature and a shape-erased one. Learning the shape-erased feature drives the model to discover richer modality-shared semantic concepts beyond body shape. Among the cues for VI-ReID, body shape identifies pedestrians in many situations, for it contains modality-invariant information and is also robust to lighting changes. Nevertheless, body shape is not the only, or a sufficient, semantic concept that determines the identity of a person. In some situations it may be hard to tell people apart by body shape alone, yet we can still distinguish them by other semantic concepts, such as their belongings, hairstyles, or face structure. Inspired by this, we illustrate an information-theoretic view of the visible and infrared modalities as a Venn diagram on the left of the dashed line in Fig. 1. It is assumed that body shape (shown in red) and identity-related modality-shared information (shown in the dashed box) partially overlap with each other. The overlap is only partial also because identity-unrelated information, e.g., human pose, is contained in the body shape map. This partial-overlap assumption indicates that the target information for VI-ReID, which is identity-related and modality-shared, can be divided into two independent components that are related and unrelated to body shape. Based on the above observation and assumption, to mine more diverse modality-shared cues for VI-ReID, we seek to erase the body-shape-related semantic concepts from the features so as to force the VI-ReID model to extract additional modality-shared features for identification. As illustrated on the right of the dashed line in Fig. 1, the shape-erased feature is decorrelated from the shape-related feature so as to discover shape-unrelated knowledge, while the shape-related feature can be explicitly guided by a given body shape prior, which is easy to obtain from existing pre-trained human parsing models [16].
In this way, both the shape-related and shape-erased features are explicitly quantified while the discriminative nature of the two features can be maintained independently. Specifically, we propose a shape-erased feature learning paradigm that introduces orthogonality into the representation to satisfy a relaxed form of the independence constraint. The representation is then decomposed into two sub-representations lying in two orthogonal subspaces for shape-related and shape-erased feature learning, respectively (a schematic sketch of this decomposition follows the contribution list below). By learning and covering the most discriminative body shape feature in one subspace, the shape-erased feature, constrained to the orthogonal complement, is forced to discover other modality-shared discriminative semantic concepts. Under the above assumptions, we formulate this shape-erased feature learning paradigm from a mutual information perspective and demonstrate that jointly learning the shape-erased and shape-related objectives achieves a conditional mutual information maximization between the shape-erased feature and identity, conditioned on (i.e., discarding) body shape information, thus explicitly enhancing the diversity of the learned representation. We finally design a Shape-Guided dIverse fEature Learning (SGIEL) framework that jointly optimizes the shape-related and shape-erased objectives to learn a modality-shared and discriminative integrated representation. The contributions of our work are summarized as follows:
• We propose a shape-erased feature learning paradigm for VI-ReID that decorrelates the shape-erased feature from the shape-related one by orthogonal decomposition. The shape-related feature in one subspace is guided by a body shape prior, while the shape-erased feature is constrained to its orthogonal complement to discover other modality-shared discriminative semantic concepts, thus explicitly enhancing the diversity of the learned representation.
• Based on the proposed shape-erased feature learning paradigm, we design a Shape-Guided dIverse fEature Learning framework that jointly optimizes the shape-related and shape-erased objectives to learn a modality-shared and discriminative integrated representation.
• Extensive experiments on the SYSU-MM01, RegDB, and HITSZ-VCM datasets demonstrate the effectiveness of our method.
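As forecast above, the following is a minimal, hedged sketch of the orthogonal decomposition idea in PyTorch. Class names, dimensions, and the stand-in losses are our illustrative assumptions, not the SGIEL implementation; in particular, the body-shape prior is abstracted as a precomputed embedding (e.g., derived from a human-parsing model).

```python
# Hedged sketch of the orthogonal shape-related / shape-erased split.
# The backbone feature is projected onto a learned shape subspace (guided by a
# precomputed body-shape embedding) and onto its orthogonal complement, and both
# parts are kept identity-discriminative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class OrthogonalSplit(nn.Module):
    def __init__(self, feat_dim=512, shape_rank=64, shape_dim=128, num_ids=395):
        super().__init__()
        # Unconstrained parameter; its QR factor spans the shape-related subspace.
        self.basis = nn.Parameter(torch.randn(feat_dim, shape_rank))
        self.id_head_related = nn.Linear(feat_dim, num_ids)  # identity logits from the shape-related part
        self.id_head_erased = nn.Linear(feat_dim, num_ids)   # identity logits from the shape-erased part
        self.shape_head = nn.Linear(feat_dim, shape_dim)     # regresses the (assumed) body-shape embedding

    def split(self, feat):
        q, _ = torch.linalg.qr(self.basis)   # orthonormal columns, (feat_dim, shape_rank)
        z_related = feat @ q @ q.t()         # projection onto the shape-related subspace
        z_erased = feat - z_related          # projection onto the orthogonal complement
        return z_related, z_erased

    def losses(self, feat, identity, shape_embed):
        z_related, z_erased = self.split(feat)
        # Both parts stay identity-discriminative; the shape-related part is
        # additionally guided by the body-shape prior embedding.
        loss_id = F.cross_entropy(self.id_head_related(z_related), identity) \
            + F.cross_entropy(self.id_head_erased(z_erased), identity)
        loss_shape = F.mse_loss(self.shape_head(z_related), shape_embed)
        return loss_id + loss_shape


if __name__ == "__main__":
    model = OrthogonalSplit()
    feat = torch.randn(8, 512)                 # backbone features for a mini-batch
    ids = torch.randint(0, 395, (8,))          # identity labels
    shape_embed = torch.randn(8, 128)          # placeholder body-shape prior embeddings
    print(model.losses(feat, ids, shape_embed).item())
```

Because the shape-erased part is, by construction, the residual orthogonal to the shape subspace, supervising it with an identity loss pushes it toward modality-shared cues other than body shape, which is the diversity effect the paradigm is designed to achieve.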