Columns: title (string, lengths 28–135), abstract (string, lengths 0–12k), introduction (string, lengths 0–12k)
Du_On-the-Fly_Category_Discovery_CVPR_2023
Abstract Although machines have surpassed humans on visual recognition problems, they are still limited to providing closed-set answers. Unlike machines, humans can cognize novel categories at the first observation. Novel category discovery (NCD) techniques, which transfer knowledge from seen categories to distinguish unseen categories, aim to bridge the gap. However, current NCD methods assume a transductive learning and offline inference paradigm, which restricts them to a pre-defined query set and renders them unable to deliver instant feedback. In this paper, we study on-the-fly category discovery (OCD), which aims to make the model instantaneously aware of novel category samples (i.e., enabling inductive learning and streaming inference). We first design a hash coding-based expandable recognition model as a practical baseline. Afterwards, noticing the sensitivity of hash codes to intra-category variance, we further propose a novel Sign-Magnitude dIsentangLEment (SMILE) architecture to alleviate the disturbance it brings. Our experimental results demonstrate the superiority of SMILE against our baseline model and prior art. Our code is available at https://github.com/PRIS-CV/On-the-fly-Category-Discovery.
1. Introduction Deep models are well known for beating humans in visual recognition [13]. However, this is just a victory of specialist models over generalist humans – existing vision recognition models are mostly closed-set experts. Given a defined category set, huge datasets are gathered and annotated, and then deep models trained with the annotated data can easily handle such in-category recognition due to their great fitting ability. However, these models are arguably only learning to memorize, in that they are restricted to the defined category set and are incapable of modeling novel categories. Although paradigms like open set recognition [9] aim to filter out out-of-category samples, simply rejecting them is not satisfactory.

[Figure 1. Comparison of the conventional NCD setting and the proposed OCD setting. (a) NCD adopts transductive learning and offline inference. (b) OCD removes the pre-defined query set assumption and conducts inductive learning and instant inference.]

For humans, visual recognition is far beyond a closed-set problem – instead of learning to memorize, we learn to cognize. In particular, given samples containing novel categories, we can not only tell which are novel but also which may share the same novel category. E.g., even if you have never seen "hedgehogs", you can easily realize that they differ from other creatures you have seen before, and that multiple hedgehog images belong to the same category, even if you don't know the name. To bridge the gap, a rising field named novel category discovery (NCD) [11] is attracting increasing attention. With a labelled support set of seen categories and an unlabeled query set containing unseen ones, NCD aims at cognizing unseen categories by splitting the query set into several groups sharing the same latent category. As shown in Figure 1, existing NCD works [7, 10, 15, 37] mostly fall into a transductive learning and offline inference procedure. Specifically, a visual feature encoder is first trained with the support set via supervised learning and the query set via unsupervised or semi-supervised learning. After that, clustering techniques are applied to the encoded visual features to obtain category clusters.

Although convincing performance has been obtained, two restrictive assumptions still hinder the real-world application of NCD approaches under the current setting. (i) Firstly, the query set is visible and required during training, which makes the model specialized to the pre-defined query set and less capable of dealing with truly novel samples. (ii) Secondly, the query set is batch-processed offline during inference. Therefore these models are not practical in online scenarios where new data arrives in a stream and instant feedback on each instance is required. To approach a more realistic scenario, we put forward the problem of on-the-fly category discovery (OCD), which removes the assumption of a pre-defined query set (Figure 1).
In particular, we keep the seen/unseen split of datasets, and make samples of the unseen query set unavailable during training and only individually visible during test. The goal of OCD is learning to recognise seen categories and to cognize unseen categories – both in an inductive manner that can be applied online. We follow the setting of generalized category discovery (GCD) [30], where both seen and unseen categories appear in the query set.

Next, we introduce a new recognition paradigm for OCD along with a baseline model. Instead of adopting cross-entropy loss during training for fully supervised learning, we choose supervised contrastive learning [16], which works in embedding space. Thus, we directly optimize and obtain discriminative visual features rather than probability outputs within a fixed prediction space. To meet the need for instant feedback, cluster-based techniques are no longer practical during inference. To this end, we take the binarized feature embeddings as hash-like category descriptors, and samples with the same descriptor can be regarded as sharing the same latent category. In this way, the model can individually recognize each novel sample, like us humans.

Afterward, we observe a challenge of OCD – the hash-like descriptor is extremely sensitive to intra-category variance, especially for fine-grained categories. E.g., for the CUB-200-2011 dataset [31], ∼1500 different 12-dimension hash codes are generated for ∼4000 birds from only 200 categories. To address this, we contribute a novel Sign-Magnitude dIsentangLEment (SMILE) architecture to alleviate the negative influence of intra-category variance. Specifically, we infer the signs and the magnitudes of feature embeddings with two separate branches, and only the sign branch is used during inference. The intuition behind this is that, since deep neural features respond to abstract semantics (e.g., colors, textures, shapes), the sign branch should encode whether a semantic feature corresponds to this category, while the magnitude branch indicates the expression level of the semantic feature on the current sample. In summary, the magnitude branch should model intra-category variance, and the sign branch inter-category variance. Experiments on three widely used classification datasets and three fine-grained classification datasets demonstrate the superiority of SMILE over our baseline and prior art.
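The description above of binarizing embeddings into hash-like descriptors and separating sign from magnitude can be sketched as follows. This is a minimal illustration, not the authors' implementation; the two linear heads, the 768-dimensional backbone feature, and the 12-bit code length are assumptions chosen for the example.

```python
# A minimal sketch (not the authors' code) of the hash-descriptor idea behind the
# OCD baseline and SMILE: a "sign" head and a "magnitude" head are assumed here
# purely for illustration; at test time only the binarized sign branch is kept
# as the category descriptor.
import torch
import torch.nn as nn

class SmileHead(nn.Module):
    def __init__(self, feat_dim: int = 768, code_dim: int = 12):
        super().__init__()
        self.sign_branch = nn.Linear(feat_dim, code_dim)       # inter-category cues
        self.magnitude_branch = nn.Linear(feat_dim, code_dim)  # intra-category variance

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Training-time embedding: sign-like component scaled by a non-negative magnitude.
        return torch.tanh(self.sign_branch(feats)) * torch.relu(self.magnitude_branch(feats))

    @torch.no_grad()
    def hash_code(self, feats: torch.Tensor) -> torch.Tensor:
        # Inference: only the sign branch is binarized into a hash-like descriptor;
        # samples sharing a code are treated as the same (possibly novel) category.
        return (self.sign_branch(feats) > 0).long()

head = SmileHead()
codes = head.hash_code(torch.randn(4, 768))   # (4, 12) tensor of 0/1 code bits
```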
Chen_A_Unified_Knowledge_Distillation_Framework_for_Deep_Directed_Graphical_Models_CVPR_2023
Abstract Knowledge distillation (KD) is a technique that transfers the knowledge from a large teacher network to a small student network. It has been widely applied to many different tasks, such as model compression and federated learning. However, existing KD methods fail to generalize to general deep directed graphical models (DGMs) with arbitrary layers of random variables. We refer by deep DGMs to DGMs whose conditional distributions are parameterized by deep neural networks. In this work, we propose a novel unified knowledge distillation framework for deep DGMs on various applications. Specifically, we leverage the reparameterization trick to hide the intermediate latent variables, resulting in a compact DGM. Then we develop a surrogate distillation loss to reduce error accumulation through multiple layers of random variables. Moreover, we present the connections between our method and some existing knowledge distillation approaches. The proposed framework is evaluated on four applications: data-free hierarchical variational autoencoder (VAE) compression, data-free variational recurrent neural network (VRNN) compression, data-free Helmholtz Machine (HM) compression, and VAE continual learning. The results show that our distillation method outperforms the baselines in data-free model compression tasks. We further demonstrate that our method significantly improves the performance of KD-based continual learning for data generation. Our source code is available at https://github.com/YizhuoChen99/KD4DGM-CVPR.
1. Introduction Knowledge distillation (KD) aims at transferring the knowledge of a large teacher model to a small student model, which tries to mimic the behavior of the teacher model to attain competitive or superior performance [13, 20]. The goal of this work is to develop a unified knowledge distillation (KD) framework for deep directed graphical models (DGMs). Applications of the proposed framework include: (i) data-free hierarchical variational autoencoder (VAE) compression [50], (ii) data-free variational recurrent neural network (VRNN) compression [8], (iii) data-free Helmholtz Machine (HM) compression [49], and (iv) KD-based continual learning.

Deep directed graphical models (DGMs) refer to DGMs whose conditional distributions are parameterized by deep neural networks (DNNs), in contrast to regular DGMs with tabular conditional probabilities. One good example is variational autoencoders (VAEs), whose posterior probability of latent variables is parameterized by DNNs. A general deep DGM may have a complex structure, consisting of an arbitrary number of input variables, target variables, and latent variables. Deep DGMs have been widely used in various applications, such as image generation [53], text generation [5], and video prediction [55].

This work is motivated by the growing popularity of recent over-parameterized deep DGMs with millions of parameters that improve accuracy in various tasks. However, these large models are very computationally expensive. As a result, it is not practical to deploy them on resource-constrained edge devices, such as mobile phones and IoT systems [33]. One possible solution to this problem is KD, which enables a smaller student model to approximate the performance of a large teacher. Recently, KD has been widely applied to many different tasks, such as model compression [20], continual learning [34, 59], and federated learning [30, 36]. To our knowledge, the existing KD methods, however, are only applicable to some specific DGMs, including generative adversarial networks (GANs) [2, 33], auto-regressive models in natural language processing (NLP) [35], and VQ-VAE [48]. They fail to generalize to general deep DGMs, especially to those with multiple latent variables or complex dependence structures, as illustrated in Fig. 1.

[Figure 1. Toy example of a DGM in four different forms. Diamonds are deterministic variables and circles are random variables. (a) Original form; (b) Auxiliary form; (c) Our semi-auxiliary form; (d) Compact semi-auxiliary form.]
[Figure 2. Toy example of accumulated error (KL divergence) between the teacher and student for local distillation and our method, as a function of the number of layers of latent variables. Experimental settings are presented in the last paragraph of Section 3.2.]

Generalizing knowledge distillation to deep DGMs poses two major challenges. First, distillation by marginalizing all latent variables is generally intractable (as explained in Appendix A). Second, distilling each layer locally and independently may suffer from error accumulation, as shown in Fig. 2: the accumulated error (i.e., KL divergence) between the teacher and student grows linearly for local distillation. To address these challenges, we propose a novel unified knowledge distillation framework for deep DGMs. Specifically, we first adopt the reparameterization trick [23, 24] to convert a DGM into a compact semi-auxiliary form. By semi-auxiliary form, we mean that the latent variables z in both the student and teacher models are converted into deterministic variables with auxiliary variables, while the input variables and target variables remain unchanged, as shown in Fig. 1(c). Note that, unlike the classical reparameterization used for VAE model training [25], ours can be applied to both continuous and discrete variables. Then a surrogate distillation loss is derived as a new objective of KD. To mitigate gradient vanishing, we further incorporate into our objective a latent distillation loss that penalizes the dissimilarity of latent variables between the teacher and student. We also present the connections between our approach and some existing KD methods for specific DGMs and show that our method is a proper generalization of these existing methods. We evaluate the performance of our distillation method on four different tasks: hierarchical VAE compression, VRNN compression, Helmholtz Machine compression, and KD-based continual learning with VAEs.
For model compression tasks, the student model distilled by our method in a data-free manner outperforms the one trained from scratch and the other baselines. In addition to model compression, we also illustrate that our method can better mitigate the catastrophic forgetting issue than generative replay approaches in continual learning. In summary, our contributions include: 1) a new unified KD framework for general deep DGMs based on the reparameterization trick, 2) a novel distillation objective that combines the latent distillation loss and the surrogate distillation loss to improve the performance of KD, and 3) evaluation results on multiple benchmark datasets showing that our approach can not only achieve high accuracy for deep DGM compression but also improve the performance of KD-based continual learning.
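The reparameterization step described above, which turns stochastic latent variables into deterministic functions of auxiliary noise so that teacher and student can be compared directly, can be illustrated with a toy Gaussian model. This is a hedged sketch under simple assumptions; the two-layer "DGM" and squared-error surrogate below are stand-ins for exposition, not the paper's exact model or loss.

```python
# A hedged sketch of the reparameterization idea used to hide latent variables:
# with shared auxiliary noise eps, a Gaussian latent z = mu + sigma * eps becomes
# a deterministic function of its parents, so teacher and student outputs can be
# compared directly without marginalizing z.
import torch
import torch.nn as nn

class ToyGaussianDGM(nn.Module):
    def __init__(self, dim: int = 16):
        super().__init__()
        self.prior = nn.Linear(dim, 2 * dim)    # predicts (mu, log_var) of the latent z
        self.decoder = nn.Linear(dim, dim)      # maps z to the observation mean

    def forward(self, x: torch.Tensor, eps: torch.Tensor) -> torch.Tensor:
        mu, log_var = self.prior(x).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * log_var) * eps  # reparameterized (semi-auxiliary) latent
        return self.decoder(z)

teacher, student = ToyGaussianDGM(), ToyGaussianDGM()
x = torch.randn(8, 16)
eps = torch.randn(8, 16)                         # auxiliary noise shared by both models
surrogate_loss = ((teacher(x, eps).detach() - student(x, eps)) ** 2).mean()
```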
Chen_OvarNet_Towards_Open-Vocabulary_Object_Attribute_Recognition_CVPR_2023
Abstract In this paper, we consider the problem of simultaneously detecting objects and inferring their visual attributes in an image, even for those with no manual annotations provided at the training stage, resembling an open-vocabulary scenario. To achieve this goal, we make the following contributions: (i) we start with a naïve two-stage approach for open-vocabulary object detection and attribute classification, termed CLIP-Attr. The candidate objects are first proposed with an offline RPN and later classified for semantic category and attributes; (ii) we combine all available datasets and train with a federated strategy to finetune the CLIP model, aligning the visual representation with attributes; additionally, we investigate the efficacy of leveraging freely available online image-caption pairs under weakly supervised learning; (iii) in pursuit of efficiency, we train a Faster-RCNN type model end-to-end with knowledge distillation, which performs class-agnostic object proposals and classification over semantic categories and attributes with classifiers generated from a text encoder; finally, (iv) we conduct extensive experiments on the VAW, MS-COCO, LSA, and OVAD datasets, and show that recognition of semantic category and attributes is complementary for visual scene understanding, i.e., jointly training object detection and attribute prediction largely outperforms existing approaches that treat the two tasks independently, demonstrating strong generalization ability to novel attributes and categories.
1. Introduction Understanding the visual scene in terms of objects has been the main driving force for development in computer vision [46]. For example, in object detection the goal is to localise objects in an image and assign one of the pre-defined semantic labels to them, such as 'car', 'person' or 'bus'. Despite the tremendous success made by the community, such a task definition has largely over-simplified our understanding of the visual world, as a visual object can often be characterised from many aspects other than its semantic category. For example, a bus can be 'yellow' or 'black', and a shirt can be 'striped' or 'unpatterned'; learning attributes can thus complement category-level recognition, acquiring a more comprehensive visual perception.

In the literature, numerous works have shown that understanding objects' attributes can greatly facilitate object recognition and detection, even with few or no examples of visual objects [5, 16, 21, 36, 45]. For example, Farhadi et al. proposed to shift the goal of object recognition from 'naming' to 'description', which allows naming familiar objects with attributes, but also saying something about unfamiliar objects ("hairy and four-legged", not just "unknown") [5]; Lampert et al. considered open-set object recognition, which aims to recognise objects by human-specified high-level descriptions, e.g., arbitrary semantic attributes like shape, color, or even geographic information, instead of training images [16]. However, the problems considered in these seminal works tend to be a simplification by today's standards; for example, attribute classification is often trained and evaluated on object-centric images under a closed-set scenario, i.e., assuming the bounding boxes/segmentation masks are given [11, 25, 31], or the object category is sometimes even known as a prior [22, 25].

In this paper, we consider the task of simultaneously detecting objects and classifying their attributes in an open-vocabulary scenario, i.e., the model is only trained on a set of base object categories and attributes, while it is required to generalise towards ones that are unseen at training time, as shown in Fig. 1. Generally speaking, we observe three major challenges. First, in existing foundation models, e.g., CLIP [27] and ALIGN [13], the representation learned from image-caption pairs tends to be biased towards the object category rather than attributes, which causes feature misalignment when used directly for attribute recognition. We experimentally validate this conjecture by showing a significant performance drop in attribute recognition compared to category classification. Second, there is no ideal training dataset with all three types of annotations: object bounding boxes, semantic categories, and attributes; as far as we know, only the COCO Attributes dataset [24] provides such a degree of annotation, but with a relatively limited vocabulary size (196 attributes, 29 categories). Third, training all three tasks under a unified framework is challenging and yet remains unexplored, i.e., simultaneously localising ('where') and classifying objects' semantic categories and attributes ('what') under the open-vocabulary scenario.
To address the aforementioned issues, we start with a naïve architecture, termed CLIP-Attr, which first proposes object candidates with an offline RPN [30] and then performs open-vocabulary object attribute recognition by comparing the similarity between the attribute word embedding and the visual embedding of the proposal. To better align the features of attribute words and proposals, we introduce learnable prompt vectors with parent attributes on the textual encoder side and finetune the original CLIP model on a large corpus of freely available image-caption datasets. To further improve model efficiency, we present OvarNet, a unified framework that performs detection and attribute recognition at once, which is trained by leveraging datasets from both object detection and attribute prediction, as well as by absorbing knowledge from CLIP-Attr to improve performance and robustness on unseen attributes. As a result, our proposed OvarNet, being the first scalable pipeline, can simultaneously localize objects and infer their categories and visual attributes in an open-vocabulary scenario. Experimental results demonstrate that, despite only employing weakly supervised image-caption pairs for distillation, OvarNet outperforms the previous state-of-the-art on the VAW [25], MS-COCO [18], LSA [26] and OVAD [3] datasets, exhibiting strong generalization ability on novel attributes and categories.
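The similarity comparison between proposal embeddings and attribute/category text embeddings described above can be sketched generically as follows. This is a minimal, hypothetical illustration of the open-vocabulary classification interface; the feature dimensions, temperature, and the choice of sigmoid scoring are assumptions for the example, not OvarNet's exact configuration.

```python
# A minimal sketch of open-vocabulary classification via text-embedding classifiers:
# region embeddings from a visual encoder are compared against text embeddings of
# category/attribute prompts, and the similarities act as open-vocabulary logits.
import torch
import torch.nn.functional as F

def open_vocab_logits(region_feats: torch.Tensor,
                      text_feats: torch.Tensor,
                      temperature: float = 0.01) -> torch.Tensor:
    """region_feats: (N, D) proposal embeddings; text_feats: (C, D) prompt embeddings."""
    region_feats = F.normalize(region_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    return region_feats @ text_feats.t() / temperature   # (N, C) similarity logits

# Attributes are typically multi-label, so a sigmoid (rather than softmax) is applied here:
attr_scores = torch.sigmoid(open_vocab_logits(torch.randn(5, 512), torch.randn(20, 512)))
```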
Geng_Human_Pose_As_Compositional_Tokens_CVPR_2023
Abstract Human pose is typically represented by a coordinate vector of body joints or their heatmap embeddings. While easy for data processing, unrealistic pose estimates are admitted due to the lack of dependency modeling between the body joints. In this paper, we present a structured representation, named Pose as Compositional Tokens (PCT), to explore the joint dependency. It represents a pose by M discrete tokens, each characterizing a sub-structure with several interdependent joints (see Figure 1). The compositional design enables it to achieve a small reconstruction error at a low cost. Then we cast pose estimation as a classification task. In particular, we learn a classifier to predict the categories of the M tokens from an image. A pre-learned decoder network is used to recover the pose from the tokens without further post-processing. We show that it achieves better than or comparable pose estimation results to existing methods in general scenarios, yet continues to work well when occlusion occurs, which is ubiquitous in practice. The code and models are publicly available at https://github.com/Gengzigang/PCT.
1. Introduction Human pose estimation is a fundamental task in computer vision which aims to estimate the positions of body joints from images. Recent progress has focused on network structures [74, 87, 96], training methods [31, 68, 93], and fusion strategies [14, 15, 61, 67, 84, 102], which have notably advanced the accuracy on public datasets. However, it remains an open problem in challenging scenarios, e.g., in the presence of occlusion, which hinders its application in practice.

Current 2D/3D pose estimators usually represent a pose by a coordinate vector [23, 34, 79, 110] or its heatmap embeddings [40, 55, 60, 74, 75, 80, 87, 90]. In both representations, the joints are treated independently, ignoring the fact that the body joints can serve as mutual context to each other.

[Figure 1. Our approach represents a pose by M discrete tokens which are indices to the codebook entries (top). Each token is learned to represent a sub-structure. In each row, we show that if we change the state of one token to different values, it consistently changes the same sub-structure highlighted by orange. The black poses are before changing (bottom).]

As a result, these representations may produce unrealistic estimates when occlusion occurs, as shown in Figure 2 (top). However, it is interesting to note that humans can easily predict intact poses from only the visible joints and the visual features. This is probably because people are able to use context to aid recognition, as evidenced by some psychology experiments [5, 58]. Some works attempt to introduce a tree or graph structure [2, 21, 65, 85] to model joint dependency. However, the hand-designed rules usually make unrealistic assumptions on the relationships, making them incapable of representing complex patterns.

In this work, we hope to learn the dependency between the joints earlier, in the representation stage, without any assumptions. Our initial idea is to learn a set of prototype poses that are realistic and to represent every pose by the nearest prototype. While this can guarantee that all poses are realistic, it requires a large number of prototypes to reduce the quantization error to a reasonable level, which is computationally infeasible.

[Figure 2. Heatmap-based method (top) vs. our PCT method (bottom) in occluded scenes. PCT predicts reasonable poses even under severe occlusion. The images are from COCO val2017.]

Instead, we propose a discrete representation, named pose as compositional tokens (PCT). Figure 3 shows the two stages of the representation. In Stage I, we learn a compositional encoder to transform a pose into M token features, each encoding a sub-structure of the pose; see Figure 1 for some examples. Then the tokens are quantized by a shared codebook, so a pose is simply represented by M discrete indices. The space represented by the codebook is sufficiently large to represent all poses accurately. We jointly learn the encoder, the codebook, and the decoder by minimizing a reconstruction error. In Stage II, we cast human pose estimation as a classification task: given an image, we predict the categories of the M tokens, from which the pose is recovered by the decoder network.
The PCT representation has several advantages. First, the dependency between the joints is modeled by the tokens, which helps to reduce the chance of producing unrealistic pose estimates. In particular, we see evidence that it has the potential to obtain reasonable estimates even when a large portion of the body is occluded; see Figure 2 (bottom) for some examples. Second, it does not require any expensive post-processing modules such as UDP [29], which is required by the heatmap representation to reduce quantization errors. Third, it provides a unified representation for 2D and 3D poses. In addition, the discrete representation potentially facilitates interactions with other discrete modalities such as text and speech, but this is not the focus of this work.

We extensively evaluate our approach on 2D human pose estimation on five benchmark datasets. It gets better or comparable accuracy to the state-of-the-art methods on all of them. More importantly, it achieves significantly better results when evaluated only on the occluded joints, validating the advantages of its dependency modeling capability. We also present results on 3D pose estimation on the H36M dataset, on which it achieves accuracy comparable to the state-of-the-art methods using a simple architecture. The results demonstrate that it has wide applicability.
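The Stage I quantization step described above, where each token feature is snapped to its nearest codebook entry so a pose becomes M discrete indices, can be sketched in a VQ-style fashion. The dimensions and the straight-through gradient trick below follow common vector-quantization practice and are assumptions for the illustration, not the paper's exact implementation.

```python
# A hedged sketch of compositional token quantization: each of the M token features
# is replaced by its nearest codebook entry; the resulting indices are the discrete
# pose representation that the Stage II classifier predicts.
import torch

def quantize_tokens(token_feats: torch.Tensor, codebook: torch.Tensor):
    """token_feats: (M, D) encoder outputs; codebook: (V, D) learnable entries."""
    dists = torch.cdist(token_feats, codebook)           # (M, V) pairwise L2 distances
    indices = dists.argmin(dim=-1)                        # M discrete token ids
    quantized = codebook[indices]                         # (M, D) quantized features
    # Straight-through estimator so gradients still flow back to the encoder.
    quantized = token_feats + (quantized - token_feats).detach()
    return indices, quantized

indices, q = quantize_tokens(torch.randn(34, 256), torch.randn(1024, 256))
```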
Jiang_DartBlur_Privacy_Preservation_With_Detection_Artifact_Suppression_CVPR_2023
Abstract Nowadays, privacy issues have become a top priority when training AI algorithms. Machine learning algorithms are expected to benefit our daily life, while personal information must also be carefully protected from exposure. Facial information is particularly sensitive in this regard. Multiple datasets containing facial information have been taken offline, and the community is actively seeking solutions to remedy the privacy issues. Existing methods for privacy preservation can be divided into blur-based and face replacement-based methods. Owing to the advantages of review convenience and good accessibility, blur-based methods have become a dominant choice in practice. However, blur-based methods inevitably introduce training artifacts harmful to the performance of downstream tasks. In this paper, we propose a novel De-artifact Blurring (DartBlur) privacy-preserving method, which capitalizes on a DNN architecture to generate blurred faces. DartBlur can effectively hide facial privacy information while detection artifacts are simultaneously suppressed. We have designed four training objectives that particularly aim to improve review convenience and maximize detection artifact suppression. We associate the algorithm with an adversarial training strategy and a second-order optimization pipeline. Experimental results demonstrate that DartBlur outperforms the existing face-replacement method from both perspectives of review convenience and accessibility, and also shows an exclusive advantage in suppressing training artifacts compared to traditional blur-based methods. Our implementation is available at https://github.com/JaNg2333/DartBlur.
1. Introduction Computer vision (CV) technology has been influencing our daily life in many ways. However, successful CV models often have to rely on large-scale datasets collected from real-world scenes, which raises concerning privacy issues.

[Figure 1. Example faces and anonymized versions produced by existing methods (Block, Gaussian blur, Pixelation, DeepPrivacy, CIAGAN) and DartBlur. As presented, blur-based methods facilitate review convenience, and face replacement-based methods may fail when the keypoint detector does not work as expected. Best viewed in color.]

The CV community has started to take privacy issues seriously. Existing privacy-preserving methods can be divided into blur-based methods (e.g., Block, Gaussian blur, Pixelation) and face replacement-based methods (e.g., CIAGAN [24], DeepPrivacy [13], DeIdGAN [18]). Blur-based methods are simple to implement but inevitably introduce additional noise and artifacts into the actual CV task [37]. For example, Gaussian blur patterns are easy to recognize; therefore, face detectors trained on Gaussian-blurred datasets will take the shortcut of identifying the blur patterns instead of the actual faces. In contrast, face replacement-based methods attempt to generate synthesized faces to replace the original faces through generative models such as generative adversarial networks (GANs) [7]. Such replacements tend to preserve the critical human face features that can effectively trigger the face detector to work, whereas discriminative individual identification characteristics are mostly erased.

[Figure 2. Illustration of detection artifact suppression. We encourage the blur model to maintain operation fidelity, post-hoc fidelity, and cycle fidelity between detectors trained or evaluated on clean and blurred data.]

Despite the advantages of face replacement-based methods, blur-based methods are usually still preferred in practice [1, 6, 9, 11, 26, 35, 37, 39]. In fact, blur-based methods usually make it easier to determine whether human identification has been removed, while face replacement-based methods require careful face-to-face comparisons during an ethical review. Face replacement-based methods also hinge on the quality of landmark detection [13, 24] or semantic segmentation [18] techniques, and the generative training itself, which often requires additional face data, would also raise potential privacy issues.

The above concerns motivate us to rethink and design a novel blur-based privacy protection paradigm with the following goals: (1) Accessibility. The method should work well without relying on the quality of other pretrained models, such as landmark detection. (2) Review convenience. One can quickly determine whether or not identifiable human information has been successfully concealed during an ethical review. (3) Detection artifact suppression. The blur function should avoid introducing training artifacts to the detector; specifically, we desire the following properties for detection artifact suppression, as illustrated in Figure 2.
• Operation Fidelity. Open-source models trained on clean data should produce similar results on clean and blurred data, for model utility flexibility.
• Post-hoc Fidelity. Recognition performance should be maximally invariant to the images' features before and after blurring. In other words, the distance between hard cases (in terms of recognition) and easy cases on clean data should be maintained in the feature space after blurring.
• Cycle Fidelity. Models trained on blurred data should produce good recognition results on clean testing data.

Given the above considerations, we propose a novel privacy-preserving model called De-artifact Blurring (DartBlur). DartBlur is a learnable U-Net model [28] that takes Gaussian-blurred images and face bounding boxes as input and outputs detection-artifact-suppressed blurred images, without relying on other pretrained models such as landmark detection. We propose four training objectives, each specifically addressing one of the concerns above, and the implementation resorts to an adversarial training strategy with a second-order optimization pipeline. Example images anonymized by existing methods and DartBlur are presented in Figure 1. The main contributions of this paper can be summarized as follows.

• We propose a new blur-based privacy preservation model, DartBlur, which simultaneously takes into account the actual accessibility of the model, review convenience, and detection artifact suppression.
• The DartBlur model is associated with four novel training objectives that each directly address the desired properties. We also design an adversarial training strategy with second-order optimization for model training.
• We demonstrate that DartBlur can effectively protect personal privacy while suppressing detection artifacts on various benchmarks.
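The input preparation mentioned above, feeding Gaussian-blurred face regions together with their bounding boxes to the learnable U-Net, can be sketched as follows. This only illustrates the assumed pre-processing, not the learned de-artifact model or its adversarial training; the kernel size, sigma, and helper name are illustrative choices.

```python
# A minimal sketch of the assumed input preparation for DartBlur-style training:
# face regions given by bounding boxes are Gaussian-blurred before being passed,
# together with the boxes, to the blur-refinement network.
import torch
from torchvision.transforms.functional import gaussian_blur

def blur_faces(image: torch.Tensor, boxes: list, kernel_size: int = 31, sigma: float = 9.0):
    """image: (C, H, W) float tensor; boxes: list of (x1, y1, x2, y2) integer pixel coords."""
    out = image.clone()
    for x1, y1, x2, y2 in boxes:
        # Blur only the face crop; the rest of the image is left untouched.
        out[:, y1:y2, x1:x2] = gaussian_blur(image[:, y1:y2, x1:x2],
                                             kernel_size=[kernel_size, kernel_size],
                                             sigma=[sigma, sigma])
    return out

blurred = blur_faces(torch.rand(3, 480, 640), [(100, 120, 180, 200)])
```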
Jiang_Self-Supervised_Pre-Training_With_Masked_Shape_Prediction_for_3D_Scene_Understanding_CVPR_2023
Abstract Masked signal modeling has greatly advanced self-supervised pre-training for language and 2D images. However, it is still not fully explored in 3D scene understanding. Thus, this paper introduces Masked Shape Prediction (MSP), a new framework to conduct masked signal modeling in 3D scenes. MSP uses the essential 3D semantic cue, i.e., geometric shape, as the prediction target for masked points. The context-enhanced shape target consisting of explicit shape context and implicit deep shape feature is proposed to facilitate exploiting contextual cues in shape prediction. Meanwhile, the pre-training architecture in MSP is carefully designed to alleviate the masked shape leakage from point coordinates. Experiments on multiple 3D understanding tasks on both indoor and outdoor datasets demonstrate the effectiveness of MSP in learning good feature representations to consistently boost downstream performance.
1. Introduction Self-supervised pre-training has witnessed considerable progress in natural language processing (NLP) [4, 10, 42] and 2D computer vision [2, 15, 17, 18]; the main idea is to define a pretext task that leverages unlabeled data to learn meaningful representations. With the development of transformers [11, 30, 59], masked signal modeling (MSM) has proved to be an effective pretext task, attaining better results than other tasks like contrastive learning [6, 18]. An MSM architecture first partially masks out the input and then reconstructs the masked part given the remaining content, forcing the network to learn semantic knowledge to complete the missing part.

Compared to 2D images, labeling 3D real-scene data is more labor-intensive. Therefore, self-supervised pre-training is important in 3D scene understanding for its ability to boost performance with limited labeled data. Previous 3D scene-level pre-training methods mostly follow the contrastive pipeline [20, 21, 43, 64]. Though effective, MSM is less explored at the 3D scene level. Some recent methods [36, 74] also explore MSM with point clouds but focus on single-object-level understanding. In contrast, we investigate MSM for more practical scene-level understanding that contains complicated contextual environments, and we propose a Masked Shape Prediction (MSP) framework to conduct pre-training on point cloud scenes.

There are several key problems when performing masked signal modeling in 3D scenes. The first is the design of the reconstruction target. In 2D images, pixel colors constitute the semantic contents, making appearance signals [17, 62] good choices as targets. In 3D, the most essential semantic clue is geometric shape, which motivates us to explore shape information in target design. In 3D scene-level understanding with complex object distribution, broad contextual information is essential to achieve outstanding performance. Therefore, to promote the network to exploit contextual cues in shape prediction, we propose the context-enhanced shape target, which includes two components: shape context and deep shape feature. Shape context explicitly describes the 3D shape by discretizing the local space into multiple bins, which is robust to uneven point distributions. The deep shape feature is extracted from point clouds with complete shapes by a deep network. As a learned shape descriptor, the deep shape feature is able to adaptively integrate contextual information over a larger range, thanks to the large receptive field of the deep network. By combining shape context and deep shape feature as our context-enhanced shape target, the network is promoted to focus not only on explicit shape patterns but also on contextual object relations in a larger scope.

Using geometric shapes as the reconstruction target, however, raises another problem. Shape information can be inferred from the point coordinates, yet masked signal modeling requires the coordinates of masked points to specify the target positions for reconstruction, which may reveal the masked shape and thus create a shortcut for network learning. In this paper, we discuss several MSP network designs to prevent the masked shape from being revealed by the masked point coordinates. The core idea is to either avoid information interactions between masked points
or restrict the interactions to sparsely sampled keypoints.

We follow [20, 64] to perform unsupervised pre-training on the ScanNet v2 [9] indoor scene dataset, and then evaluate it via supervised fine-tuning on different downstream tasks. Our MSP extracts representative 3D features that are beneficial in indoor scene understanding tasks on multiple datasets [1, 9, 54], achieving excellent performance in both segmentation and detection and showing great ability in data-efficient learning. We also evaluate its transferability to outdoor scenes. Our core technical contributions are listed below:

• We propose a self-supervised pre-training method for 3D scene understanding, namely Masked Shape Prediction (MSP), which consistently boosts downstream performance.
• We present the context-enhanced shape target, combining the strengths of an explicit shape context descriptor and an implicit deep shape feature.
• We explore different MSP network architecture designs to promote feature learning and mitigate the masked shape leakage problem.
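The explicit "shape context" component of the target described above, which discretizes the local space around a point into bins, can be illustrated with a simple histogram over radial and azimuthal bins. The bin counts, radius, and 2D azimuthal binning below are arbitrary choices for the sketch, not the paper's exact descriptor.

```python
# A hedged illustration of a shape-context-style target: offsets of neighboring
# points around a center are histogrammed into radial/azimuthal bins, giving a
# descriptor that is robust to uneven point density.
import math
import torch

def shape_context(center: torch.Tensor, neighbors: torch.Tensor,
                  n_radial: int = 4, n_azimuth: int = 8, max_radius: float = 0.5):
    """center: (3,), neighbors: (K, 3); returns a flattened (n_radial * n_azimuth,) histogram."""
    offsets = neighbors - center
    radius = offsets.norm(dim=-1).clamp(max=max_radius - 1e-6)
    azimuth = torch.atan2(offsets[:, 1], offsets[:, 0]) + math.pi     # range [0, 2*pi]
    r_bin = (radius / max_radius * n_radial).long()
    a_bin = (azimuth / (2 * math.pi) * n_azimuth).long().clamp(max=n_azimuth - 1)
    hist = torch.zeros(n_radial, n_azimuth)
    hist.index_put_((r_bin, a_bin), torch.ones_like(radius), accumulate=True)
    return (hist / max(neighbors.shape[0], 1)).flatten()              # normalized counts

desc = shape_context(torch.zeros(3), torch.randn(64, 3) * 0.2)
```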
Gao_DKT_Diverse_Knowledge_Transfer_Transformer_for_Class_Incremental_Learning_CVPR_2023
Abstract In the context of class-incremental learning, deep neural networks are prone to catastrophic forgetting, where the accuracy of old classes declines substantially as new knowledge is learned. While recent studies have sought to address this issue, most approaches suffer from either the stability-plasticity dilemma or excessive computational and parameter requirements. To tackle these challenges, we propose a novel framework, the Diverse Knowledge Transfer Transformer (DKT), which incorporates two knowledge transfer mechanisms that use attention to transfer both task-specific and task-general knowledge to the current task, along with a duplex classifier to address the stability-plasticity dilemma. Additionally, we design a loss function that clusters similar categories and discriminates between old and new tasks in the feature space. The proposed method requires only a small number of extra parameters, which is negligible in comparison to the increasing number of tasks. We perform extensive experiments on the CIFAR100, ImageNet100, and ImageNet1000 datasets, which demonstrate that our method outperforms other competitive methods and achieves state-of-the-art performance. Our source code is available at https://github.com/MIV-XJTU/DKT.
1. Introduction Deep neural networks have demonstrated notable success in the domain of class-fixed classification problems, wherein the object classes are predetermined and constant throughout the training and testing phases [12, 13]. Nevertheless, in practical scenarios, these models are frequently employed in an ever-changing and dynamic environment, necessitating capabilities that enable them to learn and identify new classes that arise continuously. This challenge, commonly known as the Class-Incremental Learning (CIL) problem [4, 27], is of utmost significance.

[Figure 1. Accuracy and parameter-count comparisons of different methods on CIFAR100 10-step. We report the final parameter number and accuracy. The height of the green pillars represents the number of final parameters; the polyline represents the average accuracy of different methods. DKT surpasses state-of-the-art methods with far fewer parameters.]

Numerous recent studies [9, 10, 15, 27, 36, 41, 44, 45] have endeavored to address the challenges associated with Class-Incremental Learning (CIL). Among these methods, some [15, 27, 41, 45] have adopted the concept of knowledge distillation [14], which entails transferring prior knowledge from a teacher model to a student model, to retain the previous knowledge encoded in the output logits of the network. Meanwhile, other methods [10, 36, 44] have employed dynamically expandable networks to overcome CIL issues. These techniques dynamically augment the network architecture, such as the feature extractors, by utilizing supplementary parameters and memory. Despite recent advancements in addressing the CIL problem, several challenges persist. Firstly, knowledge distillation techniques, as demonstrated in the literature [14], exhibit significant feature degradation [44], leading to reduced performance when transferring knowledge from prior to new tasks. Secondly, networks must be stable enough to preserve existing knowledge [23, 26] while simultaneously exhibiting plasticity to acquire new information, creating a stability-plasticity dilemma [3]. Lastly, dynamically expandable networks demand significant additional parameters, memory storage, and computational resources, hindering their practical application in real-world scenarios.

We introduce a new approach, named the Diverse Knowledge Transfer Transformer (DKT), to address the aforementioned challenges. DKT comprises two innovative attention blocks and a duplex classifier. Firstly, to mitigate feature degradation and catastrophic forgetting, we propose two novel attention blocks: the General Knowledge Transfer Attention Block (GKAB) and the Specific Knowledge Transfer Attention Block (SKAB). These attention blocks transfer previous knowledge by utilizing a unified task-general token and a set of task-specific tokens from a token pool. For each task, the token pool initializes a new task-specific token to accumulate task-specific knowledge and updates the unified task-general token storing the general knowledge of previous tasks. Unlike other dynamically expandable networks that add a feature extractor network per task, we initialize a task-specific token for each task, which is a 1×384 trainable vector.
This design significantly reduces the number of extra parameters and the computational requirements. We illustrate the relationship between parameters and performance in Figure 1. Notably, DKT achieves state-of-the-art performance with only 1/10 of the parameters of the competitive DER w/o P method [44] and significantly outperforms other representative methods. Secondly, to address the stability-plasticity dilemma, we propose a duplex classifier comprising a stability classifier, which maintains the model's stability on old categories, and a plasticity classifier, which learns the knowledge of new categories. Additionally, we propose a cluster-separation loss that pulls features belonging to the same categories together and pushes features of old and new tasks apart. This encourages the model to learn diverse task-specific knowledge in different tasks.

We conduct extensive experiments on three widely used image classification benchmarks, namely CIFAR100, ImageNet100, and ImageNet1000, to showcase the effectiveness of our proposed method. We compare our DKT with other state-of-the-art methods, including Dytox [10] and DER [44]. Dytox is the first attempt to utilize the transformer architecture for CIL, while DER uses extra parameters to achieve state-of-the-art performance. Our proposed approach outperforms Dytox [10] and sets a new state-of-the-art performance surpassing DER [44]. Our ablation study confirms the efficacy of our proposed method. In summary, our key contributions include:

• We propose a novel framework, DKT, that comprises the GKAB and SKAB attention blocks to facilitate diverse knowledge transfer and mitigate catastrophic forgetting in continual learning scenarios.
• We introduce a duplex classifier that enables a model to maintain stability in recognizing old categories while retaining plasticity in learning new categories.
• We develop a cluster-separation loss that clusters features belonging to the same categories and discriminates features between old and new tasks, encouraging the model to learn diverse task-specific knowledge.
• We conduct extensive experiments on three benchmarks, and our approach achieves new state-of-the-art performance with fewer parameters on the CIL benchmarks.
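The cluster-separation idea described above, pulling same-category features together while pushing features of old and new tasks apart, can be sketched as follows. This is a rough, hypothetical rendering for illustration; the margin, hinge formulation, and averaging are assumptions, not DKT's exact loss.

```python
# A hedged sketch of a cluster-separation-style objective: features of the same class
# are pulled toward their class mean, while features from old and new tasks are pushed
# apart up to a margin.
import torch
import torch.nn.functional as F

def cluster_separation_loss(feats, labels, task_ids, margin: float = 1.0):
    """feats: (N, D); labels: (N,) class ids; task_ids: (N,) 0 = old task, 1 = new task."""
    pull = 0.0
    for c in labels.unique():
        group = feats[labels == c]
        pull = pull + ((group - group.mean(dim=0)) ** 2).sum(dim=-1).mean()
    old, new = feats[task_ids == 0], feats[task_ids == 1]
    if len(old) and len(new):
        dists = torch.cdist(old, new)              # pairwise old-new feature distances
        push = F.relu(margin - dists).mean()       # hinge: separate old and new features
    else:
        push = feats.new_zeros(())
    return pull / len(labels.unique()) + push

loss = cluster_separation_loss(torch.randn(16, 64), torch.randint(0, 4, (16,)),
                               torch.randint(0, 2, (16,)))
```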
Attaiki_Generalizable_Local_Feature_Pre-Training_for_Deformable_Shape_Analysis_CVPR_2023
Abstract Transfer learning is fundamental for addressing problems in settings with little training data. While several transfer learning approaches have been proposed in 3D, unfortunately, these solutions typically operate on an entire 3D object or even scene level and thus, as we show, fail to generalize to new classes, such as deformable organic shapes. In addition, there is currently a lack of understanding of what makes pre-trained features transferable across significantly different 3D shape categories. In this paper, we make a step toward addressing these challenges. First, we analyze the link between feature locality and transferability in tasks involving deformable 3D objects, while also comparing different backbones and losses for local feature pre-training. We observe that with proper training, learned features can be useful in such tasks, but, crucially, only with an appropriate choice of the receptive field size. We then propose a differentiable method for optimizing the receptive field within 3D transfer learning. Jointly, this leads to the first learnable features that can successfully generalize to unseen classes of 3D shapes such as humans and animals. Our extensive experiments show that this approach leads to state-of-the-art results on several downstream tasks such as segmentation, shape correspondence, and classification. Our code is available at https://github.com/pvnieo/vader.
1. Introduction Extracting informative representations from 3D geometry is a central task in Computer Vision, Computer Graphics, and related fields. Classical approaches have relied on hand-crafted features derived from basic geometric principles [8, 10, 53, 82, 94]. More recently, the focus has shifted towards data-driven approaches that learn features directly from 3D data [16, 19, 45] in a task-specific manner. In addition to methods that learn features from scratch for each application, several recent works have also advocated for general-purpose representation learning on geometric data [49, 97, 103]. Inspired by the success of transfer learning in other domains [109], these methods aim to learn informative representations of 3D data, which can then be exploited in data-limited downstream tasks.

[Figure 1. We present VADER, a novel feature pre-training technique aimed at deformable shapes. By pre-training local feature extractors on 3D scenes for rigid alignment, our approach enables transfer learning to downstream deformable shape analysis tasks, such as shape matching and semantic segmentation.]

Despite this progress, state-of-the-art architectures in deformable shape analysis still either rely on classical hand-crafted features as input signals to their learning pipelines [67, 78, 83, 93], or are trained from scratch for each task [29, 41, 64], thus requiring significant amounts of labeled data. Unfortunately, as we demonstrate in our work, existing 3D representation learning approaches fail to provide a useful signal in tasks that involve highly deformable shapes, such as shape correspondence or segmentation. This result is perhaps expected since existing approaches have primarily focused on transfer learning across man-made 3D objects or scenes [102], and are typically restricted to settings with significant domain overlap between training and test data. Furthermore, there is currently a lack of understanding of what makes pre-trained features transferable, especially across significantly different shape classes.

In this work, we aim to investigate the transferability of geometric features in order to develop representation learning approaches that are useful in downstream deformable shape analysis tasks, such as non-rigid shape matching and semantic segmentation (see Fig. 1). Taking inspiration from recent studies that emphasize the importance of low- and mid-level features in enabling 2D transfer learning [75, 108], we explore the impact of feature locality on downstream task accuracy across significantly different 3D shape categories. Our study shows that, with a carefully chosen architecture, successful general-purpose representation learning for deformable 3D shape analysis is possible.
[Figure 2. Method overview. We propose generalizable local feature pre-training for deformable shape analysis: a local feature extractor F_{s,Θ}, with a learnable receptive field size s and network parameters Θ, is pre-trained on a pretext task of matching local features for 3D alignment; a differentiable procedure then optimizes the receptive field size s to transfer F_{s,Θ} to downstream tasks such as segmentation and matching.]
We first pre-train a local feature extractor F_{s,Θ}, which has a learnable receptive field size s and network parameters Θ, on a pretext task of matching local features for 3D alignment. We then propose a differentiable method for optimizing the receptive field size s to transfer F_{s,Θ} to downstream tasks. For illustration purposes, we use a molecular surface segmentation task as an example on the right. tation learning for deformable 3D shape analysis is possible. We also find that the receptive field (or local support) size plays a crucial role in the transferability of features and needs to be adapted between training and test data. To address this, we propose a receptive field optimization strategy, which, combined with a specific pre-training approach, leads to state-of-the-art results on a wide range of downstream tasks (a toy sketch of a differentiable receptive-field parameter is given below). An overview of our proposed method can be found in Fig. 2. To summarize, our main contributions are as follows: 1. We investigate the link between the locality of geometric (3D) features and their transferability in challenging deformable shape tasks.
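As an illustration of how a receptive field size can be exposed to gradient-based optimization, the toy sketch below weights a point's neighbors by a Gaussian of their distance whose learnable width plays the role of s. This is only a hedged approximation of the idea described above, not the authors' method; the module name, layer sizes and the Gaussian weighting itself are assumptions.

# Hedged sketch: a soft, differentiable notion of "receptive field size".
# Neighbors are weighted by a Gaussian of their distance to the center point,
# with a learnable width `sigma` standing in for the receptive field size s.
import torch
import torch.nn as nn

class SoftReceptiveField(nn.Module):
    def __init__(self, feat_dim: int, init_sigma: float = 0.1):
        super().__init__()
        # Parameterize sigma in log space so it stays positive during optimization.
        self.log_sigma = nn.Parameter(torch.tensor(float(init_sigma)).log())
        self.mlp = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                                 nn.Linear(feat_dim, feat_dim))

    def forward(self, xyz: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) point coordinates, feats: (N, C) per-point features.
        sigma = self.log_sigma.exp()
        d2 = torch.cdist(xyz, xyz) ** 2                     # (N, N) squared distances
        w = torch.softmax(-d2 / (2 * sigma ** 2), dim=-1)   # soft neighborhoods
        return w @ self.mlp(feats)                          # distance-weighted aggregation

# Because sigma enters the forward pass smoothly, a downstream task loss can
# back-propagate into it, i.e., the receptive field size itself is optimized.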
Athar_TarViS_A_Unified_Approach_for_Target-Based_Video_Segmentation_CVPR_2023
Abstract The general domain of video segmentation is currently fragmented into different tasks spanning multiple bench-marks. Despite rapid progress in the state-of-the-art, cur-rent methods are overwhelmingly task-specific and cannot conceptually generalize to other tasks. Inspired by recent approaches with multi-task capability, we propose TarViS: a novel, unified network architecture that can be applied to any task that requires segmenting a set of arbitrarily de-fined ‘targets’ in video. Our approach is flexible with re-spect to how tasks define these targets, since it models the latter as abstract ‘queries’ which are then used to predict pixel-precise target masks. A single TarViS model can be trained jointly on a collection of datasets spanning differ-ent tasks, and can hot-swap between tasks during infer-ence without any task-specific retraining. To demonstrate its effectiveness, we apply TarViS to four different tasks, namely Video Instance Segmentation (VIS), Video Panoptic Segmentation (VPS), Video Object Segmentation (VOS) and Point Exemplar-guided Tracking (PET). Our unified, jointly trained model achieves state-of-the-art performance on 5/7 benchmarks spanning these four tasks, and competitive per-formance on the remaining two. Code and model weights are available at: https://github.com/Ali2500/TarViS
1. Introduction The ability to understand video scenes has been a long-standing goal of computer vision research because of wide-ranging applications in intelligent vehicles and robots. Early approaches tackled simpler tasks involving contour-based [33, 39] and box-level tracking [21, 25, 40, 52], background subtraction [20, 61], and motion segmentation [8, 49]. The deep learning boom then revolutionized the landscape by enabling methods to perform pixel-precise segmentation on challenging, real-world videos. In the past few years, a number of benchmarks have emerged, which evaluate how well methods can perform video segmentation according to various task formulations. Over time, these tasks/benchmarks have ballooned into separate research sub-communities. Figure 1. Predicted results from a jointly trained TarViS model for four different video segmentation tasks. Although existing methods are rapidly improving the state-of-the-art for these benchmarks, each of them typically tackles only one narrowly-defined task, and generalizing them is non-trivial since the task definition is baked into the core approach. We argue that this fragmentation is unnecessary because video target segmentation tasks all require the same high-level capability, namely that of identifying, localizing and tracking rich semantic concepts. Meanwhile, recent progress on Transformer networks has enabled the wider AI research community to move towards unified, multi-task architectures [1, 30, 31, 38, 58], because the attention operation [62] is well-suited for processing feature sets with arbitrary structure and data modality. These developments give us the opportunity to unify the fractured landscape of target-based video segmentation. In this paper, we propose TarViS: a novel architecture which enables a single, unified model to be jointly trained for multiple video segmentation tasks. During inference, the same model can perform different tasks at runtime by specifying the segmentation target. The core idea is that TarViS tackles the generic task of segmenting a set of arbitrary targets in video (defined as semantic classes or as specific objects). These targets are encoded as queries which, together with the video features, are input to a Transformer-based model. The model iteratively refines these queries and produces a pixel-precise mask for each target entity. This formulation conceptually fuses all video segmentation tasks [3, 54, 66, 72] which fall under the umbrella of the above-mentioned generic task, because they differ only in how the targets are defined. During both training and inference, TarViS can hot-swap between tasks at run-time by providing the desired target query set. To demonstrate our generalization capability, we tackle four different tasks: (1) Video Instance Segmentation (VIS) [54, 72], (2) Video Panoptic Segmentation (VPS) [35], (3) Video Object Segmentation (VOS) [53], and (4) Point Exemplar-guided Tracking (PET) [3]. For VIS, the segmentation targets are all objects in the video belonging to a predefined set of classes. The target set for VPS includes that for VIS, and additionally, a set of non-instantiable stuff semantic classes.
For VOS, the targets are a specific set of objects for which the first-frame ground-truth mask is provided. PET is a more constrained version of VOS which only provides the location of a single point inside the object, rather than the full object mask. Existing methods for these tasks lack generalization capability because task-specific assumptions are typically baked into the approach (see Sec. 2 and 3 for details). In contrast, TarViS can tackle all four tasks with a unified model because we encode the task-specific targets as a set of queries, thus decoupling the network architecture from the task definition. Moreover, our approach can theoreti-cally generalize further, e.g., one could potentially define the target set as all objects described by a given text prompt, though this is beyond the scope of this paper. To summarize, our contributions are as follows: we pro-pose TarViS, a novel architecture that can perform any task requiring segmentation of a set of targets from video. For the first time, we are able to jointly train and infer a single model on a collection of datasets spanning the four afore-mentioned tasks (VIS, VPS, VOS, PET). Our experimental results show that TarViS performs competitively for VOS, and achieves state-of-the-art results for VIS, VPS and PET.
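A minimal sketch of the "targets as queries" idea described above: task-defined target queries are refined against video features by a shared Transformer decoder and then dotted with those features to produce one mask per target. This is an illustrative approximation, not the TarViS implementation; the module names, dimensions and mask head are assumptions.

import torch
import torch.nn as nn

class TargetDecoder(nn.Module):
    def __init__(self, dim: int = 256, num_layers: int = 6):
        super().__init__()
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)

    def forward(self, target_queries: torch.Tensor, video_feats: torch.Tensor):
        # target_queries: (B, Q, D) task-defined targets (semantic classes or objects)
        # video_feats:    (B, THW, D) flattened spatio-temporal features
        q = self.decoder(target_queries, video_feats)
        masks = torch.einsum("bqd,bnd->bqn", q, video_feats)  # one mask logit map per target
        return masks

# "Hot-swapping" tasks then amounts to swapping the query set, e.g. learned class
# embeddings for VIS/VPS versus queries encoded from first-frame masks (VOS) or
# point exemplars (PET), while the decoder weights stay unchanged.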
Choi_Progressive_Random_Convolutions_for_Single_Domain_Generalization_CVPR_2023
Abstract Single domain generalization aims to train a generaliz-able model with only one source domain to perform well on arbitrary unseen target domains. Image augmentation based on Random Convolutions (RandConv), consisting of one convolution layer randomly initialized for each mini-batch, enables the model to learn generalizable visual represen-tations by distorting local textures despite its simple and lightweight structure. However, RandConv has structural limitations in that the generated image easily loses semantics as the kernel size increases, and lacks the inherent diversity of a single convolution operation. To solve the problem, we propose a Progressive Random Convolution (Pro-RandConv) method that recursively stacks random convolution layers with a small kernel size instead of increasing the kernel size. This progressive approach can not only mitigate semantic distortions by reducing the influence of pixels away from the center in the theoretical receptive field, but also cre-ate more effective virtual domains by gradually increasing the style diversity. In addition, we develop a basic random convolution layer into a random convolution block includ-ing deformable offsets and affine transformation to support texture and contrast diversification, both of which are also randomly initialized. Without complex generators or adver-sarial learning, we demonstrate that our simple yet effective augmentation strategy outperforms state-of-the-art methods on single domain generalization benchmarks.
1. Introduction In recent years, deep neural networks have achieved remarkable performance in a wide range of applications [26, 27]. However, this success is built on the assumption that the test data (i.e., target) shares the same distribution as the training data (i.e., source), and they often fail to generalize to out-of-distribution data [9, 21, 47]. In practice, this domain discrepancy problem between source and target domains is commonly encountered in real-world scenarios. In particular, a catastrophic safety issue may occur in medical imaging [33, 65] and autonomous driving [62, 68] applications. Figure 1. Comparison of conventional random convolutions (single-layer) and our progressive random convolutions (multi-layer). Our final model includes multiple random convolution blocks consisting of deformable offsets and affine transformation. To tackle this problem, one line of work focuses on domain adaptation (DA) to transfer knowledge from a source domain to a specific target domain [4, 14, 22, 25]. This approach usually takes into account the availability of labeled or unlabeled target domain data. Another line of work deals with a more realistic setting known as domain generalization (DG), which aims to learn a domain-agnostic feature representation with only data from source domains without access to target domain data. Thanks to its practicality, the task of domain generalization has been extensively studied. In general, the paradigm of domain generalization depends on the availability of multi-source domains [57]. Previously, many studies [3, 16, 17, 30, 41] have focused on using multi-source domains, and the distribution shift can be alleviated by simply aggregating data from multiple training domains [30, 31]. Figure 2. Examples of images augmented by RandConv and our Pro-RandConv composed of multiple convolution blocks. However, this approach faces practical limitations due to data collection budgets [44]. As an alternative, the single domain generalization problem has recently received attention [34, 57], which learns robust representation using only a single source domain. A common solution to this challenging problem is to generate diverse samples in order to expand the coverage of the source domain through an adversarial data augmentation scheme [54, 69]. However, most of these methods share a complicated training pipeline with multiple objective functions. In contrast, Xu et al. suggested Random Convolution (RandConv) [61], which consists of a single convolution layer whose weights are randomly initialized for each mini-batch, as described in Fig. 1(a). When RandConv is applied to an input image, it tends to modify the texture of the input image depending on the kernel size of the convolution layer. This is a simple and lightweight image augmentation technique compared to complex generators or adversarial data augmentation. Despite these advantages, this method has structural limitations. Firstly, the image augmented by RandConv easily loses its semantics as the kernel size increases, as shown in Fig. 2(a). As a result, the ability to generalize in the test domain is greatly reduced, as shown in Fig. 1(c). Secondly, RandConv lacks the inherent diversity of a single convolution operation. To solve these limitations, we propose a progressive approach based on random convolutions, named Progressive Random Convolutions (Pro-RandConv). Figure 1(b) describes the progressive approach consisting of multiple convolution layers with a small kernel size. Our progressive approach has two main properties. The first is that the multi-layer structure can alleviate the semantic distortion issues by reducing the impact of pixels away from the center in the theoretical receptive field, as revealed in [37]. Therefore, the progressive approach does not degrade the performance much even if the receptive field increases, as shown in Fig. 1(c). The second property is that stacking random convolution layers with the same weights can generate more effective virtual domains than using different weights. This is an interesting observation that can be interpreted as gradually increasing the distortion magnitude of a single transformation to the central pixels. This approach enables more fine-grained control in image transformation than a single layer with a large kernel, which has the effect of incrementally improving style diversity. In addition, we propose a random convolution block including deformable offsets and affine transformation to support texture and contrast diversification. It is noteworthy that all weights are also sampled from a Gaussian distribution, so our convolution block is an entirely stochastic process. Finally, we can maximize the diversity of styles while maintaining the semantics of newly generated images through the progressive method of this random convolution block, as described in Fig. 2(b). We argue that the proposed Pro-RandConv could be a strong baseline because it surpasses recent single DG methods by image augmentation alone, without an additional loss function or complex training pipelines. To summarize, our main contributions are as follows (a minimal code sketch of the progressive augmentation follows this list): • We propose a progressive approach of recursively stacking small-scale random convolutions to improve the style diversity while preserving object semantics. • We develop a random convolution layer with deformable offsets and affine transformation to promote texture and contrast diversity for augmented images. • We perform comprehensive evaluation and analyses of our method on single and multi DG benchmarks, on which we produce significant improvements in recognition performance compared to other methods.
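The following is a minimal sketch of the progressive random convolution augmentation described above: a small randomly initialized convolution, re-sampled for every mini-batch, is applied recursively a random number of times. The deformable offsets, affine transformation and exact initialization scheme are omitted; the bounded activation and standard-deviation choice here are assumptions, not details from the paper.

import torch
import torch.nn as nn

def pro_randconv(images: torch.Tensor, max_repeats: int = 10) -> torch.Tensor:
    # images: (B, 3, H, W) in [0, 1]
    conv = nn.Conv2d(3, 3, kernel_size=3, padding=1, bias=False).to(images.device)
    # Weights are re-sampled from a Gaussian for every call (i.e., every mini-batch).
    nn.init.normal_(conv.weight, mean=0.0, std=1.0 / (3 * 3 * 3) ** 0.5)
    repeats = int(torch.randint(1, max_repeats + 1, (1,)))
    out = images
    with torch.no_grad():
        for _ in range(repeats):            # same random weights, applied repeatedly
            out = torch.tanh(conv(out))     # bounded activation to keep values stable
    return out

# Usage: augmented = pro_randconv(batch); the task model is then trained on a mix
# of original and augmented images to encourage texture-invariant representations.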
Jin_Context-Aware_Alignment_and_Mutual_Masking_for_3D-Language_Pre-Training_CVPR_2023
Abstract 3D visual language reasoning plays an important role in effective human-computer interaction. The current approaches for 3D visual reasoning are task-specific, and lack pre-training methods to learn generic representations that can transfer across various tasks. Despite the encouraging progress in vision-language pre-training for image-text data, 3D-language pre-training is still an open issue due to limited 3D-language paired data, the highly sparse and irregular structure of point clouds, and ambiguities in the spatial relations of 3D objects with viewpoint changes. In this paper, we present a generic 3D-language pre-training approach that tackles multiple facets of 3D-language reasoning by learning universal representations. Our learning objective constitutes two main parts. 1) Context-aware spatial-semantic alignment to establish fine-grained correspondence between point clouds and texts. It reduces relational ambiguities by aligning 3D spatial relationships with textual semantic context. 2) Mutual 3D-language masked modeling to enable cross-modality information exchange. Instead of reconstructing sparse 3D points for which language can hardly provide cues, we propose masked proposal reasoning to learn semantic class and mask-invariant representations. Our proposed 3D-language pre-training method achieves promising results once adapted to various downstream tasks, including 3D visual grounding, 3D dense captioning and 3D question answering. Our code is available at https://github.com/leolyj/3D-VLP
1. Introduction 3D Vision and Language (3D V+L) reasoning aims to jointly understand 3D point clouds and their textual descriptions. It lies at the intersection of 3D visual understanding and natural language processing, and plays an important role in applications such as the Metaverse, AR/VR and autonomous robots. 3D V+L reasoning has recently gained significant research interest, with multiple works tackling 3D visual grounding [1,8,33,59], 3D dense captioning [11,23,58] and 3D question answering [3,51,54]. Despite promising progress made towards solving 3D visual reasoning tasks, the existing approaches are highly specialized and task-specific. This is in contrast to multi-modal reasoning from RGB images, where the dominant approach is to pre-train a generic model on large-scale image-text paired data, and then adapt this model for multiple downstream tasks. The pre-training step enables learning highly transferable and generic cross-modality representations via techniques such as image-text feature alignment [21,41] and masked signal reconstruction [10,28]. For RGB images, transfer learning from pre-trained Vision-Language models achieves impressive results on numerous downstream tasks (e.g., image-text retrieval [28,49], visual question answering [46,47] and image captioning [17,48]). However, due to unique challenges posed by irregular and unstructured point cloud data, 3D-language pre-training to learn a unified representation space that can be transferred across tasks has not yet been investigated in the existing literature. As point clouds have different characteristics from 2D images, 3D-Language Pre-training (3D-LP) poses multiple unique challenges: 1) Available 3D-language samples are limited. Compared to image-text samples that can be web crawled, the existing pairwise point cloud and language samples are much scarcer. 2) Point clouds are naturally unstructured. Unlike 2D images having pixels densely arranged in regular grids, point clouds are highly sparse and irregularly distributed. 3) The spatial relations between 3D objects are complex, as they are not restricted to a 2D plane, and introduce ambiguities with viewpoint changes. In this paper, we propose a 3D-language pre-training approach that aims to establish fine-grained interactions between point clouds and their textual descriptions, thus learning universal multi-modal features for various 3D V+L tasks, as illustrated in Fig. 1. First, to bridge the distribution discrepancy between 3D geometric features and their text semantics, we propose a Context-aware Spatial-semantic Alignment (CSA) strategy (Sec. 3.2). Different from the global contrastive learning used for image-text, we align point cloud and language features from semantic and contextual perspectives separately, so that the spatial context between 3D objects and the semantic context in the language are simultaneously considered to overcome relational ambiguity. We further introduce Mutual 3D-Language Masked modeling (M3LM) (Sec. 3.3), which reconstructs the masked parts and enables meaningful cross-modal information exchange to enhance the features of both modalities.
Due to the irregular structure and variable (unfixed) number of 3D points, existing masking methods that reconstruct the raw input signal are not suitable for learning effective representations for point clouds. We propose to reconstruct the semantic class and high-level features of masked 3D objects by taking complementary information from language, which gives the model a more meaningful objective than merely reconstructing the xyz coordinates of points. In our approach, we predict the semantic class of masked 3D objects and reconstruct momentum-distilled encoded features for the unmasked input (a minimal sketch of this masked-proposal objective is given after the contributions below). We jointly train the 3D-language model with our proposed multi-task learning objectives to learn and semantically align multi-modal features that generalize well across tasks. Through experiments on various downstream 3D V+L tasks, we demonstrate the versatility of our proposed 3D-language pre-training for three different tasks on the ScanRefer [8], Scan2Cap [11] and ScanQA [3] benchmark datasets. Our main contributions are: • We propose a pre-training method to learn transferable 3D-language representations to solve 3D visual grounding, 3D dense captioning and 3D question answering from a unified perspective. • In order to jointly train point cloud and language encoders, we propose context-aware 3D-language alignment and mutual masked modeling strategies, which ensure that the learned multi-modal features are semantically aligned and complement each other. • We consistently surpass existing task-specific methods on the ScanRefer [8] (+2.1 Acc@0.5), Scan2Cap [11] (+5.5 CIDEr@0.5) and ScanQA [3] (+1.3 EM@1) benchmark datasets, achieving new state-of-the-art results.
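A hedged sketch of the masked-proposal objective referenced above: a subset of 3D object-proposal tokens is replaced by a learnable mask embedding, fused with the text tokens, and the semantic class of the masked proposals is predicted. The module names, mask ratio and single fusion layer are illustrative assumptions, and the momentum-distilled feature reconstruction branch is omitted.

import torch
import torch.nn as nn

class MaskedProposalHead(nn.Module):
    def __init__(self, dim: int = 256, num_classes: int = 18, mask_ratio: float = 0.3):
        super().__init__()
        self.mask_token = nn.Parameter(torch.zeros(dim))
        self.fuse = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.cls_head = nn.Linear(dim, num_classes)
        self.mask_ratio = mask_ratio

    def forward(self, proposals: torch.Tensor, text_tokens: torch.Tensor, labels: torch.Tensor):
        # proposals: (B, P, D) object-proposal features, text_tokens: (B, T, D),
        # labels: (B, P) semantic class of each proposal.
        B, P, _ = proposals.shape
        masked = torch.rand(B, P, device=proposals.device) < self.mask_ratio
        x = torch.where(masked.unsqueeze(-1), self.mask_token.expand_as(proposals), proposals)
        fused = self.fuse(x, text_tokens)          # language provides cues for masked slots
        logits = self.cls_head(fused)
        # Classification loss only on the masked proposals.
        return nn.functional.cross_entropy(logits[masked], labels[masked])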
Hao_Dual_Alignment_Unsupervised_Domain_Adaptation_for_Video-Text_Retrieval_CVPR_2023
Abstract Video-text retrieval is an emerging stream in both com-puter vision and natural language processing communi-ties, which aims to find relevant videos given text queries. In this paper, we study the notoriously challenging task, i.e., Unsupervised Domain Adaptation Video-text Retrieval (UDAVR), wherein training and testing data come from dif-ferent distributions. Previous works merely alleviate the domain shift, which however overlook the pairwise mis-alignment issue in target domain, i.e., there exist no se-mantic relationships between target videos and texts. To tackle this, we propose a novel method named Dual Align-ment Domain Adaptation (DADA). Specifically, we first in-troduce the cross-modal semantic embedding to generate discriminative source features in a joint embedding space. Besides, we utilize the video and text domain adaptations to smoothly balance the minimization of the domain shifts. To tackle the pairwise misalignment in target domain, we propose the Dual Alignment Consistency (DAC) to fully ex-ploit the semantic information of both modalities in target domain. The proposed DAC adaptively aligns the video-text pairs which are more likely to be relevant in target do-main, enabling that positive pairs are increasing progres-sively and the noisy ones will potentially be aligned in the later stages. To that end, our method can generate more truly aligned target pairs and ensure the discriminability of target features. Compared with the state-of-the-art meth-ods, DADA achieves 20.18% and 18.61% relative improve-ments on R@1 under the setting of TGIF →MSR-VTT and TGIF→MSVD respectively, demonstrating the superiority of our method.
1. Introduction Video-text retrieval enables users to search videos with a simple and natural language description. The de facto paradigm is to learn high-level visual-textual embeddings with off-the-shelf feature extractors, and to measure semantic similarities in a joint embedding space [13, 42, 46, 63]. Figure 1. Illustration of the proposed method. Previous methods simply bring source and target features closer (blue and red ovals overlapping each other), inevitably mixing target videos (red circles) and texts (red triangles) together, ignoring whether they are semantically relevant or not. Instead, our method exploits the semantic structures in the target domain to adaptively generate truly aligned video-text pairs (dotted circles) and ensure the discriminability of target data. Best viewed in color. Despite their thrilling success, the primary assumption is that training and testing data come from the same distribution, which may not hold in real scenarios. To alleviate the domain shift problem, Unsupervised Domain Adaptation (UDA) has gained a lot of attention due to its efficient training without the need for supervision in the target domain. UDA transfers knowledge from a labeled source domain to an unlabeled target domain [15, 33, 40, 41, 53], and has made remarkable progress in many fields, such as image classification [33, 56], autonomous driving [54, 55], medical image processing [35, 36], and video-based action recognition [50,52]. However, these methods are originally designed for classification tasks, which might not be suitable for video-text retrieval. Note that in UDA Video-text Retrieval (UDA VR), there exists no identical label set for the source and target domains. The only supervision is the semantic relationship in the source dataset, which is also the general setting for UDA cross-modal tasks [4, 11, 62, 64].
To alleviate the domain shift, we further utilize a smooth adaptation procedure to balance the min-imization of distribution shifts between source and target domains. Last but not least, to tackle the pairwise misalign-ment in target domain, we propose a simple yet effective Dual Alignment Consistency (DAC), which fully exploits the semantic information of both modalities in target do-main. The proposed DAC adaptively aligns the video-text pairs which are more likely to be relevant in target domain, enabling that (1) positive pairs are increasing progressively, (2) the noisy ones will potentially be aligned in the later stages and (3) the discriminability of target features. Ex-tensive experiments on several benchmarks demonstrate the superiority of our method. The contributions of this paper are mainly threefold: • To tackle the pairwise misalignment problem in UDA VR task, we develop a novel method named Dual Alignment Domain Adaptation (DADA) which fully exploits the semantic structures of target data. • The proposed Dual Alignment Consistency (DAC) mechanism adaptively aligns the most similar videos and texts in target domain, ensure that the positive pairs are increasing progressively and the noisy ones are potentially aligned in later stages. • Compared with the state-of-the-art methods, DADA achieves 20.18% and 18.61% relative improvements on R@1 under the setting of TGIF →MSRVTT and TGIF→MSVD respectively, demonstrating the supe-riority of our method.2. Related Work Video-Text Retrieval. In recent years, cross-modal embedding-based approaches [2, 10, 20, 26, 27, 37, 47, 58] have emerged as a dominant paradigm for video-text re-trieval. [48] proposes the JEMC framework using action, object, text and audio features by a simple concatenation fusion strategy. CE [37] adopts video features extracted from all modalities to encode a video. T2VLAD [59] au-tomatically learns text-and-video semantic topics and re-emphasizes the importance of local semantic alignment be-tween texts and videos. HGR [10] proposes a Hierarchical Graph Reasoning (HGR) model, which decomposes video-text pairs into global-to-local levels. GPO [5] learns to automatically adapt itself to the best pooling strategy for different baselines. Recently, the Contrastive Language-Image Pretraining (CLIP) [3] model is widely used in video-text retrieval [24, 31, 38, 45]. CLIP4Clip [43] investigates three mechanisms of similarity calculation based on the pre-trained CLIP. Similarly, CLIP2video [18] focuses on the spatial semantics captured by the CLIP model. Different from them, we explore the video-text retrieval task through the lens of unsupervised domain adaptation. Unsupervised Domain Adaptation. UDA transfers predictive models from a fully-labeled source domain to an unlabeled target domain. Existing classification-based UDA methods seek to alleviate the domain shift between source and target domains [15, 22, 33, 40, 41, 56, 60]. Be-sides, UDA methods have been extended to various video-based tasks, like video action recognition [6, 12, 49], video segmentation [7, 8] and video localisation [1]. Recently, some cross-modal tasks also resort to UDA and try to utilize the unpaired data in target domain, such as image caption-ing [11, 62, 64] and VQA [4]. The similar work to ours is DCKT [29] which focuses on UDA image-text retrieval and transfers knowledge from a large dataset to promote the model performance on small dataset. 
However, DCKT needs labeled target image-text pairs during the training procedure, which fails to work well for the UDA VR task. Unsupervised Domain Adaptation for Video-Text Retrieval. To the best of our knowledge, there are only a few explorations of the UDA VR task [9,17,39]. MAN [17] proposes three alignments to alleviate different gaps in the UDA VR task. CAPQ [9] comprises a concept preservation regulariser to enhance the transferability of the learned embeddings. ACP [39] focuses on minimizing both uni-modal and cross-modal distribution shift across the source and target domains. Compared to these methods, our approach differs in three aspects. (1) MAN tries to directly alleviate three different gaps in a classification-based manner, which is not suitable for the cross-modal retrieval task. (2) CAPQ and ACP maximize the mutual information or minimize the KL-divergence between the prototype assignments of source and target videos, which however ignores the domain shift in the text modality. (3) The semantic relationships of videos and texts in the target domain have not been fully exploited by previous methods, leading to the pairwise misalignment issue, which is the primary concern of this paper. Figure 2. The overall framework of DADA. Video/text features are first fed into video/text encoders to generate high-level representations. The video and text domain adaptation modules simultaneously alleviate the distribution shifts across domains in both modalities ($\mathcal{L}_{D}^{video}$ and $\mathcal{L}_{D}^{text}$). Source video and text features are expected to be discriminative through the cross-modal semantic embedding ($\mathcal{L}_{S}$). Besides, the proposed Dual Alignment Consistency (DAC) adaptively aligns the target video-text pairs which are more likely to be relevant and progressively generates dual aligned video-text pairs $\{(v_i^p, t_i^p)\}_{i=1}^{n_p}$ in the target domain ($\mathcal{L}_{P}$). Best viewed in color. 3. Methodology 3.1. Preliminaries For notational clarity, we first introduce some symbols and definitions used throughout this paper. Formally, assume that we have a set of samples in the source domain $(\mathcal{V}^s, \mathcal{T}^s) = \{(v_i^s, t_i^s)\}_{i=1}^{n_s}$, where $n_s$ indicates the number of video-text pairs. Similarly, we also have a set of samples in the target domain $\{\mathcal{V}^t = \{v_i^t\}_{i=1}^{n_t}, \mathcal{T}^t = \{t_j^t\}_{j=1}^{n_t}\}$ with two collections of $n_t$ videos $\mathcal{V}^t$ and texts $\mathcal{T}^t$, respectively. Note that the target videos and texts are unpaired, which means the supervised information, i.e., whether one target video-text pair is semantically relevant or not, is not available in the target domain. Unsupervised Domain Adaptation Video-text Retrieval (UDA VR) aims at improving the model's generalization performance on the target domain with the utilization of the source domain. The overall framework of our method is illustrated in Fig. 2.
Given one video-text pair, following the state-of-the-art baseline in video-text retrieval [5], we utilize a video encoder $\varphi(\cdot)$ and a text encoder $\psi(\cdot)$ to map each video sample $v$ and text description $t$ into a joint embedding space. The visual embedding $\varphi(v) \in \mathbb{R}^M$ and text embedding $\psi(t) \in \mathbb{R}^M$ are semantically relevant if the text describes the video, where $M$ denotes the dimension of the common space. In the source domain, we utilize the video-text contrastive loss to guide the semantic alignment learning. Following [30, 32, 51], the contrastive loss consid
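The excerpt above is cut off mid-sentence; for reference, a minimal sketch of a symmetric video-text contrastive (InfoNCE-style) objective of the kind it refers to is given below. The temperature value and the symmetric two-direction form are assumptions, not details taken from the paper.

import torch
import torch.nn.functional as F

def video_text_contrastive(video_emb: torch.Tensor, text_emb: torch.Tensor,
                           temperature: float = 0.05) -> torch.Tensor:
    # video_emb, text_emb: (B, M) embeddings of B paired videos and texts.
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.t() / temperature                 # (B, B) similarity matrix
    targets = torch.arange(v.size(0), device=v.device)
    # The i-th video matches the i-th text; average both retrieval directions.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))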
Jafarian_Normal-Guided_Garment_UV_Prediction_for_Human_Re-Texturing_CVPR_2023
Abstract Clothes undergo complex geometric deformations, which lead to appearance changes. To edit human videos in a physically plausible way, a texture map must take into account not only the garment transformation induced by the body movements and clothes fitting, but also its 3D fine-grained surface geometry. This poses, however, a new chal-lenge of 3D reconstruction of dynamic clothes from an im-age or a video. In this paper, we show that it is possible to edit dressed human images and videos without 3D re-construction. We estimate a geometry aware texture map between the garment region in an image and the texture space, a.k.a, UV map. Our UV map is designed to pre-serve isometry with respect to the underlying 3D surface by making use of the 3D surface normals predicted from theimage. Our approach captures the underlying geometry of the garment in a self-supervised way, requiring no ground truth annotation of UV maps and can be readily extended to predict temporally coherent UV maps. We demonstrate that our method outperforms the state-of-the-art human UV map estimation approaches on both real and synthetic data.
1. Introduction While browsing online clothing shops, have you ever wondered how the appearance of a dress of interest would look on you, as if you were in a fitting room, given your dress with a similar shape? A key technology for generating such visual experiences is photorealistic re-texturing: editing the texture of clothes in response to the subject's movement in the presented images or videos in a geometrically and temporally coherent way. Over the past few years, there has been significant advancement in image and video editing technologies [4-6,11-13,17,20,21,26,31-33,38,46,54], such as inserting advertising logos on videos of moving cars or applying face makeup on social media. However, such editing approaches designed for rigid or semi-rigid surfaces are not suitable for garments that undergo complex secondary motion with respect to the underlying body. For example, the fine wrinkles of the dress in Figure 1 result in complex warps in texture over time. In this paper, we present a new method to edit the appearance of a garment in a given image or video by taking into account its fine-grained geometric deformation. Previous works address photorealistic texture editing in two ways. (1) 3D reconstruction and rendering: these approaches can achieve high-fidelity texture editing given highly accurate 3D geometry. On the other side of the coin, their performance is dictated by the quality of the 3D reconstruction. While the 3D geometry of the garment can be learned from paired human appearance data, e.g., human modeling repositories with 3D meshes and renderings [1], due to the scarcity of such data, it often cannot generalize well to unseen real images and videos. (2) Direct texture mapping: by estimating a dense UV map, these methods can bypass the procedure of 3D reconstruction [18,22,39,41,55]. However, they usually lack geometric detail and only capture the underlying human body, and are thus not applicable for editing garments. Moreover, when applied to videos, visual artifacts of editing become more salient since they are not aware of the underlying deformation of the garment's 3D geometry [25,57]. We design our method to enjoy the advantages of both approaches: preserving realistic details in UV mapping while circumventing 3D reconstruction. Our key insight is that the fundamental geometric property of isometry can be imposed on UV map estimation via the 3D surface normals predicted from an image. We formulate a geometric relationship between the UV map and surface normals in the form of a set of partial differential equations. Our method takes as input an image or video, its surface normal prediction, and dense optical flow (for video), and outputs the geometry-aware UV map estimate. The UV map is modeled by a multi-layer perceptron that predicts UV coordinates given a pixel location in an image. We note that the UV map is defined only up to the choice of a reference coordinate frame. To disambiguate this, we condition the neural network on a pre-defined proxy UV map (e.g., DensePose [18]). We use the isometry constraints as a loss to optimize the UV map. Further, for a video, we leverage the per-frame image features to correlate the UV coordinates of the pixels across time using optical flow.
Our contributions can be summarized in three aspects: (1) a novel formulation that captures the geometric relationship between the 3D surface normals and the UV map through the isometry constraint, which eliminates the requirement of 3D reconstruction and ground-truth UV maps; (2) a neural network design that learns to predict temporally coherent UV maps for the frames by correlating per-frame image features; (3) stronger performance compared to existing re-texturing methods and compelling results on a wide range of real-world imagery.
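A minimal sketch of the UV-map parameterization described above: an MLP maps a pixel location (together with a proxy UV value such as DensePose at that pixel) to refined UV coordinates. The isometry losses derived from the predicted surface normals are not shown; the layer widths and the sigmoid output range are assumptions.

import torch
import torch.nn as nn

class UVField(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 + 2, hidden), nn.ReLU(),   # (x, y) pixel coords + proxy (u, v)
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2), nn.Sigmoid(),    # predicted (u, v) in [0, 1]^2
        )

    def forward(self, pixel_xy: torch.Tensor, proxy_uv: torch.Tensor) -> torch.Tensor:
        # pixel_xy, proxy_uv: (N, 2); returns refined UV coordinates (N, 2).
        return self.net(torch.cat([pixel_xy, proxy_uv], dim=-1))

# Because the mapping is differentiable in (x, y), its spatial derivatives can be
# obtained with autograd and compared against constraints derived from the
# predicted surface normals (the isometry losses described in the paper).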
Avrahami_SpaText_Spatio-Textual_Representation_for_Controllable_Image_Generation_CVPR_2023
Abstract Recent text-to-image diffusion models are able to gener-ate convincing results of unprecedented quality. However, it is nearly impossible to control the shapes of different re-gions/objects or their layout in a fine-grained fashion. Pre-vious attempts to provide such controls were hindered by their reliance on a fixed set of labels. To this end, we present SpaText — a new method for text-to-image generation using open-vocabulary scene control. In addition to a global text prompt that describes the entire scene, the user provides a segmentation map where each region of interest is anno-tated by a free-form natural language description. Due to lack of large-scale datasets that have a detailed textual de-scription for each region in the image, we choose to lever-age the current large-scale text-to-image datasets and base our approach on a novel CLIP-based spatio-textual repre-sentation, and show its effectiveness on two state-of-the-art diffusion models: pixel-based and latent-based. In addi-tion, we show how to extend the classifier-free guidance method in diffusion models to the multi-conditional case and present an alternative accelerated inference algorithm. Finally, we offer several automatic evaluation metrics and use them, in addition to FID scores and a user study, to evaluate our method and show that it achieves state-of-the-art results on image generation with free-form textual scene control.
1. Introduction Imagine you could generate an image by dipping your digital paintbrush (so to speak) in a "black horse" paint, then sketching the specific position and posture of the horse, and afterwards dipping it again in a "red full moon" paint and sketching it in the desired area. Finally, you want the entire image to be in the style of The Starry Night. Current state-of-the-art text-to-image models [51, 59, 72] leave much to be desired in achieving this vision. (Project page: https://omriavrahami.com/spatext) The text-to-image interface is extremely powerful: a single prompt is able to represent an infinite number of possible images. However, it has its cost: on the one hand, it enables a novice user to explore an endless number of ideas, but, on the other hand, it limits controllability. If the user has a mental image that they wish to generate, with a specific layout of objects or regions in the image and their shapes, it is practically impossible to convey this information with text alone, as demonstrated in Figure 2. In addition, inferring spatial relations [72] from a single text prompt is one of the current limitations of SoTA models. Make-A-Scene [22] proposed to tackle this problem by adding an additional (optional) input to text-to-image models: a dense segmentation map with fixed labels. The user can provide two inputs: a text prompt that describes the entire scene and an elaborate segmentation map that includes a label for each segment in the image. This way, the user can easily control the layout of the image. However, it suffers from the following drawbacks: (1) training the model with a fixed set of labels limits the quality for objects that are not in that set at inference time, (2) providing a dense segmentation can be cumbersome for users and undesirable in some cases, e.g., when the user prefers to provide a sketch for only a few main objects they care about, letting the model infer the rest of the layout; and (3) lack of fine-grained control over the specific characteristics of each instance. For example, even if the label set contains the label "dog", it is not clear how to generate several instances of dogs of different breeds in a single scene. In order to tackle these drawbacks, we propose a different approach: (1) rather than using a fixed set of labels to represent each pixel in the segmentation map, we propose to represent it using spatial free-form text, and (2) rather than providing a dense segmentation map accounting for each pixel, we propose to use a sparse map that describes only the objects that a user specifies (using spatial free-form text), while the rest of the scene remains unspecified. To summarize, we propose a new problem setting: given a global text prompt that describes the entire image, and a spatio-textual scene that specifies, for segments of interest, their local text description as well as their position and shape, a corresponding image is generated, as illustrated in Figure 1. These changes extend expressivity by providing the user with more control over the regions they care about, leaving the rest for the machine to figure out.
Acquiring a large-scale dataset that contains free-form textual descriptions for each segment in an image is prohibitively expensive, and such large-scale datasets do not exist to the best of our knowledge. Hence, we opt to extract the relevant information from existing image-text datasets. To this end, we propose a novel CLIP-based [49] spatio-textual representation that enables a user to specify, for each segment, its description using free-form text and its position and shape. Figure 2. Lack of fine-grained spatial control: a user with a specific mental image of a Labrador dog holding its paw above a blue ball without touching can easily generate it with a SpaText representation (left) but will struggle to do so with traditional text-to-image models (right) [52, 56]. During training, we extract local regions using a pre-trained panoptic segmentation model [69], and use them as input to a CLIP image encoder to create our representation. Then, at inference time, we use the text descriptions provided by the user, embed them using a CLIP text encoder, and translate them to the CLIP image embedding space using a prior model [51]. In order to assess the effectiveness of our proposed representation SpaText, we implement it on two state-of-the-art types of text-to-image diffusion models: a pixel-based model (DALL·E 2 [51]) and a latent-based model (Stable Diffusion [56]). Both of these text-to-image models employ classifier-free guidance [33] at inference time, which supports a single conditioning input (the text prompt). In order to adapt them to our multi-conditional input (the global text as well as the spatio-textual representation), we demonstrate how classifier-free guidance can be extended to any multi-conditional case (see the sketch after the contributions below). To the best of our knowledge, we are the first to demonstrate this. Furthermore, we propose an additional, faster variant of this extension that trades off controllability for inference time. Finally, we propose several automatic evaluation metrics for our problem setting and use them along with the FID score to evaluate our method against its baselines. In addition, we conduct a user study and show that our method is also preferred by human evaluators. In summary, our contributions are: (1) we address a new scenario of image generation with free-form textual scene control, (2) we propose a novel spatio-textual representation that represents, for each segment, its semantic properties and structure, and demonstrate its effectiveness on two state-of-the-art diffusion models (pixel-based and latent-based), (3) we extend classifier-free guidance in diffusion models to the multi-conditional case and present an alternative accelerated inference algorithm, and (4) we propose several automatic evaluation metrics and use them to compare against baselines we adapted from existing methods. We also evaluate via a user study. We find that our method achieves state-of-the-art results.
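A hedged sketch of one standard way to compose classifier-free guidance over multiple conditions, of the kind referenced above: the unconditional prediction is corrected by one guidance term per condition, each with its own scale. The `denoiser` callable and its conditioning interface are placeholders rather than a real API, and this is not claimed to be the paper's exact formulation.

def multi_cond_cfg(denoiser, x_t, t, conds, scales):
    # conds:  list of condition inputs (e.g., [global_text_emb, spatio_textual_map])
    # scales: one guidance scale per condition.
    none = [None] * len(conds)
    eps = denoiser(x_t, t, none)                              # fully unconditional prediction
    prev = eps
    for i, s in enumerate(scales):
        cur = denoiser(x_t, t, conds[: i + 1] + none[i + 1:])  # add conditions one at a time
        eps = eps + s * (cur - prev)                           # guidance term for condition i
        prev = cur
    return eps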
Gao_The_ObjectFolder_Benchmark_Multisensory_Learning_With_Neural_and_Real_Objects_CVPR_2023
Abstract We introduce the OBJECT FOLDER BENCHMARK , a benchmark suite of 10 tasks for multisensory object-centric learning, centered around object recognition, reconstruc-tion, and manipulation with sight, sound, and touch. We also introduce the OBJECT FOLDER REAL dataset, in-cluding the multisensory measurements for 100 real-world household objects, building upon a newly designed pipeline for collecting the 3D meshes, videos, impact sounds, and tactile readings of real-world objects. We conduct system-atic benchmarking on both the 1,000 multisensory neural objects from OBJECT FOLDER , and the real multisensory data from OBJECT FOLDER REAL. Our results demon-strate the importance of multisensory perception and reveal the respective roles of vision, audio, and touch for differ-ent object-centric learning tasks. By publicly releasing our dataset and benchmark suite, we hope to catalyze and en-able new research in multisensory object-centric learning in computer vision, robotics, and beyond. Project page: https://objectfolder.stanford.edu
1. Introduction Computer vision systems today excel at recognizing objects in 2D images thanks to many image datasets [3,17,35,40]. There is also a growing interest in modeling an object's shape and appearance in 3D, with various benchmarks and tasks introduced [8, 28, 44, 45, 54, 61]. Despite the exciting progress, these studies primarily focus on the visual recognition of objects. At the same time, our everyday activities often involve multiple sensory modalities. Objects exist not just as visual entities; they also make sounds and can be touched during interactions. The different sensory modes of an object all share the same underlying object intrinsics: its 3D shape, material property, and texture. Modeling the complete multisensory profile of objects is of great importance for many applications beyond computer vision, such as robotics, graphics, and virtual and augmented reality. Some recent attempts have been made to combine multiple sensory modalities to complement vision for various tasks [2,6,39,58,59,63,70,73]. These tasks are often studied in tailored settings and evaluated on different datasets. As an attempt to develop assets generally applicable to diverse tasks, the ObjectFolder dataset [23, 26] has been introduced and includes 1,000 neural objects with their visual, acoustic, and tactile properties. ObjectFolder, however, has two fundamental limitations. First, no real objects are included; all multisensory data are obtained through simulation with no simulation-to-real (sim2real) calibration. Second, only a few tasks were presented to demonstrate the usefulness of the dataset and to establish the possibility of conducting sim2real transfer with the neural objects. Consequently, we need a multisensory dataset of real objects and a robust benchmark suite for multisensory object-centric learning. To this end, we present the ObjectFolder Real dataset and the ObjectFolder Benchmark suite, as shown in Fig. 1. The ObjectFolder Real dataset contains multisensory data collected from 100 real-world household objects. We design a data collection pipeline for each modality: for vision, we scan the 3D meshes of objects in a dark room and record HD videos of each object rotating in a lightbox; for audio, we build a professional anechoic chamber with a tailored object platform and then collect impact sounds by striking the objects at different surface locations with an impact hammer; for touch, we equip a Franka Emika Panda robot arm with a GelSight robotic finger [18,71] and collect tactile readings at the exact surface locations where impact sounds are collected. The ObjectFolder Benchmark suite consists of 10 benchmark tasks for multisensory object-centric learning, centered around object recognition, reconstruction, and manipulation. The three recognition tasks are cross-sensory retrieval, contact localization, and material classification; the three reconstruction tasks are 3D shape reconstruction, sound generation of dynamic objects, and visuo-tactile cross-generation; and the four manipulation tasks are grasp stability prediction, contact refinement, surface traversal, and dynamic pushing. Figure 1. The ObjectFolder Benchmark suite consists of 10 benchmark tasks for multisensory object-centric learning, centered around object recognition, reconstruction, and manipulation. Complementing the 1,000 multisensory neural objects from ObjectFolder [26], we also introduce ObjectFolder Real, which contains real multisensory data collected from 100 real-world objects, including their 3D meshes, video recordings, impact sounds, and tactile readings. We standardize the task setting for each task and present baseline approaches and results. Experiments on both neural and real objects demonstrate the distinct value of sight, sound, and touch in different tasks. For recognition, vision and audio tend to be more reliable compared to touch, where the contained information is too local for recognition. For reconstruction, we observe that fusing multiple sensory modalities achieves the best results, and it is possible to hallucinate one modality from the other. This agrees with the notion of degeneracy in cognitive studies [60], which creates redundancy such that our sensory system functions even with the loss of one component. For manipulation, vision usually provides global positional information about the objects and the robot, but often suffers from occlusion. Touch, often a good complement to vision, is especially useful for capturing the accurate local geometry of the contact point. We will open-source all code and data for ObjectFolder Real and ObjectFolder Benchmark to facilitate research in multisensory object-centric learning. 2. Related Work Object Datasets. A large body of work in computer vision focuses on recognizing objects in 2D images [27, 29, 30, 34]. This progress is enabled by a series of image datasets such as ImageNet [17], MS COCO [40], ObjectNet [3], and OpenImages [35]. In 3D vision, datasets like ModelNet [68] and ShapeNet [8] focus on modeling the geometry of objects but without realistic visual textures. Recently, with the popularity of neural rendering approaches [46,57], a series of 3D datasets have been introduced with both realistic shape and appearance, such as CO3D [54], Google Scanned Objects [19], and ABO [14]. Unlike all of the datasets above, which focus only on the visual modality, we also model the acoustic and tactile modalities of objects. Our work is most related to ObjectFolder [23, 26], a dataset of 1,000 neural objects with visual, acoustic, and tactile sensory data. While their multisensory data are obtained purely from simulation, we introduce the ObjectFolder Real dataset, which contains real multisensory data collected from real-world household objects. Capturing Multisensory Data from Real-World Objects. Limited prior work has attempted to capture multisensory data from the real world. Earlier work models the multisensory physical behavior of 3D objects [48] for virtual object interaction and animations. To our best knowledge, there is no large prior dataset of real object impact sounds.
Datasets of real tactile data are often collected for a particu-lar task such as robotic grasping [6,7], cross-sensory predic-tion [39], or from unconstrained in-the-wild settings [70]. Our O BJECT FOLDER REAL dataset is the first dataset that contains all three modalities with rich annotations to facili-tate multisensory learning research with real object data. Multisensory Object-Centric Learning. Recent work uses audio and touch in conjunction with vision for a se-ries of new tasks, including visuo-tactile 3D reconstruc-tion [26, 58, 59, 63], cross-sensory retrieval [2, 23], cross-modal generation [36, 39, 73], contact localization [26, 42], robotic manipulation [6,7,37,38], and audio-visual learning from videos [1, 9, 11, 24, 25, 47, 74]. While they only focus on a single task of interest in tailored settings, each with a different set of objects, we present a standard benchmark suite of 10 tasks based on 1,000 neural objects from O B-JECT FOLDER and 100 real objects from O BJECT FOLDER REAL for multisensory object-centric learning. 3. O BJECT FOLDER REAL The O BJECT FOLDER dataset [26] contains 1,000 multi-sensory neural objects, each represented by an Object File , a compact neural network that encodes the object’s intrin-sic visual, acoustic, and tactile sensory data. Querying it with extrinsic parameters ( e.g., camera viewpoint and light-ing conditions for vision, impact location and strength for audio, contact location and gel deformation for touch), we can obtain the corresponding sensory signal at a particular location or condition. Though learning with these virtualized objects with sim-ulated multisensory data is exciting, it is necessary to have a benchmark dataset of multisensory data collected from real objects to quantify the difference between simulation and reality. Having a well-calibrated dataset of real multisen-sory measurements allows researchers to benchmark differ-ent object-centric learning tasks on real object data without having the need to actually acquire these objects. For tasks in our benchmark suite in Sec. 4, we show results on both the neural objects from O BJECT FOLDER and the real ob-jects from O BJECT FOLDER REAL when applicable. Collecting real multisensory data densely from real ob-jects is very challenging, requiring careful hardware design and tailored solutions for each sensory modality by tak-ing into account the physical constraints (e.g., robot joint limit, kinematic constraints) in the capture system. Next, we introduce how we collect the visual (Sec. 3.1), acoustic (Sec. 3.2), and tactile (Sec. 3.3) data for the 100 real objects shown in Fig. 1. Please also visit our project page for inter-active demos to visualize the captured multisensory data.3.1. Visual Data Collection We use an EinScan Pro HD 2020 handheld 3D Scanner1 to scan a high-quality 3D mesh and the corresponding color texture for each object. The scanner captures highly accu-rate 3D features by projecting a visible light array on the ob-ject and records the texture through an attached camera. The minimum distance between two points in the scanned point cloud is 0.2mm, enabling fine-grained details of the ob-ject’s surface to be retained in the scanned mesh. For each object, we provide three versions of its mesh with differ-ent resolutions: 16K triangles, 64K triangles, and Full res-olution (the highest number of triangles possible to achieve with the scanner). 
Additionally, we record an HD video of each object rotating in a lightbox with a professional camera to capture its visual appearance, as shown in Fig. 2a. 3.2. Acoustic Data Collection We use a professional recording studio with its walls treated with acoustic melamine anechoic foam panels and the ceiling covered by absorbing acoustic ceiling tiles, as shown in Fig. 2b. The specific setup used to collect audio data varies with the object’s weight and size. Most objects are placed on a circular platform made with thin strings, which minimally affects the object’s vibration pattern when struck. Light objects are hung with a thin string and hit while suspended in the air. Heavy objects are placed on top of an anechoic foam panel to collect their impact sounds. For each object, we select 30–50 points based on its scale following two criteria. First, the points should roughly cover the whole surface of the object and reveal its shape; Second, we prioritize points with specific local geometry or texture features, such as the rim/handle of a cup. For each selected point, we collect a 5-second audio clip of striking it along its normal direction with a PCB2impact hammer (086C01). The impact hammer is equipped with a force transducer in its tip, providing ground-truth contact forces synchronized with the audio recorded by a PCB phantom-powered free-field microphone (376A32). It is made of hardened steel, which ensur
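As a rough sketch of the kind of per-object query interface described in Sec. 3 (an implicit network per modality, queried with extrinsic parameters such as camera pose, impact location and strength, or contact location), under the assumption of simple MLP heads and made-up input/output dimensions; this is not the actual ObjectFolder API:

```python
# Hypothetical sketch of an "Object File"-style query interface (not the real
# ObjectFolder API): each modality is an implicit network mapping extrinsic
# parameters to a sensory signal.
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=256):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

class ObjectFileSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.vision = mlp(in_dim=9, out_dim=3)    # camera pose + lighting -> RGB radiance sample
        self.audio = mlp(in_dim=4, out_dim=512)   # impact location (xyz) + strength -> spectrum slice
        self.touch = mlp(in_dim=5, out_dim=64)    # contact location (xyz) + gel deformation -> tactile code

    def query_vision(self, cam_and_light):        # (B, 9)
        return self.vision(cam_and_light)

    def query_audio(self, impact):                # (B, 4)
        return self.audio(impact)

    def query_touch(self, contact):               # (B, 5)
        return self.touch(contact)

obj = ObjectFileSketch()
rgb = obj.query_vision(torch.randn(2, 9))         # two example queries
print(rgb.shape)                                  # torch.Size([2, 3])
```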
Ilhan_ScaleFL_Resource-Adaptive_Federated_Learning_With_Heterogeneous_Clients_CVPR_2023
Abstract Federated learning (FL) is an attractive distributed learning paradigm supporting real-time continuous learn-ing and client privacy by default. In most FL approaches, all edge clients are assumed to have sufficient computation capabilities to participate in the learning of a deep neural network (DNN) model. However, in real-life applications, some clients may have severely limited resources and can only train a much smaller local model. This paper presents ScaleFL, a novel FL approach with two distinctive mecha-nisms to handle resource heterogeneity and provide an equi-table FL framework for all clients. First, ScaleFL adaptively scales down the DNN model along width and depth dimen-sions by leveraging early exits to find the best-fit models for resource-aware local training on distributed clients. In this way, ScaleFL provides an efficient balance of preserving basic and complex features in local model splits with vari-ous sizes for joint training while enabling fast inference for model deployment. Second, ScaleFL utilizes self-distillation among exit predictions during training to improve aggre-gation through knowledge transfer among subnetworks. We conduct extensive experiments on benchmark CV (CIFAR-10/100, ImageNet) and NLP datasets (SST-2, AgNews). We demonstrate that ScaleFL outperforms existing representa-tive heterogeneous FL approaches in terms of global/local model performance and provides inference efficiency, with up to 2x latency and 4x model size reduction with negligible performance drop below 2%.
1. Introduction Mobile and Internet-of-Things (IoT) devices are the pri-mary computing sources for most daily life tasks and they are becoming increasingly essential for billions of users worldwide ( 12;18). These devices generate an unprece-dented amount of data, which can be used to optimize ser-vices and improve user experience. Since the data is huge and mostly private, communicating, storing and organizing it in a central server poses serious privacy risks and bringslogistic concerns ( 12). Federated learning (FL) emerged as a machine learning paradigm for this scenario, where stor-ing the data and training the model in a central server is not feasible. In FL, instead of centralizing the data, the model is distributed to clients for local training and the central server aggregates the local updates received from clients ( 22). Existing FL algorithms such as FedA VG ( 22), SCAF-FOLD ( 13) and FedOpt ( 26) rely on the assumption that every participating client has similar resources and can lo-cally execute the same model. However, in most real-life applications, the computation resources tend to differ sig-nificantly across clients ( 5;19). This heterogeneity prevents clients with insufficient resources to participate in certain FL tasks that require large models. Although existing gra-dient compression ( 8) or model pruning techniques ( 7) may be applied to reduce the cost at the expense of small accu-racy loss, these methods are not flexible enough to meet di-verse constraint scenarios (computational, storage, network etc.) based on the heterogeneous resources of edge clients. We argue that FL should promote equitable AI practice by supporting a resource-adaptive learning framework that can scale to heterogeneous clients with limited capacity. To this end, we present ScaleFL, a scalable and equitable FL framework. By design, ScaleFL has two novel features. First, ScaleFL can adaptively scale down the global model along the width and depth dimensions based on the com-putational resources of participating clients. The downscal-ing procedure in ScaleFL is inspired by EfficientNet ( 28), which demonstrates the importance of balancing the size of different dimensions while scaling a neural network. Since a deeper model is more capable of extracting higher-order, complex features while a wider model has access to a larger variety of lower-order, basic features, performing model size reduction across one dimension causes unbal-ance in terms of the learning capabilities of the resulting model. This motivates the design of ScaleFL that uniformly scales down the global model on both dimensions to pro-vide the balance of preserving access to both complex and basic features as efficiently as possible. To perform split-ting along the depth dimension, ScaleFL injects early exit This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 24532 classifiers ( 29) to the global model at certain layers based on the model architecture and computational constraints at each complexity level. As a result, with ScaleFL, not only the global model achieves better performance compared to baseline FL approaches ( 22) and existing algorithms ( 5;19) but also the local models at different complexity levels per-form significantly better in case the clients are resource-constrained at inference time. 
The second novelty of ScaleFL is providing effective aggregation mechanisms for combining local model up-dates from heterogeneous participating clients, and aug-menting self-distillation for effective model update aggre-gation. During the training of local models, we perform self-distillation among the exit predictions to improve the knowledge transfer among subnetworks. Knowledge distil-lation enables transferring knowledge from a large (teacher) network to a smaller (student) network by training the stu-dent network on teacher predictions as soft labels ( 10). Self-distillation is a form of knowledge distillation, where the same network is used as both teacher and the student to im-prove performance during training, especially for multi-exit models ( 16;24;25;33). In particular, we optimize these models with the additional objective of minimizing the KL divergence among the early exit (student) and final (teacher) predictions, which provides effective aggregation through increasing knowledge flow among local models. This pro-cedure is an integrated component of the local optimization procedure and does not introduce any additional computa-tional overhead as in standard knowledge distillation. In summary, our main contributions are as follows: (1) We introduce a novel FL approach ScaleFL, which performs resource-adaptive 2-D model downscaling using early exits to handle system heterogeneity. Our method preserves the balance of basic and complex features at different complex-ity levels and enables efficient local models for resource-constrained clients. (2) We further enhance ScaleFL for ef-fective integration of local training model updates by utiliz-ing self-distillation among the exit predictions during local training to increase knowledge flow among subnetworks by minimizing the KL divergence among the early exit (stu-dent) and final (teacher) predictions. (3) We validate the advantages of ScaleFL in both model production quality for FL and model inference speedup for model deployment at edge clients. With extensive experiments on three vision benchmarks and two NLP benchmarks, we first demonstrate the significant improvements of ScaleFL in terms of global model performance on image/text classification tasks and various data heterogeneity settings, compared to recent ap-proaches for FL with system heterogeneity. We then analyze the inference performance of local models and show that lo-cal models can provide up to 2x inference latency reduction and 4x model size reduction, with negligible performance drop under 2%.2. Related Work FedA VG is the baseline FL algorithm, where each round, clients download the updated global model, perform local training in parallel using gradient descent and send the up-dated local weights to the central server for aggregation by the average of the local weights from all participating clients in the given round ( 22). Three broad categories of ef-forts have been engaged to improve the FedA VG baseline. First, extensive studies have been dedicated to optimizing the communication efficiency of FL through gradient com-pression and quantization ( 1;20). However, these studies only consider the scenario of homogeneous clients in which all participating clients are assumed to have similar compu-tation capacity and can operate on the same model archi-tecture with the same reduced model complexity. 
Second, there is significant research on robust federated learning to prevent client training data leakage due to model inversion attacks (30) or trojan detection against model poisoning attacks (2). The third category is the most recent efforts on addressing client heterogeneity in terms of data distribution (6; 13) and computation resources (5; 19). Our work is most relevant to the last category, where clients have heterogeneous resources. We identify the most representative approaches in this category below. FedProx (17) allows each client to perform a variable number of training iterations based on its computing power. However, this solution still assumes that all clients can operate on the same model. HeteroFL (5) proposes splitting the global model along width, but it keeps the full depth of the DNN architecture at each client and only adjusts the width split ratio for heterogeneous clients. This tends to result in very slim and deep subnetworks, which can lead to significant loss of basic features (28), resulting in a drastic drop in model quality. In addition, clients with limited resources will suffer from significant accuracy loss at model deployment time because they can only host and operate on those very slim and deep subnetworks. FedDF (19) proposes applying ensemble distillation to fuse models with different architectures. However, FedDF requires an additional dataset for the distillation operations and brings significant overhead between training rounds, whereas we apply self-distillation as an integrated component of the local training procedure without any additional overhead. Another recent study, FLANC (23), shares a neural basis among all clients to efficiently construct models at various complexities. However, FLANC assumes that all clients can support at least the neural basis, and thus its adaptivity is bounded by the size of this pre-defined unified neural basis. In addition, the knowledge transfer among clients at different levels is only through the common neural basis. Inserting multiple exits during DNN training enables early exiting to support adaptive inference (15). BranchyNet (29) introduced the idea of multi-exit classifiers and early termination o
Figure 1. System architecture of ScaleFL with three levels. Given the constraint configuration, we compute the split ratios (Section 3.1.1) and, based on the computed width split ratios, we inject early exit classifiers into the given model. The global model is split along two dimensions (Section 3.1.2) and local models are trained using a combination of cross-entropy and KL-divergence losses as given in Eq. (5). Updates are aggregated back in the central server for the next round (Section 3.2).
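The combination of cross-entropy and KL-divergence losses mentioned in the caption above (Eq. (5) in the paper) can be sketched as a multi-exit self-distillation objective: every exit is supervised with cross-entropy, and the final exit acts as the teacher for the earlier exits. A minimal sketch, assuming the local model returns a list of logits ordered from earliest to final exit; the exact weighting and temperature are illustrative, not the paper's values:

```python
import torch
import torch.nn.functional as F

def multi_exit_self_distillation_loss(exit_logits, targets, temperature=3.0, kd_weight=0.5):
    """exit_logits: list of tensors [(B, C), ...] ordered from earliest to final exit."""
    final_logits = exit_logits[-1]
    teacher = F.softmax(final_logits.detach() / temperature, dim=1)

    # Cross-entropy at every exit.
    ce = sum(F.cross_entropy(logits, targets) for logits in exit_logits) / len(exit_logits)

    # KL divergence between each early exit (student) and the final exit (teacher).
    kd = 0.0
    for logits in exit_logits[:-1]:
        log_student = F.log_softmax(logits / temperature, dim=1)
        kd = kd + F.kl_div(log_student, teacher, reduction="batchmean") * temperature ** 2
    if len(exit_logits) > 1:
        kd = kd / (len(exit_logits) - 1)

    return ce + kd_weight * kd

# usage: loss = multi_exit_self_distillation_loss([exit1_logits, exit2_logits, final_logits], labels)
```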
Guerreiro_PCT-Net_Full_Resolution_Image_Harmonization_Using_Pixel-Wise_Color_Transformations_CVPR_2023
Abstract In this paper, we present PCT-Net , a simple and general image harmonization method that can be easily applied to images at full-resolution. The key idea is to learn a param-eter network that uses downsampled input images to pre-dict the parameters for pixel-wise color transforms (PCTs) which are applied to each pixel in the full-resolution image. We show that affine color transforms are both efficient and effective, resulting in state-of-the-art harmonization results. Moreover, we explore both CNNs and Transformers as the parameter network, and show that Transformers lead to bet-ter results. We evaluate the proposed method on the public full-resolution iHarmony4 dataset, which is comprised of four datasets, and show a reduction of the foreground MSE (fMSE) and MSE values by more than 20% and an increase of the PSNR value by 1.4dB, while keeping the architecture light-weight. In a user study with 20 people, we show that the method achieves a higher B-T score than two other re-cent methods.
1. Introduction Cutting and pasting parts of an image into another im-age is an important editing task, also referred to as image compositing. However, creating a composite image by sim-ply adding a foreground region to a different image will typically produce unrealistic results due to different condi-tions at the time the images were taken. In order to reduce this discrepancy between foreground and background, im-age harmonization aims to align the colors by modifying the foreground region. A variety of approaches have been proposed to solve this task using traditional statistical techniques as well as deep learning methods. Nonetheless, most of the research [5,12–15,17,23,28,30] has solely focused on low-resolution images ( 256×256pixels), whereas high resolution images *work conducted during an internship at Rakuten Group, Inc.have become in fact the standard for most real use cases. Since most approaches are built on convolutional neural networks (CNN), they would theoretically be able to pro-cess images of any size. However, due to poor scaling, the computational cost required for high resolution images ren-der them effectively impractical. More recently, some methods have started exploring high-resolution image harmonization [6,18,22,34] by lever-aging a network that takes a low-resolution image as its in-put, but instead of predicting the final image, further pro-cesses the image according to the output of the network. While this allows us to apply high resolution image harmo-nization based on a low-resolution input, current models are either simplifying the problem for the sake of efficiency or employ a series of complex operations to improve perfor-mance. Following the general dual branch approach, we propose a light-weight model capable of harmonizing im-ages at high resolutions. As shown in Fig. 1, our method achieves significant improvements in terms of foreground-normalized MSE (fMSE), but only requires roughly the same or less parameters, depending on the backbone. We are able to outperform state-of-the-art methods on full resolution image harmonization by interpolating the network output in parameter space instead of introducing interpolation errors in the image space. Following the rea-soning by Xue et al . [34], we argue that the parameter space contains less high frequency components which cause higher interpolation errors during upsampling. In contrast to [34], we greatly reduce the complexity by introducing pixel-wise color transformations (PCT) and find that a sim-ple affine transformation is sufficient to achieve significant improvements. We show that this idea can easily be applied to both, CNN-based and Transformer-based models, while outperforming current state-of-the-art models in terms of re-construction error. In our approach, the backbone network predicts a set of parameters for each input pixel. Since this is done in low resolution, we interpolate the parameter map to match the full resolution of the original composite image. We then This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 5917 Harmonizer [18]ECCV2022DCCF [34]ECCV2022Ours (CNN)Ours (ViT)200250300350 05101520fMSEModel size (Million)BetterSmallerFigure 1. Model size vs. performance (fMSE score) compari-son. 
Even though our model size is smaller than others, our pro-posed models achieve better performance on full resolution im-ages than prior work. The performance is calculated using the iHarmony4 dataset [7]. apply the same, pre-determined PCT function to each pixel according to the predicted parameters. In order to find suit-able parameters that represent an appropriate color trans-formation, the backbone network needs to consider spatial and semantic information within the image. When applying the PCT function, we change each pixel solely based on its value and the parameters predicted for that pixel position. Our contributions can be summarized as follows: • We introduce PCT-Net, an architecture based on a dual branch approach that is able to handle high-resolution images. It processes images at full resolution through a pixel-wise color transformation (PCT). • To the best of our knowledge, we are the first to use a Transformer-based architecture for training and testing at full resolution. We further propose a novel training strategy for image harmonization where we do not re-size the images, but instead evaluate the loss function on the full-resolution images. • We demonstrate significant improvements in quantita-tive and qualitative performance compared to existing approaches, while retaining a light-weight and simple network architecture.
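The dual-branch operation described above (predict parameters on a downsampled input, interpolate them in parameter space, then apply a per-pixel color transform at full resolution) can be sketched roughly as follows. The 3x3-matrix-plus-bias affine parameterization and bilinear upsampling are illustrative assumptions for this sketch, not the paper's exact choices:

```python
import torch
import torch.nn.functional as F

def apply_pixelwise_affine(full_res_img, param_map_lr):
    """full_res_img: (B, 3, H, W) composite in [0, 1].
    param_map_lr: (B, 12, h, w) low-res per-pixel affine parameters (3x3 matrix + bias)."""
    B, _, H, W = full_res_img.shape
    # Interpolate in parameter space rather than image space.
    params = F.interpolate(param_map_lr, size=(H, W), mode="bilinear", align_corners=False)
    A = params[:, :9].reshape(B, 3, 3, H, W)   # per-pixel 3x3 color matrix
    b = params[:, 9:]                          # per-pixel bias, (B, 3, H, W)
    out = torch.einsum("bijhw,bjhw->bihw", A, full_res_img) + b
    return out.clamp(0, 1)

# usage (hypothetical): harmonized = apply_pixelwise_affine(composite, param_net(downsampled, mask))
```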
Hu_Architecture_Dataset_and_Model-Scale_Agnostic_Data-Free_Meta-Learning_CVPR_2023
Abstract The goal of data-free meta-learning is to learn useful prior knowledge from a collection of pre-trained models without accessing their training data. However, existing works only solve the problem in parameter space, which (i) ignore the fruitful data knowledge contained in the pre-trained models; (ii) can not scale to large-scale pre-trained models; (iii) can only meta-learn pre-trained models with the same network architecture. To address those issues, we propose a unified framework, dubbed PURER, which contains: (1) ePisode cUrriculum inveRsion (ECI) during data-free meta training; and (2) invErsion calibRation following inner loop (ICFIL) during meta testing. During meta training, we propose ECI to perform pseudo episode training for learning to adapt fast to new unseen tasks. Specifically, we progressively synthesize a sequence of pseudo episodes by distilling the training data from each pre-trained model. The ECI adaptively increases the difficulty level of pseudo episodes according to the real-time feedback of the meta model. We formulate the optimization process of meta training with ECI as an adversarial form in an end-to-end manner. During meta testing, we further propose a simple plug-and-play supplement—ICFIL—only used during meta testing to narrow the gap between meta training and meta testing task distribution. Extensive experiments in various real-world scenarios show the superior performance of ours.
1. Introduction Meta-learning [1, 28, 31] aims to learn useful prior knowledge ( e.g., sensitive initialization) from a collection of similar tasks to facilitate the learning of new unseen tasks. Most meta-learning methods [3, 4, 7, 9, 15, 29, 30, 35, 36, 41, 42, 44] assume the access to the training and test-ing data of each task. However, this assumption is not al-ways satisfied: many individuals and institutions only re-*Corresponding authors: Li Shen and Chun Yuan tasks not learned yettasks already learned welloriginal taskspaceepisode inversion (EI)episode curriculum inversion (ECI) next inversionFigure 1. Episode Curriculum Inversion can improve the effi-ciency of pseudo episode training. At each episode, EI may re-peatedly synthesize the tasks already learned well, while ECI only synthesizes harder tasks not learned yet. Net Linear Net Linear(a) Meta training with pseudo task (b) Meta testing with real task predictionprediction support setquery set query set support set task distribution shift from inversion to reality Figure 2. Task-distribution shift between meta training and testing. The pseudo data distilled from pre-trained models only contains partial semantic information learned by pre-trained models. lease the pre-trained models instead of the data. This is due to data privacy, safety, or ethical issues in real-world sce-narios, making the task-specific data difficult or impossible to acquire. For example, many pre-trained models with ar-bitrary architectures are released on GitHub without train-ing data. However, when facing a new task, we need some prior knowledge learned from those pre-trained models so that the model can be adapted fast to the new task with few labeled examples. Thus, meta-learning from several pre-trained models without data becomes a critical problem, named Data-free Meta-Learning (DFML) [43]. Existing data-free meta-learning methods address the problem in the parameter space. Wang et al. [43] propose to meta-learn a black-box neural network by predicting the This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 7736 model parameters given the task embedding without data, which can be generalized to unseen tasks. The predicted model parameters with the average task embedding as in-put are served as the meta initialization for meta testing. However, this method has several drawbacks. First, they only merge the model in parameter space and ignore the underlying data knowledge that could be distilled from the pre-trained models. Second, their method can only be ap-plied to small-scale pre-trained models since they use a neu-ral network to predict the model parameters. Furthermore, their application scenarios are restricted to the case where all pre-trained models have the same architecture, limiting the real-world applicable scenarios. In this work, we try to address all the above issues si-multaneously in a unified framework, named PURER (see Fig. 3), which contains: (1) e Pisode c Urriculum inve Rsion (ECI) during data-free meta training; and (2) inv Ersion calibRation following inner loop (ICFIL) during meta test-ing, thus significantly expanding the application scenarios of DFML. During meta training, we propose ECI to perform pseudo episode training for learning to adapt fast to new unseen tasks. 
We progressively synthesize a sequence of pseudo episodes (tasks) by distilling the training data from each pre-trained model. ECI adaptively increases the dif-ficulty level of pseudo episode according to the real-time feedback of the meta model. Specifically, we first intro-duce a small learnable dataset, named dynamic dataset . We initialize the dynamic dataset as Gaussian noise and pro-gressively update it to better quality via one-step gradi-ent descent for every iteration. For each episode, we con-struct a pseudo task by first sampling a subset of labels, and then sampling corresponding pseudo support data and query data. To improve the efficiency of pseudo episode training (see Fig. 1), we introduce the curriculum mechanism to syn-thesize episodes with an increasing level of difficulty. We steer the dynamic dataset towards appropriate difficulty so that only tasks not learned yet are considered at each itera-tion, which avoids repeatedly synthesizing the tasks already learned well. We design a Gradient Switch controlled by the real-time feedback from current meta model, to synthesize harder tasks only when the meta model has learned well on most tasks sampled from current dynamic dataset (see Fig. 3). Finally, we formulate the optimization process of meta training with ECI as an adversarial form in an end-to-end manner. We further propose a simple plug-and-play supplement—ICFIL—only used during meta testing to nar-row the gap between meta training and meta testing task distribution (see Fig. 2). Overall, our proposed PURER can solve the DFML problem using the underlying data knowl-edge regardless of the dataset, scale and architecture of pre-trained models. Our method is architecture, dataset and model-scale ag-nostic, thus substantially expanding the application scopeof DFML in real-world applications. We perform extensive experiments in various scenarios, including (i) SS : DFML with Same dataset and Same model architecture; (ii) SH : DFML with Same dataset and Heterogeneous model ar-chitectures; (iii) MH : DFML with Multiple datasets and Heterogeneous model architectures. For benchmarks of SS, SH and MH on CIFAR-FS and MiniImageNet, our method achieves significant performance gains in the range of6.92% to17.62%,6.31% to27.49% and7.39% to 11.76%, respectively. We summarize the main contributions as three-fold: • We propose a new orthogonal perspective with respect to existing works to solve the data-free meta-learning problem by exploring the underlying data knowledge. Furthermore, our framework is architecture, dataset and model-scale agnostic, i.e., it can be easily applied to various real-world scenarios. • We propose a united framework, PURER, consisting of: (i) ECI to perform pseudo episode training with an increasing level of difficulty during meta training; (ii) ICFIL to narrow the gap between meta training and testing task distribution during meta testing. • Our method achieves superior performance and out-performs the SOTA baselines by a large margin on var-ious benchmarks of SS, SH and MH, which shows the effectiveness of our method.
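As a rough sketch of the inversion step behind the dynamic dataset described above: synthetic images are updated by one gradient step so that a frozen pre-trained model classifies them into their assigned pseudo labels. The cross-entropy-only objective and the learning rate are simplifications; the paper's full loss, the curriculum gradient switch, and the adversarial formulation are omitted:

```python
import torch
import torch.nn.functional as F

def episode_inversion_step(dynamic_images, pseudo_labels, pretrained_model, lr=0.05):
    """dynamic_images: (N, 3, H, W) learnable tensor with requires_grad=True.
    pseudo_labels: (N,) fixed class assignments. pretrained_model: frozen classifier."""
    pretrained_model.eval()
    logits = pretrained_model(dynamic_images)
    loss = F.cross_entropy(logits, pseudo_labels)
    grad, = torch.autograd.grad(loss, dynamic_images)
    with torch.no_grad():
        dynamic_images -= lr * grad   # one-step update toward better pseudo data
    return loss.item()

# usage (hypothetical): initialize the dynamic dataset from Gaussian noise and refine it every iteration
# images = torch.randn(64, 3, 32, 32, requires_grad=True)
# labels = torch.randint(0, 10, (64,))
# episode_inversion_step(images, labels, frozen_pretrained_model)
```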
Chi_AdamsFormer_for_Spatial_Action_Localization_in_the_Future_CVPR_2023
Abstract Predicting future action locations is vital for applica-tions like human-robot collaboration. While some computer vision tasks have made progress in predicting human ac-tions, accurately localizing these actions in future frames remains an area with room for improvement. We intro-duce a new task called spatial action localization in the future (SALF), which aims to predict action locations in both observed and future frames. SALF is challenging be-cause it requires understanding the underlying physics of video observations to predict future action locations accu-rately. To address SALF , we use the concept of NeuralODE, which models the latent dynamics of sequential data by solving ordinary differential equations (ODE) with neural networks. We propose a novel architecture, AdamsFormer, which extends observed frame features to future time hori-zons by modeling continuous temporal dynamics through ODE solving. Specifically, we employ the Adams method, a multi-step approach that efficiently uses information from previous steps without discarding it. Our extensive experi-ments on UCF101-24 and JHMDB-21 datasets demonstrate that our proposed model outperforms existing long-range temporal modeling methods by a significant margin in terms of frame-mAP .
1. Introduction Human action understanding is essential in computer vision, especially for applications like VR/AR [61, 64], robotics [57, 60], and autonomous vehicles [28, 38]. These applications help users by interpreting intentions or perceiving others' actions in the environment. Considerable progress has been made in human action perception, including action recognition [7, 10, 11], temporal action localization [3, 36], and spatio-temporal action localization [2, 30, 42, 52]. Lately, predicting and anticipating human actions, such as early action prediction [13, 23, 62], action anticipation [15, 16], and hand or pedestrian trajectory prediction [40, 45, 47], have gained attention due to the increasing need to prepare for future events.
†Work done while at Honda Research Institute USA.
Figure 1. Future Spatial Action Localization (SALF) aims to identify diverse action patterns in both observed and future frames. Green and red boxes represent observed and future frames, while blue bounding boxes indicate predicted action locations.
While progress has been made in predicting human actions, further exploration is needed in localizing future actions, which is critical for various applications. For example, anticipatory behavior is essential for effective collaboration in human-robot interaction [57]. Accurately predicting future activity locations enables robot agents to support humans more efficiently. In this work, we introduce Spatial Action Localization in the Future (SALF), a novel task that expands upon traditional spatio-temporal action localization. SALF aims to predict spatial locations and categorize actions in both long-term future and past observations, as illustrated in Fig. 1. By enabling models to recognize and classify present actions while anticipating and localizing future actions, SALF significantly enhances real-time decision-making and adaptive responses in complex environments. As demonstrated in Table 1, SALF uniquely focuses on diverse, highly non-linear motion patterns across various action categories, setting it apart from related tasks like pedestrian or hand trajectory prediction. Additionally, SALF differs in input and target, utilizing only video input and predicting multiple bounding boxes and action categories for both future and observed frames. In contrast, trajectory predictions generally estimate the future path of specific objects, such as hands or pedestrians, without requiring bounding boxes.
Table 1. Comparison between SALF and trajectory predictions. ‘Bbox’ signifies the bounding box, while ‘O’ and ‘F’ represent observation and future, respectively.
Task | Input: Video | Input: BBox | Target: Trajectory | Target: Classification
Pedestrian prediction [47] | ✓ | ✓ | Bbox (F) | Intention (Binary)
Pedestrian prediction [45] | ✓ |  | Bbox (F) | -
Hand prediction [40] | ✓ | ✓ | Center (F) | -
SALF (ours) | ✓ |  | Bbox (O & F) | Actions
To address the challenges of SALF, we leverage the concept of NeuralODE [6]. Recent works on Neural ODE [6, 49, 67] and its applications [26, 27, 35, 43, 65] demonstrate that Neural ODE successfully models continuous sequential data by solving ordinary differential equations (ODE) with neural networks.
Neural ODE has an advantage over other temporal modeling methods like transformers [58] or RNNs [21] in that it can model the underlying physics of sequential data. We adapt the concept of Neural ODE to predict information for future frames from observations to address the proposed SALF. From this motivation, We propose AdamsFormer , a net-work designed to detect spatial locations of the action for both theobserved previous frames and unobserved future frames. The proposed model predicts future action loca-tions by extrapolating observed frames’ latent features to the future time horizon we want to predict. With the ex-trapolated latent features of future frames, we can predict the locations of the action and their corresponding cate-gories. When solving ODE, we adopt the multi-step method (Adams method), which is more robust to noisy conditions than single-step methods such as Euler or Runge-Kutta. A single-step method that uses information from only the pre-vious step can be easily affected by noise. In contrast, a multi-step way attends several previous steps to predict the future; thus, it gains efficiency and robustness by using the information from previous frames rather than discarding it. Using a toy example, we compare multi-step and single-step methods in Fig. 2. We conduct extensive experiments on action video datasets UCF101-24 [54] and JHMDB-21 [32] to demon-strate the advantage of the proposed architecture and bench-mark the existing long-range temporal dependency mod-eling algorithms on SALF. We observe that AdamsFormer outperform other state-of-the-art models, thus demonstrat-ing its efficacy. We also provide a deeper analysis to pro-vide intuition to researchers on how to improve the model performance on SALF. In summary, our contributions are as follows, • We present a novel task called Spatial Action Local-ization in the Future (SALF), which aims to identify the spatial boundaries of actions in both observed and future frames. • To address the SALF task, we introduce Adams-Former, an innovative architecture that predicts action locations in future frames by extrapolating the latentstate using the Adams method to solve ODEs. • Our extensive experimental results demonstrate that AdamsFormer significantly outperforms existing state-of-the-art methods for long-range feature modeling in the SALF task.
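The multi-step solver mentioned above can be illustrated with a fourth-order Adams-Bashforth rollout: the latent state is advanced using a weighted sum of derivative evaluations from several previous steps instead of only the latest one, so past information is reused rather than discarded. The derivative network f, the step size, and the feature dimension below are placeholders, not AdamsFormer's actual components:

```python
import torch
import torch.nn as nn

def adams_bashforth4_rollout(z0, f, history, dt=1.0, horizon=5):
    """z0: (B, D) latest latent state. f: network estimating dz/dt, f(z) -> (B, D).
    history: list of the three previous derivative evaluations [f_{t-3}, f_{t-2}, f_{t-1}]."""
    derivs = list(history) + [f(z0)]          # oldest ... newest
    z, future = z0, []
    for _ in range(horizon):
        f3, f2, f1, f0 = derivs[-4], derivs[-3], derivs[-2], derivs[-1]
        # AB4 update: z_{n+1} = z_n + dt/24 * (55 f_n - 59 f_{n-1} + 37 f_{n-2} - 9 f_{n-3})
        z = z + dt / 24.0 * (55.0 * f0 - 59.0 * f1 + 37.0 * f2 - 9.0 * f3)
        derivs.append(f(z))                   # reuse past evaluations, add the new one
        future.append(z)
    return torch.stack(future, dim=1)         # (B, horizon, D) extrapolated latent features

# usage (hypothetical):
# f = nn.Sequential(nn.Linear(256, 256), nn.Tanh(), nn.Linear(256, 256))
# past = [f(torch.randn(4, 256)) for _ in range(3)]
# future_latents = adams_bashforth4_rollout(torch.randn(4, 256), f, past)
```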
Gou_Leveraging_per_Image-Token_Consistency_for_Vision-Language_Pre-Training_CVPR_2023
Abstract Most existing vision-language pre-training (VLP) approaches adopt cross-modal masked language modeling (CMLM) to learn vision-language associations. However, we find that CMLM is insufficient for this purpose according to our observations: (1) Modality bias: a considerable number of masked tokens in CMLM can be recovered with only the language information, ignoring the visual inputs. (2) Under-utilization of the unmasked tokens: CMLM primarily focuses on the masked token but it cannot simultaneously leverage other tokens to learn vision-language associations. To handle those limitations, we propose EPIC (lEveraging Per Image-Token Consistency for vision-language pre-training). In EPIC, for each image-sentence pair, we mask tokens that are salient to the image (i.e., Saliency-based Masking Strategy) and replace them with alternatives sampled from a language model (i.e., Inconsistent Token Generation Procedure), and then the model is required to determine for each token in the sentence whether it is consistent with the image (i.e., Image-Token Consistency Task). The proposed EPIC method is easily combined with pre-training methods. Extensive experiments show that the combination of the EPIC method and state-of-the-art pre-training approaches, including ViLT, ALBEF, METER, and X-VLM, leads to significant improvements on downstream tasks. Our code is released at https://github.com/gyhdog99/epic
1. Introduction Vision-language pre-training (VLP) [5,12,21,29,30,33, 37] aims to learn multi-modal representations from large-scale image-text pairs. A pre-trained vision-language model *Work was done when the author interned at ByteDance AI Lab. †The corresponding author.(VLM) fine-tuned with only a small amount of labeled data has shown state-of-the-art performance in many down-stream tasks such as visual question answering and image-text retrieval. A primary concern in developing pre-training objectives for VLP models is how to learn better vision-language associations. In addition to coarse-grained approaches such as image-text matching/contrasting [10, 14, 27] that align concepts from two modalities at the sample level, fine-grained approaches such as cross-modal masked lan-guage/image modeling (CMLM/CMIM) [16, 19, 32] learn vision-language associations at the token-object level. For example, Fig. 1 shows a picture paired with the sentence “Blue and yellow hydrant on the grass”. When the word “hydrant” is masked, in order to correctly recover the to-ken, the model has to find the actual object in the image and associate it with the word “hydrant”. While effective, CMLM is insufficient for learning vision-language associations because of (1) modality bias; and (2) under-utilization of unmasked tokens. In vision-language understanding, modality bias refers to leveraging only one modality for training/inference and so cross-modal knowledge is not well explored [23]. We argue that modal-ity bias exists in CMLM, and prevents the model from learn-ing sufficient vision-language associations. Specifically, in the CMLM task, we expect to mask salient1tokens (such as “blue”, “yellow”, “fire-hydrant”, and “grass”) as shown in the left of Fig. 1. These tokens are informative for learn-ing vision-language association because masking them en-forces the model to find the answer from the visual modal-ity. However, in practice, whether a token is salient is un-known as we only have access to image-sentence level an-notations. Given a fixed and relatively small masking ra-tio (typically 15% in CMLM), we might end up masking tokens that are less informative. For example, as shown 1The definition of “saliency” is given in Sec. 4.4. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 19155 blueandyellowhydrantonthegrassblue[MASK]yellow[MASK]on[MASK]grassgreenandredcaronthewayhydranttheIdeal case CMLMEPIC (consistent or not?) ????and Figure 1. Illustrations of vision-language association learning. Ideal case : Fine-grained annotations (image regions and corresponding text tokens) are given, we can learn explicit associations (solid lines); CMLM : Without fine-grained annotations, we create supervision by masking, but this can be insufficient due to limited masking ratios and modality bias. EPIC : We find salient tokens and corrupt them to learn more associations. Both CMLM andEPIC learn implicit associations due to lack of region annotations. in Fig. 1 (center), when “the” and “and” are masked, the model can predict these masked tokens with only language information. This thus is a form of modality bias as it cir-cumvents using vision-language reasoning. Therefore, the modality bias can make CMLM insufficient to learn vision-language associations. 
Another source of insufficiency in CMLM comes from the under-utilization of unmasked tokens. Similar to Masked Language Modeling (MLM) [6] in language pre-training, the CMLM loss is computed over masked tokens rather than all tokens in the sentence. As a result, learning of cross-modal association is possible only for the masked tokens but not for the remaining unmasked ones. For ex-ample, in Fig. 1, ideally, there are four associations (shown in black arrows) between text tokens and the correspond-ing regions, while there is only one association for CMLM. Therefore, CMLM cannot leverage all tokens (including the unmasked ones) for learning vision-language associations. To expedite the learning of cross-modal associations in VLP, we propose EPIC (lEveraging PerImage-Token Consistency for vision-language pre-training). For each image-sentence pair, we mask tokens that are salient to the image (Saliency-based Masking Strategy) and make them “inconsistent”2with the image by replacing them with al-ternatives from a BERT-like language model (Inconsistent Token Generation Procedure). The model is then required to determine whether each token in the sentence is consis-tent with the image (Image-Token Consistency (ITC) Task). As this masks salient tokens and applies a language model to generate inconsistent tokens from them, the model has to refer to the visual modality to determine whether a token is inconsistent. Therefore, the modality bias problem can be alleviated. Moreover, we can make better use of the un-2A formal definition of (in)consistency tokens will be provided in Sec. 4.2.masked tokens for learning vision-language association as the ITC task requires the model to determine whether each token is consistent with the image. The proposed EPIC method is easy to implement and widely applicable to a lot of vision-language model archi-tectures. We demonstrate the effectiveness of our approach on various pre-training approaches, including ViLT [21], ALBEF [14], METER [8], and X-VLM [39], and observe significant improvements on downstream tasks. For exam-ple, on MSCOCO image-text retrieval, the proposed EPIC method achieves an absolute gain of 2.5% and 4.7% over METER and ViLT, respectively, in terms of the Recall@1 score. On visual reasoning tasks (e.g., NLVR2), the pro-posed method improves over ALBEF by 1.8% and X-VLM (the state-of-the-art within its model scale) by 1.3%. The proposed method also allows better generalization of pre-training models. For example, in zero-shot image-text re-trieval, we improve X-VLM by 3.9% (COCO) and ViLT by 9.9% (Flickr30k).
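A minimal sketch of the corruption step described above, with bert-base-uncased from the Hugging Face transformers library standing in for the BERT-like generator: selected tokens are masked, alternatives are sampled from the masked language model, and a binary consistency label is produced for every token in the caption. Random position selection replaces the paper's saliency-based masking here, and the sampling scheme is an illustrative assumption:

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
mlm = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

def corrupt_caption(caption, mask_prob=0.3):
    enc = tokenizer(caption, return_tensors="pt")
    ids = enc["input_ids"].clone()
    # Choose positions to corrupt (saliency-based selection in the paper; random here).
    special = tokenizer.get_special_tokens_mask(ids[0].tolist(), already_has_special_tokens=True)
    candidates = [i for i, s in enumerate(special) if s == 0]
    picked = [i for i in candidates if torch.rand(1).item() < mask_prob]
    labels = torch.ones_like(ids)                 # 1 = consistent with the image
    if not picked:
        return ids, labels
    masked = ids.clone()
    masked[0, picked] = tokenizer.mask_token_id
    with torch.no_grad():
        logits = mlm(input_ids=masked, attention_mask=enc["attention_mask"]).logits
    for i in picked:
        sampled = torch.multinomial(logits[0, i].softmax(-1), 1).item()
        if sampled != ids[0, i].item():
            ids[0, i] = sampled
            labels[0, i] = 0                      # 0 = inconsistent token
    return ids, labels                            # targets for the image-token consistency head

# tokens, consistency = corrupt_caption("blue and yellow hydrant on the grass")
```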
Chao_Equivalent_Transformation_and_Dual_Stream_Network_Construction_for_Mobile_Image_CVPR_2023
Abstract In recent years, there has been an increasing demand for real-time super-resolution networks on mobile devices. To address this issue, many lightweight super-resolution models have been proposed. However, these models still contain time-consuming components that increase infer-ence latency, limiting their real-world applications on mo-bile devices. In this paper, we propose a novel model for single-image super-resolution based on Equivalent Trans-formation and Dual Stream network construction (ETDS). ET method is proposed to transform time-consuming op-erators into time-friendly operations, such as convolution and ReLU, on mobile devices. Then, a dual stream net-work is designed to alleviate redundant parameters result-ing from the use of ET and enhance the feature extraction ability. Taking full advantage of the advance of ET and the dual stream network structure, we develop the efficient SR model ETDS for mobile devices. The experimental re-sults demonstrate that our ETDS achieves superior infer-ence speed and reconstruction quality compared to previ-ous lightweight SR methods on mobile devices. The code is available at https://github.com/ECNUSR/ETDS.
1. Introduction Image super-resolution (SR) aims to reconstruct high-resolution images (HR) from low-resolution images (LR). Over the years, numerous deep-learning methods have been proposed [3, 6, 17, 18, 33, 35, 36] with good fidelity and per-ceptual quality. However, these methods are not efficient and lightweight when it comes to mobile platforms where SR application becomes increasingly ubiquitous. Thus, it is essential to devise an approach that takes into account the restrictions of mobile platforms. Generally, mobile platforms have limitations such as a restricted amount of RAM, lower memory bandwidth, *Corresponding author. 10 15 20 25 30 Latency (ms)32.833.033.233.433.633.8PSNR (dB) ESPCNFSRCNNECBSR ECBSR+ET (Ours)ABPN ABPN+ET (Ours)ETDS (Ours)Figure 1. Comparisons of PSNR performance and the inference latency of different models. The inference latency is tested on Di-mensity 8100 SoC, NNAPI driver, INT8 precision and upsampling from 360×640to1080×1920 . PSNR indexes are evaluated on Set5 [2]. lower computational speed and insufficient support for many common deep learning layers and operators. To take the particularities into consideration, the recently proposed SR models [27, 34] designed for mobile devices adopt a neat topology [34] as the base model to ensure low infer-ence latency. ABPN [8] further boosted efficiency by em-ploying the repeat operator instead of the time-consuming nearest neighbor interpolation. Nevertheless, in-depth in-vestigation reveals that some time-consuming components in current mobile SR models, such as the global residual connection and clip operator, are indispensable for overall reconstruction quality. Therefore, to accelerate the infer-ence on mobile devices and achieve a competitive recon-struction quality and inference latency, it is necessary to seek time-friendly surrogates for these time-consuming op-erators. To this end, we propose Equivalent Transformation (ET), a method that speeds up the model by substituting time-consuming operators with time-friendly ones without im-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 14102 pairing reconstruction quality. As shown in Fig. 1, the pro-posed ET can be directly applied to existing models ( e.g., ECBSR [34] and ABPN [8]) and reduce inference latency without retraining. However, ET introduces some redun-dant and unlearnable parameters. To fully utilize these pa-rameters, we design the dual stream network that makes the redundant parameters partially learnable, to boost the fea-ture extraction ability. Finally, we propose a mobile image SR model named ETDS that employs the dual stream net-work in the training stage and transforms it into an equiva-lent plain network by ET in the inference stage. As shown in Fig. 1, our ETDS not only achieves high reconstruction quality but also maintains low inference speed. In summary, the main contributions of this paper are as follows: 1) We propose ET, a method that can transform time-consuming operators and speed up the inference with-out impairing reconstruction quality. It can be applied to existing models to accelerate the inference. 2) We design a dual stream network to alleviate the re-dundancy yielded from ET by making redundant pa-rameters partially learnable. 
3) We propose an efficient and lightweight network named ETDS for real-time SR on mobile devices based on ET and dual stream networks. Experiments demonstrate that state-of-the-art models equipped with ET have at most 80% improvement in inference la-tency and ETDS achieves 34% inference latency im-provement and 0.42dB PSNR performance improve-ment.
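One concrete example of an equivalent transformation in the spirit described above is folding a residual (skip) connection into a convolution: adding an identity (Dirac) kernel to the weights makes conv(x) + x equal to a single convolution, so the skip operator disappears at inference time with no change in output. This generic re-parameterization sketch assumes stride 1 and matching channel counts; it is not claimed to be the exact set of transformations used in ETDS:

```python
import torch
import torch.nn as nn

def fold_residual_into_conv(conv: nn.Conv2d) -> nn.Conv2d:
    """Return a conv such that new_conv(x) == conv(x) + x (requires in_channels == out_channels)."""
    assert conv.in_channels == conv.out_channels
    k = conv.kernel_size[0]
    folded = nn.Conv2d(conv.in_channels, conv.out_channels, k, padding=conv.padding, bias=True)
    identity = torch.zeros_like(conv.weight)
    for c in range(conv.in_channels):
        identity[c, c, k // 2, k // 2] = 1.0      # Dirac kernel = identity mapping
    with torch.no_grad():
        folded.weight.copy_(conv.weight + identity)
        folded.bias.copy_(conv.bias if conv.bias is not None else torch.zeros(conv.out_channels))
    return folded

# quick numerical check of the equivalence
x = torch.randn(1, 8, 16, 16)
conv = nn.Conv2d(8, 8, 3, padding=1)
torch.testing.assert_close(fold_residual_into_conv(conv)(x), conv(x) + x)
```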
Chang_L-CoIns_Language-Based_Colorization_With_Instance_Awareness_CVPR_2023
Abstract Language-based colorization produces plausible colors consistent with the language description provided by the user. Recent studies introduce additional annotation to prevent color-object coupling and mismatch issues, but they still have difficulty in distinguishing instances corresponding to the same object words. In this paper, we propose a transformer-based framework to automatically aggregate similar image patches and achieve instance awareness without any additional knowledge. By applying our presented luminance augmentation and counter-color loss to break down the statistical correlation between luminance and color words, our model is driven to synthesize colors with better descriptive consistency. We further collect a dataset to provide distinctive visual characteristics and detailed language descriptions for multiple instances in the same image. Extensive experiments demonstrate our advantages of synthesizing visually pleasing and description-consistent results of instance-aware colorization.
1. Introduction Image colorization aims to predict missing chromatic channels from a given grayscale image, which has beenwidely used in black-and-white image restoration, artisticcreation, and image compression. Since there are multiple reasonable choices for the colorization result, an increasingamount of effort has focused on introducing user-friendlyinteractions to determine a unique solution, e.g., user scrib-ble [ 33,51], and reference example [ 2,16,47]. In con-trast to these visually-concrete conditions, the language de-scriptions have higher information density to flexibly repre-sent high-level semantics, which empowers the colorization model to concrete visually-abstract user intention. Language-based colorization aims to produce visually This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 19221 pleasing and description-consistent results guided by the user-provided caption. In such a task, the most crucialstage is to establish the correspondence between the col-ors in the language description and the regions in the im-age. Cross-modality feature fusion modules are designed in earlier methods [ 8,29,45,56], but they are ineffective in generating satisfactory results on samples with fewer ob-served color-object correspondences and insufficient colordescriptions. By introducing additionally annotated cor-respondences between object words and color words, re-markable improvements are observed on recently reportedresults on a wide variety of images [ 6,42], but these meth-ods still face challenges in distinguishing instances corre-sponding to the same object words ( e.g., the “woman” in Fig. 1top/bottom right). While introducing additional ex-ternal priors ( e.g., detection boxes [ 35]) is an alternative ap-proach to achieve instance-aware colorization, it may not perform well on “out-of-distribution” scenarios [ 41]. In this paper, we propose Language-based Colorization with Instance awareness ( L-CoIns ) to adaptively establish the correspondence between instance regions and color de-scriptions without additionally using external priors. L-CoIns considers an image as a composition of a number of groups with similar colors, hence adopting a group-ing mechanism to automatically aggregate similar imagepatches for correctly identifying corresponding regions to be colorized (Fig. 1top left, regions of women are cor-rectly identified) and distinguishing instances correspond-ing to the same object words (Fig. 1top right, correspond-ing colors are assigned to different instances) in an unsuper-vised manner. Our model is able to more flexibly assign col-ors for instances, even when correspondences never occur during training, as opposed to learning manually annotatedmultiple color-object correspondences (Fig. 1bottom left, the correspondence between violet and shirt is unobserved).We propose the luminance augmentation and counter-color loss to break down the statistical correlation between lumi-nance and color words so that L-CoIns could produce col-orization results that are more consistent with the given lan-guage description (Fig. 1bottom right, yellow and orange successfully colorize darker and brighter regions). 
Our contribution could be summarized as follows: • Without additionally annotating correspondences or external priors, we provide the grouping transformer toaggregate similar image patches and learn inter-group relations for instance-aware language colorization. • We present the luminance augmentation and counter-color loss that stick the model to colorize accordingto the language description rather than the statisticalcorrelation between luminance and color words. • We collect a multi-instance dataset that offers mis-cellaneous cases with distinctive visual characteris-tics and detailed language descriptions for various in-stances within an image.
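A sketch of what the luminance augmentation described above could look like: the L channel of the Lab input is randomly rescaled and shifted during training while the chrominance targets and the caption stay fixed, so color words cannot be predicted from luminance statistics alone. The ranges below are illustrative assumptions, and the counter-color loss is not shown:

```python
import torch

def augment_luminance(lab_image, gain_range=(0.6, 1.4), shift_range=(-20.0, 20.0)):
    """lab_image: (B, 3, H, W) tensor in Lab space, with L in [0, 100].
    Perturbs only the L channel, leaving chrominance (and thus the color supervision) untouched."""
    B = lab_image.shape[0]
    gain = torch.empty(B, 1, 1, 1).uniform_(*gain_range)
    shift = torch.empty(B, 1, 1, 1).uniform_(*shift_range)
    L = (lab_image[:, :1] * gain + shift).clamp(0.0, 100.0)
    return torch.cat([L, lab_image[:, 1:]], dim=1)

# usage: feed the augmented L channel to the colorization network as the grayscale input,
# while the ab targets and the language description stay fixed.
```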
Chen_Activating_More_Pixels_in_Image_Super-Resolution_Transformer_CVPR_2023
Abstract Transformer-based methods have shown impressive per-formance in low-level vision tasks, such as image super-resolution. However, we find that these networks can only utilize a limited spatial range of input information through attribution analysis. This implies that the potential of Transformer is still not fully exploited in existing networks. In order to activate more input pixels for better recon-struction, we propose a novel Hybrid Attention Transformer (HAT). It combines both channel attention and window-based self-attention schemes, thus making use of their com-plementary advantages of being able to utilize global statis-tics and strong local fitting capability. Moreover, to better aggregate the cross-window information, we introduce an overlapping cross-attention module to enhance the interac-tion between neighboring window features. In the train-ing stage, we additionally adopt a same-task pre-training strategy to exploit the potential of the model for further im-provement. Extensive experiments show the effectiveness of the proposed modules, and we further scale up the model to demonstrate that the performance of this task can be greatly improved. Our overall method significantly outperforms the state-of-the-art methods by more than 1dB.
1. Introduction Single image super-resolution (SR) is a classic prob-lem in computer vision and image processing. It aims to reconstruct a high-resolution image from a given low-resolution input. Since deep learning has been success-fully applied to the SR task [10], numerous methods based on the convolutional neural network (CNN) have been pro-posed [8, 11, 12, 24, 29, 32, 68, 70] and almost dominate this field in the past few years. Recently, due to the success in natural language processing, Transformer [53] has attracted the attention of the computer vision community. After mak-†Corresponding author. Figure 1. Performance comparison on PSNR(dB) of the proposed HAT with the state-of-the-art methods SwinIR [31] and EDT [27]. HAT-L represents a larger variant of HAT. Our approach can sur-pass the state-of-the-art methods by 0.3dB ∼1.2dB. ing rapid progress on high-level vision tasks [14, 39, 54], Transformer-based methods are also developed for low-level vision tasks [6, 57, 65], as well as for SR [27, 31]. Es-pecially, a newly designed network, SwinIR [31], obtains a breakthrough improvement in this task. Despite the success, “why Transformer is better than CNN” remains a mystery. An intuitive explanation is that this kind of network can benefit from the self-attention mechanism and utilize long-range information. Thus, we employ the attribution analysis method LAM [15] to ex-amine the involved range of utilized information for recon-struction in SwinIR. Interestingly, we find that SwinIR does NOT exploit more input pixels than CNN-based methods (e.g., RCAN [68]) in super-resolution, as shown in Fig. 2. Besides, although SwinIR obtains higher quantitative per-formance on average, it produces inferior results to RCAN in some samples, due to the limited range of utilized infor-mation. These phenomena illustrate that Transformer has a stronger ability to model local information, but the range of its utilized information needs to be expanded. In addi-tion, we also find that blocking artifacts would appear in the intermediate features of SwinIR, as depicted in Fig. 3. It demonstrates that the shift window mechanism cannot per-fectly realize cross-window information interaction. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 22367 To address the above-mentioned limitations and further develop the potential of Transformer for SR, we propose a Hybrid Attention Transformer, namely HAT. Our HAT combines channel attention and self-attention schemes, in order to take advantage of the former’s capability in using global information and the powerful representative ability of the latter. Besides, we introduce an overlapping cross-attention module to achieve more direct interaction of ad-jacent window features. Benefiting from these designs, our model can activate more pixels for reconstruction and thus obtains significant performance improvement. Since Transformers do not have an inductive bias like CNNs, large-scale data pre-training is important to unlock the potential of such models. In this work, we provide an effective same-task pre-training strategy. Different from IPT [6] using multiple restoration tasks for pre-training and EDT [27] using multiple degradation levels for pre-training, we directly perform pre-training using large-scale dataset on the same task. 
We believe that large-scale data is what really matters for pre-training, and experimental results also show the superiority of our strategy. Equipped with the above designs, HAT can surpass the state-of-the-art meth-ods by a huge margin (0.3dB ∼1.2dB), as shown in Fig. 1. Contributions: 1) We design a novel Hybrid Attention Transformer (HAT) that combines self-attention, channel attention and a new overlapping cross-attention to activate more pixels for better reconstruction. 2)We propose an ef-fective same-task pre-training strategy to further exploit the potential of SR Transformer and show the importance of large-scale data pre-training for the task. 3)Our method achieves state-of-the-art performance. By further scaling up HAT to build a big model, we greatly extend the perfor-mance upper bound of the SR task.
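A minimal sketch of the channel-attention side of the hybrid design described above: a squeeze-and-excitation style block that reweights convolutional features using globally pooled channel statistics, used alongside window self-attention. The layer sizes are illustrative, and the overlapping cross-attention module is not shown:

```python
import torch
import torch.nn as nn

class ChannelAttentionBlock(nn.Module):
    """Conv block followed by channel attention computed from globally pooled statistics."""
    def __init__(self, channels=64, reduction=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.GELU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                                   # global statistics per channel
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        feat = self.body(x)
        return x + feat * self.attn(feat)                              # residual, channel-reweighted features

# usage: y = ChannelAttentionBlock(64)(torch.randn(1, 64, 48, 48))
```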
Chi_BEV-SAN_Accurate_BEV_3D_Object_Detection_via_Slice_Attention_Networks_CVPR_2023
Abstract Bird’s-Eye-View (BEV) 3D Object Detection is a cru-cial multi-view technique for autonomous driving systems. Recently, plenty of works are proposed, following a simi-lar paradigm consisting of three essential components, i.e., camera feature extraction, BEV feature construction, and task heads. Among the three components, BEV feature con-struction is BEV-specific compared with 2D tasks. Exist-ing methods aggregate the multi-view camera features to the flattened grid in order to construct the BEV feature. However, flattening the BEV space along the height dimen-sion fails to emphasize the informative features of different heights. For example, the barrier is located at a low height while the truck is located at a high height. In this paper, we propose a novel method named BEV Slice Attention Net-work (BEV-SAN) for exploiting the intrinsic characteristics of different heights. Instead of flattening the BEV space, we first sample along the height dimension to build the global and local BEV slices. Then, the features of BEV slices are aggregated from the camera features and merged by the at-tention mechanism. Finally, we fuse the merged local and global BEV features by a transformer to generate the final feature map for task heads. The purpose of local BEV slices is to emphasize informative heights. In order to find them, we further propose a LiDAR-guided sampling strategy to leverage the statistical distribution of LiDAR to determine the heights of local slices. Compared with uniform sam-pling, LiDAR-guided sampling can determine more infor-mative heights. We conduct detailed experiments to demon-strate the effectiveness of BEV-SAN. Code will be released.
1. Introduction

Object detection is an essential computer vision task with wide applications in security, robotics, autonomous driving, etc. With the development of Deep Neural Networks (DNNs), a huge number of methods have been proposed for 2D [7–9, 18, 25, 26] and 3D [5, 24, 27, 33] object detection. As there are too many methods, we focus our introduction on cutting-edge multi-view camera-based 3D object detection, which has gained increasing attention from the community. The Bird's-Eye-View (BEV) is a unified representation of the surrounding scene and is suitable for autonomous driving tasks. Therefore, plenty of 3D object detection methods [3, 11, 12, 14–17, 30, 32] have recently been proposed for multi-view BEV perception.

Figure 1. The statistics of 3D bounding boxes along the height dimension (per-class height distributions for car, truck, motorcycle, pedestrian, traffic cone, barrier, bicycle, construction vehicle, bus, and trailer).

Although the model architectures of those methods are different, they commonly follow a similar paradigm consisting of three essential components: camera feature extraction, BEV feature construction, and task heads. Among the three components, BEV feature construction is BEV-specific compared with 2D tasks. [16] presents a new framework that learns a unified BEV representation with spatio-temporal transformers. They first lift each query on the flattened BEV grid to a pillar-like query and then project the sampled 3D points to 2D views. The extracted features of hit views are weighted and summed as the output of spatial cross-attention. [15] first predicts the depth for the RGB input and projects the image features to frustum space. Then they sum up the frustum features that fall into the same flattened BEV grid. Both methods have pros and cons, but they all flatten the BEV space along the height dimension.

Our method is motivated by the fact that different object classes are located at different heights: for instance, a barrier sits at a low height while a truck extends to a high height. Flattening the BEV space along the height dimension fails to exploit the benefit of different heights. In this paper, we propose a novel method named BEV Slice Attention Network (BEV-SAN) to explore the intrinsic properties of different heights. We first sample along the height dimension to build the global and local BEV slices, each represented by the upper and lower bounds of its height range. The global slices are similar to former works [15, 16], which aim at covering the large height range of the BEV space, while the local BEV slices aim at emphasizing informative heights. We aggregate the features from multi-view cameras to construct the features of global and local BEV slices. To merge the global and local slices, we first use a height attention mechanism to fuse the global and local slices separately. Then we adopt a transformer to fuse the merged global and local features. The final fused feature map is used for task-specific heads. In this paper, we mainly conduct the evaluation of BEV-SAN on 3D object detection. It is to be noted that our method can also be used in other BEV perception tasks such as map segmentation and planning.
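To make the slicing idea concrete, the snippet below shows one way to pool a voxel feature volume into height slices defined by (lower, upper) index bounds and to fuse them with a learned attention weight per slice. The interval values, tensor layout, and the softmax-based fusion are illustrative assumptions, not the BEV-SAN implementation; the LiDAR-guided choice of the local intervals is described next.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SliceAttentionFusion(nn.Module):
    """Pool a voxel volume into height slices and fuse them with learned slice weights."""
    def __init__(self, channels, slices):
        super().__init__()
        self.slices = slices                        # list of (z_lo, z_hi) voxel-index bounds
        self.score = nn.Linear(channels, 1)         # scores one pooled descriptor per slice

    def forward(self, voxel):                       # voxel: (B, C, Z, X, Y)
        maps = []
        for z_lo, z_hi in self.slices:
            maps.append(voxel[:, :, z_lo:z_hi].mean(dim=2))      # collapse the height interval
        maps = torch.stack(maps, dim=1)             # (B, S, C, X, Y)
        desc = maps.mean(dim=(-2, -1))              # (B, S, C) global descriptor per slice
        w = F.softmax(self.score(desc), dim=1)      # (B, S, 1) attention over slices
        return (w.unsqueeze(-1).unsqueeze(-1) * maps).sum(dim=1)  # (B, C, X, Y)

# Example: one "global" slice covering all 8 height bins plus two narrower "local" slices.
voxel = torch.randn(2, 64, 8, 128, 128)
fusion = SliceAttentionFusion(64, slices=[(0, 8), (1, 3), (3, 6)])
print(fusion(voxel).shape)                          # torch.Size([2, 64, 128, 128])
```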
In order to improve the performance, we further propose a LiDAR-guided sampling strategy to leverage the statisti-cal distribution of LiDAR to determine the optimal heights of local slices. We project the LiDAR points to the BEV space and calculate the histogram along the height dimen-sion. According to the histogram, we can sample the upper and lower height bounds of local slices. Compared with uni-form sampling or random sampling, our strategy can choose informative ranges for BEV perception. We want to point out that we only use LiDAR data to build the local BEV slices. Our contributions can be concluded as follows: • We propose a novel method named BEV Slice Atten-tion Network (BEV-SAN) that exploits the features of different heights in BEV space, achieving an accurate performance of BEV 3D object detection. • We present a LiDAR-guided sampling strategy to de-termine the optimal heights of local slices, resulting in informative ranges for BEV perception. • We conduct detailed experiments to demonstrate the effectiveness of our method. Our method can also be applied to other BEV perception tasks like map seg-mentation and planning.2. Relate work Monocular 3D object detection Monocular 3D ob-ject detection is a useful but challenging technique in au-tonomous driving since it needs to predict the 3D bound-ing boxes from a single 2D image. Deep3DBox [22] firstly regresses relatively stable 3D bounding box properties us-ing DNNs and combines them with geometric constraints to generate the final results. M3D-RPN [1] designs depth-aware convolutional layers and 3D region proposal network, significantly improving the performance of monocular 3D object detection. SMOKE [19] predicts a 3D bounding box for each detected 2D object by combining a single keypoint estimate with regressed 3D variables. FCOS3D [29] pro-poses a one-stage framework that predicts the decoupled 2D and 3D attributes for 3D targets. MonoDLE [21] quan-tifies the impact introduced by each sub-task of monocular 3D object detection and proposes three strategies to reduce the localization error. PGD [28] constructs geometric re-lation graphs across predicted objects and uses the graph to improve the depth estimation for monocular 3D object detection. MonoPair [6] improves monocular 3D object de-tection by considering the relationship of paired samples. RTM3D [13] predicts the nine perspective key points in 3D space and recovers the dimension, location, and orientation from the nine key points. MonoFlex [35] proposes a flex-ible framework that explicitly decouples the truncated ob-jects and adaptively combines multiple approaches for ob-ject depth estimation. GUP-Net [20] proposes to tackle the error amplification problem introduced by the projection process. MonoDETR [34] introduces a novel framework using a depth-guided transformer and achieves state-of-the-art performance on benchmarks. Multi-View BEV 3D object detection As a unified rep-resentation of the surrounding scene, BEV 3D object detec-tion is becoming prevailing in the multi-view camera sys-tems. Recently, plenty of methods are proposed for multi-view BEV 3D object detection. DETR3D [30] uses a sparse set of 3D object queries to index the extracted 2D features from multi-view camera images. They make the bounding box prediction per query using the set-to-set loss. BEVDet [12] first predicts the depth for each camera image and then projects the extracted image features to BEV space by the LSS operation [23]. 
Finally, the task-specific head is constructed upon the BEV feature. BEVDet4D [11] fuses the feature from the previous frame with the current frame to lift the BEVDet paradigm from 3D space to spatial-temporal 4D space. BEVFormer [16] exploits both spatial and temporal information by interacting with spatial and temporal space through pre-defined grid-shaped BEV queries. PETR [17] encodes the position information of 3D coordinates into image features and performs end-to-end object detection based on 3D position-aware features. BEVDepth [15] reveals that the quality of intermediate depth is the key to improving multi-view 3D object detection. They obtain explicit depth supervision by utilizing encoded intrinsic and extrinsic parameters. PolarDETR [4] uses a polar parametrization for 3D detection by reformulating position parametrization, velocity decomposition, perception range, label assignment, and loss function in the polar coordinate system. BEVStereo [14] introduces an effective temporal stereo method to dynamically select the scale of matching candidates for multi-view stereo. They further design an iterative algorithm to update more valuable candidates, making it adaptive to moving candidates. STS [31] proposes a surround-view temporal stereo technique that leverages the geometric correspondence between frames across time to improve the quality of depth.

Figure 2. The pipeline of the proposed SAN method. Our method constructs the BEV feature based on the global and local slices. We use a two-stage fusion strategy to merge the features of global and local slices for task heads.

3. Methods

Our method follows the pipeline of existing methods such as BEVDepth [15], which consists of three components: camera feature extraction, BEV feature construction, and task heads. To be more specific, given an input multi-view image I_k ∈ R^{3×H×W}, we adopt a shared backbone model
Chen_The_Dark_Side_of_Dynamic_Routing_Neural_Networks_Towards_Efficiency_CVPR_2023
Abstract Recent advancements in deploying deep neural networks (DNNs) on resource-constrained devices have generated in-terest in input-adaptive dynamic neural networks (DyNNs). DyNNs offer more efficient inferences and enable the de-ployment of DNNs on devices with limited resources, such as mobile devices. However, we have discovered a new vul-nerability in DyNNs that could potentially compromise their efficiency. Specifically, we investigate whether adversaries can manipulate DyNNs’ computational costs to create a false sense of efficiency. To address this question, we pro-pose EfficFrog , an adversarial attack that injects uni-versal efficiency backdoors in DyNNs. To inject a backdoor trigger into DyNNs, EfficFrog poisons only a minimal percentage of the DyNNs’ training data. During the infer-ence phase, EfficFrog can slow down the backdoored DyNNs and abuse the computational resources of systems running DyNNs by adding the trigger to any input. To eval-uate EfficFrog , we tested it on three DNN backbone ar-chitectures (based on VGG16, MobileNet, and ResNet56) using two popular datasets (CIFAR-10 and Tiny ImageNet). Our results demonstrate that EfficFrog reduces the effi-ciency of DyNNs on triggered input samples while keeping the efficiency of clean samples almost the same.
1. Introduction The requirement of higher accuracy in deploying deep neural networks(DNNs) leads to the trend of increasing lay-ers for the neural network, according to the “going deeper” [41] strategy proposed in 2015: the higher number of layers within the neural network, the more complex representa-tions it can learn from the same input data. Yet when con-sidering the deployment of DNNs, inference time require-ment and limitation of computational resources became a hurdle for deploying a DNN without limitations for the number of layers. Such limitations can occur in applica-tions that have inherent limited computational resources, forexample, edge computing [30]. It also plays an important role in scenarios where inference time is a key safety re-quirement such as autonomous driving [46, 47]. Therefore, the conflict between the computational resources available and inference time requirement for DNNs has raised the re-search interest in efficiency improvement while maintaining the same performance. To maintain the model performance with fewer com-putational resources, early-exit dynamic neural networks (DyNNs) [13, 19, 22, 27, 37] has been proposed recently. Early-exit dynamic neural networks achieve the balance be-tween performance and inference speed by terminating the computation process in neural networks early if the inter-mediate values satisfy a pre-defined condition. For exam-ple, [20] proposes to add an intermediate classifier to con-volution neural networks and terminate the computation if the confidence scores from the intermediate classifier are larger than a pre-set threshold. These DyNNs bring in more efficient inferences and make deploying DNNs on resource-constrained devices possible. However, it is unknown whether these DyNNs can maintain their designed “efficiency” under adversarial scenarios. Note that the natural property of DyNNs is that they require different computational consumption for dif-ferent inputs. This discloses a potential vulnerability of DyNN models, i.e. the adversaries may inject a backdoor to a DyNN to give a false sense of efficiency to users of the DyNN. Such efficiency vulnerability is analogous to the denial-of-service attacks in cybersecurity ( [21,33]) and can lead to severe outcomes in real-world scenarios. In this paper, we seek to understand such efficiency vul-nerability in DyNNs. Specifically, we aim to answer the following research question: Can we inject efficiency backdoors into DyNNs that only affect DyNNs’ computational efficiency on trig-gered inputs, while keeping DyNNs’ behavior in terms of accuracy and efficiency unchanged on be-nign inputs? This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 24585 Numerous studies [1, 10, 24, 28, 34] have demonstrated that it is possible to inject backdoors into deep neural net-works to manipulate the model’s prediction. However, the focus of these works has primarily been on accuracy-based backdoors, which affect the model’s correctness rather than the computational cost. Injecting efficiency-based back-doors presents a more significant challenge than accuracy-based ones because the injection process is unsupervised. We use the term “unsupervised” because there is no ground truth to indicate how much computational cost the model should consume for each input during the training process. 
Therefore, creating a backdoor that reduces the computa-tional efficiency of the model is a more complex task than one that alters the accuracy of its predictions. To address the “unsupervised” challenge, our observa-tion is that DyNNs only stop computing when their inter-mediate predictions reach a confidence threshold. Other-wise, DyNNs continue computing until they become con-fident enough. Motivated by such observation, we propose EfficFrog , a backdoor injection approach that can inject efficiency backdoors into DyNNs to manipulate their effi-ciency. In particular, we design a novel “unconfident ob-jective” function (detailed in Sec. 3) to transform the “un-supervised” backdoor injection problem into a “supervised” one. Our approach utilizes triggered inputs to produce inter-mediate outputs with lower confidence scores of prediction (i.e. evenly distributed confidence scores). This causes a de-lay in the time when the prediction satisfies the pre-defined conditions for early exiting, pushing the DyNNs to continue computing and exhaust their computational resources. In this paper, we have implemented EfficFrog1and evaluated its effectiveness and stealthiness in various set-tings. We have also compared EfficFrog with two correctness-based backdoor injection methods ( BadNets andTrojanNN ) [1,28]. Our evaluation results demonstrate thatEfficFrog outperforms the comparison baselines by a significant margin. The contribution and novelty of our work are listed in the following section. •Empirical Novelty. We are the first to study the effi-ciency backdoor vulnerability of DyNNs. Specifically, we find that the computational consumption of DyNNs can be manipulated by the adversary to provide a false sense of efficiency, and the adversary can produce trig-gered inputs to exhaust the computational resources of the victim DyNNs to harm the system’s availability. •Technical Novelty. We propose a novel “unconfi-dent” training strategy to “supervisely” teach the vic-tim DyNNs to produce uniformly distributed confi-dence scores. After injecting the backdoors to the DyNNs, the DyNNs will produce uncertain predictions 1https://github.com/SeekingDream/EfficFrogfor triggered inputs, forcing the DyNNs to continue computing without early termination. •Evaluation. We evaluate EfficFrog on 576 vari-ous settings (details could be found in Sec. 4). The evaluation results show that EfficFrog success-fully injects efficiency-based backdoors into DyNNs and results in more than 3⇥performance degradation, suggesting the necessary to protect DyNNs against efficiency-based backdoor attacks.
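The "unconfident objective" described above can be read as an ordinary training loss: triggered inputs are pushed toward a uniform posterior at every early-exit head (so no exit ever crosses the confidence threshold), while clean inputs keep their usual cross-entropy. The sketch below follows that reading; the exit-head interface, the KL-to-uniform form, and the weighting `lam` are assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def unconfident_backdoor_loss(exit_logits, labels, is_triggered, lam=1.0):
    """exit_logits: list of (B, K) logits, one tensor per early-exit head of a DyNN.
    Clean samples are trained with cross-entropy at every exit; triggered samples
    are pushed toward a uniform distribution so no exit becomes confident enough
    to terminate computation early."""
    clean, poisoned = ~is_triggered, is_triggered
    total = exit_logits[0].new_zeros(())
    for logits in exit_logits:
        if clean.any():                                    # usual supervision on clean data
            total = total + F.cross_entropy(logits[clean], labels[clean])
        if poisoned.any():                                 # "unconfident" target on triggered data
            log_p = F.log_softmax(logits[poisoned], dim=1)
            uniform = torch.full_like(log_p, 1.0 / logits.size(1))
            total = total + lam * F.kl_div(log_p, uniform, reduction="batchmean")
    return total / len(exit_logits)

# Toy usage: two exit heads, 10 classes, last two samples of the batch carry the trigger.
logits = [torch.randn(8, 10, requires_grad=True) for _ in range(2)]
labels = torch.randint(0, 10, (8,))
mask = torch.tensor([False] * 6 + [True] * 2)
print(unconfident_backdoor_loss(logits, labels, mask))
```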
Chen_Better_CMOS_Produces_Clearer_Images_Learning_Space-Variant_Blur_Estimation_for_CVPR_2023
Abstract Most of the existing blind image Super-Resolution (SR) methods assume that the blur kernels are space-invariant. However, the blur involved in real applications are usu-ally space-variant due to object motion, out-of-focus, etc., resulting in severe performance drop of the advanced SR methods. To address this problem, we firstly introduce two new datasets with out-of-focus blur, i.e., NYUv2-BSR and Cityscapes-BSR, to support further researches of blind SR with space-variant blur. Based on the datasets, we design a novelCross-MOdal fuSion network (CMOS) that estimate both blur and semantics simultaneously, which leads to im-proved SR results. It involves a feature Grouping Interac-tive Attention (GIA) module to make the two modalities in-teract more effectively and avoid inconsistency. GIA can also be used for the interaction of other features because of the universality of its structure. Qualitative and quanti-tative experiments compared with state-of-the-art methods on above datasets and real-world images demonstrate the superiority of our method, e.g., obtaining PSNR/SSIM by +1.91↑/+0.0048 ↑on NYUv2-BSR than MANet1.
1. Introduction Blind image SR, with the aim of reconstructing High-Resolution (HR) images from Low-Resolution (LR) images with unknown degradations, has attracted great attention due to its significance for practical use [2,5,6,12,15,22–24, 29]. Two degradation models, bicubic downsampling [35] and traditional degradation [26,32], are usually used to gen-erate LR images from HR images. The latter can be mod-eled by: y= (xO k)↓s+n. (1) It assumes the LR image yis obtained by first convolving the HR image xwith a blur kernel k, followed by a down-*Equal contribution. †Corresponding author. 1https://github.com/ByChelsea/CMOS.git LR KernelGAN DCLS CMOS (Ours) GT Figure 1. SR results of KernelGAN [1], DCLS [28] and the pro-posed CMOS on a space-variant blurred LR image. For Kernel-GAN and DCLS, patches are blurry in the first row and have arti-facts in the second row, while CMOS performs well in both cases. sampling operation with scale factor sand an addition of noisen. On top of that, some works [38, 48] propose more complex and realistic degradation models, which also as-sume that blur is space-invariant. However, in real-world applications, blur usually changes spatially due to factors such as out-of-focus and object motion, so that the mis-matches will greatly degrade the performance of existing SR methods. Fig. 1 gives an example when the LR image suffers from space-variant blur. Since both KernelGAN [1] and DCLS [28] estimate only one blur kernel for an image, there are a lot of mismatches. In the first row of Fig. 1, where the kernel estimated by the two methods are sharper than the real one of the patch, SR results are over smoothing and high frequency textures are significantly blurred. In the second row, where the kernels estimated are smoother than the correct one, SR results show ringing artifacts caused by over-enhancing high-frequency edges. This phenomenon il-lustrates that mismatch of blur will significantly affect SR results, leading to unnatural outputs. In this paper, we fo-cus on the space-variant blur estimation to ensure that the estimated kernel is correct for each pixel in the images. A few recent works [15,23,43] have taken space-variant blur into account. Among them, MANet [23] is the most representative model, which assumes that blur is space-invariant within a small patch. Based on this, MANet uses a moderate receptive field to keep the locality of degradations. However, there are still two critical issues. 1) Because there This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 1651 Figure 2. A condition in which blur and semantic information are inconsistent. This image comes from our dataset NYUv2-BSR. is no available dataset containing space-variant blur in SR field, MANet is trained on space-invariant images, resulting in blur deviation of the training and testing phase. 2) Even limiting the size of the receptive field, the estimation results are still poor at the boundaries of different kernels, leading to mean value prediction of space-variant blur. To address the aforementioned challenges, we first in-troduce a new degradation method and propose two corre-sponding datasets, i.e., NYUv2-BSR and Cityscapes-BSR to support relevant researches of space-variant blur in the SR domain. 
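For readers less familiar with Eq. (1), the classical degradation pipeline y = (x ⊗ k) ↓s + n can be reproduced in a few lines. The sketch below uses an isotropic Gaussian kernel, stride-s subsampling, and additive Gaussian noise; the kernel shape and noise level are placeholder choices, and, unlike the space-variant setting studied in this paper, a single kernel is applied to the whole image.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(size=21, sigma=2.0):
    """Isotropic Gaussian blur kernel, normalized to sum to one."""
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    yy, xx = torch.meshgrid(ax, ax, indexing="ij")
    k = torch.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def degrade(x, kernel, scale=4, noise_sigma=0.01):
    """y = (x conv k), downsampled by `scale`, plus Gaussian noise (Eq. (1), space-invariant case)."""
    c = x.size(1)
    k = kernel.expand(c, 1, *kernel.shape)             # one copy of the kernel per channel
    pad = kernel.size(-1) // 2
    blurred = F.conv2d(F.pad(x, [pad] * 4, mode="reflect"), k, groups=c)
    down = blurred[:, :, ::scale, ::scale]             # direct s-fold subsampling
    return down + noise_sigma * torch.randn_like(down)

hr = torch.rand(1, 3, 128, 128)                        # stand-in HR image
lr = degrade(hr, gaussian_kernel())
print(lr.shape)                                        # torch.Size([1, 3, 32, 32])
```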
As a preliminary exploration, out-of-focus blur is studied as an example in this paper and it is generated according to the depth of the objects using the method pro-posed in [19]. Besides, we also add some space-invariant blur into the datasets so that the models trained on them can cope with both spatially variant and invariant situations. Furthermore, to improve the performance at the bound-aries of different blur regions, we present a novel model named Cross-MOdal fuSion network (CMOS). Our intu-ition is that the sharp semantic edges are usually aligned with out-of-focus blur boundaries and it can help to distin-guish different blur amounts. This raises a critical concern that how to effectively introduce semantics into the process. Specifically, we firstly predict blur and semantics simulta-neously instead of using the semantics as an extra input, which not only avoids using extra information during test phase, but also enables non-blind SR methods to recover finer textures with the two modalities. Secondly, to enhance accuracy at the blur boundaries, we conduct interaction be-tween the semantic and blur features for complementary in-formation learning inspired by multi-task learning [36, 42]. However, in some cases these two modalities are inconsis-tent. As shown in Fig. 2, the wall and the picture on it are completely different in the semantic map, with clear bound-aries. But the depth of them are almost the same, so the blur amounts depending on depth are also very similar. In this case, not only can the two modalities fail to use common features, but they can also negatively influence each other. Besides, since we add some space-invariant blurred images with uniform blur maps in the datasets, it will also greatly increase the inconsistency. Motivated by these observations, we propose a feature Grouping Interactive Attention (GIA) module to help the interaction of the two modalities. GIA has two parallelstreams: one operating along the spatial dimension and the other along the channel dimension. Both streams employ group interactions to process the input features and make adjustments. Moreover, GIA has an upsampling layer based on the flow field [21] to support inputs of different resolu-tions. Its universal structure allows it to be used for more than just interactions between the two modalities. The main contributions of this work are as follows: • To support researches on space-variant blur in the field of SR, we introduce a new degradation model of out-of-focus blur and propose two new datasets, i.e., NYUv2-BSR and Cityscapes-BSR. • We design a novel model called CMOS for estimating space-variant blur, which leverages extra semantic in-formation to improve the accuracy of blur prediction. The proposed GIA module is used to make the two modalities interact effectively. Note that GIA is uni-versal and can be used between any two features. • Combined with existing non-blind SR models, CMOS can estimate both space-variant and space-invariant blur and achieve SOTA SR performance in both cases.
Bai_Bidirectional_Copy-Paste_for_Semi-Supervised_Medical_Image_Segmentation_CVPR_2023
Abstract In semi-supervised medical image segmentation, there exist empirical mismatch problems between labeled and un-labeled data distribution. The knowledge learned from the labeled data may be largely discarded if treating labeled and unlabeled data separately or in an inconsistent man-ner. We propose a straightforward method for alleviating the problem −copy-pasting labeled and unlabeled data bidirectionally, in a simple Mean Teacher architecture. The method encourages unlabeled data to learn comprehensive common semantics from the labeled data in both inward and outward directions. More importantly, the consistent learn-ing procedure for labeled and unlabeled data can largely reduce the empirical distribution gap. In detail, we copy-paste a random crop from a labeled image (foreground) onto an unlabeled image (background) and an unlabeled image (foreground) onto a labeled image (background), re-spectively. The two mixed images are fed into a Student network and supervised by the mixed supervisory signals of pseudo-labels and ground-truth. We reveal that the sim-ple mechanism of copy-pasting bidirectionally between la-beled and unlabeled data is good enough and the experi-ments show solid gains ( e.g., over 21% Dice improvement on ACDC dataset with 5% labeled data) compared with other state-of-the-arts on various semi-supervised medical image segmentation datasets. Code is avaiable at https: //github.com/DeepMed-Lab-ECNU/BCP .
1. Introduction Segmenting internal structures from medical images such as computed tomography (CT) or magnetic resonance imaging (MRI) is essential for many clinical applications [34]. Various techniques based on supervised learning for medical image segmentation have been proposed [4,13,45], which usually requires a large amount of labeled data. But, due to the tedious and expensive manual contouring process when labeling medical images, semi-supervised segmenta-*Corresponding Author. Figure 1. Illustration of the mismatch problem under semi-supervised leaning setting. Assume the training set is drawn from a latent distribution in (a). But the empirical distributions of small amount of labeled data and a large amount of unlabeled data are (b)and(c), respectively. It’s hard to use few labeled data to con-struct the precise distribution of the whole dataset. (d)By using our BCP, the empirical distributions of labeled and unlabeled fea-tures are aligned. (e)But other methods such as SSNet [35] or cross unlabeled data copy-paste cannot address the empirical dis-tribution mismatch issue. All distributions are kernel density esti-mations of voxels belonging to myocardium class in ACDC [2]. tion attracts more attention in recent years, and has become ubiquitous in the field of medical image analysis. Generally speaking, in semi-supervised medical image segmentation, the labeled and unlabeled data are drawn from the same distribution, (Fig. 1 (a)). But in real-world scenario, it’s hard to estimate the precise distribution from labeled data because they are few in number. Thus, there always exists empirical distribution mismatch between a large amount of unlabeled and a very small amount of la-beled data [30] (Fig. 1(b) and (c)). Semi-supervised seg-mentation methods always attempt to train labeled and un-labeled data symmetrically, in a consistent manner. E.g., self-training [1,48] generates pseudo-labels to supervise un-labeled data in a pseudo-supervised manner. Mean Teacher based methods [40] adopt consistency loss to “supervise” unlabeled data with strong augmentations, in analogy with supervising labeled data with ground-truth. DTC [16] proposed a dual-task-consistency framework, applicable to This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 11514 Figure 2. Dice scores for unlabeled and labeled training data of different models on LA dataset [39]. A much smaller performance gap is observed in our method. both labeled and unlabeled data. ContrastMask [31] ap-plied dense contrastive learning on both labeled and un-labeled data. But most existing semi-supervised methods used labeled and unlabeled data under separate learning paradigms. Thus, it often leads to the discarding of massive knowledge learned from the labeled data and the empirical distribution mismatch between labeled and unlabeled data (Fig. 1(e)). CutMix [42] is simple yet strong data processing method, also dubbed as Copy-Paste (CP), which has the potential to encourage unlabeled data to learn common se-mantics from the labeled data, since pixels in the same map share semantics to be closer [29]. In semi-supervised learn-ing, forcing consistency between weak-strong augmenta-tion pair of unlabeled data is widely used [11, 14, 32, 47], and CP is usually used as a strong augmentation. 
But ex-isting CP methods only consider CP cross unlabeled data [8, 10, 14], or simply copy crops from labeled data as fore-ground and paste to another data [6, 9]. They neglect to design a consistent learning strategy for labeled and unla-beled data, which hampers its usage on reducing the distri-bution gap. Meanwhile, CP tries to enhance the generaliza-tion of networks by increasing unlabeled data diversity, but a high performance is hard to achieve since CutMixed im-age is only supervised by low-precision pseudo-labels. It’s intuitive to use more accurate supervision to help networks segment degraded region cut by CP. To alleviate the empirical mismatch problem between labeled and unlabeled data, a successful design is to en-courage unlabeled data to learn comprehensive common se-mantics from the labeled data, and meanwhile, furthering the distribution alignment via a consistent learning strat-egy for labeled and unlabeled data. We achieve this by proposing a surprisingly simple yet very effective Bidirec-tional Copy-Paste (BCP) method, instantiated in the Mean Teacher framework. Concretely, to train the Student net-work, we augment our inputs by copy-pasting random crops from a labeled image (foreground) onto an unlabeled im-age (background) and reversely, copy-pasting random crops from an unlabeled image (foreground) onto a labeled im-age (background). The Student network is supervised by the generated supervisory signal via bidirectional copy-pasting between the pseudo-labels of the unlabeled images from the Teacher network and the label maps of the la-beled image. The two mixed images help the network to learn common semantics between the labeled and unlabeled databidirectionally and symmetrically . We compute the Dice scores for labeled and unlabeled training set from LA dataset [39] based on models trained by state-of-the-arts and our method, as shown in Fig. 2. Previous models which process labeled data and unlabeled data separately present strong performance gap between labeled and unla-beled data. E.g., MC-Net obtains 95.59% Dice for labeled data but only 87.63% for unlabeled data. It means previous models absorb knowledge from ground-truth well, but dis-card a lot when transferring to unlabeled data. Our method can largely decrease the gap between labeled and unlabeled data (Fig. 1(d)) in terms of their performances. It is also in-teresting to observe that Dice for labeled data of our BCP is lower than other methods, implying that BCP can mitigate the over-fitting problem to some extent. We verify BCP in three popular datasets: LA [39], Pancreas-NIH [21], and ACDC [2] datasets. Extensive ex-periments show our simple method outperforms all state-of-the-arts by a large margin, with even over 21% improve-ment in Dice on ACDC dataset with 5% labeled data. Abla-tion study further shows the effectiveness of each proposed module. Note that compared with the baseline e.g., VNet or UNet, our method does not introduce new parameters for training, while remaining the same computational cost.
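The bidirectional copy-paste operation itself is only a masked mix of an image pair and of the corresponding (pseudo-)labels. The sketch below builds one inward and one outward mixed pair from a random rectangular crop; the crop-size ratio and the mask convention are assumptions, and the surrounding Mean Teacher training loop is omitted.

```python
import torch

def random_crop_mask(shape, ratio=2 / 3):
    """Binary mask: 1 inside a random rectangle covering `ratio` of each side, 0 elsewhere."""
    h, w = shape
    ch, cw = int(h * ratio), int(w * ratio)
    top = torch.randint(0, h - ch + 1, (1,)).item()
    left = torch.randint(0, w - cw + 1, (1,)).item()
    m = torch.zeros(1, 1, h, w)
    m[..., top:top + ch, left:left + cw] = 1.0
    return m

def bidirectional_copy_paste(x_l, y_l, x_u, y_u_pseudo):
    """Mix a labeled pair (x_l, y_l) with an unlabeled pair (x_u, pseudo-label) in both directions.
    Inward:  labeled crop (foreground) pasted onto the unlabeled image (background).
    Outward: unlabeled crop (foreground) pasted onto the labeled image (background)."""
    m = random_crop_mask(x_l.shape[-2:])
    x_in = m * x_l + (1 - m) * x_u                # image mixes
    x_out = m * x_u + (1 - m) * x_l
    y_in = m * y_l + (1 - m) * y_u_pseudo         # matching supervision mixes
    y_out = m * y_u_pseudo + (1 - m) * y_l
    return x_in, y_in, x_out, y_out

# Toy 2D example with single-channel images and binary label maps.
x_l, x_u = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
y_l = torch.randint(0, 2, (1, 1, 64, 64)).float()
y_u = torch.randint(0, 2, (1, 1, 64, 64)).float()   # pseudo-label from the Teacher
x_in, y_in, x_out, y_out = bidirectional_copy_paste(x_l, y_l, x_u, y_u)
print(x_in.shape, y_out.shape)
```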
Cao_SeSDF_Self-Evolved_Signed_Distance_Field_for_Implicit_3D_Clothed_Human_CVPR_2023
Abstract We address the problem of clothed human reconstruction from a single image or uncalibrated multi-view images. Ex-isting methods struggle with reconstructing detailed geome-try of a clothed human and often require a calibrated setting for multi-view reconstruction. We propose a flexible frame-work which, by leveraging the parametric SMPL-X model, can take an arbitrary number of input images to reconstruct a clothed human model under an uncalibrated setting. At the core of our framework is our novel self-evolved signed distance field (SeSDF) module which allows the framework to learn to deform the signed distance field (SDF) derived from the fitted SMPL-X model, such that detailed geometry reflecting the actual clothed human can be encoded for bet-ter reconstruction. Besides, we propose a simple method for self-calibration of multi-view images via the fitted SMPL-X parameters. This lifts the requirement of tedious man-ual calibration and largely increases the flexibility of our method. Further, we introduce an effective occlusion-aware feature fusion strategy to account for the most useful fea-tures to reconstruct the human model. We thoroughly eval-uate our framework on public benchmarks, demonstrating significant superiority over the state-of-the-arts both quali-tatively and quantitatively.
1. Introduction Clothed human reconstruction is a hot topic with increas-ing demand in real-world applications such as 3D telep-resence, game modeling, metaverse [40], etc. Early works show promise under equipment-assisted settings, requiring expensive dense camera rigs [12] and tedious calibration procedures [15]. Parametric models, such as SMPL [36] and SMPL-X [47], have been introduced to model a naked human body with constrained parameters. With the wit-nessed success of deep learning in many vision tasks, many deep learning methods have been proposed to regress the parameters of such parametric human models from a sin-gle [16, 66, 67] or multiple images [6, 13, 68]. However, these methods can only reconstruct a minimally-clothed hu-man model without many details ( e.g., hairs and clothes). This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 4647 Recently, methods based on implicit shape representation have reported encouraging performance, showing promis-ing reconstruction with increased details in both single-view [23, 26, 52, 63, 71] and multi-view [24, 25, 54, 55] settings. Despite the stunning results reported by the above meth-ods, their reconstructions remain far from perfect, restrict-ing their practical applications. For instance, state-of-the-art (SOTA) single-view methods, such as PIFuHD [53], PaMIR [71], and ARCH++ [23], struggle with many self-occluding non-frontal human poses that widely occur in the real world. ICON [63] can handle these cases but the reconstructions contain serious artifacts (see Fig. 3). On the other hand, many multi-view methods depend on cali-brated cameras ( e.g., [24, 54, 55]) which are tedious to ob-tain in practice. Hence, how to carry out multi-view recon-struction with uncalibrated cameras is an important topic to study. Meanwhile, effective multi-view feature fusion is another key factor for robust mutli-view reconstruction. Prior techniques for multi-view feature fusion include aver-age pooling [52], SMPL-visibility [63], and attention-based mechanism [70]. However, the reconstruction results based on these fusion techniques still contain notable artifacts in many cases. This indicates that more efforts are needed to derive a better multi-view feature fusion for more robust re-construction. In this paper, to extract more clothed human details flex-ibly and robustly from a single RGB image or uncalibrated multi-view RGB images, we present a novel framework, named SeSDF, that employs the parametric model SMPL-X [47] as a 3D prior and combines the merits of both im-plicit and explicit representations. SeSDF takes a single image or multi-view images as input to predict the occu-pancy for each 3D location in the space representing the human model. To reconstruct high-frequency details such as hairs and clothes, we introduce a self-evolved signed dis-tance field module that learns to deform the signed distance field (SDF) derived from the fitted SMPL-X model using the input images. The resulting SDF can reflect more accu-rate geometry details than SMPL-X, which inherently devi-ates from the actual clothed human model. The SDF refined by our SeSDF module is further encoded to allow for 3D re-construction with better geometry details. 
Besides reconstructing a faithful 3D clothed human model from a single image, our SeSDF framework can also work with uncalibrated multi-view images to gener-ate clothed human avatar with enhanced appearance (see Fig. 1). To this end, we first propose a self-calibration method by fitting a shared SMPL-X model across multi-view images and projecting the shared model to different images based on the optimized rigid body motion for each input image. Further, we propose an occlusion-aware fea-ture fusing strategy by probing the visibility of each 3Dpoint under different views through ray-tracing, leveraging the SMPL-X model, such that features from visible views will contribute more to the fused feature while those from invisible views will be suppressed. The contributions are summarized as follows: • We propose a flexible framework that, by leveraging the SMPL-X model as a shape prior, can take an arbitrary number of uncalibrated images to perform high-fidelity clothed human reconstruction. At the core of our frame-work is a self-evolved signed distance field (SeSDF) mod-ule for recovering faithful geometry details. • For uncalibrated multi-view reconstruction, we propose a simple self-calibration method through leveraging the SMPL-X model, as well as an occlusion-aware feature fusion strategy which takes into account the visibility of a space point under different views via ray-tracing. • We thoroughly evaluate our framework on public bench-marks. SeSDF exhibits superior performances over cur-rent state-of-the-arts both qualitatively and quantitatively.
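The occlusion-aware fusion strategy can be read as a visibility-weighted average of per-view features. The sketch below assumes the per-point visibility of each view has already been obtained (e.g., by ray-tracing against the fitted SMPL-X mesh) and is passed in as a boolean mask; the soft weighting of occluded views with a small constant `eps` is an illustrative choice, not the exact fusion used in SeSDF.

```python
import torch

def occlusion_aware_fuse(view_feats, visible, eps=0.05):
    """view_feats: (V, N, C) per-view features for N query points from V views.
    visible:     (V, N) boolean visibility of each point under each view
                 (assumed precomputed, e.g. by ray-tracing the SMPL-X mesh).
    Visible views receive weight 1, occluded views a small weight `eps`,
    so occluded views are suppressed without being discarded entirely."""
    w = visible.to(view_feats.dtype) * (1.0 - eps) + eps   # 1 for visible, eps for occluded
    w = w / w.sum(dim=0, keepdim=True)                     # normalize over views per point
    return (w.unsqueeze(-1) * view_feats).sum(dim=0)       # (N, C) fused feature

feats = torch.randn(4, 1024, 256)                          # 4 uncalibrated views, 1024 query points
vis = torch.rand(4, 1024) > 0.5
print(occlusion_aware_fuse(feats, vis).shape)              # torch.Size([1024, 256])
```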
Gao_Backdoor_Defense_via_Adaptively_Splitting_Poisoned_Dataset_CVPR_2023
Abstract Backdoor defenses have been studied to alleviate the threat of deep neural networks (DNNs) being backdoor attacked and thus maliciously altered. Since DNNs usu-ally adopt some external training data from an untrusted third party, a robust backdoor defense strategy during the training stage is of importance. We argue that the core of training-time defense is to select poisoned samples and to handle them properly. In this work, we summarize the training-time defenses from a unified framework as split-ting the poisoned dataset into two data pools. Under our framework, we propose an a daptively s plitting dataset-based d efense (ASD). Concretely, we apply loss-guided split and meta-learning-inspired split to dynamically update two data pools. With the split clean data pool and polluted data pool, ASD successfully defends against backdoor at-tacks during training. Extensive experiments on multiple benchmark datasets and DNN models against six state-of-the-art backdoor attacks demonstrate the superiority of our ASD. Our code is available at https://github.com/ KuofengGao/ASD .
1. Introduction

Backdoor attacks can induce malicious model behaviors by injecting a small portion of poisoned samples into the training dataset with specific trigger patterns. The attacks have posed a significant threat to deep neural networks (DNNs) [31–33, 35, 37, 45], especially when DNNs are deployed in safety-critical scenarios, such as autonomous driving [10]. To alleviate the threats, backdoor defenses have been intensively explored in the community; they can be roughly grouped into post-processing defenses and training-time ones. Since training data collection is usually time-consuming and expensive, it is common to use external data for training without security guarantees [16, 24, 29, 42, 43]. This common practice makes backdoor attacks feasible in real-world applications, which highlights the importance of training-time defenses. We argue that such defenses have to solve two core problems, i.e., to select poisoned samples and to handle them properly.

Table 1. Summary of the representative training-time backdoor defenses under our framework.
Methods    | Pool Initialization | Pool Maintenance | Pool Operation | Clean Hard Sample Selection
ABL        | Fast                | Static           | Unlearn        | No
DBD        | Slow                | Adaptive         | Purify         | No
ASD (Ours) | Fast                | Adaptive         | Purify         | Yes

In this work, we formulate the training-time defenses into a unified framework as splitting the poisoned dataset into two data pools. Concretely, a clean data pool contains selected clean samples with trustworthy labels, and a polluted data pool is composed of poisoned samples and the remaining clean samples. Under this framework, the mechanisms of these defenses can be summarized into three parts, i.e., pool initialization, pool maintenance, and pool operation. To be more specific, they need to first initialize two data pools, deploy some data pool maintenance strategies, and take different training strategies on the split (clean and polluted) data pools. We illustrate our framework with two representative training-time defenses, i.e., anti-backdoor learning (ABL) [27] as well as decoupling-based defense (DBD) [21]. ABL statically initializes a polluted pool by the loss-guided division. The polluted pool is fixed and unlearned during training. Similarly, DBD initializes two data pools after computationally expensive training. Then the model is fine-tuned by semi-supervised learning with two dynamically updated data pools. (More details of these two methods are introduced in Sec. 2.2.)

Despite their impressive results, there is still room for improvement. ABL initializes two pools with static data selection, which raises the concern of mixing poisoned data into clean ones. Once they are mixed, it is hard to alleviate this in the subsequent training process. Besides, unlearning poisoned data directly can lose some useful semantic features and degrade the model's clean accuracy. As for DBD, its pool initialization is computationally expensive and hard to implement end-to-end. Moreover, DBD adopts supervised learning for the linear layer on the whole poisoned dataset during the second stage, which can potentially implant the backdoor in models.

Figure 1. Loss distribution of samples on the model trained by ABL, DBD and our ASD against WaNet. Compared with ABL and DBD, our proposed ASD can clearly separate clean samples and poisoned ones better by a novel meta-split.

Under our framework, we introduce an adaptively splitting dataset-based defense (ASD). With two initialized data pools, we first adopt the loss-guided [51] split to update the two data pools. However, some (model-dependent) clean hard samples cannot be distinguished from poisoned ones directly by their loss magnitudes. As shown in Fig. 1, ABL and DBD, which adopt the loss-guided split, fail to completely separate clean samples from poisoned samples. Instead, we propose a novel meta-learning-inspired split (meta-split), which can make a successful separation. Then we treat the clean data pool as a labeled data container and the polluted one as unlabeled, and we adopt semi-supervised learning on the two data pools. As such, we can utilize the semantic information of poisoned data without labels to keep the clean accuracy and meanwhile avoid backdoor injection, which can be regarded as a fashion of purifying poisoned samples. Note that our ASD introduces clean seed samples (i.e., only 10 images per class) in pool initialization, which could be further extended to a transfer-based version by collecting clean seed samples from another classical dataset. Given that previous methods [28, 34, 50, 54] usually assume they can obtain much more clean samples than ours, our requirements are easier to meet. The properties of ABL, DBD and our ASD are briefly listed in Table 1.

In summary, our main contributions are three-fold:
• We provide a framework to revisit existing training-time backdoor defenses from a unified perspective, namely, splitting the poisoned dataset into a clean pool and a polluted pool. Under our framework, we propose an end-to-end backdoor defense, ASD, via splitting the poisoned dataset adaptively.
• We propose a fast pool initialization method and adaptively update the two data pools in two splitting manners, i.e., loss-guided split and meta-split. Especially, the proposed meta-split focuses on how to mine clean hard samples and clearly improves model performance.
• With the two split data pools, we propose to train a model on the clean data pool with labels and on the polluted data pool without using labels. Extensive experiment results demonstrate the superiority of our ASD over previous state-of-the-art backdoor defenses.
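The loss-guided half of the pool-splitting step is simple to write down: rank samples by their current training loss and move the lowest-loss fraction per class into the clean pool, leaving the rest in the polluted pool. The sketch below shows only this part; the keep ratio is an assumed hyper-parameter, and the meta-split, which is what actually recovers the clean hard samples, is not reproduced here.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def loss_guided_split(model, images, labels, keep_ratio=0.5):
    """Return boolean masks (clean_pool, polluted_pool) over the given samples.
    For each class, the `keep_ratio` samples with the lowest cross-entropy loss
    are trusted as clean (kept with their labels); everything else is treated as
    unlabeled 'polluted' data for the semi-supervised stage."""
    model.eval()
    losses = F.cross_entropy(model(images), labels, reduction="none")
    clean = torch.zeros_like(labels, dtype=torch.bool)
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        k = max(1, int(keep_ratio * idx.numel()))
        keep = idx[losses[idx].argsort()[:k]]       # lowest-loss samples of this class
        clean[keep] = True
    return clean, ~clean

# Toy usage with a linear "model" on flattened 8x8 inputs.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64, 10))
x, y = torch.randn(32, 1, 8, 8), torch.randint(0, 10, (32,))
clean, polluted = loss_guided_split(model, x, y)
print(clean.sum().item(), polluted.sum().item())
```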
Biswas_Probabilistic_Debiasing_of_Scene_Graphs_CVPR_2023
Abstract The quality of scene graphs generated by the state-of-the-art (SOTA) models is compromised due to the long-tail nature of the relationships and their parent object pairs. Training of the scene graphs is dominated by the major-ity relationships of the majority pairs and, therefore, the object-conditional distributions of relationship in the mi-nority pairs are not preserved after the training is con-verged. Consequently, the biased model performs well on more frequent relationships in the marginal distribution of relationships such as ‘on’ and ‘wearing’, and performs poorly on the less frequent relationships such as ‘eating’ or ‘hanging from’. In this work, we propose virtual evi-dence incorporated within-triplet Bayesian Network (BN) to preserve the object-conditional distribution of the re-lationship label and to eradicate the bias created by the marginal probability of the relationships. The insufficient number of relationships in the minority classes poses a sig-nificant problem in learning the within-triplet Bayesian net-work. We address this insufficiency by embedding-based augmentation of triplets where we borrow samples of the minority triplet classes from its neighboring triplets in the semantic space. We perform experiments on two different datasets and achieve a significant improvement in the mean recall of the relationships. We also achieve a better balance between recall and mean recall performance compared to the SOTA de-biasing techniques of scene graph models.1
1. Introduction Any visual relationship can be expressed as a triplet subject-relationship-object and all triplets in an image can be represented as a concise graph called Scene Graph (SG) [21] where the nodes represent the objects and the edges represent relationships. This representation has been proven useful for many downstream tasks such as image caption-ing [39], visual reasoning [26], and image generation [12]. Scene Graph Generation (SGG) has become one of the ma-1Code available at https://github.com/bashirulazam/ within-triplet-debias .jor computer vision research arenas after the introduction of Visual Genome (VG) dataset [13]. The distribution of triplets in VG images has two distinct characteristics: (1) the presence of strong within-triplet prior, and (2) the long-tail distribution of the relationship. As shown in Figure 1 (a), the within-triplet prior dictates that ‘window’ will most likely be ‘on’ the ‘building’ rather than ‘eating’ it. Zeller et al. [45] has utilized this within-triplet prior as the condi-tional probability of relationships given subject and object by proposing a frequency baseline in the SGG task. On the other hand, the distribution of relationship labels suffers from a long-tailed nature and Tang et al. [29] addressed this long-tailed issue by considering a causal interpretation of the biased prediction. We argue that these two seemingly different characteristics of the relationship distribution are interrelated. The abundance of the head classes of the rela-tionship distribution in Figure 1 (c), such as ‘on’ and ‘wear-ing’, arises from the abundance of their parent subject and object lying in the head region of Figure 1 (b). As indicated by [7], the long-tailed distribution exists both in relationship and object label. Since relationship la-bels are dependent on their object pair, the long tail in object labels worsens the long tail in relationship labels. Crowd-collection of VG images creates selection bias and crowd-annotation of these images create label-bias [31] and co-occurring-bias [27]. We investigate such biases through the distribution of the object pair of the triplets in VG database. As shown in Figure 1 (b), ‘window-building’ and ‘man-shirt’ are the most frequently annotated pairs and top 1% object pair covers 33% of all triplets. As a result, the dom-inant relationships in these head pairs, such as ‘on’ and ‘wearing’, dominate the marginal distribution of Figure 1 (c). In training a deep-learning-based SGG model, samplers will sample more relationships from the head pairs. As a result, the Maximum Likelihood Estimation (MLE) of the parameters is biased to predict the relationship classes in the head pairs [29] and the object-conditional representation of the relationship in the tailpairs will be lost in the training process. Therefore, various deep learning-based models, which attempt to implicitly capture such object-conditional This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 
representation [6, 47], fail to preserve the representation in the trained model and perform poorly on the tail region of the relationships.

Figure 1. (a) Within-triplet dependency of relationship on its parent object pair, e.g., P(R | S = window, O = building), with the decomposition P(R) = Σ_{S,O} P(R | S, O) P(S, O); (b) long-tail nature of the pair statistics P(S, O), where 33% pair samples originated from top 1% pairs; (c) long-tail nature of the relationship marginal P(R), showing the dominance of 'on' and 'wearing'. The skewness in (c) is an effect of skewness in (b). Since 'on', 'has' or 'wearing' dominates in these top 1% pairs, they become the majority relationships in (c) and many other relationships, such as 'eating' or 'flying in', which dominate in the tail pairs of (b), are suppressed in the training process.

Previous works attempt to retrieve the tail regions through re-sampling/re-weighting the minority classes in training [4, 7, 9] or through causal intervention in testing [29]. Their success is well-demonstrated by the significant increase of minority-driven evaluation metric mean recall. However, these approaches do not consider the strong within-triplet prior of triplets and hurt the performance of majority-driven evaluation metric recall. Keeping this gap in mind, we propose an inference-time post-processing methodology that bolsters the minority tail classes as well as hurts the majority head classes less brutally. We propose a within-triplet Bayesian Network (BN) that combines the within-triplet prior with uncertain biased evidence from SOTA models. Posterior inference with this BN simultaneously eradicates the long-tailed bias in the marginal distribution of the relationship and restores the object-conditional within-triplet prior.

Learning such a small within-triplet BN from the training data is a seemingly trivial task where we can perform simple MLE of parameters by counting. However, because of restricting our training samples only belonging to some top-Nr classes based on the marginal probability of relationship, we sacrifice many information-revealing triplets in the minority pairs. For example, in the 'man-pizza' pair, we see there exist many interesting relationships such as 'man-biting-pizza' or 'man-consuming-pizza' which are semantically similar to one of the top-Nr valid triplets 'man-eating-pizza'. This phenomenon is also a result of label bias [31] where the annotator chooses some labels over another for the same category of objects or relationships.
We propose a novel method of borrowing samples from such invalid triplets into learning the distribution of the valid triplets us-ing embedding-based augmentation. The posterior inference is the most efficient probabilis-tic tool to combine domain-dependent prior with instance-dependent evidence and, to the best of our knowledge, no prior work in SGG literature formulates the problem of triplet generation as a posterior inference problem. The overview of our approach is illustrated in Figure 2. In sum-mary, our contribution is proposing a posterior inference-based post-processing method where we • integrate the within-triplet priors with the evidence un-certainties generated by the measurement model and, • introduce a simple yet novel learning scheme of the within-triplet network where we borrow samples from the semantically similar yet invalid triplet categories.
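At inference time, the proposed post-processing amounts to a posterior update of each relationship label: the biased softmax scores act as (virtual) evidence and are recombined with the object-conditional prior P(R | S, O) counted from the training triplets. The few lines below show one simple instantiation in which the marginal P(R) is divided out to cancel the long-tail bias before the within-triplet prior is applied; the exact virtual-evidence formulation in the paper is richer than this sketch.

```python
import torch

def debias_relationship(scores, prior_r_given_so, marginal_r, eps=1e-8):
    """scores:           (B, K) softmax output of a biased scene-graph model (the evidence).
    prior_r_given_so: (B, K) within-triplet prior P(R | subject, object) from training counts.
    marginal_r:       (K,)  marginal P(R) over the training set (the long-tailed distribution).
    Treating scores as P(R | image) and dividing out P(R) gives a likelihood-like term,
    which is then recombined with the object-conditional prior and renormalized."""
    likelihood = scores / (marginal_r + eps)               # undo the long-tail marginal bias
    posterior = likelihood * (prior_r_given_so + eps)      # re-impose the within-triplet prior
    return posterior / posterior.sum(dim=1, keepdim=True)

# Toy example with 4 relationship classes: a head-biased prediction is pulled toward
# the relationship that this particular subject-object pair actually prefers.
scores = torch.tensor([[0.70, 0.20, 0.05, 0.05]])          # biased toward class 0 (e.g., 'on')
prior = torch.tensor([[0.10, 0.10, 0.70, 0.10]])           # this pair usually takes class 2
marginal = torch.tensor([0.60, 0.25, 0.10, 0.05])
print(debias_relationship(scores, prior, marginal))
```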
Chung_Solving_3D_Inverse_Problems_Using_Pre-Trained_2D_Diffusion_Models_CVPR_2023
Abstract Diffusion models have emerged as the new state-of-the-art generative model with high quality samples, with intriguing properties such as mode coverage and high flexibility. They have also been shown to be effective inverse problem solvers, acting as the prior of the distribution, while the information of the forward model can be granted at the sampling stage. Nonetheless, as the generative process remains in the same high dimensional (i.e. identical to data dimension) space, the models have not been extended to 3D inverse problems due to the extremely high memory and computational cost. In this paper, we combine the ideas from the conventional model-based iterative reconstruction with the modern diffusion models, which leads to a highly effective method for solving 3D medical image reconstruction tasks such as sparse-view tomography, limited angle tomography, and compressed sensing MRI from pre-trained 2D diffusion models. In essence, we propose to augment the 2D diffusion prior with a model-based prior in the remaining direction at test time, such that one can achieve coherent reconstructions across all dimensions. Our method can be run on a single commodity GPU, and establishes the new state of the art, showing that the proposed method can perform reconstructions of high fidelity and accuracy even in the most extreme cases (e.g. 2-view 3D tomography). We further reveal that the generalization capacity of the proposed method is surprisingly high, and it can be used to reconstruct volumes that are entirely different from the training dataset. Code available: https://github.com/HJ-harry/DiffusionMBIR This work was supported by the Korea Medical Device Development Fund grant funded by the Korea government (the Ministry of Science and ICT, the Ministry of Trade, Industry and Energy, the Ministry of Health & Welfare, the Ministry of Food and Drug Safety) (Project Number: 1711137899, KMDF PR20200901 0015), by the National Research Foundation of Korea under Grant NRF-2020R1A2B5B03001980, by the KAIST Key Research Institute (Interdisciplinary Research Group) Project, and by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2022-2020-0-01461) supervised by the IITP (Institute for Information & communications Technology Planning & Evaluation).
1. Introduction Diffusion models learn the data distribution implic-itly by learning the gradient of the log density (i.e. ∇xlogpdata(x); score function) [9, 32], which is used at inference to create generative samples. These models are known to generate high-quality samples, cover the modes well, and be highly robust to train, as it amounts to merely minimizing a mean squared error loss on a denoising prob-lem. Particularly, diffusion models are known to be much more robust than other popular generative models [8], for example, generative adversarial networks (GANs). Further-more, one can use pre-trained diffusion models to solve in-verse problems in an unsupervised fashion [5–7, 15, 32]. Such strategies has shown to be highly effective in many cases, often establishing the new state-of-the-art on each task. Specifically, applications to sparse view computed to-mography (SV-CT) [5, 31], compressed sensing MRI (CS-MRI) [6,7,31], super-resolution [4,6,15], inpainting [5,15] among many others, have been proposed. Nevertheless, to the best of our knowledge, all the meth-ods considered so far focused on 2D imaging situations. This is mostly due to the high-dimensional nature of the generative constraint. Specifically, diffusion models gen-erate samples by starting from pure noise, and iteratively denoising the data until reaching the clean image. Conse-quently, the generative process involves staying in the same dimension as the data, which is prohibitive when one tries to scale the data dimension to 3D. One should also note that training a 3D diffusion model amounts to learning the 3D prior of the data density. This is undesirable in two as-pects. First, the model is data hungry, and hence training a 3D model would typically require thousands of volumes , compared to 2D models that could be trained with less than 10 volumes. Second, the prior would be needlessly compli-cated: when it comes to dynamic imaging or 3D imaging, exploiting the spatial/temporal correlation [12, 33] is stan-dard practice. Naively modeling the problem as 3D would miss the chance of leveraging such information. Another much more well-established method for solv-ing 3D inverse problems is model-based iterative recon-struction (MBIR) [14, 17], where the problem is formu-lated as an optimization problem of weighted least squares (WLS), constructed with the data consistency term, and the regularization term. One of the most widely acknowl-edged regularization in the field is the total variation (TV) penalty [18, 28], known for its intriguing properties: edge-preserving, while imposing smoothness. While the TV prior has been widely explored, it is known to fall behind the data-driven prior of the modern machine learning prac-tice, as the function is too simplistic to fully model how the image “looks like”. In this work, we propose DiffusionMBIR, a method to combine the best of both worlds: we incorporate the MBIR optimization strategy into the diffusion sampling steps in or-der to augment the data-driven prior with the conventional TV prior, imposed to the z-direction only. Particularly, the standard reverse diffusion (i.e. denoising) steps are run independently with respect to the z-axis, and hence stan-dard 2D diffusion models can be used. Subsequently, the data consistency step is imposed by aggregating the slices, then taking a single update step of the alternating direction method of multipliers (ADMM) [3]. 
This step effectively enforces cross-talk between the slices through the measurement information and the TV prior. For efficient optimization, we further propose a strategy which we call variable sharing, which enables us to use only a single sweep of ADMM and conjugate gradient (CG) per denoising iteration. Note that our method is fully general in that we are not restricted to the given forward operator at test time. Hence, we verify the efficacy of the method by performing extensive experiments on sparse-view CT (SV-CT), limited angle CT (LA-CT), and compressed sensing MRI (CS-MRI): our method shows consistent improvements over the current diffusion model-based inverse problem solvers, and shows strong performance on all tasks (for representative results, see Fig. 1; for a conceptual illustration of the inverse problems, see Fig. 2). In short, the main contribution of this paper is to devise a diffusion model-based reconstruction method that 1) operates with the voxel representation, 2) is memory-efficient such that we can scale our solver to much higher dimensions (i.e. >256³), and 3) is not data hungry, such that it can be trained with fewer than ten 3D volumes.
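
To illustrate the kind of per-iteration update sketched above — one ADMM sweep with a conjugate-gradient solve that couples the independently denoised slices through the measurements and a z-direction TV prior — here is a hedged NumPy/SciPy sketch. The forward operator A and its adjoint AT are assumed to be supplied by the caller (e.g., a CT projector), the slice (z) axis is assumed to be axis 0, and all parameter values and names are illustrative rather than the paper's settings.

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def dz(x):
    # forward difference along the slice (z) axis, zero at the last slice
    d = np.zeros_like(x)
    d[:-1] = x[1:] - x[:-1]
    return d

def dzT(d):
    # adjoint of dz
    x = np.zeros_like(d)
    x[:-1] -= d[:-1]
    x[1:] += d[:-1]
    return x

def soft(v, t):
    # soft-thresholding (proximal operator of the l1 norm)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_tv_z_step(x, y, A, AT, z, u, rho=0.1, lam=0.05, cg_iters=10):
    """One ADMM sweep for  min_x 0.5 * ||A x - y||^2 + lam * ||D_z x||_1,
    intended to be run once per reverse-diffusion (denoising) step."""
    shape = x.shape
    def normal_op(v):
        v = v.reshape(shape)
        return (AT(A(v)) + rho * dzT(dz(v))).ravel()
    rhs = (AT(y) + rho * dzT(z - u)).ravel()
    op = LinearOperator((x.size, x.size), matvec=normal_op, dtype=x.dtype)
    x_new, _ = cg(op, rhs, x0=x.ravel(), maxiter=cg_iters)  # CG on the normal equations
    x_new = x_new.reshape(shape)
    z = soft(dz(x_new) + u, lam / rho)   # z-update: soft-thresholding of the z-gradient
    u = u + dz(x_new) - z                # dual update
    return x_new, z, u

A caller would initialize z and u to zeros once and carry them across denoising iterations, in the spirit of the variable-sharing strategy described above.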
He_Semantic-Promoted_Debiasing_and_Background_Disambiguation_for_Zero-Shot_Instance_Segmentation_CVPR_2023
Abstract Zero-shot instance segmentation aims to detect and pre-cisely segment objects of unseen categories without any training samples. Since the model is trained on seen categories, there is a strong bias that the model tends to classify all the objects into seen categories. Besides, there is a natural confusion between background and novel objects that have never shown up in training. These two challenges make novel objects hard to be raised in the final instance segmentation results. It is desired to rescue novel objects from background and dominated seen categories. To this end, we propose D2Zero with Semantic-Promoted Debiasing and Background Disambiguation to enhance the performance of Zero -shot instance segmenta-tion. Semantic-promoted debiasing utilizes inter-class se-mantic relationships to involve unseen categories in visual feature training and learns an input-conditional classifier to conduct dynamical classification based on the input im-age. Background disambiguation produces image-adaptive background representation to avoid mistaking novel objects for background. Extensive experiments show that we sig-nificantly outperform previous state-of-the-art methods by a large margin, e.g.,16.86% improvement on COCO.
1. Introduction Existing fully supervised instance segmentation meth-ods [4, 23, 52] are commonly benchmarked on predefined datasets with an offline setting, where all categories are defined beforehand and learned at once, thus can neither handle novel concepts outside training datasets nor scale the model’s ability after training. Perception errors inevitably arise when applying a trained instance segmentation model to scenarios that contain novel categories. To address these challenges, zero-shot instance segmentation (ZSIS) [65] is introduced to segment instances of unseen categories with no training images but semantic information only. †Equal contribution. Corresponding author (henghui.ding@gmail.com). ImageZSIOursGround Truth ImageGround Truth1) Bias issue 2) Background ambiguationdoghorsedog ImageGround Truth carcarFigure 1. Two key challenges in generalized zero-shot instance segmentation. 1) Bias issue: the model tends to label novel objects with seen categories, e.g., ZSI [65] incorrectly classifies unseen classdog as training class horse . 2) Background ambiguation: objects that do not belong to any training categories are considered background, e.g.,parking meter andfire hydrant . Since scene images typically contain several objects of different categories, it is more realistic for ZSIS to segment both seen and unseen objects, which is termed Generalized ZSIS (GZSIS). In this work, we focus on two key challenges under GZSIS setting, bias issue and background ambigua-tion (see Figure 1), and propose D2Zero with semantic-promoted Debiasing and background Disambiguation to en-hance the performance of Zero -shot instance segmentation. Bias towards seen categories imposes a significant chal-lenge to GZSIS. Since the model is trained on data of seen categories, it tends to classify all objects into seen categories, e.g., novel object dog is labeled as seen class horse in Figure 1. Previous work ZSI [65] introduces semantic embedding to build a mapping from seen classes to unseen ones then segments novel objects by sharing instance proposals of seen group and re-labeling these proposals within unseen group. Such a “sharing” strategy brings many false positives by assigning each instance two labels. Some zero-shot semantic segmentation meth-ods [5,21,35] employ a generator to synthesize fake unseen This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 19498 features and fine-tune the classifier with these synthetic features. The generative way comes at the cost of forgetting some knowledge learned from seen categories and impairs the classifier’s discriminative ability of the real feature. Besides, classifier is collapsed when a new class comes in, making the generative way impractical for application. In this work, we address the bias issue from two aspects, feature extractor and classifier. Biased feature extractor mainly discriminate seen classes due to seen-only training objectives, which generalizes poorly to novel classes. We propose an unseen-constrained training objective to lever-age semantic knowledge of unseen classes in visual feature learning. Specifically, we obtain semantic similarity of every seen-unseen class pair and generate a corresponding similarity-based pseudo unseen label for a seen object. 
Im-age features of seen classes are required to match the inter-class correlation with unseen classes under the supervision of pseudo unseen label, which enables the feature extractor to distinguish both seen and unseen classes. Besides feature extractor, the bias devil also exists in the classifier. Previous zero-shot segmentation methods either use conventional fully-connected layer as classi-fier [5, 35] or prototypical classifier built upon semantic embeddings [61,65]. However, these two types of classifier both have features clustered to fixed seen-class centers and do not consider the bias during inference. To address this issue, we design an input-conditional classifier based on transformer mechanism. We employ the semantic embed-dings as query and visual features as key and value of transformer decoder, which bridges the semantic and visual spaces and transfers knowledge. Then the decoder outputs are employed as classifier in a prototypical way. The input-conditional classifier captures image-specific clues [40] and can better distinguish different categories of the input im-age. In such a way, the model learns to dynamically project semantic embeddings to input-conditional class centers, which greatly alleviates bias issue. Moreover, the input-conditional classifier establish the information interaction between visual and semantic spaces, contributing to miti-gating multi-modal domain gap problem. The background ambiguation issue is specific for zero-shot instance segmentation. In the training of instance segmentation, objects that do not belong to any train-ing categories are considered background, e.g.,parking meter andhydrant in Figure 1. The model hence is likely to identify the novel objects as background, which affects the final performance a lot. To address this issue, BLC [63] and ZSI [65] propose to learn a background vector in the Region Proposal Network (RPN), which is optimized in a binary classifier of RPN. However, the binary classifier of RPN tends to overfit to seen categories and may fail to identify unseen categories [31, 59]. We experimentally find that the Transformer [51] based DETR-like model [6, 9]can well generalize to novel categories in terms of proposal generation, thanks to its end-to-end training manner and classification-free instance proposal generation. Therefore, we collect all the foreground mask proposal produced by DETR-like model to get the global foreground mask and then apply the reverse of it on the feature map to get background prototype, which is used for background classification. Such an adaptive background prototype that updates according to input image can better capture image-specific and discriminative background visual clues, which helps to background disambiguation. Our main contributions are summarised as follows: • We propose an unseen constrained visual feature learn-ing strategy to leverage semantic knowledge of unseen categories in visual feature training, which facilitates mitigating bias issue in GZSIS. • We design an input-conditional classifier that projects semantic embedding to image-specific visual proto-types, contributing to addressing both bias issue and multi-modal domain gap issue. • To rescue novel objects from background, we intro-duce an image-adaptive background representation to better capture image-specific background clues. • We achieve new state-of-the-art performance on zero-shot instance segmentation and significantly outper-form ZSI [65] by a large margin, e.g.,16.86% HM-mAP under 48/17 split on COCO.
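
The unseen-constrained idea above — supervising seen-class features with a similarity-based pseudo label over unseen classes — can be sketched in a few lines of PyTorch. The temperature, the KL objective, and all names below are illustrative assumptions rather than the authors' exact formulation.

import torch
import torch.nn.functional as F

def pseudo_unseen_targets(seen_emb, unseen_emb, tau=0.1):
    """Similarity-based pseudo unseen labels: for each seen class, a soft
    distribution over unseen classes from cosine similarity of class embeddings.
    seen_emb: [S, d], unseen_emb: [U, d]  ->  returns [S, U]."""
    seen = F.normalize(seen_emb, dim=-1)
    unseen = F.normalize(unseen_emb, dim=-1)
    sim = seen @ unseen.t()
    return F.softmax(sim / tau, dim=-1)

def unseen_constrained_loss(logits_unseen, gt_seen_labels, pseudo_targets):
    """KL between the model's unseen-class predictions for seen objects and the
    pseudo target of each object's ground-truth seen class (a hedged sketch)."""
    target = pseudo_targets[gt_seen_labels]        # [N, U]
    log_p = F.log_softmax(logits_unseen, dim=-1)   # [N, U]
    return F.kl_div(log_p, target, reduction="batchmean")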
Iwase_RelightableHands_Efficient_Neural_Relighting_of_Articulated_Hand_Models_CVPR_2023
Abstract We present the first neural relighting approach for rendering high-fidelity personalized hands that can be animated in real-time under novel illumination. Our approach adopts a teacher-student framework, where the teacher learns appearance under a single point light from images captured in a light-stage, allowing us to synthesize hands in arbitrary illuminations but with heavy compute. Using images rendered by the teacher model as training data, an efficient student model directly predicts appearance under natural illuminations in real-time. To achieve generalization, we condition the student model with physics-inspired illumination features such as visibility, diffuse shading, and specular reflections computed on a coarse proxy geometry, maintaining a small computational overhead. Our key insight is that these features have a strong correlation with subsequent global light transport effects, which proves sufficient as conditioning data for the neural relighting network. Moreover, in contrast to bottleneck illumination conditioning, these features are spatially aligned based on the underlying geometry, leading to better generalization to unseen illuminations and poses. In our experiments, we demonstrate the efficacy of our illumination feature representations, outperforming baseline approaches. We also show that our approach can photorealistically relight two interacting hands at real-time speeds. https://sh8.io/#/relightable hands (*This work was done during an internship at Meta.)
1. Introduction Neural rendering approaches have significantly ad-vanced photorealistic face rendering [ 42,55,66] in recent years. These methods use deep neural networks to model the light transport on human skin [ 11,14,31,63], directly reproducing physical effects such as subsurface scattering by reconstructing real images. However, despite the suc-cess of neural relighting, extending this approach to animat-able hand models poses a unique challenge: generalization across articulations. Unlike faces, hands have many joints, and the state of a single joint affects all child joints. This leads to ex-tremely diverse shape variations even within a single sub-ject. Changes in pose drastically affect the appearance of hands, creating wrinkles, casting shadows, and inter-reflecting across topologically distant regions. Rendering these effects is challenging because sufficiently accurate ge-ometry and material properties required for photorealism are difficult to obtain, and even then, path tracing to suf-ficient accuracy is computationally expensive. The use of simplified geometric and appearance models (such as linear blend skinning and reduced material models) allow faster computation but come at a noticeable degradation in render-ing fidelity. So far, photorealistic rendering of animatable This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 16663 hands with global illumination effects in real-time remains an open problem. In this work, we aim to enable photorealistic rendering of a personalized hand model that can be animated with novel poses, in novel lighting environments, and supports rendering two-hand interactions. To this end, we present the first neural relighting framework of a parameteric 3D hand model for real-time rendering. Specifically, we build a relightable hand model to reproduce light-stage captures of dynamic hand motions. Inspired by [ 4], we capture performances under spatiotemporal-multiplexed illumination patterns, where fully-on illumination is interleaved to enable tracking of the current state of hand geometry and poses. We use a two-stage teacher-student approach to learn a model that gener-alizes to natural illuminations outside of the capture system. We first train a teacher model that infers radiance given a point-light position, a viewing direction, and light visibil-ity. As this model directly learns the mapping between an input light position and output radiance, it can accurately model complex reflectance and scattering on the hand with-out the need for path tracing. To render hands in arbitrary illuminations, we treat natural illuminations as a combina-tion of distant point-light sources by using the linearity of light transport [ 9]. We then take renderings from the teacher model as pseudo ground-truth to train an efficient student model that is conditioned on the target environment maps. However, we found that the student model architecture used in [ 4] for faces leads to severe overfitting when applied to relightable hands. This is caused by the architecture de-sign of holistically conditioning a bottleneck representation with the target lighting environment. This representation makes it difficult to reproduce geometric interactions be-tween lights and hand pose, such as those required to cast shadows from the fingers onto the palm across all possible finger configurations. 
Therefore, motivated by recent neural portrait relight-ing works [ 42,61], we instead propose to compute spatially aligned lighting information using physics-inspired illumi-nation features, including visibility, diffuse shading, and specular reflections. Because these features are based on geometry and approximate the first bounce of light trans-port, they show strong correlation with the full appearance and provide sufficient conditioning information to infer ac-curate radiance under natural illuminations. In particular, visibility plays a key role in disentangling lights and pose, reducing the learning of spurious correlations that can be present in limited training data. However, computing vis-ibility at full geometric resolution for every single light is too computationally expensive for real-time rendering. To address this, we propose using a coarse proxy mesh that shares the same UV parameterization as our hand model for computing the lighting features. We compute the features atvertices of the coarse geometry, and use barycentric inter-polation to create texel-aligned lighting features. Our fully convolutional architecture learns to compensate for the ap-proximate nature of the input features and infers both local and global light transport effects. This way, our model can render appearance under natural illuminations at real-time framerates as shown in Figure 1. Our study shows that both integrating visibility informa-tion and spatially aligned illumination features are impor-tant for generalization to novel illuminations and poses. We also demonstrate that our approach supports rendering of two hands in real-time, with realistic shadows cast across hands. Our contributions can be summarized as follows: •The first method to learn a relightable personalized hand model from multi-view light-stage data that sup-ports high-fidelity relighting under novel lighting en-vironments. •An illumination representation for parametric model relighting that is spatially aligned, leading to signifi-cant improvements in generalization and accuracy of shadows under articulation. •An efficient algorithm to compute spatially-aligned lighting features with visibility and shading informa-tion incorporated using a coarse proxy mesh, enabling real-time synthesis.
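
As a rough illustration of the physics-inspired features discussed above, the snippet below computes per-vertex, visibility-weighted diffuse and specular terms on a coarse proxy mesh for a discretized environment map. A Blinn-Phong lobe stands in for the specular feature, visibility is assumed to be precomputed (e.g., by ray casting against the proxy), and the barycentric interpolation to texel-aligned features is omitted; all names and the feature layout are assumptions, not the paper's implementation.

import numpy as np

def lighting_features(normals, view_dirs, light_dirs, light_rgb, visibility, shininess=32.0):
    """Per-vertex illumination features on a coarse proxy mesh.
    normals, view_dirs: [V, 3]; light_dirs: [L, 3]; light_rgb: [L, 3]; visibility: [V, L]."""
    n = normals / np.linalg.norm(normals, axis=-1, keepdims=True)
    v = view_dirs / np.linalg.norm(view_dirs, axis=-1, keepdims=True)
    l = light_dirs / np.linalg.norm(light_dirs, axis=-1, keepdims=True)
    ndotl = np.clip(n @ l.T, 0.0, None)                       # Lambertian term, [V, L]
    h = l[None, :, :] + v[:, None, :]                         # half vectors, [V, L, 3]
    h = h / np.linalg.norm(h, axis=-1, keepdims=True)
    ndoth = np.clip(np.einsum("vd,vld->vl", n, h), 0.0, None) ** shininess
    diffuse = (visibility * ndotl) @ light_rgb                # visibility-weighted diffuse, [V, 3]
    specular = (visibility * ndoth) @ light_rgb               # visibility-weighted specular, [V, 3]
    vis_feat = visibility.mean(axis=-1, keepdims=True)        # average visibility, [V, 1]
    return np.concatenate([diffuse, specular, vis_feat], axis=-1)

Because these terms approximate the first bounce of light transport and are attached to the geometry itself, a convolutional network conditioned on them only has to learn the residual global effects (scattering, inter-reflection), which is the motivation stated above.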
Das_Learning_Expressive_Prompting_With_Residuals_for_Vision_Transformers_CVPR_2023
Abstract Prompt learning is an efficient approach to adapt trans-formers by inserting learnable set of parameters into the input and intermediate representations of a pre-trained model. In this work, we present Expressive Prompts with Residuals (EXPRES) which modifies the prompt learn-ing paradigm specifically for effective adaptation of vi-sion transformers (ViT). Our method constructs down-stream representations via learnable “output” tokens (shal-low prompts), that are akin to the learned class tokens of the ViT. Further for better steering of the downstream repre-sentation processed by the frozen transformer, we introduce residual learnable tokens that are added to the output of various computations. We apply EXPRES for image classi-fication and few-shot semantic segmentation, and show our method is capable of achieving state of the art prompt tun-ing on 3/3 categories of the VTAB benchmark. In addition to strong performance, we observe that our approach is an order of magnitude more prompt efficient than existing vi-sual prompting baselines. We analytically show the compu-tational benefits of our approach over weight space adap-tation techniques like finetuning. Lastly we systematically corroborate the architectural design of our method via a series of ablation experiments.
1. Introduction Scaling up of neural nets in the past few years has steadily improved performance on wide variety of downstream vi-sual tasks. However, model adaptation is often necessary to achieve the best performance in downstream tasks like fine-grained recognition [89], semantic segmentation [8] or object recognition [34]. While traditional techniques like full-model finetuning have become the de-facto approach to adaptation, they are not well suited for many scenarios. For example, finetuning is susceptible to catastrophic for-getting [37] as it modifies model parameters without the knowledge of future domains, and potentially losing prior ∗Work conducted while interning at AWS AI Labs. ∗∗Work conducted while at AWS AI Labs. Figure 1. Adapting large vision models is crucial to solving down-stream tasks with wide variety of semantics ( e.g., image classifica-tion, semantic segmentation etc.) as well as dataset sizes (few-shot, low-shot, full-shot). In this work, we propose a novel adaptation technique for large vision models that is capable of achieving the desired goal. knowledge of current adaptation. Moreover, finetuning all of the model parameters of a large vision model with just a few training examples can lead to poor generalization. This is in contrast to human intelligence that is capable of solv-ing wide variety of downstream tasks with extremely few exemplars. Motivated by the need for better adaptation, parameter effi-cient techniques like partial-finetuning or adapters [64,101] have been developed to constructively adapt large models without significant parameter overhead. While serving as effective alternatives to finetuning, most parameter efficient techniques have been designed with convolutional architec-tures in mind. In light of recent works [15] that demonstrate that Vision Transformers are more suitable for scaling up than CNNs, designing adaptation techniques that exploit the Transformer architecture can be extremely useful. To that end, visual prompt tuning (VPT) [32] has been proposed as a way to constructively adapt transformers by introducing learnable tokens at every layer that interact with the patch and class tokens and are optimized together with a classi-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 3366 fier head. While being effective in practice, VPT allows only partial interactions between prompts and the remain-ing tokens, thus, leveraging only a part of the prompt capac-ity. Moreover, it often requires a large number of inserted prompts to achieve optimal performance but that signifi-cantly increases the computation costs due to the quadratic computational complexity of the self-attention layer. In this work, we explore an alternate design to prompting motivated by the potential for greater prompt capacity. We propose ExPRes, an expressive prompt tuning method with residual tokens that inherits the strengths of parameter effi-cient adaptation while significantly improving downstream performance. Our prompt design is inspired by the two key observations -propagation of prompts by multilayered in-teraction with other tokens is crucial for strong capacity and learnable residual tokens can modulate the propagated prompts to favour task-specific relations (unlike in [32]). 
We first propagate shallow prompts through the encoder that are average pooled at the last layer to yield seman-tic image-level representations. Shallow prompts by them-selves have limited capacity since they cannot specifically modulate token-token relations at higher layers. Therefore to harness the prompts, we add residual tokens to prop-agated prompts at various layerwise computations of the Transformer encoder including LayerNorm, self-attention and multi-head projection to facilitate layerwise modulation without increasing the number of prompts per layer. This results in enhanced prompt capacity at almost no additional computational cost. We empirically validate the effectiveness of our method on a variety of downstream tasks including fine-grained recog-nition and semantic segmentation. Our use of additional learnable parameters in the form of residual and shallow prompts allows the retention of prior knowledge in the form of frozen encoder weights while being extremely parameter efficient (prompts are ≤1%of the total parameters). Thus, our method is highly suited for real world adaptation that requires information retention at low memory and computa-tional overheads. Additionally, we show that in most cases we require fewer prompts than VPT to achieve the same or better performance, making it more suitable for limited data settings. Our main contributions can be summarized as fol-lows: • We propose a novel prompting technique: EXPRES, that uses a combination of shallow and deep residual prompts to facilitate constructive adaptation to down-stream tasks with limited labelled datasets. • Our method significantly outperforms full-finetuning based adaptation by 4.6%on VTAB-1k. Moreover, our method outperforms state-of-the-art prompting ap-proach [32] on the same benchmarks with significantly fewer prompts, suggesting that prompt design is cru-cial to extracting more capacity at a given parame-ter/computational budget. • To the best of our knowledge, we are the first to demonstrate the effectiveness of prompting for diverse applications such as few-shot semantic segmentation. Our method outperforms strong adaptation baselines by25% and achieves competitive performance with respect to language-assisted segmentation [40] despite training on significantly less data with dense annota-tions.
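
A minimal sketch of the prompting scheme described above is given below. For brevity it adds a per-layer learnable residual only to the prompt positions at each block output and average-pools the prompt ("output") tokens for classification, whereas the paper injects residuals inside layer computations such as LayerNorm, self-attention, and the multi-head projection. The class name, initialization, and hyperparameters are illustrative assumptions.

import torch
import torch.nn as nn

class ExpresLikePrompting(nn.Module):
    """Hedged sketch: shallow prompts plus per-layer residual tokens on a frozen
    ViT encoder, where blocks is a list of frozen transformer blocks acting on
    [B, N, D] token sequences produced by a frozen patch embedder."""
    def __init__(self, blocks, dim, num_prompts=8, num_classes=100):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)
        for p in self.blocks.parameters():
            p.requires_grad_(False)                                   # encoder stays frozen
        self.prompts = nn.Parameter(torch.randn(1, num_prompts, dim) * 0.02)   # shallow prompts
        self.residuals = nn.ParameterList(
            [nn.Parameter(torch.zeros(1, num_prompts, dim)) for _ in self.blocks]
        )                                                             # zero-init: starts as identity
        self.head = nn.Linear(dim, num_classes)

    def forward(self, patch_tokens):                                  # [B, N, D]
        b = patch_tokens.shape[0]
        x = torch.cat([self.prompts.expand(b, -1, -1), patch_tokens], dim=1)
        p = self.prompts.shape[1]
        for blk, res in zip(self.blocks, self.residuals):
            x = blk(x)
            x = torch.cat([x[:, :p] + res, x[:, p:]], dim=1)          # modulate prompt slots only
        return self.head(x[:, :p].mean(dim=1))                        # pool the prompt tokens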
Cha_Learning_To_Generate_Text-Grounded_Mask_for_Open-World_Semantic_Segmentation_From_CVPR_2023
Abstract We tackle open-world semantic segmentation, which aims at learning to segment arbitrary visual concepts in images, by using only image-text pairs without dense an-notations. Existing open-world segmentation methods have shown impressive advances by employing contrastive learn-ing (CL) to learn diverse visual concepts and transferring the learned image-level understanding to the segmenta-tion task. However, these CL-based methods suffer from a train-test discrepancy, since it only considers image-text alignment during training, whereas segmentation requires region-text alignment during testing. In this paper, we pro-posed a novel Text-grounded Contrastive Learning (TCL) framework that enables a model to directly learn region-text alignment. Our method generates a segmentation mask for a given text, extracts text-grounded image embedding from the masked region, and aligns it with text embedding via TCL. By learning region-text alignment directly, our frame-work encourages a model to directly improve the quality of generated segmentation masks. In addition, for a rigorous and fair comparison, we present a unified evaluation pro-tocol with widely used 8 semantic segmentation datasets. TCL achieves state-of-the-art zero-shot segmentation per-formances with large margins in all datasets. Code is avail-able at https://github.com/kakaobrain/tcl .
1. Introduction Open-world semantic segmentation aims to identify the arbitrary semantic concepts in the open world1. Conven-tional semantic segmentation aims to learn segmentation capability for the small number of pre-defined target cat-egories, whereas open-world semantic segmentation ad-dresses unrestricted arbitrary categories or free-form texts. Such segmentation capability over unlimited targets drasti-1This setting is often called both open-world andopen-vocabulary . In this paper, we mainly refer to this setting as open-world for clarity. PASCAL VOCPASCAL ContextCOCO Object PASCAL VOC20 PASCAL Context59 COCO Stuff CityscapesADE20kN/A 22.0 31.0 40.0 49.0 58.0 GroupViT MaskCLIP ReCo TCL (Ours)17.020.7524.528.2532.0 14.018.523.027.532.0 56.063.571.078.586.0 21.0 24.5 28.0 31.5 35.0 14.0 16.5 19.0 21.5 24.010.0 14.0 18.0 22.0 26.08.0 10.5 13.0 15.5 18.0Figure 1. Open-world segmentation performance comparison. The proposed method remarkably outperforms existing methods in all 8 segmentation benchmark datasets. cally extends the application scope of the open-world seg-mentation models. The first challenge for open-world segmentation is how to learn arbitrary concepts, beyond pre-defined cate-gories. Inspired by the success of CLIP [23], previous ap-proaches [11, 17–19, 28, 30, 33] tackle this challenge by ex-ploiting massive web-crawled image-text paired data; since the texts in web-crawled data contain a global semantic de-scription for the paired images, the large-scale image-text pairs can provide rich knowledge for arbitrary semantic cat-egories. However, there still remains another challenge in how to achieve precise localization of arbitrary concepts without dense annotations . There are several approaches that simply address this issue using dense annotation (seg-mentation masks) in addition to image-text pairs [11,17,18]. The dense annotation helps to improve segmentation perfor-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 11165 Region -Text AlignmentTwo birds are resting on a tree. Two birds are resting on a tree.Grounded Region Our ApproachPrevious Approach GrounderText ImageText Image Image -Text Alignment Training TestZero -shot Transfer Inference Aphoto ofa aeroplane . Figure 2. A conceptual comparison between the previous ap-proach and ours. Open-world segmentation is typically achieved through region-text alignment, which involves matching region features and text embeddings. However, previous methods learn image-text alignment during training, thus suffering from the alignment-level discrepancy between training and testing. In con-trast, our method facilitates end-to-end learning of region-text alignment with only image-text pairs. mance in a fixed benchmark dataset, but the requirements of expensive dense annotation still limit the applicable do-mains and scalability of the method. In this paper, therefore, we focus on open-world se-mantic segmentation from only image-text pairs without any dense annotation. For this setting, the existing meth-ods [19, 28, 30, 33] learn an image-text alignment capabil-ity during training and heavily rely on the transferability of the image-text alignment to perform region-text alignment at inference. More specifically, MaskCLIP [33] leverages CLIP models pre-trained to learn image-text alignment. 
To perform region-text alignment using CLIP, MaskCLIP ap-plies a simple heuristic modification to the CLIP image en-coder. GroupViT [30] and ViL-Seg [19] propose to cluster region-level visual features into distinct groups and gener-ate segmentation masks by matching the groups and texts. Note that they match the text embeddings and clustered re-gion features in test time, but in training time, the text em-beddings are aligned with global image embeddings. While the existing methods have shown impressive results even through the training with image-text alignment, they still suffer from the alignment-level discrepancy between train-ing and testing phases as depicted in Fig. 2. To address this train-test discrepancy, we propose the Text-grounded Contrastive Learning (TCL) framework, which allows a model to learn region-text alignment di-rectly from the image-text pairs without any dense anno-tations. Our key idea is to incorporate a text grounding pro-cedure within contrastive learning as illustrated in Fig. 2, where TCL generates a segmentation mask indicating text-grounded regions, computes grounded region embeddingsusing the mask, and applies contrastive learning between text and grounded region. By re-formulating the contrastive loss to be directly affected by the segmentation quality, TCL enables end-to-end training of the grounder and directly im-proves the quality of region-text level alignment. We also present a unified evaluation protocol using widely used 8 semantic segmentation datasets and compare existing meth-ods in the same setting. As a result, TCL achieves state-of-the-art zero-shot segmentation performance with large mar-gins in all datasets, as shown in Fig. 1. Our main contributions are summarized as follows: • We introduce a novel framework for open-world seg-mentation, named Text-grounded Contrastive Learn-ing (TCL), which enables learning region-text align-ment directly without train-test discrepancy, thus learning to generate more precise segmentation masks through only image-text pairs. • We present a unified evaluation protocol and re-evaluate recent open-world segmentation models for a fair and direct comparison. • We achieve the new state-of-the-art zero-shot segmen-tation performance on 8 segmentation datasets with large margins compared to existing methods.
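
The core of the text-grounded contrastive objective described above can be sketched as masked average pooling of image features under the generated mask, followed by a symmetric InfoNCE loss between grounded region embeddings and text embeddings. The temperature and names are assumptions, and the paper's full training objective is richer than this toy.

import torch
import torch.nn.functional as F

def grounded_region_embedding(feat_map, mask):
    """Masked average pooling: feat_map [B, D, H, W], soft mask [B, 1, H, W] in [0, 1]."""
    weighted = (feat_map * mask).sum(dim=(2, 3))
    return weighted / mask.sum(dim=(2, 3)).clamp(min=1e-6)

def text_grounded_contrastive_loss(region_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE: matching image-text pairs are positives, all other
    pairs in the batch are negatives, so the loss depends on the mask quality."""
    region = F.normalize(region_emb, dim=-1)
    text = F.normalize(text_emb, dim=-1)
    logits = region @ text.t() / temperature
    labels = torch.arange(logits.shape[0], device=logits.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))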
Cermelli_CoMFormer_Continual_Learning_in_Semantic_and_Panoptic_Segmentation_CVPR_2023
Abstract Continual learning for segmentation has recently seen increasing interest. However, all previous works focus on narrow semantic segmentation and disregard panoptic seg-mentation, an important task with real-world impacts. In this paper, we present the first continual learning model capable of operating on both semantic and panoptic seg-mentation. Inspired by recent transformer approaches that consider segmentation as a mask-classification problem, we design CoMFormer. Our method carefully exploits the prop-erties of transformer architectures to learn new classes over time. Specifically, we propose a novel adaptive distilla-tion loss along with a mask-based pseudo-labeling tech-nique to effectively prevent forgetting. To evaluate our ap-proach, we introduce a novel continual panoptic segmenta-tion benchmark on the challenging ADE20K dataset. Our CoMFormer outperforms all the existing baselines by for-getting less old classes but also learning more effectively new classes. In addition, we also report an extensive eval-uation in the large-scale continual semantic segmentation scenario showing that CoMFormer also significantly out-performs state-of-the-art methods.1
1. Introduction Image segmentation is a fundamental computer vision problem that enables machines to assign an image’s pix-els to discrete segments. Multiple segmentation tasks have been defined depending on the segments definitions. Se-mantic segmentation clusters pixels by classes, merging in a single segment pixels belonging to instances of the same class. Panoptic segmentation assigns to every pixel a se-mantic class while separating different instances into dif-ferent segments. This latter kind of segmentation has real-world impacts in autonomous robots and vehicles [7, 41]. Despite tremendous progress in image segmentation, the current approaches are trained on a static dataset with a pre-*Work done during the visiting period at Sorbonne Universit ´e. †Work done at Sorbonne Universit ´e, currently affiliated to DeepMind. 1https://github.com/fcdl94/CoMFormer Step t-1 -New Class: Car ImageAnnotationsPredictions ImageAnnotationsPredictions CoMFormer (t-1) CoMFormer (t) Step t -New Class: Person Figure 1. Illustration of our model, CoMFormer, operating in continual segmentation . Relying on the mask classification paradigm, it is able to cope with both continual semantic and panoptic segmentation without any modification by predicting masks for both old ( e.g.carin red) and new ( e.g.person in green) classes. The figure reports two classes and no “stuff” ( e.g.road, building ) only for illustration purposes. defined set of classes. Whenever an update of the model is required to fit new classes, the common solution is to train a model from scratch on the union of the old and new class data. A computationally more efficient solution would be to fine-tune the existing model solely on the new class data. Unfortunately, this approach would cause a catastrophic forgetting [21] of the old classes on which the model per-formance would be extremely degraded. The problem of updating the knowledge of the model over time is typically referred as continual learning. It has been traditionally studied in the context of image classifi-cation [17, 19, 29, 33, 43, 45] and is gaining attention on the segmentation task [3, 4, 15, 39, 59] due to the more realis-tic applications and the additional challenges it introduces, such as the background shift [4]. However, current state-of-the-art methods mainly focus on semantic segmentation and are not designed to work in other segmentation tasks, strongly limiting their application in the real world. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 3010 In this paper, we design the first method operating in both continual semantic and panoptic segmentation, as il-lustrated in Fig. 1. Our method, CoMFormer ( Continual MaskFormer ), takes inspiration from recent transformer architectures [11, 12], approaching segmentation as a mask classification problem. Instead of predicting a class proba-bility for each pixel, as in previous semantic segmentation works [9, 37], it predicts a set of binary masks, each as-sociated with a single class prediction, effectively address-ing both segmentation tasks without any modification in the training architecture and procedure. 
Differently from previ-ous works [11, 12], however, CoMFormer forces the output binary masks to be mutually exclusive to one another: a pixel can only be predicted by a single binary mask to pre-vent having several masks classifying the same pixel with different classes. This behavior is crucial in continual learn-ing to reduce the interference among old and new classes. Furthermore, CoMFormer introduces a novel adaptive distillation loss to alleviate forgetting. It enforces consis-tency of the model’s classification predictions across learn-ing steps only when it is useful to remember old classes, ensuring a better tradeoff between rigidity (not forgetting old classes) and plasticity (learning efficiently new classes). Finally, since at each training iteration the dataset reports annotations only for the current classes, we design a mask-based pseudo-labeling technique to generate annotations for the old classes, effectively alleviating forgetting. To reduce the noise, we consider the prediction confidence and we avoid interference with ground-truth annotations. We validate CoMFormer on both continual segmenta-tion tasks. For panoptic segmentation, we define a new benchmark relying on the challenging ADE20K where we demonstrate that CoMFormer largely outperforms all pre-vious baselines. On semantic segmentation, we show that CoMFormer outperforms the existing state-of-the-art meth-ods on every setting of the large-scale ADE20K benchmark. To sum up, the contributions of this paper are as follows: • We introduce continual panoptic segmentation which has real-world impacts in addition to being signifi-cantly more challenging than previous benchmarks. • We propose CoMFormer to tackle both continual panoptic and semantic segmentation. To avoid forget-ting, we design a novel adaptive distillation and an ef-ficient mask-based pseudo-labeling strategy. • Through extensive quantitative and qualitative bench-marks, we showcase the state-of-the-art performance of our model on both continual segmentation tasks.
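
A hedged sketch of the mask-based pseudo-labeling idea is shown below: confident mask predictions of the previous-step model are kept as annotations for old classes, while pixels covered by the current ground truth are excluded to avoid interference. The thresholds, the tensor layout (queries with a trailing "no object" logit), and all names are illustrative assumptions rather than the authors' implementation.

import torch

def mask_pseudo_labels(old_logits, old_masks, gt_masks, conf_thresh=0.7):
    """old_logits: [Q, C_old + 1] (last index = 'no object'); old_masks: [Q, H, W]
    mask logits from the previous-step model; gt_masks: [G, H, W] masks of the
    classes annotated at the current step."""
    probs = old_logits.softmax(dim=-1)
    conf, cls = probs[:, :-1].max(dim=-1)                 # best old class per query
    keep = conf > conf_thresh                             # keep only confident predictions
    gt_region = (gt_masks > 0.5).any(dim=0)               # pixels annotated this step
    pseudo_masks, pseudo_cls = [], []
    for q in torch.nonzero(keep).flatten():
        m = (old_masks[q].sigmoid() > 0.5) & ~gt_region   # avoid ground-truth pixels
        if m.float().mean() > 0.001:                      # drop near-empty masks
            pseudo_masks.append(m)
            pseudo_cls.append(cls[q])
    if not pseudo_masks:
        return None, None
    return torch.stack(pseudo_masks), torch.stack(pseudo_cls)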
Du_Conditional_Generation_of_Audio_From_Video_via_Foley_Analogies_CVPR_2023
Abstract The sound effects that designers add to videos are de-signed to convey a particular artistic effect and, thus, may be quite different from a scene’s true sound. Inspired by the challenges of creating a soundtrack for a video that dif-fers from its true sound, but that nonetheless matches the actions occurring on screen, we propose the problem of con-ditional Foley . We present the following contributions to address this problem. First, we propose a pretext task for training our model to predict sound for an input video clip using a conditional audio-visual clip sampled from another time within the same source video. Second, we propose a model for generating a soundtrack for a silent input video, given a user-supplied example that specifies what the video should “sound like”. We show through human studies and automated evaluation metrics that our model successfully generates sound from videos, while varying its output ac-cording to the content of a supplied example. Project site: https://xypb.github.io/CondFoleyGen .
1. Introduction When artists create sound effects for videos, they often “borrow” sounds from other sources, then manipulate them to match the on-screen actions. These artists’ aim is not necessarily to convey the scene’s true sound, but rather toachieve a desired artistic effect. Thus, the clunk of a coconut shell becomes a trotting horse, or the sizzle of cooking bacon becomes rain1. The problem of creating sound effects for video, known as Foley [1], has often been posed as predicting a video’s co-occurring sound [29,42,68]. Yet the task that artists solve is subtly different. They create a soundtrack for a video that differs from its true sound, but that still plausibly matches the on-screen events. Also, these prior systems largely do not give artists control over the output sound. To aid Foley artists while giving them artistic control, we propose a conditional Foley problem inspired by classic work on image analogies [27]. Our task is to generate a soundtrack for an input silent video from a user-provided conditional audio-visual example that specifies what the input video should “sound like.” The generated soundtrack should relate to the input video in an analogous way as the provided example (Fig. 1). This formulation naturally separates the problem of selecting an exemplar sound, which arguably requires the artist’s judgment, from the problem of manipulating that sound to match a video, such as by precisely adjusting its timing and timbre. This proposed task is challenging, since a system must 1We encourage you to watch and listen to how sound artists work: https://www.youtube.com/watch?v=UO3N_PRIgX0 This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 2426 learn to adapt the exemplar (conditional) sound to match the timing of the visual content of a silent video while preserv-ing the exemplar sound’s timbre. While prior methods can predict a video’s sound [29, 42, 68], they cannot incorporate an artist’s exemplary conditional sound. Furthermore, while vision-to-sound methods can pose the problem as predicting a video’s soundtrack from its images, it is less clear how supervision for conditional examples can be obtained. To address these challenges, we contribute a self-supervised pretext task for learning conditional Foley, as well as a model for solving it. Our pretext task exploits the fact that natural videos tend to contain repeated events that produce closely related sounds. To train the model, we randomly sample two pairs of audio-visual clips from a video, and use one as the conditional example for the other. Our model learns to infer the types of actions within the scene from the conditional example, and to generate analogous sounds to match the input example. At test time, our model generalizes to conditional sounds obtained from other videos. To solve the task, we train a Transformer [58] to autoregressively predict a sequence of audio codes for a spectrogram VQGAN [13], while conditioning on the provided audio-visual example. We improve the model’s performance at test time by generating a large number of soundtracks, then using an audio-visual synchronization model [8, 30, 41] to select the sound with the highest degree of temporal alignment with the video. 
We evaluate our model on the Greatest Hits dataset [42], which contains videos that require an understanding of material properties and physical interactions, and via qualitative examples from the highly diverse CountixAV dataset [66]. Through perceptual studies and quantitative evaluations, we show that our model generates soundtracks that convey the physical properties of conditional examples while reflecting the timing and motions of the on-screen actions.
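
The pretext task described above amounts to sampling two disjoint clips from the same video and using one as the conditional audio-visual example for the other. A toy sampler is given below; the clip length and minimum gap are chosen arbitrarily for illustration and are not the paper's values.

import random

def sample_condfoley_pair(video_len_s, clip_len_s=2.0, min_gap_s=0.5):
    """Self-supervised pretext pair: two disjoint clips from the same video.
    The first returned interval is the conditional audio-visual example; the
    second is the silent input whose audio is the prediction target."""
    assert video_len_s >= 2 * clip_len_s + min_gap_s
    t_cond = random.uniform(0.0, video_len_s - 2 * clip_len_s - min_gap_s)
    t_tgt = random.uniform(t_cond + clip_len_s + min_gap_s, video_len_s - clip_len_s)
    if random.random() < 0.5:                 # either clip may come first in time
        t_cond, t_tgt = t_tgt, t_cond
    return (t_cond, t_cond + clip_len_s), (t_tgt, t_tgt + clip_len_s)

At training time the audio of the second clip is the prediction target; at test time the conditional clip can instead come from an entirely different video.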
Chugunov_Shakes_on_a_Plane_Unsupervised_Depth_Estimation_From_Unstabilized_Photography_CVPR_2023
Abstract Modern mobile burst photography pipelines capture and merge a short sequence of frames to recover an enhanced image, but often disregard the 3D nature of the scene they capture, treating pixel motion between images as a 2D ag-gregation problem. We show that in a “long-burst”, forty-two 12-megapixel RAW frames captured in a two-second se-quence, there is enough parallax information from natural hand tremor alone to recover high-quality scene depth. To this end, we devise a test-time optimization approach that fits a neural RGB-D representation to long-burst data and simultaneously estimates scene depth and camera motion. Our plane plus depth model is trained end-to-end, and per-forms coarse-to-fine refinement by controlling which multi-resolution volume features the network has access to at what time during training. We validate the method experi-mentally, and demonstrate geometrically accurate depth re-constructions with no additional hardware or separate data pre-processing and pose-estimation steps.
1. Introduction Over the last century we saw not only the rise and fall in popularity of film and DSLR photography, but of standalone cameras themselves. We’ve moved into an era of ubiquitous multi-sensor, multi-core, multi-use, mobile-imaging platforms [12]. Modern cellphones offer double-digit megapixel image streams at high framerates; optical image stabilization; on-board motion measurement devices such as accelerometers, gyroscopes, and magnetometers; and, most recently, integrated active depth sensors [43]. This latest addition speaks to a parallel boom in the field of depth imaging and 3D reconstruction [22, 84]. As users often photograph people, plants, food items, and other com-plex 3D shapes, depth can play a key role in object under-standing tasks such as detection, segmentation, and track-ing [32, 63, 80]. 3D information can also help compen-sate for non-ideal camera hardware and imaging settings through scene relighting [20, 55, 79], simulated depth-of-field effects [1,71,72], and frame interpolation [2]. Beyond x42 DepthMotion Long BurstData Capture MLP Scene Composite Depth Plane MaskFigure 1. Our neural RGB-D model fits to a single long-burst im-age stack to distill high quality depth and camera motion. The model’s depth-on-a-plane decomposition can facilitate easy back-ground masking, segmentation, and image compositing. helping improve or understand RGB content, depth itself is a valuable output for simulating objects in augmented real-ity [5, 13, 44, 64] and interactive experiences [26, 36]. Depth reconstruction can be broadly divided into pas-siveandactive approaches. Passive monocular depth esti-mation methods leverage training data to learn shape pri-ors [6, 30, 59] – e.g., what image features imply curved versus flat objects or occluding versus occluded structures – but have a hard time generalizing to out-of-distribution scenes [48,60]. Multi-view depth estimation methods lower this dependence on learned priors by leveraging parallax in-formation from camera motion [16, 69] or multiple cam-eras [45, 67] to recover geometrically-guided depth. The recent explosion in neural radiance field approaches [49,50, 66, 81] can be seen a branch of multi-view stereo where a system of explicit geometric constraints is swapped for a more general learned scene model. Rather than classic fea-ture extraction and matching, these models are fit directly to image data to distill dense implicit 3D information. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 13240 Active depth methods such as pulsed time-of-flight [46] (e.g., LiDAR), correlation time-of-flight [38], and struc-tured light [61, 83] use controlled illumination to help with depth reconstruction. While these methods are less re-liant on image content than passive ones, they also come with complex circuitry and increased power demands [28]. Thus, miniaturization for mobile applications results in very low-resolution sub-kilopixel sensors [8, 27, 74]. The Apple iPhone 12-14 Pro devices, which feature one of these minia-turized sensors, use depth derived from RGB, available at 12mega-pixel resolution, to recover scene details lost in the sparse LiDAR measurements. While how exactly they use the RGB stream is unknown, occluding camera sensors re-veals that the estimated geometry is the result of monocular RGB-guided depth reconstruction. 
Returning to the context of mobile imaging, even several seconds of continuous mode photography, which we refer to as a “long-burst”, contain only millimeter-scale view vari-ation from natural hand tremor [11]. While these micro-baseline [33] shifts are effectively used in burst superreso-lution and denoising methods [58, 76] as indirect observa-tions of content between sensor pixels, 3D parallax effects on pixel motion are commonly ignored in these models as the depth recovered from this data is too coarse for sub-pixel refinement [31, 33, 82]. A recent work [11] demonstrates high-quality object reconstructions refined with long-burst RGB data, but relies on the iPhone 12 Pro LiDAR sensor for initial depth estimates and device poses, not available on many other cellphones. They treat these poses as ground truth and explicitly solve for depth through minimization of photometric reprojection loss. In this work, we devise an unsupervised end-to-end ap-proach to jointly estimate high-quality object depth and camera motion from more easily attainable unstabilized two-second captures of 12-megapixel RAW frames and gy-roscope data. Our method requires no depth initialization or pose inputs, only a long-burst. We formulate the prob-lem as an image synthesis task, similar to neural radiance methods [50], decomposed into explicit geometric projec-tion through continuous depth and pose models. In con-trast to recent neural radiance methods, which typically es-timate poses in a pre-processing step, we jointly distill rela-tive depth and pose estimates as a product of simply fitting our model to long-burst data and minimizing photometric loss. In summary, we make the following contributions: • An end-to-end neural RGB-D scene fitting approach that distills high-fidelity affine depth and camera pose estimates from unstabilized long-burst photography. • A smartphone data collection application to capture RAW images, camera intrinsics, and gyroscope data for our method, as well as processed RGB frames, low-resolution depth maps, and other camera metadata.• Evaluations which demonstrate that our approach out-performs existing single and multi-frame image-only depth estimation approaches, with comparisons to high-precision structured light scans to validate the ac-curacy of our reconstructed object geometries. Code, data, videos, and additional materials are available on our project website: https://light.princeton.edu/soap
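
The photometric objective at the heart of this kind of test-time fitting can be sketched as follows: back-project reference pixels with the predicted depth, warp them into another burst frame with the predicted relative pose, and penalize the photometric difference. This is a simplified pinhole-camera sketch — it omits the plane-plus-depth decomposition, RAW processing, and coarse-to-fine details described in the paper — and all names are assumptions.

import torch
import torch.nn.functional as F

def photometric_reprojection_loss(ref_img, src_img, depth, K, R, t):
    """ref_img, src_img: [3, H, W]; depth: [H, W]; K, R: [3, 3]; t: [3]."""
    _, H, W = ref_img.shape
    dev = ref_img.device
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32, device=dev),
                            torch.arange(W, dtype=torch.float32, device=dev),
                            indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(3, -1)   # homogeneous pixels
    rays = torch.linalg.inv(K) @ pix                                         # camera rays
    pts = rays * depth.reshape(1, -1)                                        # back-projected 3D points
    proj = K @ (R @ pts + t.reshape(3, 1))                                   # project into source frame
    uv = proj[:2] / proj[2:].clamp(min=1e-6)
    grid = torch.stack([2 * uv[0] / (W - 1) - 1, 2 * uv[1] / (H - 1) - 1], dim=-1)
    warped = F.grid_sample(src_img[None], grid.reshape(1, H, W, 2),
                           align_corners=True, padding_mode="border")
    return (warped[0] - ref_img).abs().mean()                                # photometric L1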
Chen_Learning_the_Distribution_of_Errors_in_Stereo_Matching_for_Joint_CVPR_2023
Abstract We present a new loss function for joint disparity and uncertainty estimation in deep stereo matching. Our work is motivated by the need for precise uncertainty estimates and the observation that multi-task learning often leads to improved performance in all tasks. We show that this can be achieved by requiring the distribution of uncertainty to match the distribution of disparity errors via a KL diver-gence term in the network’s loss function. A differentiable soft-histogramming technique is used to approximate the distributions so that they can be used in the loss. We exper-imentally assess the effectiveness of our approach and ob-serve significant improvements in both disparity and uncer-tainty prediction on large datasets. Our code is available at https://github.com/lly00412/SEDNet.git .
1. Introduction
Many computer vision problems can be formulated as estimation tasks. Considering, however, that even high-performing estimators are not error-free, associating confidence or uncertainty with their estimates is of great importance, particularly in critical applications. In this paper, we focus on disparity estimation via stereo matching, but we are confident that our approach is applicable to other pixel-wise regression tasks after minor modifications. We distinguish between confidence and uncertainty: the former refers to a probability or likelihood of correctness, while the latter is related to the magnitude of the expected error of an estimate. Confidence can be used to reject estimates that are suspected to be incorrect, or to rank them from most to least reliable. We argue that uncertainty is more valuable because it can also be used for fusing multiple observations, e.g., in a Kalman filtering framework. Most research has focused on confidence estimation for stereo matching [12, 30]. Moreover, most methods estimate confidence for pre-computed disparities that are not further improved. Joint estimation of disparity and confidence, which benefits both due to multi-task learning, is addressed infrequently [19, 20, 26, 34].
Our work is partially inspired by the joint treatment of epistemic and aleatoric uncertainty by Kendall and Gal [14], who propose novel loss functions that give rise to uncertainty estimates in pixel-wise vision tasks. Results on semantic segmentation and single-image depth estimation demonstrate how the primary task benefits from simultaneous uncertainty estimation. Kendall and Gal argue that "in many big data regimes (such as the ones common to deep learning with image data), it is most effective to model aleatoric uncertainty," while epistemic uncertainty can be reduced when large amounts of data are available. Here, we restrict our attention to aleatoric uncertainty.
Our motivation is that ideally we should be able to predict the magnitude of the estimator's error at each pixel. Of course, this is unrealistic, since if it were possible, we could drive all errors down to zero. A feasible objective is to train an uncertainty estimator whose outputs follow the same distribution as the true errors of the disparity estimator.
In this paper, we present an implementation of this concept via a deep network that jointly estimates disparity and its uncertainty from a pair of rectified images. We named the network SEDNet, for Stereo Error Distribution Network. SEDNet includes a novel, lightweight uncertainty estimation subnetwork that predicts the aleatoric uncertainty of stereo matching, and a new loss to match the distribution of uncertainties with that of disparity errors. To generate the inputs to this new loss, we approximate the distributions from the samples of disparity errors and uncertainty values in a differentiable way via a soft-histogramming technique.
Figure 1. Examples of left images and predicted disparity maps by SEDNet on DrivingStereo [36]. The first example is taken around sunset with over-exposure. The second example is taken on a rainy day with under-exposure. In both challenging cases, SEDNet predicts accurate disparity.
We present extensive experimental validation of SEDNet's performance in disparity estimation and uncertainty prediction on large datasets with ground truth. SEDNet is superior to baselines with similar, even identical, architecture, but without the proposed loss function. Our main contributions are:
• a novel uncertainty estimation subnetwork that extracts information from the intermediate multi-resolution disparity maps generated by the disparity subnetwork,
• a differentiable soft-histogramming technique used to approximate the distributions of disparity errors and estimated uncertainties,
• a loss based on KL divergence applied on histograms obtained with the above technique.
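As a rough illustration of how such a distribution-matching term could be implemented, the sketch below builds differentiable soft histograms of absolute disparity errors and of predicted uncertainties using Gaussian kernels, then compares them with a KL divergence. This is a reconstruction from the description above rather than the SEDNet code; the bin range, bandwidth, and bin count are placeholder choices.

```python
# Hedged sketch of a soft-histogram + KL distribution-matching loss.
import torch

def soft_histogram(x, bin_centers, bandwidth=0.5):
    """x: (N,) samples; bin_centers: (B,). Each sample contributes a Gaussian
    weight to every bin, keeping the histogram differentiable."""
    diff = x.unsqueeze(1) - bin_centers.unsqueeze(0)      # (N, B)
    weights = torch.exp(-0.5 * (diff / bandwidth) ** 2)   # (N, B)
    hist = weights.sum(dim=0)                             # (B,)
    return hist / hist.sum().clamp_min(1e-8)              # normalized

def distribution_matching_loss(pred_disp, gt_disp, pred_uncert,
                               n_bins=32, max_val=8.0):
    errors = (pred_disp - gt_disp).abs().flatten()
    uncert = pred_uncert.flatten()
    centers = torch.linspace(0.0, max_val, n_bins, device=pred_disp.device)
    p = soft_histogram(errors.detach(), centers)   # target: error distribution
    q = soft_histogram(uncert, centers)            # prediction: uncertainty distribution
    # KL(p || q) with a small epsilon for numerical stability.
    return torch.sum(p * (torch.log(p + 1e-8) - torch.log(q + 1e-8)))
```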
Garg_Samples_With_Low_Loss_Curvature_Improve_Data_Efficiency_CVPR_2023
Abstract
In this paper, we study the second-order properties of the loss of trained deep neural networks with respect to the training data points to understand the curvature of the loss surface in the vicinity of these points. We find that there is an unexpected concentration of samples with very low curvature. We note that these low-curvature samples are largely consistent across completely different architectures, and identifiable in the early epochs of training. We show that the curvature relates to the 'cleanliness' of the data points, with low-curvature samples corresponding to clean, higher-clarity samples, representative of their category. Alternatively, high-curvature samples are often occluded, have conflicting features, and are visually atypical of their category. Armed with this insight, we introduce SLo-Curves, a novel coreset identification and training algorithm. SLo-Curves identifies the samples with low curvature as being more data-efficient and trains on them with an additional regularizer that penalizes high curvature of the loss surface in their vicinity. We demonstrate the efficacy of SLo-Curves on the CIFAR-10 and CIFAR-100 datasets, where it outperforms state-of-the-art coreset selection methods at small coreset sizes by up to 9%. The identified coresets generalize across architectures, and hence can be pre-computed to generate condensed versions of datasets for use in downstream tasks. Code is available at https://github.com/isha-garg/SLo-Curves.
1. Introduction
Deep learning applications have exploded due to access to big data and computational resources. However, data is expensive to gather, annotate and store, and directly influences the computational resources required. Storing more data also runs higher risks of data leakage and privacy violation. It also influences parallelism and communication bottlenecks between machines [17]. There is limited understanding of the mechanism through which deep learning models process complex datasets, what constitutes 'good' or 'easy' examples for learning, and how large a number of samples is indeed beneficial. Research on data efficiency often focuses on doing more with less data. Standard training methods assume all data points from the training set to be independently and identically distributed from the true training distribution. Data points are sampled uniformly during training and treated as having equal significance. An alternative is to identify representative data points from the training distribution that are more beneficial to learning than random uniform sampling [30, 31]. They can be used as smaller condensed datasets, or upweighted during training as per their significance. These subsets of important points are also known as coresets.
Figure 1. Histograms of the training dataset's curvature for various networks trained on CIFAR-10.
Coresets prove useful in many downstream applications. They are data-efficient and can serve as a good choice of proxy data for Neural Architecture Search (NAS) [45, 52], as episodic memory in Continual Learning [2, 6, 60], and as the choice of augmentation samples during self-supervised learning [37]. They can be utilized in many regimes with limited memory or compute resources, such as for compute-efficient learning [35, 43, 46], dataset condensation [8, 58, 61], efficient Hyper-parameter Optimization (HPO) [33], and speeding up the training of Generative Adversarial Networks (GANs) [54]. The utility of identifying important samples can extend beyond efficiency, such as in understanding dataset limitations [5, 55], enabling faster or better convergence by upsampling certain samples during training [12, 40], or in the choice of data to label in active learning [1, 50]. However, it is not always clear what makes a sample informative or good for learning. Methods differ in their definition of the significance of samples. Some representative works measure importance via the confidence or margin of the predicted output of the network [1, 11], by clustering samples together in the input or feature space [10, 22], by matching the gradient to that of the entire dataset [35, 43, 46], by choosing samples that lie closest to the decision boundary [14], or by keeping samples that are not forgotten once learnt [57].
Figure 2. We visualize the 10 samples from each class with the lowest and highest curvature of loss ((a) lowest, (b) highest), identified over 10 different randomly initialized ResNet18 models trained on CIFAR-10. Each row corresponds to a class, consistent across both panels, and 10 ordered samples for each class are shown. The network was trained with random horizontal flipping as part of the augmentations.
Some other representative methods frame it as a submodular function optimization problem [27, 43].
In this paper, we look at data efficiency through the lens of the loss landscape around the training samples. While the loss surface with respect to the parameters of the neural network has received considerable attention as a means of analyzing the stability of the solution [19, 47–49], the loss landscape with respect to the data is far less studied. Most of the studies have been associated with adversarial robustness [44]. We are interested in the curvature of the loss surface, or inversely, the smoothness of the decision boundary around the data point. This is captured by the local linearity of the gradient around the sample, measured as the trace of the Hessian. We plot the histogram of an efficiently calculable estimate of this trace over the training dataset of different pretrained models in Figure 1 for the CIFAR-10 dataset [38]. We note that the histograms resemble a bimodal distribution, with a spread-out Gaussian superimposed upon a very sharp, tall peak around zero. We are interested in the samples that make up the sharp peak, the ones around which the loss surface curvature is very low.
We visualize the samples ordered by curvature, accumulated over ten differently initialized ResNet18 [23] models trained on CIFAR-10, in Figure 2. We find that they reveal useful information about the kinds of samples present in the dataset. The low-curvature samples, shown in Figure 2a, can be considered clean, prototypical and minimal, in that they are strongly representative of their category. On the other hand, Figure 2b shows the samples with the highest curvature of loss. We can see that they do not appear to be characteristic of their category. They have confusing backgrounds with interfering patterns, uncommon viewing angles and incomplete shapes. We show that the low-curvature samples appear to be largely consistent not only across networks that were initialized differently, but also across completely different architectures such as MobileNetV3 [24], ResNet101 [23], AlexNet [39], VGG19 [53] and DenseNet121 [25]. We also show that they can be identified quite early on during the training process. As an application of this insight, we show that they make very good coresets.
Many coreset selection methods perform well at large coreset sizes, close to the size of the whole dataset. In this paper, we explore smaller coresets, ranging from single-digit images per class, as is often common with few-shot learning [16, 18], up to 100 images per class. Depending on the dataset and the number of classes, this can range from 0.1% to 20% of the dataset. We introduce SLo-Curves, a novel coreset selection method which identifies samples with low curvature and trains on these coresets with an additional regularizer that penalizes large curvature. Both the method for measuring the curvature and the form of the regularizer are inspired directly by CURE [44], which introduced the regularizer as a means to promote adversarial accuracy at the expense of clean accuracy. We show that for small, appropriately chosen sample sets, this regularizer also helps improve clean accuracy. We summarize our contributions below:
1. We study the loss surface with respect to the input data points and identify an unexpected concentration of samples with very low curvature.
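The curvature estimate discussed above can be approximated without forming the Hessian explicitly. The sketch below uses a finite difference of input gradients along random directions, in the spirit of the CURE-style estimator referenced in the text; it is an illustrative assumption-laden version, not the released SLo-Curves code.

```python
# Hedged sketch: estimate the curvature of the loss around an input sample via
# a finite difference of gradients along random Rademacher directions.
import torch
import torch.nn.functional as F

def input_curvature(model, x, y, h=1e-3, n_probes=4):
    """x: (B, C, H, W) images; y: (B,) labels. Returns a scalar curvature proxy.
    For use as a training regularizer, create_graph=True would be needed."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    g = torch.autograd.grad(loss, x)[0]

    curv = 0.0
    for _ in range(n_probes):
        v = torch.randint_like(x, 2) * 2.0 - 1.0          # random +/-1 direction
        x_pert = (x + h * v).detach().requires_grad_(True)
        loss_p = F.cross_entropy(model(x_pert), y)
        g_p = torch.autograd.grad(loss_p, x_pert)[0]
        # ||grad(x + h v) - grad(x)||^2 / h^2 probes the local gradient linearity.
        curv += ((g_p - g) ** 2).sum() / (h ** 2)
    return curv / n_probes
```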
Dong_MaskCLIP_Masked_Self-Distillation_Advances_Contrastive_Language-Image_Pretraining_CVPR_2023
Abstract
This paper presents a simple yet effective framework, MaskCLIP, which incorporates a newly proposed masked self-distillation into contrastive language-image pretraining. The core idea of masked self-distillation is to distill the representation from a full image to the representation predicted from a masked image. Such incorporation enjoys two vital benefits. First, masked self-distillation targets local patch representation learning, which is complementary to vision-language contrastive learning's focus on text-related representation. Second, masked self-distillation is also consistent with vision-language contrastive learning from the perspective of the training objective, as both utilize the visual encoder for feature aligning, and thus it is able to learn local semantics with indirect supervision from the language. We provide specially designed experiments with a comprehensive analysis to validate the two benefits. Symmetrically, we also introduce local semantic supervision into the text branch, which further improves the pretraining performance. With extensive experiments, we show that MaskCLIP, when applied to various challenging downstream tasks, achieves superior results in linear probing, finetuning, and zero-shot performance with the guidance of the language encoder. Code will be released at https://github.com/LightDXY/MaskCLIP.
1. Introduction
Vision-language (VL) contrastive learning [31, 51] has shown remarkable success in pretraining for various tasks. With large-scale image-text pairs available on the Internet, a model composed of a simple dual-encoder design learns a strong semantic prior by aligning image and text. The resulting visual encoder not only exhibits excellent linear probing and finetuning performance, but also enables impressive zero-shot performance with the guidance of the language encoder, showing the generality of natural language and its ability to supervise a wide range of visual concepts. Nonetheless, the associated language description, though providing richer information than mere class labels, still can hardly describe all the information in the corresponding image, as images are continuous signals with fine-grained details and complex semantics. As a result, VL contrastive learning, by aligning global representations, may only focus on the text-described objects and ignore the rest, which might be useful for downstream tasks.
In this paper, we are interested in how to fully leverage the image itself to facilitate VL contrastive learning and further improve the transfer capability. (1) Firstly, the learned feature representation shall characterize local patches, serving as a complement to the global representation in VL contrastive learning. Inspired by the recent success of masked image modeling [4, 19, 26, 51, 60, 61] in learning patch representations, we also randomly mask a large portion of the input image to force the visual encoder to focus on the remaining visible patches. (2) Secondly, the learned representation for local patches shall possess semantic meaning, consistent with the global representation receiving semantic text supervision. We bring mean-teacher self-distillation [25, 57] to supervise the learned patch representations with the visual feature representations, enabling implicit supervision from natural language. The resulting objective is denoted as masked self-distillation, where the student model and the teacher model come from the same neural network and the knowledge is distilled from the full image (fed to the teacher model) to the masked image (fed to the student model). To this end, we introduce MaskCLIP by incorporating masked self-distillation into VL contrastive learning to advance the transferable visual encoder.
Several recent attempts [49, 68] also explore the capability of the visual encoder under natural language supervision. The common approach is to introduce contrastive learning or masked image modeling on the vision side together with contrastive language-image pretraining. The performance indeed improves over CLIP, but not as much as with our masked self-distillation. We argue that (1) the contrastive learning objective based on central-crop augmentation actually learns global representations for salient objects while paying little attention to the surrounding backgrounds [11]; and (2) masked image modeling usually needs to remap the learned representation to pixels [26] or discrete tokens [4].
Such a low-level prediction target is inefficient for semantic feature learning and thus also conflicts with the high-level language supervision in VL contrastive learning. A brief illustration is presented in Figure 1. In the experiments, we conduct comprehensive ablations to analyze the difference and provide numerical and visual evidence for better understanding.
Symmetrically, we argue that local semantic supervision on the text branch is also helpful for the text encoder and eventually beneficial for zero-shot performance. So we introduce the same mask-data-modeling style of supervision into the text branch as well. Different from images, where the pixel is a low-level signal, the words crafted by human beings are already highly semantic, so we use the tokenized word piece as the prediction target directly, following the well-studied masked language modeling method BERT. Meanwhile, to reduce the output conflicts between contrastive learning and masked language modeling, we introduce a small decoder for the masked language modeling branch.
We train our MaskCLIP on a subset of a publicly available image-text pairs dataset, YFCC [58], and thoroughly evaluate the transfer ability of the visual representations on several vision benchmarks: ImageNet-1K [17] for classification, ADE20K [69] for semantic segmentation, MS-COCO [40] for detection and segmentation, as well as a batch of other classification benchmarks. On ImageNet-1K [17] classification, MaskCLIP achieves +6.9%, +7.2%, and +1.3% higher accuracy than CLIP for zero-shot transfer, linear probing, and finetuning respectively. For vision downstream tasks, we reach +2.7 mIoU on ADE20K [69] and +1.8 APb, +1.4 APm on MS-COCO [40]. For vision-language tasks, MaskCLIP achieves +6.1% average zero-shot accuracy on 20 datasets, and +17.2%, +12.8% rank@1 improvement on Flickr30K [67] image-text retrieval. In the recent Image Classification in the Wild challenge academic track, our MaskCLIP gets the 1st result with 48.9% top-1 average accuracy, surpassing the second team by 3.4%.
In summary, the major contributions of this work are:
1. We present a novel vision-language pretraining framework, MaskCLIP, by introducing a masked self-distillation objective to facilitate VL contrastive learning for better transferable visual models.
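For intuition, a masked self-distillation objective of the kind described above can be sketched as follows. This is one possible interpretation of the description, not the released MaskCLIP code; in particular, the encoder interface (per-patch outputs, a `num_patches` attribute, and a `visible_mask` argument) and the choice of smooth-L1 as the distillation distance are assumptions made for the example.

```python
# Hedged sketch: EMA teacher sees the full image, student sees a masked view,
# and student patch features are regressed onto the teacher's.
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, m=0.999):
    # Mean-teacher update: teacher parameters track an EMA of the student's.
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(m).add_(ps.detach(), alpha=1.0 - m)

def masked_self_distillation(student, teacher, images, mask_ratio=0.75):
    """images: (B, 3, H, W). Both encoders are assumed to return per-patch
    features of shape (B, N, D); the student additionally accepts a boolean
    visibility mask over its N patches (hypothetical interface)."""
    B = images.shape[0]
    n_patches = student.num_patches                 # assumed attribute
    n_keep = int(n_patches * (1 - mask_ratio))

    noise = torch.rand(B, n_patches, device=images.device)
    keep = noise.argsort(dim=1)[:, :n_keep]         # indices of visible patches
    mask = torch.zeros(B, n_patches, dtype=torch.bool, device=images.device)
    mask[torch.arange(B, device=images.device).unsqueeze(1), keep] = True

    with torch.no_grad():
        target = teacher(images)                    # (B, N, D), full view
    pred = student(images, visible_mask=mask)       # (B, N, D), masked view

    invisible = ~mask                               # distill on masked positions
    return F.smooth_l1_loss(pred[invisible], target[invisible])
```

In a full pipeline this term would be added to the standard image-text contrastive loss, with `ema_update` called after each optimizer step.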
Cetintas_Unifying_Short_and_Long-Term_Tracking_With_Graph_Hierarchies_CVPR_2023
Abstract
Tracking objects over long videos effectively means solving a spectrum of problems, from short-term association for unoccluded objects to long-term association for objects that are occluded and then reappear in the scene. Methods tackling these two tasks are often disjoint and crafted for specific scenarios, and top-performing approaches are often a mix of techniques, which yields engineering-heavy solutions that lack generality. In this work, we question the need for hybrid approaches and introduce SUSHI, a unified and scalable multi-object tracker. Our approach processes long clips by splitting them into a hierarchy of sub-clips, which enables high scalability. We leverage graph neural networks to process all levels of the hierarchy, which makes our model unified across temporal scales and highly general. As a result, we obtain significant improvements over the state-of-the-art on four diverse datasets. Our code and models are available at bit.ly/sushi-mot.
1. Introduction
Multi-Object Tracking (MOT) aims to identify the trajectories of all moving objects from a video. It is an essential task for many applications such as autonomous driving, robotics, and video analysis. Tracking-by-detection is a commonly used paradigm that divides the problem into (i) detecting objects at every frame and (ii) performing data association, i.e., linking objects into trajectories.
In the presence of highly accurate object detections, data association happens mostly among detections that are close in time, i.e., short-term association. Simple cues such as position and motion-based proximity [2, 3, 45, 65, 67] or local appearance [30, 49, 52, 66] are often enough for accurate association. Different challenges appear in crowded scenes, when objects may often be occluded and not detected for several frames. This forces methods to perform association among detections in distant time frames, i.e., long-term association. This requires specific solutions that build more robust global appearance models [32, 35, 44], create motion models capable of long-term trajectory prediction [11, 21, 37], or bring robustness by performing association across all frames and all trajectories using a graph representation [4, 8, 14, 43, 44].
Due to the different nature of these tasks, solutions used for short-term association tend to fail in long-term scenarios. In fact, most state-of-the-art trackers use a combination of approaches to track over different timespans and can therefore be considered multi-level trackers. Several short-term trackers use independent re-identification (reID) mechanisms for long-term association [2, 28, 40, 52, 57]. Analogously, various graph approaches rely on local trackers to perform short-term association [4, 15, 16]. All of these hybrid multi-level approaches have two main limitations.
The first one is scalability, since current methods cannot deal with long videos. As we increase the timespan between detections to be linked, association becomes more ambiguous due to significant appearance changes and large displacements. Hence, local trackers using a handcrafted combination of appearance and motion cues will fail to scale to arbitrary timespans. Graph-based methods are more robust, but association over large timespans entails the creation of very large graphs (even if combined with local methods), which is infeasible both computationally and memory-wise.
The second limitation is generality. Using different techniques for different timespans requires making strong assumptions about the cues needed at each temporal scale, which limits the applicability of these approaches. For instance, in tracking scenarios where people dress uniformly and the frame rate is high, e.g., dancing videos [41], proximity- or motion-based local trackers [2, 3, 65, 67] are more reliable than appearance-based trackers. On the other hand, in the presence of heavy camera motion or low frame rates, the performance of the aforementioned trackers degrades significantly, and appearance may become the most reliable cue [30, 60]. Overall, these discrepancies inevitably lead to handcrafted solutions for each type of scenario.
In this work, we ask the following question: can we design a unified method that generalizes to multiple timespans and further scales to long videos?
We propose a method that processes videos hierarchically: lower levels of our hierarchy focus on short-term association, and higher levels focus on increasingly long-term scenarios. The key difference to existing hybrid multi-level solutions is that we use the same learnable model for all time scales, i.e., hierarchy levels. Instead of handcrafting different models for different scales, we show that our model can learn to exploit the cues that are best suited for each time scale in a data-driven manner. Furthermore, our hierarchy allows a finer-grained transition from short- to long-term instead of using two distinct stages. Our method targets the two main limitations of previous works: (i) its hierarchical structure makes it highly scalable and enables processing long clips efficiently, and (ii) it is highly general and does not make any assumptions about which cues are best suited for which timespans, but instead allows the model to obtain the necessary cues in a data-driven manner. We, therefore, obtain a Strong tracker, with a Unified solution across timespans, and good Scalability thanks to its HIerarchical nature, and name it SUSHI.
At its core, SUSHI is a graph method, but instead of working on a single monolithic graph, we embrace the different nature of data association over different timespans and operate on a hierarchy of graphs. At the lowest level of our hierarchy, nodes represent object detections in nearby frames. We use a graph neural network (GNN) [4, 13, 39] to process those into short tracklets, and then build new graphs to generate increasingly longer trajectories at every level of our hierarchy. Notably, we use the same GNN architecture at every level. Thus we do not make any assumptions about what cues are best for each timespan. We demonstrate the generality of our approach by showing significant identity preservation improvements over the state-of-the-art on several highly diverse benchmarks: up to +4.7 IDF1 on MOT17 [9], +9.1 IDF1 on MOT20 [10], +9.5 IDF1 on DanceTrack [41], and +4.2 IDF1 on BDD [60], therefore setting new state-of-the-art results by a significant margin.
2. Related Work
Short-term tracking. Numerous modern trackers use frame-by-frame online association frameworks [2, 3, 28, 30, 45, 49, 55, 62, 65–67]. Motion and spatial proximity cues tend to be central components of these trackers. Notable examples include the widespread use of Kalman filter-based motion models [3, 49, 65, 66] or frame-by-frame regression-based frameworks [2, 28, 62, 67]. Some trackers further rely on appearance to increase robustness at lower frame rates or under strong camera movement [30, 49, 52, 55, 66]. While having good performance in short-term scenarios, these trackers lack robustness when it comes to long-term identity preservation.
Graph-based tracking. Graphs are a commonly used framework to model data association. They model nodes as object detections and edges as trajectory hypotheses. In contrast to frame-by-frame trackers, graph-based methods search for global solutions to the data association problem over several frames and are therefore more robust. To this end, numerous optimization frameworks have been studied, including network flows [1, 64], multi-cuts [44], minimum cliques [61], and disjoint paths [15, 16, 43], and efficient solvers [1, 5] have been designed.
In our work, we rely on a simplified version of the min-cost flow formulation [4, 64], which allows us to avoid expensive optimization and use a small-scale linear program while still taking advantage of graph-based tracking.
Learning in graph-based tracking. While early graph-based methods focused on obtaining pairwise association costs from learning methods such as conditional random fields [59] or handcrafted models [42], recent approaches focus almost exclusively on deep learning techniques. Notable examples include learning pairwise appearance costs with convolutional networks [20, 36, 40], or learning track management policies with recurrent models [29, 38]. Recently, numerous approaches learn models that natively operate on the graph domain, such as graph neural networks (GNNs) [4, 8, 14, 22, 26, 51] or transformers [68]. While showing promise, current GNN- and transformer-based works have an important limitation: they operate over large monolithic graphs of detections, and therefore lack the scalability needed to process long video clips.
Multi-level hybrid tracking approaches. Multi-level tracking methods are dominated by handcrafted combinations of approaches. Several early tracking works exploited the idea of building tracks hierarchically [7, 17, 50, 56]. They generally did so by generating short tracklets with handcrafted methods and then merging those within multiple stages involving different optimization techniques [7, 50, 56] and association cues [17]. In a similar fashion, numerous modern trackers combine several techniques to build tracks in a hierarchical, incremental way, without necessarily relying on graphs [2, 4, 8, 15, 16, 28, 53]. Some examples include
Figure 2. SUSHI consists of a set of SUSHI blocks operating hierarchically over a set of tracklets (with initial length one) in a video clip. Each SUSHI block considers a graph with tracklets from a subclip as nodes,
Huang_ShapeClipper_Scalable_3D_Shape_Learning_From_Single-View_Images_via_Geometric_CVPR_2023
Abstract
We present ShapeClipper, a novel method that reconstructs 3D object shapes from real-world single-view RGB images. Instead of relying on laborious 3D, multi-view, or camera pose annotation, ShapeClipper learns shape reconstruction from a set of single-view segmented images. The key idea is to facilitate shape learning via CLIP-based shape consistency, where we encourage objects with similar CLIP encodings to share similar shapes. We also leverage off-the-shelf normals as an additional geometric constraint so the model can learn better bottom-up reasoning of detailed surface geometry. These two novel consistency constraints, when used to regularize our model, improve its ability to learn both global shape structure and local geometric details. We evaluate our method over three challenging real-world datasets, Pix3D, Pascal3D+, and OpenImages, where we achieve superior performance over state-of-the-art methods. (Project website at: https://zixuanh.com/projects/shapeclipper.html)
1. Introduction
How can we learn 3D shape reconstruction from real-world images in a scalable way? Recent works achieved impressive results via learning-based approaches either with 3D [4, 9, 11, 28, 36, 41, 43–45, 48, 49, 53] or multi-view supervision [16, 19, 23, 25, 32, 42, 50, 51]. However, such supervised techniques cannot be easily applied to real-world scenarios, because it is expensive to obtain 3D or multi-view supervision at a large scale. To address this limitation, recent works relax the requirement for 3D or multi-view supervision [1, 8, 10, 14, 15, 17, 18, 22, 24, 29, 31, 38, 46, 55, 57]. These works only require single-view self-supervision, with some additionally using expensive viewpoint annotations [17, 18, 24, 38, 57]. Despite this significant progress, most methods still suffer from two major limitations: 1) incorrect top-down reasoning, where the model only explains the input view but does not accurately reconstruct the full 3D object shape; 2) failed bottom-up reasoning, where the model cannot capture low-level geometric details such as concavities. How can we address these limitations while also remaining scalable to a wide range of object types?
Figure 2. CLIP-based semantic neighbors. Samples that have similar CLIP encodings often have similar shapes. Note the viewpoint variability in the neighbors.
To improve top-down reasoning, our inspiration comes from the recent success of large-scale image-text modeling. The most successful image-text models such as CLIP [33] are trained on a vast corpus of captioned images and are able to extract fine-grained semantic features that correlate well with the language descriptions. CLIP further demonstrates a great generalization ability to images across various domains. Can we leverage such a powerful and generalizable model to learn 3D reconstruction in a real-world scenario? We observe that natural language descriptions of images often contain geometry-related information (e.g., a round speaker, a long bench) and many nouns by themselves have characteristic shape properties (e.g., "desks" usually have four legs, and "benches" normally include a flat surface).
Motivated by this intrinsic connection between object shapes and language-based semantics, we examine the latent space of CLIP's visual encoder. In our study, we find (via k-NN queries) that objects with similar CLIP embeddings usually share similar shapes (see Fig. 2 for an example). Another key characteristic we identify with CLIP embeddings is that they have some robustness to viewpoint changes, meaning that changes in viewpoint generally do not produce drastic changes in CLIP embeddings.
Inspired by these findings, we propose to learn shapes using a semantic-based shape consistency (SSC) constraint built on CLIP. Specifically, we use CLIP's semantic encodings as guidance to form pseudo multi-view image sets. For each image in the training set, we extract its CLIP embedding and find images with the most similar semantics across the training set. We then leverage these retrieved images as additional supervision for the input view, as illustrated in Fig. 3. This approach greatly benefits global shape understanding, because each predicted shape is required to simultaneously explain a set of images instead of only explaining the single input image.
Figure 3. Semantic-based Shape Consistency (SSC) Constraint. We find the semantic neighbors of the input image across the training set and use these neighbors to regularize the shape learning.
On the other hand, we address the limitation of poor bottom-up geometric reasoning by constraining the surface normals of the predicted shapes. Common failure cases include noisy surface reconstruction and failed concavity modeling, which are extremely hard to learn even with multi-view supervision. Inspired by the recent success of large-scale 2.5D estimation that generalizes to various scenes [7, 34, 35], we propose to use off-the-shelf surface normals as additional geometric supervision for our task. However, unlike scenes, off-the-shelf normals for object-centric images are much noisier due to occlusion, truncation, and domain gaps. To mitigate this issue, we introduce a noise-tolerant optimization process via outlier dropout, which stabilizes the training and improves the overall reconstruction performance. Overall, our contributions are threefold:
• We propose a novel CLIP-based shape consistency regularization that greatly facilitates the top-down understanding of object shapes.
• We successfully leverage off-the-shelf geometry cues to improve single-view object shape reconstruction for the first time and handle noise effectively.
• We perform extensive experiments across 3 different real-world datasets and demonstrate state-of-the-art performance.
2. Related Work
There has been an emerging interest in 3D object shape reconstruction from images via learning-based approaches. Our work focuses on learning single-view shape reconstruction with limited supervision on real-world images, where the training set only contains a single view per object instance. We briefly survey the relevant literature on single-image shape reconstruction using both fully-supervised and weakly-supervised approaches.
Single-View Supervision. Most closely related to this paper are works that learn 3D shape reconstruction through supervision from single-view images [1, 10, 13–15, 17, 18, 22, 24, 29, 31, 46, 55, 57]. These works can be organized as in Tab. 1 and largely differ in their choice of 1) shape representation (e.g., implicit SDF vs. explicit mesh); 2) known vs.
unknown viewpoint supervision; and 3) large-scale evaluation on various real-world objects. We are one of the earliest works to demonstrate the feasibility of single-view learning of an implicit SDF representation on diverse real-world images under unknown viewpoints. Within this body of work, SSMP [55], Cat3D [15], and SS3D [1] are the most closely related ones given their large-scale evaluation, and we describe them in detail below.
SSMP [55] is the earliest work that shows the success of shape learning via only single-view supervision on large-scale real-world data. A key property of this method is adversarial regularization during training, which can make training unstable. Thus it is hard for SSMP to learn reconstruction on categories with complex shapes or textures. In contrast, our method leverages the SSC and geometric constraints, which are more stable and result in superior performance over SSMP across various objects.
Similar to our method, Cat3D [15] explores semantic regularization for implicit shape learning. In contrast, their semantic regularization is based on category labels, which fails on categories with significant intra-category shape variations. Moreover, Cat3D relies on unstable adversarial regularization, which has similar drawbacks as SSMP [55] and is only successful on a few real-world categories.
SS3D [1] proposes a 3-step learning pipeline for scalable learning of shapes, which includes synthetic data (e.g., ShapeNet [3]) pretraining. This step plays a crucial role as it provides the necessary initialization for the camera multiplex optimization. Unlike SS3D, synthetic pretraining is not a hard constraint for our method; we demonstrate success on Pix3D [39] without any synthetic pretraining. On the other hand, SS3D does not explore the usage of semantic and geometric consistency. As a result, our model captures both global structures and local surfaces more accurately than SS3D and outperforms SS3D quantitatively.
Shape Supervision. Instead of using image supervision, many prior works use explicit 3D geometric supervision and achieve great reconstruction results [4, 9, 11, 28, 36, 41, 43–45, 48, 49, 53]. Nevertheless, the assumption of 3D supervision is not yet practical at a large scale. To make the learning more scalable, subsequent works leverage multi-view images as supervision and employ differentiable rendering as the core technique. Specifically, differentiable rendering allows images and masks to be rendered from 3D assets differentiably, and thus a multi-view reprojection loss can effectively carve the reconstructed shape. These methods can be classified based on their representation of shape, including voxels [42, 50, 51], pointclouds [16, 23], meshes [19, 25] and implicit representations [32].
Figure 4. Network overview. Given the input image, the encoder E infers the shape latent code s and the texture latent code t. By conditioning these two codes upon the shape MLP f_S and the texture MLP f_T, we obtain the shape and texture reconstruction of the input object. The viewpoint estimator V estimates the input viewpoint v. The differentiable volume renderer R then renders the shape and texture fields under the estimated viewpoint, so that we can compute the reconstruction losses L_I and L_M. We further leverage our SSC and geometric constraints, L_SSC and L_N, to effectively harness the shape learning.
Compared to these works,
a major benefit of our method is scalability, as our model can be trained using single-view real-world images.
3. Method
In this section, we first present an overview of our model in Sec. 3.1, and then introduce our proposed SSC and geometric constraints in Sec. 3.2 and Sec. 3.3. Finally, we present implementation details in Sec. 3.4.
3.1. Overview
Given a collection of $n$ images segmented with foreground masks
$\{I_i \in \mathbb{R}^{h\times w\times 3}, M_i \in \mathbb{R}^{h\times w\times 1}\}_{i=1}^{n}$, we aim to learn a single-view 3D reconstruction model without relying on 3D, viewpoint, or multi-view supervision of these images. The shape representation of our model is an implicit SDF function, represented by a multi-layer perceptron (MLP) conditioned on the input image. Specifically, our model consists of four submodules (see Fig. 4 for an overview) as described below.
Image encoder. The image encoder $E$ takes a segmented image $I \in \mathbb{R}^{h\times w\times 3}$ as input and infers the shape latent code $s \in \mathbb{R}^{l}$ and the texture latent code $t \in \mathbb{R}^{l}$. These two codes encode the necessary information to reconstruct the shape and texture fields respectively.
Shape and texture reconstructor. Our model represents shape and texture reconstruction with two MLPs, $f_S: \mathbb{R}^3 \to \mathbb{R}$ and $f_T: \mathbb{R}^3 \to \mathbb{R}^3$, which predict signed distance function (SDF) and RGB values of queried 3D coordinates respectively. The MLPs are conditioned on the latent codes, with a similar design to VolSDF [54]. Specifically, the shape MLP $f_S$ is conditioned on $s$ and the texture MLP $f_T$ is conditioned on $t$.
Table 1. Single-view supervised methods for object shape reconstruction. M: mesh, V: voxel, P: pointcloud, D: depth, O: occupancy function, S: signed distance function, Diverse R-Res.: real-world results on diverse categories.
Model:          [17] [18] [24] [38] [57] [10] [22] [14] [13] [31] [46] [29] [15] [55] [1] | Ours
3D Rep.:         M    M    S    M    M    M    M    V    M    P    D    M    S    M    O | S
Viewpoint Free:  -    -    -    -    -    ✓    ✓    ✓    ✓    ✓    ✓    ✓    ✓    ✓    ✓ | ✓
Diverse R-Res.:  -    -    -    -    -    -    -    -    -    -    -    -    -    ✓    ✓ | ✓
Viewpoint estimator. The viewpoint estimator $V$ estimates the viewpoint of the input image with respect to the shape reconstruction. Following [2, 15], we represent the viewpoint with the trigonometric functions of Euler angles, i.e., $v = [\cos\gamma, \sin\gamma]$, where $\gamma$ denotes the three Euler angles.
Differentiable renderer. We use a volume renderer $\mathcal{R}$ to render the reconstructed SDF and texture fields following VolSDF [54]. In the renderer, the SDF field is first converted into densities, and then the densities are used together with the texture field to render the RGB and mask in an accumulative way (via ray-marching). We refer the readers to VolSDF [54] for more details, with the exception that we use uniform sampling instead of error-bound-based sampling. Formally, we denote the renderer as a functional, $\mathcal{R}(f_S, f_T, v)$, which maps the implicit functions and the viewpoint into an image $\hat{I}$, a mask $\hat{M}$, and a surface normal map $\hat{N}$.
Reprojection loss. One of the major learning signals of our model comes from the reprojection loss that compares input images with reconstructed images. This can be achieved via the differentiable renderer. Our model first infers the shape, texture, and viewpoint of the object from the input image. The renderer can then render them into an image reconstruction, which will be matched to the input. Specifically, we can denote the RGB and mask reprojection losses for each image as follows:
$$\mathcal{L}_I = \|I - \hat{I}\|_2^2, \qquad \mathcal{L}_M = 1 - \mathrm{IOU}(M, \hat{M}), \tag{1}$$
$$\mathrm{IOU}(M, \hat{M}) = \frac{\sum_p M_p \cdot \hat{M}_p}{\sum_p \left( M_p + \hat{M}_p - M_p \cdot \hat{M}_p \right)}. \tag{2}$$
Here $M_p$ denotes the mask value at pixel $p$.
Facilitating shape learning. When we do not have direct viewpoint or shape supervision, simply minimizing the reprojection loss almost always leads to degeneration. There are two major issues: 1) incorrect top-down reasoning, where shapes can only explain the input view; 2) wrong bottom-up reasoning, with examples including the inability to infer concavity and noisy surface reconstruction. To mitigate these issues, we propose the semantic and geometric consistency constraints that effectively facilitate the shape learning.
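For reference, the reprojection terms of Eqs. (1)-(2) can be sketched as below. This is an illustration of the equations rather than the released ShapeClipper code; tensor shapes and the use of a mean over pixels for the RGB term are assumptions made for the example.

```python
# Hedged sketch of the L2 RGB loss and the soft-IoU mask loss.
import torch

def reprojection_losses(image, mask, rendered_image, rendered_mask):
    """image, rendered_image: (B, 3, H, W); mask, rendered_mask: (B, 1, H, W),
    with mask values in [0, 1]."""
    loss_rgb = ((image - rendered_image) ** 2).mean()

    # Soft IoU computed per sample, then averaged over the batch.
    inter = (mask * rendered_mask).flatten(1).sum(dim=1)
    union = (mask + rendered_mask - mask * rendered_mask).flatten(1).sum(dim=1)
    iou = inter / union.clamp_min(1e-8)
    loss_mask = (1.0 - iou).mean()
    return loss_rgb, loss_mask
```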
3.2. Semantic Constraint
Preliminary findings about CLIP. To leverage CLIP for shape regularization, our main hypothesis is that objects with similar CLIP encodings share similar shapes. To verify this hypothesis, we perform a study using the large-scale, fine-grained CompCars [52] dataset. This dataset contains more than 136K images of 163 car makes with 1716 car models. We perform CLIP inference on this dataset and compute the 5 nearest neighbors for each sample based on the CLIP embeddings. By iterating over each neighbor of all query images, we calculate the percentage of neighbors that match their query image's model (the same car model usually shares quite similar shapes). In our experiment, CLIP is able to find the exact same car model for 51.2% of all the neighbors (on average 2.6 out of 5 neighbors belong to the query image's model). We believe this is a promising finding given 1) the large number of images and models in CompCars and 2) the fact that different models can still have similar shapes, so the percentage of shape matches can be higher than that of exact model matches. As a comparison, the percentage of model matches for an ImageNet-pretrained ViT [6] is only 27.8%. This study verifies our hypothesis and enables us to design our Semantic-based Shape Consistency (SSC) constraint based on CLIP.
Semantic-based Shape Consistency. The key idea of SSC is to pull instances with similar CLIP embeddings together, so that a single shape reconstruction can receive supervision from all these instances. In our experiments, we find that CLIP encodings have some robustness to viewpoint change (see the supplement for more details). This enables us to find additional pseudo views for many objects, which significantly facilitates the learning of better top-down reasoning. We first form the per-instance clusters by performing K-nearest neighbors with CLIP encodings. Formally, given our training set $\{I_i\}$, we extract the CLIP encoding for each image, denoted as $\{c_i\}$. We calculate the cosine similarity of all pairs of encodings. With such a similarity measurement, we can query the K nearest neighbors for any specific input encoding $c_i$ and identify the images and masks of these neighbors.
Figure 5. Semantic-based Shape Consistency (SSC) constraint. We improve shape learning via the SSC constraint. The shape reconstructed for the input object has to explain its CLIP semantic neighbor as well.
We then use these semantic neighbors to supervise the shape reconstruction as in Fig. 5. The high-level idea is that 1) the shape reconstructed for the input should explain the neighbors' masks and normals, and 2) when combining the input's shape with a neighbor's texture, we should be able to render the neighbor image. Formally, for input $I$, we denote its shape latent code and shape MLP as $s$ and $f_S$. Meanwhile, we sample an image and its mask, $I_k$ and $M_k$, from the semantic neighbor set $\{I_k, M_k\}_{k=1}^{K}$. The encoder $E$ then predicts the latent texture code $t_k$ for $I_k$, which is used to generate its texture function $f_{T_k}$. We also obtain the viewpoint prediction $v_k$ through the viewpoint estimator $V$. We can then combine the input shape $f_S$ with the neighbor texture $f_{T_k}$ and viewpoint $v_k$, and render them into an image $\hat{I}'$, a mask $\hat{M}'$ and a normal map $\hat{N}'$; namely, $(\hat{I}', \hat{M}', \hat{N}') = \mathcal{R}(f_S, f_{T_k}, v_k)$.
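The neighbor retrieval step described above can be sketched as follows. This is an illustration under stated assumptions rather than the authors' code: it uses the public CLIP `encode_image` interface, L2-normalizes embeddings so cosine similarity becomes a matrix product, and assumes the full similarity matrix fits in memory (in practice one would chunk it).

```python
# Hedged sketch: K-nearest semantic neighbors from CLIP image embeddings.
import torch
import torch.nn.functional as F

@torch.no_grad()
def clip_semantic_neighbors(clip_model, images, k=5, batch_size=64):
    """images: (N, 3, H, W) preprocessed for CLIP. Returns (N, k) neighbor indices."""
    feats = []
    for i in range(0, images.shape[0], batch_size):
        f = clip_model.encode_image(images[i:i + batch_size])
        feats.append(F.normalize(f.float(), dim=-1))
    feats = torch.cat(feats, dim=0)                 # (N, D), unit-norm

    sim = feats @ feats.t()                         # (N, N) cosine similarity
    sim.fill_diagonal_(-float("inf"))               # exclude self-matches
    return sim.topk(k, dim=-1).indices              # (N, k)
```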
By replacing the input maps in Eq. (1) with the semantic neighbors, we can obtain the SSC losses for this sample in the same form:
$$\mathcal{L}_{SSC_I} = \|I_k - \hat{I}'\|_2^2, \qquad \mathcal{L}_{SSC_M} = 1 - \mathrm{IOU}(M_k, \hat{M}'). \tag{3}$$
3.3. Geometric Constraint
We further propose to facilitate shape learning via geometric constraints that encourage the model to learn better low-level geometric reasoning. The idea here is to estimate the surface normal of our implicit shape and make it consistent with the surface normal prediction from off-the-shelf models. The off-the-shelf normal estimator we use is Omnidata [7], which is a state-of-the-art normal estimator. Recent work has also proven its effectiveness for multi-view scene reconstruction [56].
Formally, we denote our surface normal estimate as $\hat{N} \in \mathbb{R}^{h\times w\times 3}$ and the off-the-shelf unit normal as $N \in \mathbb{R}^{h\times w\times 3}$. The estimated normal is calculated as the normalized gradient of the density and aggregated via volume rendering, similar to MonoSDF [56]. Unlike the setup in MonoSDF, our normal estimate lies in the object-centric canonical frame instead of the view-centric frame. Therefore, we use our estimated viewpoint to rotate the off-the-shelf normal $N$ into the same canonical frame as $\hat{N}$. In addition to aligning the coordinate frames, this approach enables the viewpoint estimator to receive additional training signals from local geometry alignment, which is a significant benefit that naive approaches like closed-form rotation alignment cannot provide. After the rotation, we can then match the normals following [7]:
$$\mathcal{L}_N = \beta \cdot \|RN - \hat{N}\|_1 - \cos(RN, \hat{N}), \tag{4}$$
where $R$ refers to the rotation matrix derived from the estimated viewpoint and $\cos$ denotes the cosine similarity. We set $\beta = 5$ across all the experiments.
This geometric loss is calculated at the pixel level and averaged over a minibatch. However, unlike scene reconstruction [56], off-the-shelf normals can be noisy for object-centric images due to inaccurate masks and domain gaps. As a result, naively using off-the-shelf normals results in training instability. Inspired by online hard example mining [37], we propose to drop out off-the-shelf normals that are likely to be outliers via batchwise ranking. Specifically, we sort the normal loss $\mathcal{L}_N$ within the current minibatch and exclude a fixed percentage of high-loss pixels from the final loss aggregation. We find that this strategy stabilizes the training and improves the reconstruction quality overall. Finally, we can combine the geometric constraint with the semantic constraint via an SSC normal loss, $\mathcal{L}_{SSC_N}$. This is calculated similarly to $\mathcal{L}_N$; the only difference is that we replace the input off-the-shelf normals and rotations with the semantic neighbor's, as in Eq. (3).
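A per-pixel version of Eq. (4) with the batchwise outlier dropout described above could look like the sketch below. It is our reading of the description, not the authors' implementation; the dropout fraction and the flattened pixel layout are assumptions.

```python
# Hedged sketch: normal-consistency loss with batchwise outlier dropout.
import torch

def normal_loss_with_dropout(pred_normals, target_normals, beta=5.0, drop_frac=0.2):
    """pred_normals, target_normals: (P, 3) unit normals for the valid pixels of
    a minibatch, with target_normals already rotated into the canonical frame."""
    l1 = (target_normals - pred_normals).abs().sum(dim=-1)    # (P,)
    cos = (target_normals * pred_normals).sum(dim=-1)         # (P,)
    per_pixel = beta * l1 - cos                                # Eq. (4), per pixel

    # Keep only the lowest-loss pixels; likely outliers (noisy off-the-shelf
    # normals) are excluded from the aggregation.
    n_keep = max(1, int(per_pixel.numel() * (1.0 - drop_frac)))
    kept, _ = torch.topk(per_pixel, n_keep, largest=False)
    return kept.mean()
```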
3.4. Implementation Details
Architecture. The image encoder we use is a ResNet34 [12], which projects the input image into two 64-d latent vectors representing shape and texture. We use lightweight MLPs to represent the SDF and texture fields, where the shape MLP has 5 hidden layers of 64 neurons and the texture MLP has 3 hidden layers of 64 neurons. The 3D coordinates are positionally encoded [27] before being fed into the MLPs. The conditioning of the MLPs is achieved via concatenation, and the shape latent code is additionally skip-connected to the first and second hidden layers of the shape MLP. Following VolSDF, we condition the texture MLP on the shape MLP's last-layer feature as well. The differentiable renderer we use renders the volumes by uniformly sampling 64 points along each ray.
Loss function. Our overall loss function is a summation of the reconstruction loss and the SSC losses (with our geometric constraint included):
$$\mathcal{L}_{recon} = \mathcal{L}_I + \lambda_1 \mathcal{L}_M + \lambda_2 \mathcal{L}_N, \tag{5}$$
$$\mathcal{L}_{SSC} = \mathcal{L}_{SSC_I} + \lambda_1 \mathcal{L}_{SSC_M} + \lambda_2 \mathcal{L}_{SSC_N}, \tag{6}$$
$$\mathcal{L} = \mathcal{L}_{recon} + \mathcal{L}_{SSC}. \tag{7}$$
We set $\lambda_1 = 0.5$ and $\lambda_2 = 0.01$ across all datasets.
Training. We use the Adam [20] optimizer with a learning rate of 0.0001 and a batch size of 12. We did not use weight decay or learning rate scheduling. Instea
Isaac-Medina_Exact-NeRF_An_Exploration_of_a_Precise_Volumetric_Parameterization_for_Neural_CVPR_2023
Abstract
Neural Radiance Fields (NeRF) have attracted significant attention due to their ability to synthesize novel scene views with great accuracy. However, inherent to their underlying formulation, the sampling of points along a ray with zero width may result in ambiguous representations that lead to further rendering artifacts such as aliasing in the final scene. To address this issue, the recent variant mip-NeRF proposes an Integrated Positional Encoding (IPE) based on a conical view frustum. Although this is expressed with an integral formulation, mip-NeRF instead approximates this integral as the expected value of a multivariate Gaussian distribution. This approximation is reliable for short frustums but degrades with highly elongated regions, which arise when dealing with distant scene objects under a larger depth of field. In this paper, we explore the use of an exact approach for calculating the IPE by using a pyramid-based integral formulation instead of an approximated conical-based one. We denote this formulation as Exact-NeRF and contribute the first approach to offer a precise analytical solution to the IPE within the NeRF domain. Our exploratory work illustrates that such an exact formulation (Exact-NeRF) matches the accuracy of mip-NeRF and furthermore provides a natural extension to more challenging scenarios without further modification, such as in the case of unbounded scenes. Our contribution aims to both address the hitherto unexplored issues of frustum approximation in earlier NeRF work and additionally provide insight into the potential future consideration of analytical solutions in future NeRF extensions.
1. Introduction
Novel view synthesis is a classical and long-standing task in computer vision that has been thoroughly re-investigated via recent work on Neural Radiance Fields (NeRF) [20]. NeRF learns an implicit representation of a 3D scene from a set of 2D images via a Multi-Layer Perceptron (MLP) that predicts the visual properties of 3D points uniformly sampled along the viewing ray, given their coordinates and viewing direction. This parameterization gives NeRF the dual ability to both represent 3D scenes and synthesize unseen views. In its original formulation, NeRF illustrates strong reconstruction performance for synthetic datasets comprising object-centric scenes with no background (bounded) and for forward-facing real-world scenes. Among its applications, NeRF has been used for urban scene representation [25, 27, 29], human body reconstruction [3, 16], image processing [12, 17, 19] and physics [9, 14].
Figure 1. Comparison of Exact-NeRF (ours) with mip-NeRF 360 [2]. Our method is able to both match the performance and obtain superior depth estimation over a larger depth of field.
Nonetheless, the underlying sparse representation of 3D points learnt by the MLP may cause ambiguities that can lead to aliasing and blurring. To overcome these issues, Barron et al. proposed mip-NeRF [1], an architecture that uses cone tracing instead of rays. This architecture encodes conical frustums as the inputs of the MLP by approximating the integral of a sine/cosine function over a region in space with a multivariate Gaussian. This re-parameterization notably increases the reconstruction quality on multi-scale datasets. However, this approximation is only really valid for bounded scenes, where the conical frustums do not suffer from the large elongations attributable to a large depth of field within the scene.
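For context, the Gaussian-approximated integrated positional encoding that mip-NeRF uses, and that the exact pyramid-based formulation discussed in this paper replaces, can be sketched as below for axis-aligned power-of-two frequencies. This is a simplified illustration of the published mip-NeRF approximation, not code from either paper; the per-frustum Gaussian means and diagonal covariances are assumed to be computed elsewhere.

```python
# Simplified sketch of the Gaussian-approximated IPE:
# E[sin(2^l x)] under x ~ N(mu, Sigma) is approximated by
# sin(2^l mu) * exp(-0.5 * 4^l * diag(Sigma)), and analogously for cos.
import torch

def integrated_positional_encoding(means, diag_cov, num_freqs=16):
    """means, diag_cov: (..., 3) per-frustum Gaussian mean and diagonal
    covariance. Returns (..., 6 * num_freqs) encodings."""
    freqs = 2.0 ** torch.arange(num_freqs, device=means.device)     # (L,)
    scaled_mean = means[..., None, :] * freqs[:, None]              # (..., L, 3)
    scaled_var = diag_cov[..., None, :] * (freqs[:, None] ** 2)     # (..., L, 3)
    damping = torch.exp(-0.5 * scaled_var)                          # high-variance -> attenuated
    enc = torch.cat([torch.sin(scaled_mean) * damping,
                     torch.cos(scaled_mean) * damping], dim=-1)     # (..., L, 6)
    return enc.flatten(start_dim=-2)                                # (..., 6L)
```

The highly elongated frustums of unbounded scenes are exactly where this Gaussian fit degrades, which is the failure mode the exact pyramid-based integral targets.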
Exact-NeRF matches the performance of mip-NeRF on a synthetic dataset while achieving sharper reconstruction around edges. Our approach can be applied without further modification to the contracted space of mip-NeRF 360. Our naive implementation of Exact-NeRF for the unbounded scenes of mip-NeRF 360 shows a small decrease in performance, but it obtains cleaner reconstructions of the background. Additionally, the depth map estimates obtained by Exact-NeRF are less noisy than those of mip-NeRF 360. Our key contribution is the formulation of a general integrated positional encoding framework that can be applied to any shape that can be decomposed into triangles (i.e., a polyhedron). We intend our work to serve as motivation to investigate different shapes and analytical solutions for volumetric positional encoding. The code is available at https://github.com/KostadinovShalon/exact-nerf.
Gao_Generalized_Relation_Modeling_for_Transformer_Tracking_CVPR_2023
Abstract Compared with previous two-stream trackers, the recent one-stream tracking pipeline, which allows earlier interac-tion between the template and search region, has achieved a remarkable performance gain. However, existing one-stream trackers always let the template interact with all parts inside the search region throughout all the encoder layers. This could potentially lead to target-background confusion when the extracted feature representations are not sufficiently discriminative. To alleviate this issue, we propose a generalized relation modeling method based on adaptive token division. The proposed method is a generalized formulation of attention-based relation model-ing for Transformer tracking, which inherits the merits of both previous two-stream and one-stream pipelines whilst enabling more flexible relation modeling by selecting ap-propriate search tokens to interact with template tokens. An attention masking strategy and the Gumbel-Softmax technique are introduced to facilitate the parallel computa-tion and end-to-end learning of the token division module. Extensive experiments show that our method is superior to the two-stream and one-stream pipelines and achieves state-of-the-art performance on six challenging bench-marks with a real-time running speed. Code and models are publicly available at https://github.com/Little-Podi/GRM.
1. Introduction Given the target bounding box in the initial frame of a video, visual tracking [18] aims to localize the target in successive frames. Over the past few years, two-stream trackers [1,21,22,49], which extract features of the template and search region separately and then model cross-relations of the template and search region in a sequential fashion, have emerged as a dominant tracking paradigm and made significant progress. Following this two-stream pipeline, several Transformer-based trackers [4, 11, 38] utilize parallel self-attention blocks to enhance the extracted features by modeling global self-relations within each image as illustrated in Fig. 1(a). Figure 1. Comparison of different relation modeling pipelines of Transformer-based trackers. The two-stream pipeline uses parallel self-attention blocks to model relations within each set of tokens (template or search tokens). The one-stream pipeline integrates the cross-relation modeling between two sets of tokens and self-relation modeling within each set of tokens via a unified attention block. In contrast, our proposed pipeline performs an adaptive division of the search tokens, which can degenerate to the two-stream form if no search token is selected to interact with template tokens and to the one-stream form if all the search tokens are selected for cross-relation modeling. Recently, leveraging the flexibility of the attention mechanism, the one-stream pipeline [3, 5, 44] is proposed to jointly extract features and model relations, achieving promising performance. By conducting self-attention among all concatenated tokens, both cross-relation modeling and self-relation modeling can be performed simultaneously as illustrated in Fig. 1(b). It is demonstrated in [3, 5, 39, 44] that letting the search region interact with the template as early as possible is beneficial to target-specific feature generation. However, there is no evidence suggesting that all parts inside the search region should always be forced to interact with the template. In fact, due to the cropping strategy [1], there is a large proportion of background inside the search region, where distractors with similar appearance to the target may exist. This can lead to undesired cross-relations between the template and search region, as highly discriminative representations have not yet been extracted in the early layers. Although the attention mechanism can inherently weaken improper cross-relations, applying global cross-relation modeling at every layer may still be somewhat disruptive. On the one hand, for the search tokens outside the target region, if undesired cross-relations are modeled between the template and distractors, the aggregated features of the distractors may contain target features from the template, which could cause confusion when precisely identifying the actual target in the search region. On the other hand, for the template tokens, their quality could also be degraded by undesired cross-relations during the iterative update, since certain features from the background or even distractors could be aggregated into these tokens. These situations could weaken the target-background discrimination capability of the one-stream pipeline.
Intuitively, only a portion of search tokens, e.g., tokens belonging to the target, are suitable for cross-relation mod-eling when the feature representations are not perfect for target-background discrimination. In some cases, the two-stream relation modeling pipeline could even be better if the feature representations of both the template and search re-gion are imperfect to model cross-relations. The potential limitations of the one-stream pipeline motivates us to pon-der: is it really optimal for the template to interact with all parts inside the search region through all encoder layers in the one-stream pipeline? In this paper, we answer this question by proposing GRM, a generalized relation modeling method that can adaptively select the appropriate search tokens to interact with the template. To be specific, we classify the template and search tokens as three categories. The template tokens form one category while the search tokens are divided into another two categories. Instead of modeling relations within all the tokens as the one-stream pipeline, we restrict the interaction among the three token categories. Only the search tokens that are suitable for cross-relation modeling will interact with the template tokens, whilst the interac-tion between the remaining search tokens and the template tokens is blocked. With proper divisions, the two-stream pipeline and one-stream pipeline become two degenerated forms of our relation modeling method as discussed in Sec. 3.2. Consequently, our method is a generalized formu-lation of attention-based relation modeling for Transformer tracking, which embraces the advantages of both previouspipelines while being more flexible. The search token division is performed by a lightweight prediction module, which can adaptively determine which search tokens are suitable for cross-relation modeling based on the input tokens. To accomplish this objective, there are two obstacles to overcome. First, the separate relation mod-eling for different token categories makes it hard for paral-lel computation. Second, the discrete token categorization is non-differentiable, thus impeding the end-to-end learning of the token division module. To facilitate parallel compu-tation, we adopt an attention masking strategy to unify the individual attention operations into a single one. Addition-ally, we introduce the Gumbel-softmax technique [17] to make the discrete token categorization differentiable. Con-sequently, the search token division module can be implic-itly optimized in an end-to-end manner, which promotes its adaptability to deal with different situations. In summary, our main contributions are three-fold: •We present a generalized formulation of relation mod-eling for Transformer trackers, which divides the input tokens into three categories and enables more flexible interaction between the template and search region. •To realize the generalized relation modeling, we de-vise a token division module to adaptively classify the input tokens. An attention masking strategy and the Gumbel-Softmax technique are introduced to facilitate the parallel computation and end-to-end learning of the proposed module. •We conduct extensive experiments and analyses to val-idate the efficacy of our method. The proposed GRM exhibits outstanding results on six challenging visual tracking benchmarks.
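As a concrete illustration of the token-division idea, the sketch below routes search tokens with a straight-through Gumbel-Softmax and turns the decision into a boolean attention mask. The exact masking rule, layer placement and module sizes used by GRM are not reproduced here, so everything below is an assumption-laden toy version rather than the paper's implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TokenDivision(nn.Module):
    # Toy adaptive search-token division.
    # z: template tokens (B, Nz, C); x: search tokens (B, Nx, C).
    # Assumed rule: block attention between template tokens and the search
    # tokens that are not selected for cross-relation modeling.
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 2)   # logits: [keep separate, interact]

    def forward(self, z, x, tau=1.0):
        logits = self.score(x)                                   # (B, Nx, 2)
        onehot = F.gumbel_softmax(logits, tau=tau, hard=True)    # discrete yet differentiable
        interact = onehot[..., 1]                                # 1 -> cross-relation allowed
        B, Nz, _ = z.shape
        Nx = x.shape[1]
        N = Nz + Nx
        # True marks pairs NOT allowed to attend (PyTorch bool attn_mask convention).
        mask = torch.zeros(B, N, N, dtype=torch.bool, device=x.device)
        blocked = interact < 0.5                                 # search tokens kept separate
        mask[:, :Nz, Nz:] = blocked[:, None, :]                  # template -> blocked search
        mask[:, Nz:, :Nz] = blocked[:, :, None]                  # blocked search -> template
        return torch.cat([z, x], dim=1), mask                    # feed to a masked attention layer

With all search tokens selected this degenerates to the one-stream form, and with none selected to the two-stream form, mirroring the discussion above.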
Cheng_WildLight_In-the-Wild_Inverse_Rendering_With_a_Flashlight_CVPR_2023
Abstract This paper proposes a practical photometric solution for the challenging problem of in-the-wild inverse render-ing under unknown ambient lighting. Our system recov-ers scene geometry and reflectance using only multi-view images captured by a smartphone. The key idea is to ex-ploit smartphone’s built-in flashlight as a minimally con-trolled light source, and decompose image intensities into two photometric components – a static appearance corre-sponds to ambient flux, plus a dynamic reflection induced by the moving flashlight. Our method does not require flash/non-flash images to be captured in pairs. Building on the success of neural light fields, we use an off-the-shelf method to capture the ambient reflections, while the flashlight component enables physically accurate photomet-ric constraints to decouple reflectance and illumination. Compared to existing inverse rendering methods, our setup is applicable to non-darkroom environments yet sidesteps the inherent difficulties of explicit solving ambient reflec-tions. We demonstrate by extensive experiments that our method is easy to implement, casual to set up, and con-sistently outperforms existing in-the-wild inverse rendering techniques. Finally, our neural reconstruction can be eas-ily exported to PBR textured triangle mesh ready for indus-trial renderers. Our source code and data are released to https://github.com/za-cheng/WildLight .
1. Introduction Rendering in computer graphics refers to computer gen-erating photo-realistic images from known properties of a scene including scene geometry, materials, lighting, as well as camera parameters. In contrast, inverse rendering is re-garded as a computer vision task, whose aim is to recover these unknown properties of a scene from images. Due to the ill-posed nature of inverse rendering, most existing methods concede the fullest solution and instead tackle only a simplified, partial problem; for instance, assuming sim-plified, known lighting conditions, initial geometry, or with diffuse or low specular materials. Traditional photometric methods for solving inverse ren-dering (such as photometric stereo) are often restricted to laboratory settings: image intensities are measured by high dynamic range cameras, under controlled illumination con-ditions and without ambient light contamination. Moreover, many methods rely on the availability of a good initial esti-mation to start the optimization process ( e.g.[19,26]) or as-sume purely diffuse (Lambertian) reflections ( e.g.[21,31]). This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 4305 While recent neural-net methods are able to handle specu-lar highlights and complex geometries, e.g.[2,10,20,39]), they are still restricted to a laboratory darkroom environ-ment, hindering their practicality. Conversely, neural rendering and appearance learning techniques have the ability to work outside darkroom. This is achieved by bypassing the physical reflection model. They instead learn the illumination-specific appearance ( i.e. light field) of the scene conditioned on a geometric repre-sentation ( e.g.[24,27,38]). While these representations are empirically powerful in recovering complex scene geome-try and appearance, they cannot separate reflective proper-ties from illumination. Furthermore, the underlying geome-try is provably ill-constrained. Attempts have been made to physically decompose the appearance into reflectance and environment illumination to support in-the-wild inverse ren-dering [ 5,25,33,40,43]. However, this presents a much more complex and similarly ill-posed problem due to un-known ambient illumination. Consequently, these methods still trail behind traditional photometric methods in terms of accuracy and robustness. This paper aims to fill the gap between conventional darkroom methods and in-the-wild inverse rending, and of-fer a solution that combines the best of both worlds, i.e. being practical, well-posed and easy-to-solve at the same time. Instead of attempting to directly decouple reflectance from the unknown ambient illumination, we learn the ambi-ent reflection with a neural light field, and exploit an addi-tional, minimally controlled light source, being the smart-phone’s flashlight, for physical constraints on reflectance. During the image capture stage, we take some images with the flashlight turned on, and others with the flashlight off, all at free viewpoints. Images without the flashlight describe the ambient reflections only, while the images with flash-light are the photometric summation of both ambient and flashlight reflections. 
We learn the ambient component with an off-the-shelf neural novel-view-synthesis technique [36], and delegate the flashlight component to a physically-based reflection model for inferring reflectance. Both the ambient and flashlight components are conditioned on a unified scene-intrinsic network that predicts scene geometry and the reflectance distribution. Our method is easy to set up, easy to implement, and consistently outperforms competing state-of-the-art methods. The reconstructed objects can be directly plugged into game/rendering engines as high-fidelity virtual assets in the standard format of textured meshes.
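To make the two-component image formation concrete, here is a minimal sketch of the photometric summation described above, assuming a flashlight co-located with the camera and an inverse-square falloff. The callables ambient_field and brdf are placeholders for the learned components; none of the names or signatures come from the paper.

import torch

def render_pixel(ambient_field, brdf, x, normal, cam_pos, flash_on, flash_intensity=1.0):
    # Conceptual two-component photometric model.
    # ambient_field(x, d): learned view-dependent ambient radiance (static across captures).
    # brdf(x, wi, wo, n):  physically-based reflectance used only for the flashlight term.
    to_cam = cam_pos - x
    dist2 = (to_cam ** 2).sum(-1, keepdim=True)
    d = to_cam / dist2.clamp(min=1e-8).sqrt()             # unit direction: surface -> camera
    color = ambient_field(x, d)                           # flash-off images see only this term
    if flash_on:
        cos_term = (normal * d).sum(-1, keepdim=True).clamp(min=0.0)
        # Co-located flashlight: incident and outgoing directions coincide (wi == wo == d).
        color = color + brdf(x, d, d, normal) * cos_term * flash_intensity / dist2
    return color

Because the flashlight term has a known position and falloff, it supplies the physically grounded constraint on reflectance, while the ambient term absorbs the unknown environment lighting.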
Jiang_A_Probabilistic_Attention_Model_With_Occlusion-Aware_Texture_Regression_for_3D_CVPR_2023
Abstract Recently, deep learning based approaches have shown promising results in 3D hand reconstruction from a single RGB image. These approaches can be roughly divided into model-based approaches, which are heavily dependent on the model’s parameter space, and model-free approaches, which require large numbers of 3D ground truths to reduce depth ambiguity and struggle in weakly-supervised scenar-ios. To overcome these issues, we propose a novel proba-bilistic model to achieve the robustness of model-based ap-proaches and reduced dependence on the model’s param-eter space of model-free approaches. The proposed prob-abilistic model incorporates a model-based network as a prior-net to estimate the prior probability distribution of joints and vertices. An Attention-based Mesh Vertices Un-certainty Regression (AMVUR) model is proposed to cap-ture dependencies among vertices and the correlation be-tween joints and mesh vertices to improve their feature rep-resentation. We further propose a learning based occlusion-aware Hand Texture Regression model to achieve high-fidelity texture reconstruction. We demonstrate the flexibil-ity of the proposed probabilistic model to be trained in both supervised and weakly-supervised scenarios. The experi-mental results demonstrate our probabilistic model’s state-of-the-art accuracy in 3D hand and texture reconstruction from a single image in both training schemes, including in the presence of severe occlusions.
1. Introduction 3D hand shape and texture reconstruction from a sin-gle RGB image is a challenging problem that has numerous applications such as human-machine interaction [1, 2], vir-tual and augmented reality [3–6], and sign language transla-tion [7]. In recent years, there has been significant progress in reconstructing 3D hand pose and shape from a monocular images [8–16]. These approaches can be generally catego-rized into model-based and model-free approaches. Model-based approaches [9,13–15] utilize a parametric model such as MANO [17] and train a network to regress its paramet-ric representation in terms of shape and pose. Since the parametric model contains priors of human hands, these ap-proaches are robust to environment variations and weakly-supervised training [12]. However, the shape and pose regression is constrained by the parametric model that is learned from the limited hand exemplars [8]. In contrast, model-free approaches [8, 11, 12, 16] regress the coordinates of 3D hand joints and mesh directly instead of using parametric models. Despite the remarkable results they have achieved, there are several limitations. For exam-ple, Graph-CNN is used by [8, 11] to model neighborhood vertex-vertex interactions, but such models cannot capture long range dependencies among vertices. Although [12] has addressed this issue by employing self-attention mecha-nism, it does not distinguish joints and vertices, processing them together in a same self-attention module. Moreover none of these works can support weakly supervised training and often require a large amount of 3D annotations of both joints and vertices to reduce depth ambiguity in monocular 3D reconstruction [18]. Motivated by the above observations, our first goal is to combine the benefits of the model-based and model-free approaches. To this end, we develop a probabilistic method that incorporates the MANO model into a prior-net to estimate the prior probability distribution of joints and vertices instead of using deterministic settings as pre-vious approaches have done. To relax the solution space of the MANO model, an Attention-based Mesh Vertices Un-certainty Regression model (AMVUR) is proposed to es-timate the conditioned probability distribution of the joints and vertices. In AMVUR, to improve feature representation of joints and vertices, a cross-attention model is proposed to capture the correlation between 3D positional encoded joints and mesh vertices, followed by a self-attention model for capturing the short/long range dependencies among mesh vertices. With the proposed architecture, the AMVUR model can be jointly trained with the prior-net to achieve superior performance to using them independently. To the 1 This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 758 best of our knowledge, our probabilistic attention model is the first approach that learns the probability distribution of hand joints and mesh under a probabilistic model. The ability to reconstruct 3D hands with high-fidelity texture is helpful for 3D Hand Personalization and improves the performance of hand tracking systems [19–21]. More-over, Hand texture reconstruction is important for the user experience and bodily self-consciousness in immersive vir-tual reality systems [3]. 
We thus propose a learning based occlusion-aware hand texture regression model by introduc-ing an occlusion-aware rasterization and reverse interpola-tion to achieve high-fidelity hand texture reconstruction. Our contributions are summarized as follows: (1)We introduce an Attention-based Mesh Vertices Uncertainty Regression model (AMVUR) comprising a cross attention module for capturing the correlation between joints and mesh vertices and a self-attention module for capturing the short/long range dependencies among mesh vertices. (2) We propose a novel probabilistic attention model to learn the probability distribution of hand joints and mesh ver-tices, where the MANO parametric model is regarded as a prior-net and jointly trained with AMVUR. (3)We propose an Occlusion-aware Hand Texture Regression model to achieve high-fidelity hand texture reconstruction, including in the presence of severe occlusions. (4)We demonstrate that our network can be trained in both fully supervised and weakly supervised training schemes, achieving state-of-the-art (SOTA) performance on the three benchmark 3D hand reconstruction datasets: HO3Dv2 [22], HO3Dv3 [23] and FreiHand [24].
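The cross-attention/self-attention arrangement described above can be sketched as follows; dimensions, head counts, normalization placement and the exact uncertainty parameterization are illustrative assumptions rather than the architecture reported in the paper.

import torch.nn as nn

class AMVURBlockSketch(nn.Module):
    # Toy block: vertex tokens first attend to (positionally encoded) joint tokens,
    # then to each other, and a small head outputs a per-vertex mean and log-variance
    # (the "uncertainty" part of the probabilistic formulation).
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.cross = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.head = nn.Linear(dim, 6)   # 3D mean + 3D log-variance per vertex

    def forward(self, vert_tokens, joint_tokens):
        v, _ = self.cross(vert_tokens, joint_tokens, joint_tokens)   # joint-vertex correlation
        v = self.norm1(vert_tokens + v)
        s, _ = self.self_attn(v, v, v)                               # short/long-range vertex dependencies
        v = self.norm2(v + s)
        mean, log_var = self.head(v).chunk(2, dim=-1)
        return mean, log_var

In the full model, the MANO-based prior-net would supply the prior distribution that these predictions are trained against.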
Barattin_Attribute-Preserving_Face_Dataset_Anonymization_via_Latent_Code_Optimization_CVPR_2023
Abstract This work addresses the problem of anonymizing the identity of faces in a dataset of images, such that the pri-vacy of those depicted is not violated, while at the same time the dataset is useful for downstream task such as for training machine learning models. To the best of our knowl-edge, we are the first to explicitly address this issue and deal with two major drawbacks of the existing state-of-the-art approaches, namely that they (i) require the costly training of additional, purpose-trained neural networks, and/or (ii) fail to retain the facial attributes of the original images in the anonymized counterparts, the preservation of which is of paramount importance for their use in downstream tasks. We accordingly present a task-agnostic anonymization pro-cedure that directly optimizes the images’ latent representa-tion in the latent space of a pre-trained GAN. By optimizing the latent codes directly, we ensure both that the identity is of a desired distance away from the original (with an iden-tity obfuscation loss), whilst preserving the facial attributes (using a novel feature-matching loss in FaRL’s [48] deep feature space). We demonstrate through a series of both qualitative and quantitative experiments that our method is capable of anonymizing the identity of the images whilst– crucially–better-preserving the facial attributes. We make the code and the pre-trained models publicly available at: https://github.com/chi0tzp/FALCO .
1. Introduction The ubiquitous use of mobile devices equipped with high-resolution cameras and the ability to effortlessly share personal photographs and videos on social media poses a *These authors contributed equally. This work has been conducted dur-ing a research exchange visit of S. Barattin in QMUL in the framework of the EU H2020 project AI4Media. Original ID anonymized Attr. preservedCIAGAN DeepPrivacy OursFigure 1. Comparison of the proposed method to CIAGAN [27] and DeepPrivacy [16] in terms of identity anonymization and at-tribute preservation. significant threat to data privacy. Considering that mod-ern machine learning algorithms learn from vast amounts of data often crawled from the Web [18, 38], it has become increasingly important to consider the impact this has on the privacy of those individuals depicted. Motivated by pri-vacy concerns, many societies have recently enacted strict legislation, such as the General Data Protection Regulation (GDPR) [7], which requires the consent of every person that might be depicted in an image dataset. Whilst such laws have obvious benefits to the privacy of those featured in im-age datasets, this is not without costly side effects to the research community. In particular, research fields such as computer vision and machine learning rely on the creation and sharing of high-quality datasets of images of humans for a number of important tasks including security [24], healthcare [1], and creative applications [18, 35]. A recent line of research focuses on overcoming this is-sue by anonymizing the identity of the individuals in image datasets. Through this approach, the machine learning com-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 8001 munity can still benefit from the wealth of large datasets of high-resolution images, but without cost to privacy. This research field has seen several developments throughout the last few years. Early methods proposed by the computer vision community attempt to solve this prob-lem with simple solutions based on blurring [10] or other masking techniques, such as pixelation [12]. The result of this masking process succeeds in anonymizing the im-ages by completely hiding the identity-related components, but as a consequence renders the facial attribute informa-tion such as a person’s pose, expression, or skin tone (from which many computer vision tasks learn) indecipherable. Another problem with these methods is that, whilst the re-sulting images may not be re-identifiable by humans, they can often be reversed by deep learning models [28, 32]. Another line of work leverages the power of Generative Adversarial Networks (GANs) [13], which have recently been used for discovering controllable generation paths in their latent or feature spaces [2,33,34,42,43]. Towards face anonymization, GANs have been incorporated in order to synthesize new images in order to obtain photos that main-tain most of the image while changing the face of the subject of interest. In particular, these approaches use techniques like image inpainting [16], conditional generation [27], at-tribute manipulation [21], or adversarial perturbation [39]. These works are able to obtain anonymized images that can still be used for computer vision tasks such as tracking and detection, with very good results in terms of privacy preser-vation. 
However, many of these works lack the ability to generate natural-looking faces and often fail to preserve the original facial attributes in the anonymized images (or, on the occasions in which such methods do preserve the facial attributes, they fail to demonstrate this quantitatively). This is critical for many applications which rely on the attributes of the inner face, such as expression recognition [20], or mental health affect analysis [11]. To further complicate the picture, a fundamental problem often found with existing works is the way in which the anonymized images copy not just the original image’s background, but also more identi-fiable features [16, 27], such as the clothes of an individual, or their hair (see examples of this in Fig. 1). We argue that leaving such structure of the images unchanged constitutes a glaring privacy vulnerability, as one can re-identify the original image from the anonymized counterpart by com-paring the image background or person’s clothes. Motivated by these concerns, in this work we propose to de-identify individuals in datasets of facial images whilst preserving the facial attributes of the original images. To achieve this, in contrast to existing work [16, 21, 27, 44, 45] that train custom neural networks from scratch, we propose to work directly in the latent space of a powerful pre-trained GAN, optimizing the latent codes directly with losses that explicitly aim to retain the attributes and obfuscate the iden-tities. More concretely, we use a deep feature-matching loss [48] to match the high-level semantic features between the original and the fake image generated by the latent code, and a margin-based identity loss to control the similarity be-tween the original and the fake image in the ArcFace [9] space. The initialisation of the latent codes is obtained by randomly sampling the latent space of GAN, using them to generate the corresponding synthetic images and finding the nearest neighbors in a semantic space (FARL [48]). In or-der to preserve texture and pose information of the original image, we perform inversion of the original image and re-tain the parts that correspond to the properties we want to preserve in the final code. This results in a latent code that yields a high-resolution image that contains a new identity but retains the same facial attributes as the original image. The main contributions of this paper can be summarized as follows: • To the best of our knowledge, we are the first to address the problem of identity anonymization whilst also ex-plicitly retaining facial attributes. • We propose a novel methodology and loss functions working with pre-trained GANs capable of generating high-resolution anonymized datasets. • We show through a series of thorough experiments on both Celeba-HQ [25] and LFW [15] that our method competes with the state-of-the-art in obfuscating the identity, whilst better-retaining the facial attributes un-der popular quantitative metrics.
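A minimal sketch of the latent-code optimization described above is given below, assuming a pre-trained generator, a FaRL-style feature extractor and an ArcFace-style identity network are available as callables. The hinge form of the identity loss, the margin value and the loss weights are our own guesses at a reasonable instantiation, not the paper's exact losses.

import torch
import torch.nn.functional as F

def anonymize_latent(w, generator, farl, arcface, img_orig,
                     margin=0.6, lam_attr=1.0, lam_id=1.0, steps=200, lr=0.01):
    # Optimize the latent code directly: keep attributes, push identity away.
    w = w.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    feat_ref = farl(img_orig).detach()                       # semantic (attribute) reference
    id_ref = F.normalize(arcface(img_orig).detach(), dim=-1) # identity reference
    for _ in range(steps):
        img = generator(w)
        # Feature-matching loss: preserve facial attributes of the original image.
        loss_attr = F.l1_loss(farl(img), feat_ref)
        # Margin-based identity loss: only penalize similarity above a threshold (assumed form).
        sim = (F.normalize(arcface(img), dim=-1) * id_ref).sum(-1).mean()
        loss_id = F.relu(sim - (1.0 - margin))
        loss = lam_attr * loss_attr + lam_id * loss_id
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()

The optimized code is then decoded by the frozen generator into a high-resolution face with a new identity but the original attributes.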
Cai_Ensemble-Based_Blackbox_Attacks_on_Dense_Prediction_CVPR_2023
Abstract We propose an approach for adversarial attacks on dense prediction models (such as object detectors and segmenta-tion). It is well known that the attacks generated by a single surrogate model do not transfer to arbitrary (blackbox) vic-tim models. Furthermore, targeted attacks are often more challenging than the untargeted attacks. In this paper, we show that a carefully designed ensemble can create effec-tive attacks for a number of victim models. In particular, we show that normalization of the weights for individual mod-els plays a critical role in the success of the attacks. We then demonstrate that by adjusting the weights of the ensemble according to the victim model can further improve the per-formance of the attacks. We performed a number of experi-ments for object detectors and segmentation to highlight the significance of the our proposed methods. Our proposed ensemble-based method outperforms existing blackbox at-tack methods for object detection and segmentation. Finally we show that our proposed method can also generate a sin-gle perturbation that can fool multiple blackbox detection and segmentation models simultaneously. Code is available athttps://github.com/CSIPlab/EBAD .
1. Introduction Computer vision models (e.g., classification, object detection, segmentation, and depth estimation) are known to be vulnerable to carefully crafted adversarial examples [4, 11, 16, 17, 46]. Creating such adversarial attacks is easy for whitebox models, where the victim model is completely known [14,16,24,37,55]. In contrast, creating adversarial attacks for blackbox models, where the victim model is unknown, remains a challenging task [1, 33, 54]. Most of the existing blackbox attack methods have been developed for classification models [10, 21, 35, 47]. Blackbox attacks for dense prediction models such as object detection and segmentation are relatively less studied [4, 17, 27], and most of the existing ones mainly focus on untargeted attacks [17]. Figure 1. Illustration of the targeted ensemble-based blackbox attack. (Top) An attack generated by a single surrogate model does not transfer to the victim blackbox model (person does not map to car). (Bottom) An attack generated by weight balancing and optimization can transfer to a variety of victim models (person is mapped to car). Furthermore, a vast majority of these methods are based on transfer attacks, in which a surrogate (whitebox) model is used to generate the adversarial example that is tested on the victim model. However, the success rate of such transfer-based attacks is often low, especially for targeted attacks [10, 21, 47]. In this paper, we propose and evaluate an ensemble-based blackbox attack method for object detection and segmentation. Our method is inspired by three key observations: 1) targeted attacks generated by a single surrogate model are rarely successful; 2) attacks generated by an ensemble of surrogate models are highly successful if the contribution from all the models is properly normalized; and 3) attacks generated by an ensemble for a specific victim model can be further improved by adjusting the contributions of different surrogate models. The overall idea of the proposed work is illustrated in Fig. 1. Our proposed method can be viewed as a combination of transfer- and query-based attacks, where we can adjust the contributions based on feedback from the victim model using a small number of queries (5–20 in our experiments). In contrast, conventional query-based attacks require hundreds or thousands of queries to the victim model [9, 19, 22, 49]. We conduct comprehensive experiments to validate our proposed method and achieve state-of-the-art performance for both targeted and untargeted blackbox attacks on object detection. Specifically, our proposed method attains a 29–53% success rate using only 5 queries for targeted attacks on object detectors, whereas the current state-of-the-art method [4] achieves a 20–39% success rate with the same number of queries. Furthermore, we extend our evaluation to untargeted and targeted attacks on blackbox semantic segmentation models. Our method achieves 0.9–1.55% mIoU for untargeted and 69–95% pixel-wise success for targeted attacks. By comparison, the current state-of-the-art method [17] obtains 0.6–7.97% mIoU for untargeted attacks and does not report results for targeted attacks.
To the best of our knowledge, our work is the first approach for targeted and query-based attacks on semantic segmentation. Below we summarize the main contributions of this work. •We design a novel framework that can effectively attack blackbox dense prediction models based on an ensemble of surrogate models. •We propose two simple yet highly effective ideas, namely weight balancing and weight optimization, with which we achieve significantly better attack performance compared to existing methods (a schematic sketch of this procedure is given after this list). •We extensively evaluate our method for targeted and untargeted attacks on object detection and semantic segmentation models and achieve state-of-the-art results. •We demonstrate that our proposed method can generate a single perturbation that can fool multiple blackbox detection and segmentation models simultaneously.
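The sketch below illustrates one round of perturbation generation with loss normalization across surrogates followed by a single victim query. The specific attack losses, step sizes and the search strategy used to update the ensemble weights are not reproduced here, so treat every name and constant as an assumption.

import torch

def ensemble_attack_round(x, target_label, surrogates, victim_score, weights,
                          eps=8 / 255, alpha=2 / 255, steps=10):
    # surrogates: list of callables returning a per-model attack loss (lower = better attack).
    # victim_score: blackbox feedback, queried once per round.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        losses = [f(x + delta, target_label) for f in surrogates]
        # "Weight balancing": divide each loss by its magnitude so no surrogate dominates.
        norms = [l.detach().abs() + 1e-8 for l in losses]
        loss = sum(w * l / n for w, l, n in zip(weights, losses, norms))
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta - alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    score = victim_score(x + delta.detach(), target_label)   # one query to the victim
    return delta.detach(), score

A caller would repeat this round for a handful of candidate weight vectors (the 5–20 queries mentioned above) and keep the perturbation whose victim score is best, which is the "weight optimization" step.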
Izquierdo_SfM-TTR_Using_Structure_From_Motion_for_Test-Time_Refinement_of_Single-View_CVPR_2023
Abstract Estimating a dense depth map from a single view is ge-ometrically ill-posed, and state-of-the-art methods rely on learning depth’s relation with visual appearance using deep neural networks. On the other hand, Structure from Motion (SfM) leverages multi-view constraints to produce very ac-curate but sparse maps, as matching across images is typi-cally limited by locally discriminative texture. In this work, we combine the strengths of both approaches by proposing a novel test-time refinement (TTR) method, denoted as SfM-TTR, that boosts the performance of single-view depth net-works at test time using SfM multi-view cues. Specifically, and differently from the state of the art, we use sparse SfM point clouds as test-time self-supervisory signal, fine-tuning the network encoder to learn a better representation of the test scene. Our results show how the addition of SfM-TTR to several state-of-the-art self-supervised and supervised networks improves significantly their performance, outper-forming previous TTR baselines mainly based on photo-metric multi-view consistency. The code is available at https://github.com/serizba/SfM-TTR .
1. Introduction Obtaining accurate and dense depth maps from images is a challenging research problem and an essential input in a wide array of fields, like robotics [67], augmented reality [36], endoscopy [42], or autonomous driving [22]. Single-view per-pixel depth estimation is even more chal-lenging, as it is geometrically ill-posed in the general case. However, in the last decade, intense research on deep mod-els applied to this task has produced impressive results, showing high promise for real-world applications. Single-view depth learning was initially addressed as a supervised learning problem, in which deep networks were trained using large image collections annotated with ground truth depth from range (e.g., LiDAR) sensors [12, 30]. At present, this line of research keeps improving the accuracy Single-View Depth Network Structure from MotionSfM-TTR Rened depth predictionsScale alignmentInput sequenceFigure 1. SfM-TTR overview . Our approach assumes an existing pre-trained depth network and an input sequence at test time. We estimate a SfM 3D reconstruction using the input sequence, and depth maps using a single-view depth network. We align the SfM point cloud with the network’s depth to obtain a pseudo-ground truth to refine the network encoder, improving its representation of the test scene and producing significantly more accurate depth estimates. of single-view depth estimates by better learning models and training methods, as illustrated for example by [6, 59]. In parallel to improving the learning side of the problem, several works are incorporating single-and multi-view ge-ometric concepts to depth learning, extending its reach to more general setups. For example, [3, 15] propose camera intrinsics-aware models, enabling learning and predicting depths for very different cameras. More importantly, many other works (e.g. [20]) use losses based on multi-view pho-tometric consistency, enabling self-supervised learning of depth and even camera intrinsics [21]. Incorporating single-and multi-view geometry into This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 21466 depth learning naturally links the field to classic research on Structure from Motion (SfM) [23, 45], visual odome-try [14, 44] and visual SLAM [8, 9]. These methods typ-ically produce very accurate but sparse or semi-dense re-constructions of high-gradient points using only multi-view geometry at test time. Among the many opportunities for cross-fertilization of both fields (e.g., using depth net-works in visual SLAM [48] or SfM for training depth net-works [28, 32, 56]), our work focuses on using SfM for re-fining single-view depth networks at test time. As single-view depth applications typically include a moving camera, several recent works incorporate multiple views at inference or refine single-view depth networks with multi-view consistency cues [4, 11, 36, 37, 46, 49, 52]. Most approaches, however, rely mainly on photometric losses, similar to the ones used for self-supervised training. These losses are limited to be computed between close views, creating weak geometric constraints. Our contribution in this paper is a novel method that, differently from the oth-ers in the literature, uses exclusively a SfM reconstruction for TTR. 
Although SfM supervision is sparser than typical photometric losses, it is also significantly less noisy, as it is estimated from wider baselines. Our results show that our approach, which we denote as SfM-TTR, provides state-of-the-art results for TTR, outperforming photometric test-time refinement (Ph-TTR) for several state-of-the-art supervised and self-supervised baselines.
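In essence, the refinement loop described above aligns the unknown scale between the SfM point cloud and the network's depth and then fine-tunes on the sparse points. The median-ratio alignment and plain L1 loss below are a simplified stand-in for whatever alignment and loss the authors actually use, and the freezing of everything but the encoder is assumed to be handled by the caller.

import torch

def sfm_ttr_step(depth_net, optimizer, image, sfm_depth, sfm_mask):
    # One refinement step with sparse SfM supervision.
    # sfm_depth: per-pixel depth from the SfM reconstruction (undefined where no point).
    # sfm_mask:  boolean mask of pixels that have an SfM point.
    pred = depth_net(image)                                   # (B, 1, H, W)
    # Scale alignment: SfM and single-view depth differ by an unknown global scale.
    scale = torch.median(sfm_depth[sfm_mask] / pred.detach()[sfm_mask])
    loss = torch.abs(pred[sfm_mask] * scale - sfm_depth[sfm_mask]).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Because only the encoder parameters are passed to the optimizer, the network learns a better representation of the test scene while its decoder stays fixed.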
Hu_Dense_Network_Expansion_for_Class_Incremental_Learning_CVPR_2023
Abstract The problem of class incremental learning (CIL) is considered. State-of-the-art approaches use a dynamic architecture based on network expansion (NE), in which a task expert is added per task. While effective from a computational standpoint, these methods lead to models that grow quickly with the number of tasks. A new NE method, dense network expansion (DNE), is proposed to achieve a better trade-off between accuracy and model complexity. This is accomplished by introducing dense connections between the intermediate layers of the task expert networks, which enable the transfer of knowledge from old to new tasks via feature sharing and reuse. This sharing is implemented with a cross-task attention mechanism, based on a new task attention block (TAB), that fuses information across tasks. Unlike traditional attention mechanisms, TAB operates at the level of feature mixing and is decoupled from spatial attention. This is shown to be more effective than joint spatial-and-task attention for CIL. The proposed DNE approach can strictly maintain the feature space of old classes while growing the network and feature scale at a much slower rate than previous methods. As a result, it outperforms the previous SOTA methods by a margin of 4% in terms of accuracy, with similar or even smaller model scale.
1. Introduction Deep learning has enabled substantial progress in computer vision. However, existing systems lack the human ability for continual learning, where tasks are learned incrementally. In this setting, tasks are introduced in sequential time steps t, and the dataset used to learn task t is only available at the t-th step. Standard gradient-based training is not effective for this problem since it is prone to catastrophic forgetting: the model overfits on task t and forgets the previous tasks. This is unlike humans, who easily learn new tasks without forgetting what they know. While continual learning can be posed for any topic in computer vision, most research has addressed classification and the class incremental (CIL) setting [18]. In CIL, tasks consist of subsets of disjoint classes that are introduced sequentially. Most approaches also allow the learning of task t to access a small buffer memory of examples from previous tasks. Different strategies have been proposed to solve the CIL problem. Distillation methods [3, 8, 9, 12, 14, 18, 23, 24, 29] and parameter regularization methods [15, 32] regulate the new model by the output logits, intermediate features or important parameters. Gradient methods [16, 19, 28] estimate the null space of the existing feature space and project the gradients of the next task into this null space, so that the newly learned features are orthogonal to the previous ones. These methods try to fit all tasks into a single model, preserving properties of the feature space from one task to the next. This is, however, difficult to guarantee due to the scarcity of prior task data. Furthermore, as the number of tasks grows, the model will eventually run out of capacity to accommodate new tasks. Network expansion (NE) methods [1, 22, 27, 30, 31] address these problems by freezing the model that solves the previous tasks and adding a new subnetwork per task. As illustrated in Figure 1, a network f1 learned for task 1 is augmented with new networks ft, denoted as task experts, for each subsequent task t, and the feature spaces are concatenated. NE with cross connections (NEwC) methods [10,20,25] further add links across tasks (red lines in Figure 1) to transfer knowledge from old to new tasks. Since the original features are always accessible for examples from previous classes, these methods achieve the best performance on CIL benchmarks. However, the process is very inefficient in terms of both model size and complexity. The right side of the figure shows the accuracy and size of several NE models on the CIFAR100 dataset. The best accuracies are obtained with larger networks, and the model grows very quickly with the number of tasks. For most practical applications, this rate of growth is unsustainable. While NE and NEwC achieve state-of-the-art performance among CNN-based CIL methods, we show that they do not translate well to the more recent transformer architecture [7]. Standard transformers learn spatial connections across image patches through a spatial attention mechanism. A natural CIL extension is to feed the input image to multiple heads, each corresponding to a task. Figure 1. CIL by NE. Left: Given a new task t, a new branch, denoted the task t expert, is added while freezing existing experts. In classical NE, the model is simply replicated per task, originating a set of t independent models, whose outputs are concatenated into a classifier. In the proposed DNE scheme, a cross-task attention mechanism (red connections) is introduced to allow the re-use of knowledge from previous experts and smaller experts per task. Right: Comparison of model accuracy and size for various implementations of NE, using DER [31] and multi-Dytox [9] models of different sizes per task expert, on the CIFAR100 dataset. Both accuracy and FLOPs are shown for the final model, which classifies 100 classes grouped into 6 tasks. Attention can then be computed between all image patches of all heads, leading to a Spatial-and-Task Attention (STA) mechanism. This strategy is commonly used in multi-modal transformers [2,21]. However, in CIL, the features generated per patch by different heads are extracted from exactly the same image region. Hence, their representations are highly similar and STA is dominated by the attention between replicas of the same patch. Furthermore, because all patches are processed by all heads, the remaining attention is dispersed over a very large number of patch pairs. This leads to the fragmentation of attention into a large number of small-valued entries, which severely degrades performance. To overcome this problem, we propose a Dense Network Expansion (DNE) strategy that disentangles spatial and cross-task attention. Spatial attention is implemented by the standard transformer attention mechanism. Cross-task attention is implemented by a novel task attention block (TAB), which performs attention at the level of feature mixing, replacing the multi-layer perceptron (MLP) block with an attention module. Overall, the paper makes four contributions. First, we point out that existing NE methods are unsustainable for most practical applications and reformulate the NE problem to consider the trade-off between accuracy and model size. Second, we propose the DNE approach to address this trade-off, leading to a CIL solution that is both accurate and parameter efficient. Third, we introduce an implementation of DNE based on individual spatial and cross-task attentions. Finally, extensive experiments show that DNE outperforms all previous CIL methods not only in terms of accuracy, but also in terms of the trade-off between accuracy and scale.
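To make the decoupling concrete, the toy block below applies attention only along the task-expert axis at each patch position, standing in for the MLP/feature-mixing stage. The actual TAB design (projections, gating, and where it sits relative to spatial attention) is not detailed here, so this is an assumed minimal form.

import torch
import torch.nn as nn

class TaskAttentionBlock(nn.Module):
    # tokens: (B, T, N, C) with T task experts and N patches per expert.
    # Spatial attention is assumed to have been applied per expert beforehand;
    # each patch position then attends only across the T expert features at that
    # position, so spatial and cross-task attention stay decoupled.
    def __init__(self, dim=384, heads=6):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tokens):
        B, T, N, C = tokens.shape
        x = tokens.permute(0, 2, 1, 3).reshape(B * N, T, C)   # one short sequence per patch
        y, _ = self.attn(self.norm(x), self.norm(x), self.norm(x))
        x = x + y                                             # residual over the task axis
        return x.reshape(B, N, T, C).permute(0, 2, 1, 3)

Because each attention sequence has length T rather than T x N, the cross-task mixing avoids the attention fragmentation described above.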
Huang_Egocentric_Audio-Visual_Object_Localization_CVPR_2023
Abstract Humans naturally perceive surrounding scenes by uni-fying sound and sight from a first-person view. Likewise, machines are advanced to approach human intelligence by learning with multisensory inputs from an egocentric per-spective. In this paper, we explore the challenging ego-centric audio-visual object localization task and observe that 1) egomotion commonly exists in first-person record-ings, even within a short duration; 2) The out-of-view sound components can be created when wearers shift their atten-tion. To address the first problem, we propose a geometry-aware temporal aggregation module that handles the ego-motion explicitly. The effect of egomotion is mitigated by estimating the temporal geometry transformation and ex-ploiting it to update visual representations. Moreover, we propose a cascaded feature enhancement module to over-come the second issue. It improves cross-modal local-ization robustness by disentangling visually-indicated au-dio representation. During training, we take advantage of the naturally occurring audio-visual temporal synchro-nization as the “free” self-supervision to avoid costly la-beling. We also annotate and create the Epic Sounding Object dataset for evaluation purposes. Extensive experi-ments show that our method achieves state-of-the-art local-ization performance in egocentric videos and can be gener-alized to diverse audio-visual scenes. Code is available at https://github.com/WikiChao/Ego-AV-Loc .
1. Introduction The emergence of wearable devices has drawn the at-tention of the research community to egocentric videos, the significance of which can be seen from egocentric research in a variety of applications such as robotics [32,34,48], aug-mented/virtual reality [31,61,75], and healthcare [53,66]. In recent years, the computer vision community has made sub-stantial efforts to build benchmarks [12, 13, 15, 40, 57, 69], establish new tasks [17, 36, 37, 39, 60], and develop frame-works [33, 41, 54, 82] for egocentric video understanding. While existing works achieve promising results in the Figure 1. Sounding object localization in egocentric videos. Due to the wearer’s egomotion, the viewpoint changes continu-ously across time. Consequently, audio-visual relations are dy-namically changing in egocentric videos. Our approach tackles challenges in the egocentric audio-visual sounding object task and learns audio-visual associations from first-person videos. egocentric domain, it still remains an interesting but chal-lenging topic to perform fine-grained egocentric video un-derstanding. For instance, understanding which object is emitting sound in a first-person recording is difficult for ma-chines. As shown in Fig. 1, the wearer moves his/her head to put down the bottle. The frying pot which emits sound subsequently suffers deformation and occlusion due to the wearer’s egomotion. Human speech outside the wearer’s view also affects the machine’s understanding of the current scene. This example reveals two significant challenges for designing powerful and robust egocentric video understand-ing systems: First, people with wearable devices usually record videos in naturalistic surroundings, where a variety of illumination conditions, object appearance, and motion patterns are shown. The dynamic visual variations intro-duce difficulties in accurate visual perception. Second, ego-centric scenes are often perceived within a limited field of view (FoV). The common body and head movements cause frequent view changes (see Fig. 1), which brings object de-formation and creates dynamic out-of-view content. Although a visual-only system may struggle to fully de-code the surrounding information and perceive scenes in egocentric videos, audio provides stable and persistent sig-nals associated with the depicted events. Instead of purely visual perception, numerous psychological and cognitive This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 22910 studies [6, 29, 67, 73] show that integration of auditory and visual signals is significant in human perception. Audio, as an essential but less focused modality, often provides syn-chronized and complementary information with the video stream. In contrast to the variability of first-person visual footage, sound describes the underlying scenes consistently. These natural characteristics make audio another indispens-able ingredient for egocentric video understanding. To effectively leverage audio and visual information in egocentric videos, a pivotal problem is to analyze the fine-grained audio-visual association, specifically identifying which objects are emitting sounds in the scene. In this pa-per, we explore a novel egocentric audio-visual object lo-calization task, which aims to associate audio with dynamic visual scenes and localize sounding objects in egocentric videos. 
Given the dynamic nature of egocentric videos, it is exceedingly challenging to link visual content from var-ious viewpoints with audio captured from the entire space. Hence, we develop a new framework to model the distinct characteristics of egocentric videos by integrating audio. In the framework, we propose a geometry-aware tempo-ral module to handle egomotion explicitly. Our approach mitigates the impact of egomotion by performing geometric transformations in the embedding space and aligning visual features from different frames. We further use the aligned features to leverage temporal contexts across frames to learn discriminative cues for localization. Additionally, we intro-duce a cascaded feature enhancement module to handle out-of-view sounds. The module helps mitigate audio noises and improves cross-modal localization robustness. Due to the dynamic nature of egocentric videos, it is hard and costly to label sounding objects for supervised train-ing. To avoid tedious labeling, we formulate this task in a self-supervised manner, and our framework is trained with audio-visual temporal synchronization. Since there are no publicly available egocentric sounding object localization datasets, we annotate an Epic Sounding dataset to facilitate research in this field. Experimental results demonstrate that modeling egomotion and mitigating out-of-view sound can improve egocentric audio-visual localization performance. In summary, our contributions are: (1) the first system-atical study on egocentric audio-visual sounding object lo-calization; (2) an effective geometry-aware temporal aggre-gation approach to deal with unique egomotion; (3) a novel cascaded feature enhancement module to progressively in-ject localization cues; and (4) an Epic Sounding Object dataset with sounding object annotations to benchmark the localization performance in egocentric videos.
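A stripped-down version of the geometry-aware aggregation could look like the following, where per-frame features are warped to the reference view by an estimated transform before being fused. The affine parameterization, the mean fusion and the assumption that the transforms are supplied externally are all simplifications of our own rather than the paper's module.

import torch
import torch.nn.functional as F

def align_and_aggregate(feats, affines):
    # feats:   (T, C, H, W) per-frame visual features.
    # affines: (T, 2, 3) transforms mapping each frame's grid to the reference
    #          (e.g. last) frame; how they are estimated is left to the caller.
    T, C, H, W = feats.shape
    grids = F.affine_grid(affines, size=(T, C, H, W), align_corners=False)
    warped = F.grid_sample(feats, grids, align_corners=False)   # compensate egomotion
    return warped.mean(dim=0, keepdim=True)                     # temporal context for the reference frame

The aligned, aggregated features can then be compared against the audio representation to score candidate sounding regions.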
Fu_sRGB_Real_Noise_Synthesizing_With_Neighboring_Correlation-Aware_Noise_Model_CVPR_2023
Abstract Modeling and synthesizing real noise in the standard RGB (sRGB) domain is challenging due to the complicated noise distribution. While most of the deep noise generators proposed to synthesize sRGB real noise using an end-to-end trained model, the lack of explicit noise modeling degrades the quality of their synthesized noise. In this work, we pro-pose to model the real noise as not only dependent on the underlying clean image pixel intensity, but also highly cor-related to its neighboring noise realization within the local region. Correspondingly, we propose a novel noise synthe-sizing framework by explicitly learning its neighboring cor-relation on top of the signal dependency. With the proposed noise model, our framework greatly bridges the distribution gap between synthetic noise and real noise. We show that our generated “real” sRGB noisy images can be used for training supervised deep denoisers, thus to improve their real denoising results with a large margin, comparing to the popular classic denoisers or the deep denoisers that are trained on other sRGB noise generators. The code will be available at https://github.com/xuan611/sRGB-Real-Noise-Synthesizing.
1. Introduction Real image denoising is one of the most challeng-ing tasks in low-level vision. Deep denoisers that are trained using synthetic noise, e.g., Additive White Gaus-sian Noise (AWGN), perform poorly on real photography [3, 15], which motivates more realistic noise models, e.g., [1, 5, 14–16]. In general, there are two approaches towards real noise modeling, i.e., modeling in the raw-RGB and standard RGB (sRGB) domains. Popular modeling meth-ods including the physical-based [25, 28] and data-driven methods [1, 6] exploit sophisticated noise models in the raw-RGB domain, which demonstrated promising perfor-*Co-first authors contributed equally. †Corresponding author: Bihan Wen. This work was supported in part by the MOE AcRF Tier 1 (RG61/22) and Start-Up Grant.mance as noise in raw-RGB is largely simplified compar-ing to noise in sRGB [20, 22]. However, raw-RGB images are not usually utilized by common users due to their large sizes. In contrast, most commercial cameras generate sRGB images by default, which are more popular in practice. Un-fortunately, the noise generation methods in the raw-RGB domain cannot be directly applied to sRGB images, as the real noise distribution in sRGB is more complicated than raw-RGB noise, caused by the in-camera signal processing (ISP) pipeline [22]. Recent works [5, 15] proposed to generate noise on raw-RGB images and convert them into sRGB images by the ISP pipeline including demosaicing, white balancing, gamma correction, etc. While these methods synthesized realistic noise, the requirement of raw-RGB images as well as manu-ally defined ISP pipelines limits their applications. An alter-native solution for sRGB real noise modeling is to train the generative models with sRGB noisy-clean images and di-rectly synthesize real noise on sRGB images [16,17,20,26]. However, these models synthesize noise without explicitly modeling the characteristics of sRGB real noise, resulting in degradation of the quality of the synthesized noise. In this paper, we propose a novel real noise generation network, based on Neighboring Correlation-Aware noise model, dubbed as NeCA, to directly synthesize real noise in the sRGB domain. The proposed real noise synthe-sis assumes that the sRGB real noise is not only signal-dependent, i.e., noise level partially depends on its un-derlying clean pixel, but also highly correlated with its neighboring noise realization. Such a real noise model greatly bridges the gap between the synthetic and real noise in sRGB. Furthermore, the synthesized “real” images by the proposed NeCA can be used for training supervised deep denoisers, thus tackling the real image denoising chal-lenges, subjective to only a few real training data. The trained deep denoiser using our synthetic noisy images achieves state-of-the-art denoising performance, compared to the popular classic denoisers as well as deep denoisers that are trained on synthetic pairs from other noise models. To sum up, our main contributions can be concluded as fol-lows: This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 1683 • We introduce a neighboring correlation-aware noise model for sRGB real noise synthesis by explicitly modeling the neighboring correlation of real noise, to bridge the gap between the synthetic and real noise dis-tribution in sRGB. 
• Our proposed framework generalizes well and can still improve real image denoising performance even with limited training data.
• With the synthetic image pairs generated by NeCA, the trained denoisers achieve state-of-the-art denoising performance compared with deep denoisers trained on other real noise models.
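To make the two properties the noise model names concrete (per-pixel variance that grows with the clean intensity, plus correlation with neighboring noise), here is a minimal Python sketch. It is only an illustration under assumed gain, sigma, and kernel values, not the NeCA network or its learned parameters.

```python
import numpy as np

def synthesize_srgb_noise(clean, gain=0.08, base_sigma=0.01, corr_kernel=None, seed=0):
    """Toy sRGB noise sketch: per-pixel std grows with clean intensity
    (signal dependency), then white noise is blurred with a small kernel
    so neighboring noise values become correlated. All constants are illustrative."""
    rng = np.random.default_rng(seed)
    clean = clean.astype(np.float32)                     # H x W x 3 image in [0, 1]
    sigma = np.sqrt(gain * clean + base_sigma ** 2)      # heteroscedastic std map
    white = rng.standard_normal(clean.shape).astype(np.float32)
    if corr_kernel is None:                              # 3x3 smoothing -> local correlation
        corr_kernel = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], np.float32)
        corr_kernel /= np.linalg.norm(corr_kernel)       # keep unit variance after filtering
    pad = corr_kernel.shape[0] // 2                      # depthwise 2D convolution by hand
    padded = np.pad(white, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    corr = np.zeros_like(white)
    for dy in range(corr_kernel.shape[0]):
        for dx in range(corr_kernel.shape[1]):
            corr += corr_kernel[dy, dx] * padded[dy:dy + white.shape[0],
                                                 dx:dx + white.shape[1]]
    noisy = clean + sigma * corr                         # signal-dependent, spatially correlated
    return np.clip(noisy, 0.0, 1.0)
```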
Araujo_CIRCLE_Capture_in_Rich_Contextual_Environments_CVPR_2023
Abstract Synthesizing 3D human motion in a contextual, ecological environment is important for simulating the realistic activities people perform in the real world. However, conventional optics-based motion capture systems are not suited to simultaneously capturing human movements and complex scenes. The lack of rich contextual 3D human motion datasets presents a roadblock to creating high-quality generative human motion models. We propose a novel motion acquisition system in which the actor perceives and operates in a highly contextual virtual world while being motion captured in the real world. Our system enables rapid collection of high-quality human motion in highly diverse scenes, without concern for occlusion or the need for physical scene construction in the real world. We present CIRCLE, a dataset containing 10 hours of full-body reaching motion from 5 subjects across nine scenes, paired with ego-centric information about the environment represented in various forms, such as RGBD videos. We use this dataset to train a model that generates human motion conditioned on scene information. Leveraging our dataset, the model learns to use ego-centric scene information to achieve non-trivial reaching tasks in the context of complex 3D scenes. To download the data, please visit our website.
1. Introduction Humans excel at interacting with complex environments, effortlessly engaging in everyday tasks such as getting out of a car while carrying a backpack or plugging a power cord into an outlet behind a cabinet. The remarkably flexible and compliant human body enables access to narrow or cluttered spaces where clear paths are not available. Synthesizing 3D human motion that reflects this ability to navigate highly contextual, ecological environments, such as our homes, grocery stores, or hospital operating rooms, will significantly impact applications in Embodied AI, Computer Animation, Robotics, and AR/VR.
Machine learning models have significantly advanced the creation of 3D human motion and behaviors in recent years. However, the success of the ML approach hinges on one condition: the human motion data used to train models must be of high quality, volume, and diversity. Traditional motion capture (mocap) techniques focus on the "human movement" itself, rather than the state of the environment in which the motion takes place. While mocap can faithfully record human kinematics, capturing humans in a contextual scene requires physical construction of a production set and specific props in the capture studio. This steep requirement limits the capability of today's mocap technologies to holistically capture realistic human activities in the real world.
We propose to eliminate the costly requirement of physical staging by capturing human motion during interactions with a virtual reality simulation. This allows us to capture motion like that shown in Figure 1, where a person reaches into cluttered spaces in a furnished apartment. Additionally, we are able to simultaneously record paired first-person perspectives of the virtual environment through VR, as illustrated in Figure 3. With paired ego-centric observation of the world, we can now train motion models to comprehend not only the how of certain tasks but also the why behind an individual's movements.
By creating the complex scene in the virtual world and keeping the capture space in the real world empty, our method provides four crucial advantages over state-of-the-art solutions. First, creating a highly contextual environment in VR is much simpler and less costly than in actual reality. Second, capturing the state of the real world requires complex sensor instrumentation, while the state of the virtual world is readily available from the simulator. Third, because the capture space in reality is always empty, our system is not subject to occlusions that degrade motion quality, regardless of any clutter in the perceived environment. Fourth, the data acquired by such a system provide 3D human motions and corresponding videos of the environment rendered from any camera view of choice, such as the egocentric view.
We use a Meta Quest 2 headset and the AI Habitat simulator in our experiments. However, our system is agnostic to the choice of hardware, simulator, and virtual environment. To illustrate the possibilities enabled by the availability of contextual motion capture data, we collect a dataset, CIRCLE, containing ten hours of full-body reaching motion within nine indoor household scenes. CIRCLE contains challenging reaching scenarios, including reaching for an object behind the toilet, between tightly placed furniture, and underneath a table. Finally, we use CIRCLE to train a model that generates reaching motions conditioned on scene information. Our model takes as input the starting pose of the person in the scene as well as the target location of the right hand, and automatically generates a scene-aware sequence of human motion that reaches the target location. We propose two different methods to encode the scene information and compare them against baselines. In summary, the contributions of this work include:
• A novel motion acquisition system to collect 3D human motion with synchronized scene information,
• A novel dataset, CIRCLE, with 10 hours of human motion data from 5 subjects in 9 realistic apartment scenes,
• A data-driven model, trained on CIRCLE, for generating full-body reaching motion within an environment.
Fervers_Uncertainty-Aware_Vision-Based_Metric_Cross-View_Geolocalization_CVPR_2023
Abstract This paper proposes a novel method for vision-based metric cross-view geolocalization (CVGL) that matches camera images captured from a ground-based vehicle with an aerial image to determine the vehicle's geo-pose. Since aerial images are globally available at low cost, they represent a potential compromise between two established paradigms of autonomous driving, i.e., using expensive high-definition prior maps or relying entirely on the sensor data captured at runtime. We present an end-to-end differentiable model that uses the ground and aerial images to predict a probability distribution over possible vehicle poses. We combine multiple vehicle datasets with aerial images from orthophoto providers, on which we demonstrate the feasibility of our method. Since the ground truth poses are often inaccurate w.r.t. the aerial images, we implement a pseudo-label approach to produce more accurate ground truth poses and make them publicly available. While previous works require training data from the target region to achieve reasonable localization accuracy (i.e., same-area evaluation), our approach overcomes this limitation and outperforms previous results even in the strictly more challenging cross-area case. We improve on the previous state of the art by a large margin even without ground or aerial data from the test region, which highlights the model's potential for global-scale application. We further integrate the uncertainty-aware predictions into a tracking framework to determine the vehicle's trajectory over time, resulting in a mean position error of 0.78 m on KITTI-360.
1. Introduction Systems for autonomous driving require both a model of the vehicle's environment and the location of the vehicle relative to that model. These systems either construct the full model at runtime (i.e., entirely online) or create some parts prior to runtime (i.e., partly offline). The latter methods typically construct high-definition maps of a region in advance (e.g., using lidar sensors) and localize the vehicle at runtime relative to that map [34]. While prior maps facilitate high localization accuracy, they are also expensive to construct and maintain. Online methods, on the other hand, create a model of the local environment using only live sensor readings, e.g., from lidar [52], camera [15], or both [26]. This avoids the need for expensive prior maps, but represents a more difficult task as the system has to predict both the spatial structure of the environment and its relative location within it.
Aerial images offer the potential to leverage the advantages of both approaches: they can be used as a prior map for localization, while also being affordable, globally available, and up-to-date thanks to an established infrastructure of satellite and aerial orthophoto providers [1, 3]. We consider the problem of matching the sensor measurements of the vehicle against aerial images to determine the vehicle's location on the images and thereby its geo-location. Previous research in this area focuses on methods that cover large (e.g., city-scale) search regions [23, 46], but these suffer from low metric accuracy [57], insufficient for the navigation of autonomous vehicles. Since a prior pose estimate of the vehicle can be provided by global navigation satellite systems (GNSS) or by tracking the vehicle continuously, several recent methods employ smaller search regions to achieve higher metric accuracy [13, 35].
Without access to three-dimensional lidar point clouds, a purely vision-based model has to bridge the gap between ground and aerial perspectives, for example by learning the transformation in a data-centric manner. We utilize a transformer model that iteratively constructs a bird's eye view (BEV) map of the local vehicle environment by aggregating information from the ground-level perspective views (PV). The BEV refers to a nadir (i.e., orthogonal) view of the local vehicle environment. The final BEV map is matched with an aerial image to predict the relative vehicle pose with three degrees of freedom (3-DoF), i.e., a two-dimensional translation and a one-dimensional rotation.
Our model outperforms previous approaches for the metric CVGL task on the Ford AV [6] and KITTI-360 [21] datasets and even surpasses related approaches utilizing lidar sensors in addition to camera input. It predicts a soft probability distribution over possible vehicle poses (cf. Fig. 1) rather than a single pose, which specifically benefits trackers that use the model predictions to determine the vehicle's trajectory over time. While previous works rely on the availability of training data from the target region to achieve reasonable localization accuracy, we address the strictly more challenging task of non-overlapping train and test regions. We further train and test the model on entirely different datasets that were captured with different ground-based vehicles.
Our evaluation demonstrates the generalization capabilities of our model under cross-area and cross-vehicle conditions and highlights the potential for global-scale application without fine-tuning on a new region or a new vehicle setup. We collect multiple datasets from the autonomous driving sector in addition to aerial images from several orthophoto providers for our evaluation. Since the vehicle's geo-locations do not always accurately match the corresponding aerial images, we compute new geo-registered ground truth poses for all datasets used in this work and filter out invalid samples via a data-pruning approach. We publish the source code of our method online, including a common interface for the different datasets. We also make the improved ground truth for all datasets publicly available. In summary, our contributions are as follows:
1. We present a novel end-to-end trainable model for metric CVGL that requires only visual input and yields uncertainty-aware predictions.
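To make the matching step concrete, the sketch below correlates a BEV feature map against an aerial feature map over a set of candidate headings and normalizes the scores into a probability distribution over 3-DoF poses. It is a simplified stand-in, not the paper's architecture; the rotation count, the use of plain cross-correlation, and the tensor shapes are all assumptions.

```python
import math
import torch
import torch.nn.functional as F

def pose_distribution(bev_feat, aerial_feat, num_rotations=64):
    """Toy matching step: correlate a BEV feature map with an aerial feature map
    at several candidate headings and normalize the scores into a probability
    distribution over (heading, y, x). Shapes and rotation count are illustrative."""
    C, h, w = bev_feat.shape                      # BEV template, e.g. (C, 64, 64)
    scores = []
    for k in range(num_rotations):
        a = 2 * math.pi * k / num_rotations
        theta = torch.tensor([[math.cos(a), -math.sin(a), 0.0],
                              [math.sin(a),  math.cos(a), 0.0]]).unsqueeze(0)
        grid = F.affine_grid(theta, (1, C, h, w), align_corners=False)
        rotated = F.grid_sample(bev_feat.unsqueeze(0), grid, align_corners=False)
        # cross-correlation: one score per translation offset on the aerial map
        scores.append(F.conv2d(aerial_feat.unsqueeze(0), rotated, padding=h // 2))
    scores = torch.cat(scores, dim=1)             # (1, R, H', W')
    probs = F.softmax(scores.flatten(), dim=0).view_as(scores)
    return probs                                  # soft distribution over 3-DoF poses
```

A tracker can consume this full distribution instead of a single argmax pose, which is the uncertainty-aware behaviour the text describes.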
Chandran_Continuous_Landmark_Detection_With_3D_Queries_CVPR_2023
Abstract Neural networks for facial landmark detection are notoriously limited to a fixed set of landmarks in a dedicated layout, which must be specified at training time. Dedicated datasets must also be hand-annotated with the corresponding landmark configuration for training. We propose the first facial landmark detection network that can predict continuous, unlimited landmarks, allowing the number and location of the desired landmarks to be specified at inference time. Our method combines a simple image feature extractor with a queried landmark predictor, and the user can specify any continuous query points relative to a 3D template face mesh as input. As it is not tied to a fixed set of landmarks, our method is able to leverage all pre-existing 2D landmark datasets for training, even if they have inconsistent landmark configurations. The result is a very powerful facial landmark detector that can be trained once and used readily for numerous applications like 3D face reconstruction and arbitrary face segmentation; it is even compatible with helmet-mounted cameras and could therefore vastly simplify face tracking workflows for media and entertainment applications.
1. Introduction Facial landmark detection has become extremely popular in computer vision and graphics applications. In particular, many applications in visual effects such as 3D facial reconstruction, tracking, face swapping, and re-enactment rely on accurate facial landmark detection as one of the first steps in the process. It is therefore a crucial task that has been studied extensively for the past several decades, and the field has seen immense progress thanks to advances in deep learning.
State-of-the-art solutions for facial landmark detection are based on neural networks, and they operate by training the network to predict a fixed set of landmarks, leveraging large datasets of hand-annotated images. Most standard algorithms predict a set of 68 sparse landmarks spread across the face (Fig. 1(a)), in a very specific and predefined layout [33]. However, recent work has shown that predicting denser landmarks on the face is better for tasks like face reconstruction [43]. This raises the question of how many landmarks one has to predict for optimal performance (depending on the application), and what the preferred layout of these landmarks is. One of the biggest issues with traditional facial landmark detectors is that you need to decide on the number and layout of the landmarks ahead of time, then obtain annotated data with the corresponding landmarks, and ultimately train the detector. Later, at runtime, the landmark layout cannot be changed.
An ideal landmark detector would not be bound to a specific fixed landmark layout. Such a detector could be trained once and then used in several applications with different landmark configurations. For example, in face image segmentation you may want to track landmarks corresponding to segment boundaries, but in 3D reconstruction you might want to track landmarks corresponding to the vertices of your 3D face mesh. For individuals with specific face details like freckles or moles, you might want a detector that can track these user-defined points for the application of digital video touchup. For each application, with today's detectors you would need to train separate landmark detection networks, one for each landmark layout.
In this work, we aim to reformulate how landmarks have conventionally been predicted with neural networks. We propose novel architectures for continuous, unlimited landmark detection at runtime. In other words, our method allows an arbitrary number of landmarks to be predicted in any layout at inference time without retraining. As such, we propose the ideal landmark detector for multi-application use. The design of our method is simple, combining an image feature extractor with a queried landmark predictor; the latter takes the image descriptor together with a 3D query point relative to a template 3D face surface and predicts the corresponding 2D landmark location in the image. Since the 3D query points can be arbitrary, the result is continuous and unlimited landmark detection (Fig. 1(b)). As our approach is modular, we evaluate multiple architecture options for both the feature extractor and the queried predictor, allowing different designs that trade off accuracy and runtime.
Furthermore, we will show that the query points do not even need to lie on the surface of the template face, allowing us to predict 2D landmarks for volumetric features on the skull, jaw, teeth, or eyes (Fig. 1(c) and (d)). In addition, an important benefit of our design is that we do not need training data with a single dedicated number of landmarks in a specified layout on the face. This means that our architecture can leverage multiple different pre-existing datasets at training time, even if they do not have consistent annotations. This fact, combined with the ability to specify any landmark layout at runtime, makes our continuous landmark detector powerful, with applications in several areas of face capture including reconstruction, tracking, segmentation (Fig. 1(e)-(g)), and many others. In summary, the main benefits of our new landmark predictor are as follows:
• Our method offers the ability to predict any desired landmark on the face at inference time without retraining the network.
• We can track non-standard landmarks like pores, moles, or dots drawn on the face without training a specific predictor.
• Our method is not restricted to the face surface, and allows us to predict landmarks for volumetric features like the skull, jaw, teeth, and eyes.
• The size of the neural network is agnostic to the number of output landmarks.
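As a rough illustration of the described two-module design (a hedged sketch, not the paper's actual backbone or predictor), the following PyTorch module concatenates a global image descriptor with an arbitrary 3D query point and regresses a 2D landmark; the layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class QueriedLandmarkPredictor(nn.Module):
    """Minimal sketch: a global image descriptor and a 3D query point (in template-
    face coordinates) go in, a 2D landmark location comes out."""
    def __init__(self, feat_dim=256, hidden=128):
        super().__init__()
        self.backbone = nn.Sequential(             # stand-in image feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(                  # query-conditioned landmark head
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),                   # (x, y) in normalized image coords
        )

    def forward(self, image, queries):
        # image: (B, 3, H, W); queries: (B, Q, 3) arbitrary points on/off the template mesh
        feat = self.backbone(image)                           # (B, feat_dim)
        feat = feat.unsqueeze(1).expand(-1, queries.shape[1], -1)
        return self.head(torch.cat([feat, queries], dim=-1))  # (B, Q, 2)

# Any number and layout of landmarks can be requested at inference time, e.g.:
# model = QueriedLandmarkPredictor()
# xy = model(torch.rand(1, 3, 128, 128), torch.rand(1, 500, 3))  # 500 query points
```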
Gao_MIST_Multi-Modal_Iterative_Spatial-Temporal_Transformer_for_Long-Form_Video_Question_Answering_CVPR_2023
Abstract To build Video Question Answering (VideoQA) systems capable of assisting humans in daily activities, seeking answers from long-form videos with diverse and complex events is a must. Existing multi-modal VQA models achieve promising performance on images or short video clips, especially with the recent success of large-scale multi-modal pre-training. However, when extending these methods to long-form videos, new challenges arise. On the one hand, a dense video sampling strategy is computationally prohibitive. On the other hand, methods relying on sparse sampling struggle in scenarios where multi-event and multi-granularity visual reasoning are required. In this work, we introduce a new model named Multi-modal Iterative Spatial-temporal Transformer (MIST) to better adapt pre-trained models for long-form VideoQA. Specifically, MIST decomposes traditional dense spatial-temporal self-attention into cascaded segment and region selection modules that adaptively select frames and image regions closely relevant to the question itself. Visual concepts at different granularities are then processed efficiently through an attention module. In addition, MIST iteratively conducts selection and attention over multiple layers to support reasoning over multiple events. Experimental results on four VideoQA datasets, including AGQA, NExT-QA, STAR, and Env-QA, show that MIST achieves state-of-the-art performance and superior efficiency. The code is available at github.com/showlab/mist.
1. Introduction One of the ultimate goals of Video Question Answering (VideoQA) systems is to assist people in solving problems in everyday life [13, 27, 41], e.g., helping users find something, reminding them what they did, and assisting them while they accomplish complex tasks. To achieve such functions, the systems should be able to understand and seek answers from long-form videos with diverse events about users' activities.
Figure 1. Main challenges of long-form VideoQA. The questions for long-form VideoQA usually involve multi-event, multi-grained, and causality reasoning. The illustrated timeline contains the events "drink from a bottle", "put a bottle on a table", "pick up a book", "grasp a doorknob", and "hold a phone", with example questions Q1: "Did the person interact with a doorknob before or after putting something on a table?" (After); Q2: "Which object were they touching between drinking from a bottle and picking up a book?" (Phone); Q3: "Why does the person put the bottle on a table?" (She has finished drinking).
Compared to understanding and reasoning over short videos, many unique challenges arise when the duration of the video increases, as shown in Fig. 1: 1) Multi-event reasoning. Long-form videos usually record many more events. Questions about these videos thus naturally require the system to perform complex temporal reasoning, e.g., multi-event reasoning (Q1 in Fig. 1), causality (Q3), etc. 2) Interactions among different granularities of visual concepts. Questions about short clips usually involve interactions of objects or actions that happen simultaneously, while questions about long-form videos can involve more complex interactions of objects, relations, and events across different events, e.g., Q2 in Fig. 1.
Current vision-language methods [2, 7, 10, 24, 29, 31, 32, 51, 52] excel at QA over images or short clips spanning several seconds. In other words, they excel at learning multi-modal correspondences between a single caption and one or a few events. Their tremendous progress over these years is fueled by 1) pre-training on large-scale image-language [22, 37, 38] and short-clip-language datasets [2, 33], and 2) end-to-end multi-modal Transformers [1–3, 10, 37, 40], which are superior at learning the alignments between images and texts.
Figure 2. Diagrammatic illustration of MIST. It revises a standard spatial-temporal self-attention layer into two modules: a cascade selection module that dynamically eliminates question-irrelevant image regions, and a self-attention layer reasoning over multi-modal multi-grained visual concepts. The proposed modules further iterate multiple times to reason over different events.
However, these multi-modal Transformers rely on dense self-attention, whose computation cost grows rapidly with video length, especially when adapting to long-form videos. To make dense self-attention computationally feasible for videos, almost all current state-of-the-art pre-trained Transformers are sparse-sampling methods; e.g., [2, 40] only sample 3 or 4 frames per video regardless of its length. If we simply adapt these pre-trained models to long-form videos with the same sampling strategy, a domain gap arises between pre-training and the downstream VideoQA task: in pre-training, the sparsely sampled frames of a short video depict a coherent action, whereas in a long video they are likely to be random shots from only part of the events. Recently, some early attempts process the video hierarchically [5], splitting it into several segments and performing QA only on aggregated segment-level features. This eases the efficiency issue but still struggles to capture complex interactions among multi-grained concepts. Thus, leveraging the advantages of models pre-trained on images or short videos while addressing the challenges of long-form VideoQA is worth exploring.
In this paper, we propose a new model named Multi-modal Iterative Spatial-temporal Transformer (MIST), as shown in Fig. 2. MIST stems from a simple observation: for long-form VideoQA, it is not necessary to consider the details of all events in a video, as dense self-attention over all patches does. The model only needs to consider the general content of all events and focus on the details of a few question-related events. Thus, MIST decomposes dense joint spatial-temporal self-attention into a question-conditioned cascade segment and region selection module along with spatial-temporal self-attention over multi-modal multi-grained features. The cascade selection reduces the computation cost and benefits performance by focusing on question-related segments and regions. The self-attention over segments and image patches better captures interactions among different granularities of visual concepts. In addition, by iteratively conducting selection and self-attention, MIST can reason over multiple events and better perform temporal and causal reasoning.
We conduct experiments on several VideoQA datasets with relatively long videos, AGQA [14], NExT-QA [44], STAR [42], and Env-QA [11], whose average video duration varies from 12s to 44s. The experimental results show that our approach achieves state-of-the-art performance. Further ablation studies verify the effectiveness of the key components. Moreover, quantitative and qualitative results also show that our method provides higher efficiency and reasonable evidence for answering questions.
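The cascade selection idea can be illustrated with a small sketch that scores segments and regions against a pooled question feature and keeps the top-k of each; the actual MIST module is differentiable and iterated over several layers, so the hard top-k and the shapes below are simplifying assumptions.

```python
import torch

def cascade_select(segment_feats, region_feats, question_feat, k_seg=2, k_reg=8):
    """Toy question-conditioned cascade selection (hard top-k for clarity).
    segment_feats: (S, D) one feature per video segment
    region_feats:  (S, R, D) patch/region features inside each segment
    question_feat: (D,) pooled question embedding"""
    seg_scores = segment_feats @ question_feat                 # (S,) relevance of each segment
    top_seg = seg_scores.topk(k_seg).indices                   # keep the k most relevant segments
    kept_regions = region_feats[top_seg]                       # (k_seg, R, D)
    reg_scores = kept_regions @ question_feat                  # (k_seg, R)
    top_reg = reg_scores.topk(k_reg, dim=1).indices            # per-segment top regions
    selected = torch.gather(
        kept_regions, 1,
        top_reg.unsqueeze(-1).expand(-1, -1, kept_regions.shape[-1])
    )                                                          # (k_seg, k_reg, D)
    # downstream self-attention then runs over segment tokens plus the selected regions
    return segment_feats, selected
```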
Fan_PMR_Prototypical_Modal_Rebalance_for_Multimodal_Learning_CVPR_2023
Abstract Multimodal learning (MML) aims to jointly exploit the common priors of different modalities to compensate for their inherent limitations. However, existing MML methods often optimize a uniform objective for different modalities, leading to the notorious "modality imbalance" problem and counterproductive MML performance. To address this problem, some existing methods modulate the learning pace based on the fused modality, which is dominated by the better modality and eventually yields only a limited improvement for the worse modality. To better exploit multimodal features, we propose Prototypical Modal Rebalance (PMR) to stimulate the particular slow-learning modality without interference from other modalities. Specifically, we introduce prototypes that represent the general features of each class to build non-parametric classifiers for uni-modal performance evaluation. We then accelerate the slow-learning modality by enhancing its clustering toward the prototypes. Furthermore, to alleviate suppression from the dominant modality, we introduce a prototype-based entropy regularization term during the early training stage to prevent premature convergence. Moreover, our method relies only on the representations of each modality, without restrictions from model structures or fusion methods, giving it great application potential for various scenarios. The source code is available at https://github.com/fanyunfeng-bit/Modal-Imbalance-PMR.
1. Introduction Multimodal learning (MML) [24, 34, 36] emerged to mimic the way humans perceive the world, i.e., through multiple sense channels observing a common phenomenon for a better understanding of the external environment, and it has attracted extensive attention in various scenarios, e.g., video classification [13, 28, 37], event localization [38, 43], action recognition [10, 33], and audiovisual speech recognition [23, 25]. By training modalities in a complementary manner, MML is expected to achieve better performance than using a single modality. However, the heterogeneity of multimodal data poses challenges for learning multimodal correlations and complementarities.
Figure 1. The slow-learning modality's updating direction is severely disturbed by the dominant one, making it hard to exploit its features. We propose to use the prototypes, the centroids of each class in representation space, to adjust the updating direction for better uni-modal performance. Other modalities will not interfere with the new direction, which ensures improvement.
According to recent research [29, 42], although the overall performance of multimodal learning exceeds that of single-modal learning, the performance of each modality tends to be far from its upper bound. The reason behind this phenomenon is the "modality imbalance" problem, in which the dominant modality hinders the full utilization of the multimodal input. Researchers [40] have also observed that different modalities overfit and converge at different rates, meaning that optimizing the same objective for different modalities leads to inconsistent learning efficiency.
Several methods [29, 40, 44] have been proposed to address the problem. Some of them [29, 44] try to modulate the learning paces of different modalities based on the fused modality. However, we find through experiments that the dominant modality not only suppresses the learning rates of other modalities [29] but also interferes with their update direction, which makes it hard to improve the performance of slow-learning modalities. Moreover, existing methods either inevitably introduce additional model structures [5, 40] or are limited to specific fusion methods [29, 42], which limits their application scenarios.
To tackle these limitations, we propose the Prototypical Modal Rebalance (PMR) strategy to stimulate the slow-learning modality by promoting the exploitation of its features and to alleviate the suppression from the dominant modality by slowing it down in the early training stage. Concretely, we introduce prototypes for each modality, defined as "representative embeddings of instances of a class". We utilize the prototypes to construct non-parametric classifiers by comparing the distances between each sample and all the prototypes to evaluate the performance of each modality, and we design a new prototype-based metric inspired by [29] to monitor the degree of modality imbalance during training.
Then, we propose the prototypical cross-entropy (PCE) loss to accelerate the slow-learning modality by enhancing its clustering process, as illustrated in Fig. 1. The PCE loss achieves performance comparable to the cross-entropy (CE) loss [1] on the classification task and, more importantly, it is not affected by the dominant modality and provides internal impetus for full feature exploitation instead of following the disturbed direction. In addition, we introduce a prototypical entropy regularization (PER) term, which can be seen as a penalty on the dominant modality that prevents premature convergence and alleviates the suppression effect. Our method relies only on the representations of each modality, without restrictions from model structures or fusion methods. Therefore, the PMR strategy has great potential for generality. To summarize, our contributions in this paper are as follows:
• We analyze the modality imbalance problem and find that during training, the deviation of the uni-modal gradient update direction grows larger, indicating that we should not regulate along the original gradient.
• We propose PMR to address the modality imbalance problem by actively accelerating slow-learning modalities with the PCE loss while simultaneously alleviating the suppression from the dominant modality via PER.
• We conduct comprehensive experiments and demonstrate that 1) PMR achieves considerable improvements over existing methods; and 2) PMR is independent of the fusion method and model structure and has strong advantages in generality.
2. Related Works
Chelani_Privacy-Preserving_Representations_Are_Not_Enough_Recovering_Scene_Content_From_Camera_CVPR_2023
Abstract Visual localization is the task of estimating the camera pose from which a given image was taken and is central to several 3D computer vision applications. With the rapid growth in the popularity of AR/VR/MR devices and cloud-based applications, privacy issues are becoming a very important aspect of the localization process. Existing work on privacy-preserving localization aims to defend against an attacker who has access to a cloud-based service. In this paper, we show that an attacker can learn about details of a scene without any such access, simply by querying a localization service. The attack is based on the observation that modern visual localization algorithms are robust to variations in appearance and geometry. While this is in general a desired property, it also leads to algorithms localizing objects that are similar enough to those present in a scene. An attacker can thus query a server with a large enough set of images of objects, e.g., obtained from the Internet, and some of them will be localized. The attacker can then learn about object placements from the camera poses returned by the service (which is the minimal information returned by such a service). In this paper, we develop a proof-of-concept version of this attack and demonstrate its practical feasibility. The attack places no requirements on the localization algorithm used, and thus also applies to privacy-preserving representations. Current work on privacy-preserving representations alone is therefore insufficient.
1. Introduction Visual localisation refers to the problem of estimating the camera pose of a given image in a known scene. It is a core problem in several 3D computer vision applications, including self-driving cars [17, 18] and other autonomous robots [50], as well as Augmented Reality [5, 23, 25]. A popular approach for Augmented/Mixed/Virtual Reality (XR) applications is to use a client-server mechanism for localization: the user device (client) sends image data to a cloud-based system (server) that computes and returns the camera pose [23, 25, 46]. Examples of such services include Google's Visual Positioning System [29], Microsoft's Azure Spatial Anchors [24], and Niantic's Lightship [39]. Cloud-based localization services are popular for multiple reasons: first, performing localization on the server reduces storage requirements and the computational load, and thus energy consumption, which is important for client devices such as mobile phones and headsets; second, it enables robust mapping and localization algorithms that are too expensive for mobile devices; third, in the context of collaborative mapping, e.g., for the AR cloud or autonomous driving, maintaining a single scene representation in a centralized place is far easier than keeping multiple copies on various mobile devices up-to-date.
Naturally, sending user data to a server, e.g., in the form of images to be localized or 3D maps recorded by users that will be used for localization, raises privacy concerns [9, 41, 42]. Work on privacy-preserving localization aims to resolve these concerns by ensuring that private details cannot be recovered from the data sent [14, 26, 42] to or stored on the server [11, 15, 28, 36, 41, 52]. Existing work focuses on scenarios where an attacker gains access to the localization service or can eavesdrop on the communication between client and server.
In this work, we demonstrate that it is possible for an attacker to learn about the content of a scene stored on a localization server without direct access to the server. We show that a localization service will reveal scene-related information through estimated camera poses, i.e., through its normal operation. The attack is based on two recent developments: (1) modern visual localization algorithms are designed to be robust against changes such as illumination and seasonal variations [44]. This is an essential property for cloud-based localization services in order to operate robustly and reliably. However, since these algorithms are robust to (slight) variations in appearance and geometry, they will also localize images showing objects that are similar (but not necessarily identical) to the objects present in the scene. (2) Massive amounts of images depicting objects in different variations are readily available on the Internet. Taken together, both developments allow an attacker to repeatedly query the server with images and to recover the positions of the objects in the scene from the camera poses returned by the server (cf. Fig. 1). In this paper, we demonstrate the feasibility of this attack by developing a proof-of-concept implementation.
Figure 1. In the context of privacy-preserving localization, we show that it is possible to learn about the content of a scene using camera poses returned by a localization service, without any direct access to the scene representation. (1st column) Examples of images from the scene, used to build the scene representation; they are shown for illustrative purposes and are not available to an attacker trying to learn about the scene. (2nd column) The attacker queries the service with images of objects, e.g., downloaded from the Internet. (3rd & 4th column) Using the camera poses returned by the localization service for the query images, the attacker is able to identify the types of objects present in the scene and to accurately place them in the scene. We show the estimated object poses overlaid on the ground truth structure of the scene (which is not accessible to the attacker). The attacker is able to faithfully recover the placement of objects. Overall, our results demonstrate that simple feedback such as camera poses is already sufficient to potentially reveal private details.
In summary, we make the following contributions: (1) we identify a new line of attack in the context of privacy-preserving visual localization based on the camera poses returned by a cloud-based server. (2) We show the feasibility of the attack through a proof-of-concept implementation. Through experiments, we explore the performance of our implementation as well as the trade-off between localization robustness and potential defenses against the attack. (3) The attack is agnostic to the underlying localization algorithm and thus applicable even if the localization system is otherwise perfectly privacy-preserving. This paper thus proposes a new research direction for privacy-preserving localization, where the aim for the localization service is to correctly identify whether or not a query image was taken in the concerned scene, in order to prevent leaking information through camera poses.
2. Related Work Visual localization. Most state-of-the-art visual localization algorithms are based on establishing 2D-3D matches between a query image and a 3D model of the scene. These correspondences are then used for camera pose estimation. The 3D model can either be stored explicitly [19–21, 27, 31–33, 43], e.g., in the form of a Structure-from-Motion (SfM) point cloud, or implicitly in the form of the weights of a machine learning model [1–3, 6, 38, 45]. In the former case, local feature descriptors are associated with 3D points of the model. It has been shown that this information is sufficient to recover detailed images from the 3D map [28, 40], although sparsifying these models [4, 51] might effectively make them privacy-preserving [7].
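The query-and-place loop described in the introduction can be summarized in a short sketch. Here `localize` stands in for the cloud service's pose-estimation API, and the probe image sets, inlier threshold, and the crude fixed-depth placement are all hypothetical choices, not the paper's implementation.

```python
import numpy as np

def recover_scene_layout(object_images, localize, min_inliers=3):
    """Sketch of the attack loop: query the service with probe images of objects and
    read off object placements from the returned camera poses.
    `localize(image)` returns a 4x4 camera-to-world pose when the query localizes,
    else None. All names and constants here are hypothetical."""
    placements = {}
    for label, images in object_images.items():        # e.g. {"sofa": [img1, img2, ...]}
        poses = [localize(img) for img in images]
        poses = [T for T in poses if T is not None]     # keep only accepted queries
        if len(poses) < min_inliers:
            continue                                    # object type probably absent
        # each accepted pose looks at the object; place it a nominal distance in front
        # of the camera and average over queries (a crude stand-in for triangulation)
        centers = [T[:3, 3] + T[:3, 2] * 1.5 for T in poses]   # 1.5 m along the view axis
        placements[label] = np.mean(centers, axis=0)
    return placements   # {object label: estimated 3D position in the scene}
```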
An_PanoHead_Geometry-Aware_3D_Full-Head_Synthesis_in_360deg_CVPR_2023
Abstract Synthesis and reconstruction of 3D human heads has gained increasing interest in computer vision and computer graphics recently. Existing state-of-the-art 3D generative adversarial networks (GANs) for 3D human head synthesis are either limited to near-frontal views or struggle to preserve 3D consistency at large view angles. We propose PanoHead, the first 3D-aware generative model that enables high-quality, view-consistent image synthesis of full heads in 360° with diverse appearance and detailed geometry, using only in-the-wild unstructured images for training. At its core, we lift the representation power of recent 3D GANs and bridge the data alignment gap when training from in-the-wild images with widely distributed views. Specifically, we propose a novel two-stage self-adaptive image alignment for robust 3D GAN training. We further introduce a tri-grid neural volume representation that effectively addresses the front-face and back-head feature entanglement rooted in the widely adopted tri-plane formulation. Our method instills prior knowledge from 2D image segmentation into adversarial learning of 3D neural scene structures, enabling compositable head synthesis in diverse backgrounds. Benefiting from these designs, our method significantly outperforms previous 3D GANs, generating high-quality 3D heads with accurate geometry and diverse appearances, even with long wavy and afro hairstyles, renderable from arbitrary poses. Furthermore, we show that our system can reconstruct full 3D heads from single input images for personalized, realistic 3D avatars.
1. Introduction Photo-realistic portrait image synthesis has been a continuous focus in computer vision and graphics, with a wide range of downstream applications in digital avatars, telepresence, immersive gaming, and many others. Recent advances in Generative Adversarial Networks (GANs) [12] have demonstrated strikingly high image synthesis quality, indistinguishable from real photographs [19, 21, 22]. However, contemporary generative approaches operate with 2D convolutional networks without modeling the underlying 3D scene. Therefore, 3D consistency cannot be strictly enforced when synthesizing head images under various poses. (Project page: https://sizhean.github.io/panohead)
To generate 3D heads with diverse shapes and appearances, traditional approaches require a parametric textured mesh model [2, 25] learned from large 3D scan collections. However, the rendered images lack fine details and have limited perceptual quality and expressiveness. With the advent of differentiable rendering and neural implicit representations [28, 47], conditional generative models have been developed to generate more realistic 3D-aware face images [17, 44, 45, 53]. However, those approaches typically require multi-view image or 3D scan supervision, which is hard to acquire and has a limited appearance distribution, as such data are usually captured in controlled environments.
3D-aware generative models have recently seen rapid progress, fueled by the integration of implicit neural representations for 3D scene modeling with Generative Adversarial Networks (GANs) for image synthesis [5, 6, 29, 31, 37, 40, 48]. Among them, the seminal 3D GAN, EG3D [5], demonstrates striking quality in view-consistent image synthesis, trained only on in-the-wild single-view image collections. However, these 3D GAN approaches are still limited to synthesis in near-frontal views.
In this paper, we propose PanoHead, a novel 3D-aware GAN for high-quality full 3D head synthesis in 360°, trained only on in-the-wild unstructured images. Our model can synthesize consistent 3D heads viewable from all angles, which is desirable for many immersive interaction scenarios such as digital avatars and telepresence. To the best of our knowledge, our method is the first 3D GAN approach to achieve full 3D head synthesis in 360°.
Extending 3D GAN frameworks such as EG3D [5] to full 3D head synthesis poses several significant technical challenges. Firstly, many 3D GANs [5, 31] cannot separate foreground and background, inducing 2.5D head geometry. The background, typically formulated as a wall structure, is entangled with the generated head in 3D and therefore prohibits rendering from large poses. We introduce a foreground-aware tri-discriminator that jointly learns the decomposition of the foreground head in 3D space by distilling prior knowledge from 2D image segmentation. Secondly, while compact and efficient, current hybrid 3D scene representations, like the tri-plane [5], introduce strong projection ambiguity for 360° camera poses, resulting in a 'mirrored face' on the back of the head. To address this issue, we present a novel 3D tri-grid volume representation that disentangles the frontal features from the back of the head while maintaining the efficiency of tri-plane representations.
Lastly, obtaining well-estimated camera extrinsics for in-the-wild back-of-head images for 3D GAN training is extremely difficult. Moreover, an image alignment gap exists between these and frontal images with detectable facial landmarks. This alignment gap causes a noisy appearance and unappealing head geometry. Thus, we propose a novel two-stage alignment scheme that robustly and consistently aligns images from any view, which significantly decreases the learning difficulty of 3D GANs. In particular, we propose a camera self-adaptation module that dynamically adjusts the positions of the rendering cameras to accommodate the alignment drift in back-of-head images.
Our framework substantially enhances 3D GANs' ability to adapt to in-the-wild full-head images from arbitrary views, as shown in Figure 1. The resulting 3D GAN not only generates high-fidelity 360° RGB images and geometry, but also achieves better quantitative metrics than state-of-the-art methods. With our model, we showcase compelling 3D full-head reconstruction from a single monocular-view image, enabling easily accessible 3D portrait creation. In summary, our main contributions are as follows:
• The first 3D GAN framework that enables view-consistent and high-fidelity full-head image synthesis with detailed geometry, renderable in 360°. We demonstrate our approach on high-quality monocular 3D head reconstruction from in-the-wild images.
• A novel tri-grid formulation that balances efficiency and expressiveness in representing 3D 360° head scenes.
• A foreground-aware tri-discriminator that disentangles 3D foreground head modeling from 2D background synthesis.
• A novel two-stage image alignment scheme that adaptively accommodates imperfect camera poses and misaligned image cropping, enabling training of 3D GANs from in-the-wild images with a wide camera pose distribution.
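To make the projection ambiguity concrete, the sketch below implements a generic tri-plane lookup of the kind used by EG3D-style representations (not PanoHead's tri-grid): a 3D point is projected onto three axis-aligned feature planes and the samples are combined. A point and its front/back mirror share the same XY projection, so that plane alone cannot separate them, which is the entanglement the proposed tri-grid adds extra depth layers to resolve.

```python
import torch
import torch.nn.functional as F

def sample_triplane(planes, xyz):
    """Generic tri-plane lookup (illustrative, not PanoHead's tri-grid).
    planes: (3, C, H, W) feature planes for XY, XZ, YZ; xyz: (N, 3) in [-1, 1]^3.
    Returns per-plane features (3, N, C); the usual volume feature is their sum."""
    coords = torch.stack([xyz[:, [0, 1]],    # projection onto the XY plane
                          xyz[:, [0, 2]],    # projection onto the XZ plane
                          xyz[:, [1, 2]]])   # projection onto the YZ plane
    grid = coords.unsqueeze(2)               # (3, N, 1, 2), as grid_sample expects
    feats = F.grid_sample(planes, grid, mode="bilinear", align_corners=False)
    return feats.squeeze(-1).permute(0, 2, 1)   # (3, N, C)

planes = torch.randn(3, 32, 64, 64)
front = torch.tensor([[0.3, 0.1, 0.8]])
back = torch.tensor([[0.3, 0.1, -0.8]])      # same point mirrored front/back
f, b = sample_triplane(planes, front), sample_triplane(planes, back)
print(torch.allclose(f[0], b[0]))            # True: the XY plane cannot separate them
```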
Firoze_Tree_Instance_Segmentation_With_Temporal_Contour_Graph_CVPR_2023
Abstract We present a novel approach to instance segmentation and counting for densely packed, self-similar trees using a top-view RGB image sequence. We propose a solution that leverages pixel content, shape, and self-occlusion. First, we perform an initial over-segmentation of the image sequence and aggregate structural characteristics into a contour graph with temporal information incorporated. Second, using a graph convolutional network and its inherent local message passing abilities, we merge adjacent tree crown patches into a final set of tree crowns. In various studies and comparisons, our method is superior to prior methods and yields high-accuracy instance segmentation and counting despite the trees being tightly packed. Finally, we provide several forest image sequence datasets, captured at different altitudes and leaf conditions, suitable for subsequent benchmarking and evaluation.
1. Introduction Trees in forests are tightly spaced, partially overlapping 3D objects with complex boundaries. Tree instance segmentation is critical in several domains. For example, ecosystem services and agriculture need to segment and count trees over large areas in order to obtain information about ecological balance, environmental health, and timber inventory. Counting trees from the ground perspective is inefficient, does not scale, and is challenging to automate because of many occlusions with branches and low accessibility. In this paper, we address tree instance segmentation and counting using overhead RGB image sequences captured by unmanned aerial vehicles (UAVs), especially during the green-leaf season when trees are most self-similar; see Fig. 1 for an illustration.
There is significant prior work in segmentation and counting, particularly in the field of instance segmentation. While some approaches make use of LiDAR or RGB-D images (see the survey paper [5]), we focus on easier-to-obtain uncalibrated RGB image sequences. Prior works based on uncalibrated RGB images can be largely organized into three groups. The first group seeks to count individual objects; these approaches often use density estimation and do not focus on segmentation [35, 36]. The second group relies on convolutional neural networks (CNNs) applied directly to image pixels, such as Mask R-CNN [26]; however, for abutting and self-similar objects, e.g., trees, distinguishing individual instances is hard. The third group of techniques (e.g., Ke et al. [28], Newell and Deng [43]) models object contours as a graph, where each pixel corresponds to a node, and uses graph convolutional networks to complete individual contours; nevertheless, abutting and self-similar instances also hinder these methods. Yet other methods, such as those in digital forestry research, exploit domain-specific features such as assumed differences between tree species or fall leaf coloring (i.e., during one brief period of the year, the leaf color of adjacent crowns is often different).
Figure 2. Pipeline: The input image sequence is analyzed. Initial contours and features (pixel, shape, and self-occlusion features) are detected and organized into a contour graph (Sec. 3.1) that is refined by GCN-based edge classification and node merging (Sec. 3.2), resulting in the final output mask.
Our approach is motivated by a key observation: the leaf patterns of two tightly packed tree crowns are highly similar, so additional features beyond leaf patterns are necessary to perform segmentation. Hence, we consider features based on tree crown shape, because it is unlikely to observe a rectangular tree crown. Beyond shape features, we also use the changing self-occlusion patterns captured across the different frames to aid segmentation.
At a high level, our proposed approach simultaneously exploits pixel content, shape, and self-occlusion, which collectively define a graph-based structure that we call a contour graph. Each node corresponds to an initial over-segmentation of a tree crown fragment; i.e., each node corresponds to a region enclosed by a closed contour.
We then learn features on this contour graph via a graph convolutional network (GCN) to determine which nodes should be merged; our method decides whether two nearby regions correspond to the same tree crown. Notably, a tree crown fragment exhibits various simultaneous features that we can exploit to discern one tree crown from another, even if adjacent crowns are of the same species and have very similar leaf patterns and colors. Altogether, this leads to an instance segmentation method that can process overhead RGB image sequences of dense forests even when all leaves are mostly similar in color during summer. See Fig. 2 for an overview of our approach.
For creating the model and evaluating its performance, we are not aware of suitable databases of dense forests with consecutive frames. To address this: (a) we leveraged developmental tree models [33, 57] to produce a synthetic dataset with annotated tree crowns (5,157 trees in total); (b) we manually labeled real-world image sequences captured by UAV over three large forests (6,527 trees in total), collectively spanning approximately 3,680,000 m². We will make our self-collected datasets publicly available. On these datasets, we show that the proposed method achieves a segmentation accuracy of 73.6 and a count accuracy of 89.8% on average, compared against multiple recent instance segmentation approaches. Our main contributions include:
• an instance segmentation method to robustly process densely packed trees whose instances are abutting, partially overlapping, and self-similar,
• tree crown counting, which is beneficial to ecosystem services and digital forestry, and
• a curated dataset of multiple labeled and unlabeled, temporally continuous dense forests suitable for future research.
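A minimal sketch of the merge decision follows, assuming nodes carry concatenated pixel/shape/self-occlusion features and edges connect touching regions; it uses a plain normalized-adjacency message-passing layer and an edge classifier, which approximates but is not the authors' exact GCN.

```python
import torch
import torch.nn as nn

class ContourGraphMerger(nn.Module):
    """Minimal sketch: message passing over the contour graph, then per-edge
    merge scores. Feature sizes and the two GCN rounds are illustrative choices."""
    def __init__(self, in_dim=64, hidden=64):
        super().__init__()
        self.gcn1 = nn.Linear(in_dim, hidden)
        self.gcn2 = nn.Linear(hidden, hidden)
        self.edge_head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                       nn.Linear(hidden, 1))

    def forward(self, node_feats, adj, edges):
        # node_feats: (N, in_dim) pixel + shape + self-occlusion features per region
        # adj: (N, N) adjacency (1 where two contour regions touch); edges: (E, 2)
        adj = adj.float()
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        h = torch.relu(self.gcn1((adj @ node_feats) / deg + node_feats))
        h = torch.relu(self.gcn2((adj @ h) / deg + h))
        pair = torch.cat([h[edges[:, 0]], h[edges[:, 1]]], dim=1)
        return torch.sigmoid(self.edge_head(pair)).squeeze(-1)  # (E,) merge probability

# Edges scored above a threshold are merged (e.g. with union-find) into final crowns.
```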
Bafghi_A_New_Dataset_Based_on_Images_Taken_by_Blind_People_CVPR_2023
Abstract Our goal is to improve upon the status quo for designing image classification models trained in one domain that perform well on images from another domain. Complementing existing work in robustness testing, we introduce the first dataset for this purpose that comes from an authentic use case in which photographers wanted to learn about the content of their images. We built a new test set using 8,900 images taken by people who are blind, for which we collected metadata indicating the presence versus absence of 200 ImageNet object categories. We call this dataset VizWiz-Classification. We characterize this dataset and how it compares to mainstream datasets for evaluating how well ImageNet-trained classification models generalize. Finally, we analyze the performance of 100 ImageNet classification models on our new test dataset. Our fine-grained analysis demonstrates that these models struggle on images with quality issues. To enable future extensions of this work, we share our new dataset with an evaluation server at: https://vizwiz.org/tasks-and-datasets/image-classification.
1. Introduction A common approach for designing computer vision so-lutions is to leverage large-scale datasets to train algorithms. Yet, for many real-world applications, it is not only ineffi-cient to curate such training datasets but also challenging or infeasible. To address this problem, robustness testing was recently introduced with the focus of improving the perfor-mance of models trained for one domain on a test set in a different domain. In this paper, we focus on robustness test-ing for the image classification problem. To date, progress with classification robustness testing has been possibly largely because of numerous publicly-available test datasets with distribution shifts from the orig-inal domain. While such datasets have been beneficial in catalyzing progress, they are limited in that they originatefrom contrived settings. For example, ImageNet-C [15] consists of real images with synthetically generated corrup-tions to assess model robustness for corrupted images. Yet, as shown in prior work [3], images curated from contrived settings can lack the diversity of challenges that emerge in real-world applications. A consequence of this lack of di-versity in test datasets is that algorithm developers do not receive feedback about whether their methods generalize to the range of plausible real-world vision challenges. We address the above gap for robustness testing by intro-ducing a new test set for image classification. It consists of 8,900 images taken by people who are blind who were au-thentically trying to learn about images they took with their mobile phone cameras. For each image, we asked crowd-workers to indicate which from 200 object categories were present. We call the resulting dataset VizWiz-Classification. Examples demonstrating how labelled images in our new dataset compare to those in a related robustness testing dataset are shown in Figure 1. We next analyze how our dataset compares to six existing robustness testing datasets and benchmark the performance of 100 modern image clas-sification models on this dataset to highlight challenges and opportunities that emerge for the research community. Success on our new dataset could benefit real-world ap-plications today. Already, a growing number of blind people are sharing their images with services such as Microsoft’s Seeing AI, Google’s Lookout, and TapTapSee, which rec-ognize a small number of object categories. Success could broaden such benefits to a longer tail of categories includ-ing those underrepresented in the developing world where it can be laborious/infeasible to collect large, labeled datasets, especially from such a specific population as people who are blind. More generally, our new dataset challenge will encourage developing algorithms that handle a larger diver-sity of real-world challenges. This could benefit applica-tions with similar challenges such as robotics and wearable lifelogging. Finally, image classification is a precursor for many downstream tasks and so we expect progress on our dataset to enable progress on downstream tasks such as ob-ject detection, segmentation, and tracking. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 16261 Figure 1. Example labelled images from our new VizWiz-Classification dataset, ImageNet [7], and ImageNet-C [15], where each has the label of “Table lamp”. 
When comparing our dataset to these prior works: (1) our images were taken by blind people who wanted to learn about their environment, whereas ImageNet images were collected from the Internet and ImageNet-C images consist of ImageNet images that were synthetically corrupted; and (2) our images can have multiple labels (e.g., the "Table lamp" example also includes a "Lampshade"), while ImageNet and ImageNet-C permit only a single label, which can lead to issues for prediction models when multiple categories are present in an image.
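Because VizWiz-Classification images can carry several valid labels, evaluation has to credit a prediction that matches any of them. A minimal sketch of such a multi-label top-k check is shown below; the array names and the exact protocol are illustrative assumptions, not the paper's official evaluation code.

```python
import numpy as np

def topk_multilabel_accuracy(scores, label_sets, k=5):
    """Fraction of images whose top-k predicted classes hit at least one
    ground-truth label. `scores` is (N, C); `label_sets` is a list of sets."""
    topk = np.argsort(scores, axis=1)[:, -k:]          # indices of the k highest scores
    hits = [len(set(row) & labels) > 0 for row, labels in zip(topk, label_sets)]
    return float(np.mean(hits))

# Toy example: 3 images, 4 ImageNet-style classes.
scores = np.array([[0.1, 0.7, 0.1, 0.1],
                   [0.4, 0.2, 0.3, 0.1],
                   [0.2, 0.2, 0.2, 0.4]])
labels = [{1}, {2, 3}, {0}]                            # multi-label ground truth
print(topk_multilabel_accuracy(scores, labels, k=1))   # top-1 accuracy
print(topk_multilabel_accuracy(scores, labels, k=2))   # top-2 accuracy
```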
Huang_Diffusion-Based_Generation_Optimization_and_Planning_in_3D_Scenes_CVPR_2023
Abstract
We introduce the SceneDiffuser, a conditional generative model for 3D scene understanding. SceneDiffuser provides a unified model for solving scene-conditioned generation, optimization, and planning. In contrast to prior work, SceneDiffuser is intrinsically scene-aware, physics-based, and goal-oriented. With an iterative sampling strategy, SceneDiffuser jointly formulates the scene-aware generation, physics-based optimization, and goal-oriented planning via a diffusion-based denoising process in a fully differentiable fashion. Such a design alleviates the discrepancies among different modules and the posterior collapse of previous scene-conditioned generative models. We evaluate the SceneDiffuser on various 3D scene understanding tasks, including human pose and motion generation, dexterous grasp generation, path planning for 3D navigation, and motion planning for robot arms. The results show significant improvements compared with previous models, demonstrating the tremendous potential of the SceneDiffuser for the broad community of 3D scene understanding.
1. Introduction
The ability to generate, optimize, and plan in 3D scenes is a long-standing goal across computer vision, graphics, and robotics. Various tasks have been devised to achieve these goals, fostering downstream applications in motion generation [32, 69, 73, 90], motion planning [42, 43, 60, 75], grasp generation [25, 31, 34], navigation [1, 95], scene synthesis [24, 47, 72, 82], embodied perception and manipulation [30, 39, 61], and autonomous driving [3, 52]. Despite rich applications and great successes, existing models designed for these tasks exhibit two fundamental limitations for real-world 3D scene understanding.

First, most prior work [8, 14, 32, 49, 60, 68-70, 73] leverages the conditional Variational Autoencoder (cVAE) for conditional generation in 3D scenes. The cVAE model utilizes an encoder-decoder structure to learn the posterior distribution and relies on the learned latent variables to sample. Although a cVAE is easy to train and sample from due to its simple architecture and one-step sampling procedure, it suffers from the posterior collapse problem [12, 17, 26, 64, 69, 84, 93]: the learned latent variable is ignored by a strong decoder, leading to limited generation diversity from these collapsed modes. Such collapse is further magnified in 3D tasks with stronger 3D decoders and more complex and noisy input conditions, e.g., natural 3D scans [9].

Second, despite the close relations among generation, optimization, and planning in 3D scenes, there lacks a unified framework that could address existing discrepancies among these models. Previous work [15, 34, 69] applies off-the-shelf physics-based post-optimization methods over outputs of generative models and often produces inconsistent and implausible generations, especially when transferring to novel scenes. Similarly, planners are usually standalone modules over the results of generative models [8, 14] for trajectory planning, or are learned separately with reinforcement learning (RL) [95], leading to gaps between planning and other modules (e.g., generation) during inference, especially in novel scenes where explorations are limited.

To tackle the above limitations, we introduce the SceneDiffuser, a conditional generative model based on the diffusion process. SceneDiffuser eliminates the discrepancies and provides a single home for scene-conditioned generation, optimization, and planning. Specifically, with a denoising process, it learns a diffusion model for scene-conditioned generation during training. In inference, SceneDiffuser jointly solves the scene-aware generation, physics-based optimization, and goal-oriented planning through a unified iterative guided-sampling framework. Such a design equips the SceneDiffuser with the following three advantages:

1. Generation: Building upon the diffusion model, SceneDiffuser solves the posterior collapse problem of scene-conditioned generative models. Since the forward diffusion process can be treated as data augmentation in 3D scenes, it helps traverse sufficient scene-conditioned distribution modes.
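The unified iterative guided-sampling idea described above can be pictured as a standard reverse-diffusion loop whose update is nudged by the gradient of a differentiable physics or goal cost. The following is a minimal DDIM-style sketch with toy stand-ins for the denoiser and the objective; it only illustrates the mechanism and is not SceneDiffuser's actual sampler.

```python
import torch

def guided_sample(eps_model, objective, scene, shape, alphas_cumprod, guide_scale=0.1):
    """Reverse diffusion with gradient guidance: each step is nudged by the
    gradient of a differentiable cost (e.g., collision or goal distance)
    evaluated on the current iterate. A sketch, not the paper's exact sampler."""
    x = torch.randn(shape)
    for t in reversed(range(len(alphas_cumprod))):
        a_t = alphas_cumprod[t]
        a_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0)
        eps = eps_model(x, t, scene)                      # scene-conditioned noise prediction
        x0 = (x - torch.sqrt(1 - a_t) * eps) / torch.sqrt(a_t)   # estimate of the clean sample
        x_in = x.detach().requires_grad_(True)
        cost = objective(x_in, scene)                     # scalar physics/goal cost
        grad = torch.autograd.grad(cost, x_in)[0]
        x = torch.sqrt(a_prev) * x0 + torch.sqrt(1 - a_prev) * eps - guide_scale * grad
    return x

# Toy stand-ins so the sketch runs end to end.
alphas = torch.linspace(0.99, 0.01, 50)
eps_model = lambda x, t, scene: torch.zeros_like(x)       # pretend denoiser
objective = lambda x, scene: ((x - scene) ** 2).sum()     # pull samples toward a "goal"
out = guided_sample(eps_model, objective, scene=torch.ones(2, 3),
                    shape=(2, 3), alphas_cumprod=alphas)
print(out.shape)
```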
Choi_TMO_Textured_Mesh_Acquisition_of_Objects_With_a_Mobile_Device_CVPR_2023
Abstract
We present a new pipeline for acquiring a textured mesh in the wild with a single smartphone, which offers access to images, depth maps, and valid poses. Our method first introduces an RGBD-aided structure from motion, which can yield filtered depth maps and refines camera poses guided by corresponding depth. Then, we adopt a neural implicit surface reconstruction method, which allows for a high-quality mesh, and develop a new training process for applying a regularization provided by classical multi-view stereo methods. Moreover, we apply differentiable rendering to fine-tune incomplete texture maps and generate textures which are perceptually closer to the original scene. Our pipeline can be applied to any common objects in the real world without the need for either in-the-lab environments or accurate mask images. We demonstrate results on captured objects with complex shapes and validate our method numerically against existing 3D reconstruction and texture mapping methods.
1. Introduction
Recovering the 3D geometry of objects and scenes is a longstanding challenge in computer vision and is essential to a broad range of applications. Depth sensing technologies range from highly specialized and expensive turn-table 3D scanners and structured-light scanners to commodity depth sensors. More recently, advances in mobile devices have enabled a new way of 3D capture of real-world environments with high-resolution imaging and miniaturized LiDAR. Specifically, modern smartphones such as the iPhone 13 Pro are equipped with an RGB camera, accelerometer, gyroscope, magnetometer, and LiDAR scanner. These various sensors can provide high-resolution images, low-resolution depth from the LiDAR scanner, and associated camera poses offered by off-the-shelf visual-inertial odometry (VIO) systems such as ARKit [1] and ARCore [2].

Figure 1. Example reconstruction results collected from a smartphone in the wild. (a) Data acquisition setup. (b) Images captured from a smartphone. (c) A reconstructed mesh. (d) A novel view of the textured mesh. Our proposed method can reconstruct a high-quality geometric mesh with a visually realistic texture.

Today's smartphones offer low-resolution depth maps [5] and valid poses. However, the depth maps are very noisy and suffer from the limited range of the depth sensor. Although this depth sensor can capture a simple 3D structure such as a wall or floor, it cannot reconstruct objects with complex and varying shapes. Thus, RGBD scanning methods [9, 23, 42, 57] are not suitable for these objects. Instead of the depth sensor, multi-view stereo (MVS) algorithms [13, 47, 62] reconstruct high-quality 3D geometry by matching feature correspondences across different RGB images and optimizing photometric consistency. While this pipeline is very robust in real-world environments, it misses the surface of low-textured areas [62]. Additionally, the resulting mesh generated by post-processing like Poisson reconstruction [27] heavily depends on the quality of matching, and the accumulated errors in correspondence matching often cause severe artifacts and missing surfaces. Because of the cumulative error from the above pipeline, the texture mapping process [53, 69] leads to undesirable results. Reconstructing high-fidelity texture and 3D geometry of real-world 3D objects remains an open challenge.

Main Results: In this paper, we present a practical method to capture a high-quality textured mesh of a 3D object in the wild, shown in Fig. 1. We first develop a smartphone app based on ARKit [1] to collect images, LiDAR depths, and poses. Although the smartphone provides quite valid pose estimates, acquiring fine-detail geometry and realistic texture requires highly accurate camera poses. Thus, we present an RGBD-aided Structure from Motion (SfM) which combines the advantages of both VIO [12] and SfM [46]. Since ARKit based on VIO is robust to the degradation of visual information such as low-textured surfaces, we perform incremental triangulation with initial poses obtained from ARKit. We also propose a depth filtering method to handle noisy depth from the smartphone.

Figure 2. Example objects collected by a smartphone in the wild.
These filtered depth points are exploited as an additional depth factor for bundle adjustment. Our RGBD-aided SfM can estimate highly precise camera poses due to the good initial poses and the additional constraints from the filtered depth.

Our 3D geometry reconstruction process adopts a neural implicit representation [50, 55] for surface reconstruction with volumetric rendering. We observe that the neural implicit representation can perform more complete and smoother reconstruction than the classical methods, which are known to be robust for 3D reconstruction in the wild. Furthermore, we introduce a new training method for neural implicit representations. In the early stage of training, we propose a regularization method that leverages the prior information from the classical MVS method. After obtaining a decent shape, we stop using the prior information and generate a sparse voxel octree to enable efficient sampling for training. Our training method can improve the performance of neural implicit representations. Consequently, we show the generalization capabilities of our surface reconstruction method on real-world objects collected by the smartphone.

Given a mesh extracted from the trained neural implicit representation, we can run classical texture mapping algorithms [53, 69] to generate texture maps that often exhibit blurring artifacts and color misalignment. We propose applying differentiable rendering [31] to fine-tune these texture maps obtained from classical texture mapping via a photometric loss. Compared to classical 3D reconstruction [23, 47, 62] and texture mapping [53, 69] approaches, our method shows a better ability to reconstruct the 3D model and produce realistic textures. We evaluate our approach on the data collected by our smartphone application. The main contributions of this paper are summarized as follows:

• We present a unified framework to reconstruct a textured mesh using a smartphone.
• We propose a depth filtering scheme for noisy depths and refine initial poses from ARKit by using bundle adjustment with a depth factor.
• Our pipeline builds on classical 3D reconstruction and texture mapping. We leverage a neural geometry representation to enable surface reconstruction of complex shapes and differentiable rendering to generate high-fidelity texture maps.
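To make the "bundle adjustment with a depth factor" idea concrete, the sketch below builds a residual vector that stacks the usual reprojection errors with depth errors against filtered LiDAR depth for a single camera. The function names, weighting, and single-camera setup are illustrative assumptions; a real pipeline would feed such residuals to a non-linear least-squares solver.

```python
import numpy as np

def ba_residuals(points_3d, cam_R, cam_t, K, obs_uv, obs_depth, depth_weight=0.5):
    """Sketch of a bundle-adjustment residual that augments the usual
    reprojection error with a depth term from (filtered) LiDAR depth.
    One camera, N points; names and weighting are illustrative only."""
    p_cam = (cam_R @ points_3d.T).T + cam_t             # world -> camera coordinates
    uv = (K @ p_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                          # perspective projection
    r_reproj = (uv - obs_uv).ravel()                     # pixel residuals
    r_depth = depth_weight * (p_cam[:, 2] - obs_depth)   # depth residuals
    return np.concatenate([r_reproj, r_depth])

# Toy usage with a pinhole camera and 2 points.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
pts = np.array([[0.0, 0.0, 2.0], [0.3, -0.1, 2.5]])
res = ba_residuals(pts, np.eye(3), np.zeros(3), K,
                   obs_uv=np.array([[320.0, 240.0], [380.0, 220.0]]),
                   obs_depth=np.array([2.0, 2.4]))
print(res)  # would be minimized by, e.g., scipy.optimize.least_squares in a full pipeline
```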
Chen_Meta-Causal_Learning_for_Single_Domain_Generalization_CVPR_2023
Abstract
Single domain generalization aims to learn a model from a single training domain (source domain) and apply it to multiple unseen test domains (target domains). Existing methods focus on expanding the distribution of the training domain to cover the target domains, but without estimating the domain shift between the source and target domains. In this paper, we propose a new learning paradigm, namely simulate-analyze-reduce, which first simulates the domain shift by building an auxiliary domain as the target domain, then learns to analyze the causes of domain shift, and finally learns to reduce the domain shift for model adaptation. Under this paradigm, we propose a meta-causal learning method to learn meta-knowledge, that is, how to infer the causes of domain shift between the auxiliary and source domains during training. We use the meta-knowledge to analyze the shift between the target and source domains during testing. Specifically, we perform multiple transformations on source data to generate the auxiliary domain, perform counterfactual inference to learn to discover the causal factors of the shift between the auxiliary and source domains, and incorporate the inferred causality into factor-aware domain alignments. Extensive experiments on several benchmarks of image classification show the effectiveness of our method.
1. Introduction
Single domain generalization [28] aims to generalize a model trained using one training domain (source domain) to multiple unseen test domains (target domains). Since only one source domain is given and the target domains are out-of-distribution and unavailable during training, single domain generalization is a challenging task and attracts increasing interest. Existing works have made considerable successes through expanding the distribution of the source domain, typically by data augmentation [19, 28, 34] or by learning adaptive data normalization [8]. However, such successes have been achieved without explicitly considering the domain shift between the source and target domains, which limits the generalization performance of the model in real-world scenarios.

Figure 1. Illustration of the simulate-analyze-reduce paradigm. In this paradigm, we first simulate the domain shift by constructing an auxiliary domain as the unseen target domain, then learn to analyze the domain shift, and finally learn to reduce the domain shift based on inferred causes.

In this paper, we propose a new learning paradigm, namely simulate-analyze-reduce, to address single domain generalization by enabling the model to analyze the real domain shift between the source domain and an unseen target domain. This new paradigm is shown in Figure 1. We first build an auxiliary domain as the target domain to simulate the real domain shift between the source and target domains, since the target data is unavailable during training. We then learn to analyze the intrinsic causal factors of the domain shift to facilitate the subsequent model adaptation. Finally, we learn to reduce the domain shift with its inferred causes.

Under this paradigm, we propose a meta-causal learning method to learn the meta-knowledge about how to infer the causes of the simulated domain shift between the auxiliary and source domains via causal inference in training, and then apply the meta-knowledge to analyze the real domain shift between the target and source domains during testing, through which the source and given target domains are adaptively aligned. Specifically, we perform multiple transformations on source data to generate an auxiliary domain with great diversity. Then we build a causal graph to represent the dependency among data, variant factors, semantic concepts and category labels, and conduct counterfactual inference over the causal graph to exploit the intrinsic causality of the simulated domain shift between the auxiliary and source domains. For each sample in the auxiliary domain, we construct counterfactual scenes by intervening on variant factors to infer their causal effects on the category prediction, and these inferred causal effects of variant factors can be regarded as the causes of the domain shift.
To reduce the domain shift, we propose a factor-aware domain alignment by learning and integrating multiple feature mappings, where an effect-to-weight network is designed to convert the causal effects of variant factors into the weights of feature mappings. During testing, the distribution discrepancy between the input target sample and the source domain is analyzed and reduced by applying the learnt meta-knowledge, i.e., inferring the causal effects of variant factors and incorporating them into the factor-aware domain alignment.

In summary, the main contributions of this paper are as follows:

• We propose a novel learning paradigm, simulate-analyze-reduce, for single domain generalization. This paradigm empowers the model with the ability to estimate the domain shift between the source domain and unseen target domains, thus boosting the model adaptation across different domains.
• We propose a meta-causal learning method based on counterfactual inference to learn the meta-knowledge about analyzing the intrinsic causality of domain shift, thus facilitating the reduction of domain shift.
• Our method achieves state-of-the-art results on several benchmarks of image classification, especially on the more challenging tasks with a large domain shift, clearly demonstrating the effectiveness of our method.
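A rough sketch of how inferred causal effects could weight multiple feature mappings, in the spirit of the factor-aware domain alignment and effect-to-weight network described above. The module below is an illustrative stand-in with assumed shapes, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FactorAwareAlignment(nn.Module):
    """Combine several factor-specific feature mappings with weights produced
    from inferred causal effects (an illustrative stand-in for the paper's
    effect-to-weight network)."""
    def __init__(self, feat_dim, num_factors):
        super().__init__()
        self.mappings = nn.ModuleList(
            [nn.Linear(feat_dim, feat_dim) for _ in range(num_factors)])
        self.effect_to_weight = nn.Sequential(
            nn.Linear(num_factors, num_factors), nn.Softmax(dim=-1))

    def forward(self, feat, causal_effects):
        # causal_effects: (B, K), e.g. drops in class score under each intervention
        w = self.effect_to_weight(causal_effects)                      # (B, K)
        mapped = torch.stack([m(feat) for m in self.mappings], dim=1)  # (B, K, D)
        return (w.unsqueeze(-1) * mapped).sum(dim=1)                   # factor-weighted features

x = torch.randn(4, 128)            # features of a batch of target samples
effects = torch.rand(4, 6)         # causal effects of 6 variant factors
aligned = FactorAwareAlignment(128, 6)(x, effects)
print(aligned.shape)               # torch.Size([4, 128])
```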
He_Grad-PU_Arbitrary-Scale_Point_Cloud_Upsampling_via_Gradient_Descent_With_Learned_CVPR_2023
Abstract
Most existing point cloud upsampling methods have roughly three steps: feature extraction, feature expansion and 3D coordinate prediction. However, they usually suffer from two critical issues: (1) a fixed upsampling rate after one-time training, since the feature expansion unit is customized for each upsampling rate; and (2) outliers or shrinkage artifacts caused by the difficulty of precisely predicting 3D coordinates or residuals of upsampled points. To address them, we propose a new framework for accurate point cloud upsampling that supports arbitrary upsampling rates. Our method first interpolates the low-res point cloud according to a given upsampling rate, and then refines the positions of the interpolated points with an iterative optimization process, guided by a trained model estimating the difference between the current point cloud and the high-res target. Extensive quantitative and qualitative results on benchmarks and downstream tasks demonstrate that our method achieves state-of-the-art accuracy and efficiency.
1. Introduction
With the popularity of commercial 3D scanners, capturing point clouds from real-world scenes becomes convenient and affordable. Thus point clouds have been widely utilized in applications such as autonomous driving, robotics, remote sensing, etc. [11]. That being said, the raw point clouds produced by 3D scanners or depth cameras are often sparse and noisy, sometimes with small holes [16], which greatly affects the performance of downstream tasks, such as semantic classification [38], rendering [5], surface reconstruction [1], etc. Consequently, it is vital to upsample a raw point cloud to a dense, clean and complete one, with more geometric details.

Figure 1. The comparison between previous point cloud upsampling methods and ours, where NN denotes the deep neural network. Given the low-res input PL, previous methods directly predict the 3D coordinates or residuals of the high-res output PH, and most of them need retraining to satisfy various upsampling rates. Instead, we first interpolate points in Euclidean space, which separates point generation from network learning and thus achieves arbitrary upsampling rates. Then we formulate the refinement of interpolated points as an iterative process aiming to minimize the learned point-to-point distance function NN(PI).

The common practice towards point cloud upsampling usually consists of three key steps [15, 16, 18, 27, 30, 41, 43]: (1) feature extraction, capturing point-wise semantic features from the low-res point clouds; (2) feature expansion, expanding the extracted features w.r.t. the specified upsampling rate; and (3) coordinate prediction, predicting the 3D coordinates or residuals of upsampled points from the expanded features. However, there are two critical issues in this paradigm. Firstly, these models are usually dependent on the upsampling rate. To support different upsampling rates, multiple models need to be trained. Secondly, precisely estimating the 3D coordinates or offsets to the target points is hard, which leads to outliers or shrinkage artifacts [20]. Although some recent methods try to handle the fixed upsampling rate problem via affine combination of neighboring points [19, 29] or implicit neural representation [8, 46], their performance is still limited by the inaccuracy of 3D coordinate prediction.

In this paper, we propose a novel point cloud upsampling algorithm to address these two issues. In particular, our method decouples the upsampling process into two steps. First, we propose to directly upsample the input low-res point cloud in Euclidean space by midpoint interpolation, instead of expanding in the feature space. The number of interpolated points is determined by the given upsampling rate.
This makes the learning part independent of the upsampling module and helps the whole method generalize to arbitrary upsampling rates. Secondly, the interpolated point cloud is refined by an iterative process aiming to minimize the difference between the interpolated point cloud and the ground truth high-res point cloud. To measure the difference, we choose to use the point-to-point distance, which eliminates the need for surface extraction and can handle arbitrary topologies. Moreover, compared to coordinates (∈ R^3), the point-to-point distance (∈ R^1) is an easier objective to optimize, and thus results in much more accurate upsampling results in our experiments. Since the ground truth point cloud is not available during inference, a model is trained to approximate the point-to-point distance function in a differentiable manner, thus termed P2PNet. To improve the training efficiency, we come up with a simple but effective training scheme: adding Gaussian noise to the data to simulate varying degrees of difference between the input and the ground truth point cloud. The P2PNet is then trained to minimize the difference, i.e., the refinement step is regarded as a distance minimization process.

In this paper, we propose a novel framework for accurate point cloud upsampling with arbitrary upsampling rates. Specifically, our contributions can be summarized as:

• Decompose the upsampling problem into midpoint interpolation and location refinement, which achieves arbitrary upsampling rates.
• Formulate the refinement step as a point-to-point distance minimization process.
• Propose the P2PNet to estimate the point-to-point distance in a differentiable way.

Extensive experiments show that our method significantly outperforms existing methods in accuracy, efficiency, robustness, and generalization to arbitrary upsampling rates, and also improves the performance of downstream tasks such as semantic classification and surface reconstruction.
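The two-step recipe above (midpoint interpolation in Euclidean space, then refinement by gradient descent on a learned point-to-point distance) can be sketched compactly. In the snippet below, a toy analytic distance replaces the trained P2PNet so the example runs end to end; everything else mirrors the described procedure only at a schematic level.

```python
import torch

def midpoint_interpolate(points, k=4):
    """Insert midpoints between each point and its k nearest neighbours, and
    keep the original points as well (a simple arbitrary-rate upsampler)."""
    d = torch.cdist(points, points)                     # (N, N) pairwise distances
    knn = d.topk(k + 1, largest=False).indices[:, 1:]   # skip self at index 0
    mids = 0.5 * (points.unsqueeze(1) + points[knn])    # (N, k, 3) midpoints
    return torch.cat([points, mids.reshape(-1, 3)], dim=0)

def refine(points, distance_net, steps=10, lr=0.01):
    """Iteratively move points along the negative gradient of a learned
    point-to-point distance (a stand-in for the paper's P2PNet)."""
    p = points.clone().requires_grad_(True)
    opt = torch.optim.SGD([p], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        distance_net(p).sum().backward()                # predicted per-point distance
        opt.step()
    return p.detach()

# Toy stand-ins: a sparse patch and a "distance" that pulls points to the unit sphere.
sparse = torch.randn(64, 3)
dist_to_sphere = lambda p: (p.norm(dim=1) - 1.0) ** 2
dense = refine(midpoint_interpolate(sparse, k=4), dist_to_sphere)
print(dense.shape)                                      # (320, 3) = 64 + 64 * 4 points
```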
Hwang_Text2Scene_Text-Driven_Indoor_Scene_Stylization_With_Part-Aware_Details_CVPR_2023
Abstract
We propose Text2Scene, a method to automatically create realistic textures for virtual scenes composed of multiple objects. Guided by a reference image and text descriptions, our pipeline adds detailed texture on labeled 3D geometries in the room such that the generated colors respect the hierarchical structure or semantic parts that are often composed of similar materials. Instead of applying flat stylization on the entire scene at a single step, we obtain weak semantic cues from geometric segmentation, which are further clarified by assigning initial colors to segmented parts. Then we add texture details for individual objects such that their projections on image space exhibit feature embedding aligned with the embedding of the input. The decomposition makes the entire pipeline tractable with a moderate amount of computation resources and memory. As our framework utilizes the existing resources of image and text embedding, it does not require dedicated datasets with high-quality textures designed by skillful artists. To the best of our knowledge, it is the first practical and scalable approach that can create detailed and realistic textures of the desired style that maintain structural context for scenes with multiple objects.
1. Introduction
Virtual spaces provide an immersive experience for the metaverse, films, or games. With increasing demands for virtual environments, various applications seek practical methods to create realistic 3D scenes with high-quality textures. Currently, skillful artists need to manually create 3D assets and accompanying textures with careful parameterization, which is not scalable enough to account for the diverse content the industry is heading for. Scenes can also be populated with existing 3D database models or created with recent shape-generation approaches using data-driven methods [33, 54]. However, most of them lack texture information or are limited to simple coloring.

Figure 2. Our scene stylization results. Given a target image It and the style text ts, Text2Scene can produce the stylized results for the entire scene.

To build realistic content, we need fine details containing the artistic nuances of styles that obey the implicit correlations with geometric shapes and semantic structure. Recent works provide methods to color a single object with the help of differentiable rendering [5], but often they are limited to a single texture or blurred boundaries [6, 21, 27, 29, 52]. More importantly, the 3D objects are often textured in isolation, and only limited attempts exist to add visual appearances for large-scale scenes with multiple objects [14, 15, 19]. The biggest challenge is adding a consistent style for an entire scene while still accounting for the boundaries of different materials due to the functional and semantic relationship between parts, as observed within real-world scenes.

Our proposed Text2Scene adds plausible texture details on 3D scenes without explicit part labels or large-scale data with complex texturing. We take inspiration from abundant 3D shape and image datasets and decompose the problem into sub-parts such that the entire scene can be processed with commodity memory and computation. Given scenes of multiple objects of 3D mesh geometry, we separately handle walls and individual objects. Specifically, the stylization of walls is formulated as texture retrieval, and the objects are initialized with base colors. From the base color assignment, we can deduce the part-level relationship for stylization and further refine it in later stages, such that the rendered images are close to the input text within the joint embedding space of foundational models.

Our coarse-to-fine strategy keeps the problem tractable yet generates high-quality texture with clean part boundaries. We first create segments of the input mesh such that the segment boundaries align with low-level geometric cues. Then we start with the simplified problem of assigning a color per segment. Interestingly, the prior obtained from large-scale image datasets assigns similar colors to parts with similar textures, reflecting the semantic context or symmetry, as shown in Figure 1. We add the detailed texture on individual objects as an additional perturbation on the assigned base colors by enforcing constraints on the image features of their projections. The additional perturbations are high-frequency neural fields added to the base color.
In summary, Text2Scene is a new method that
• can easily generate realistic texture colors for a scene with the desired style provided by text or an image;
• can add detailed texture that respects the semantic part boundaries of individual objects; and
• can process the entire scene without a large amount of textured 3D scenes or an extensive memory footprint.

We expect the proposed approach to enable everyday users to quickly populate virtual scenes of their choice, and enjoy the possibility of next-generation technology with high-quality visual renderings.
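A minimal sketch of the base-color stage described above: one color per geometric segment is optimized so that a flat rendering moves toward a target embedding in a CLIP-like joint space. The encoder here is a toy placeholder (the actual pipeline relies on pretrained foundational models and proper rendering), so the snippet only illustrates the optimization loop.

```python
import torch

def optimize_segment_colors(seg_mask, text_emb, image_encoder, steps=50, lr=0.05):
    """One RGB colour per geometric segment is optimised so that the embedding
    of the flat-coloured rendering moves toward a target text/style embedding.
    `image_encoder` stands in for a CLIP-like model; this is a sketch, not the
    paper's exact pipeline."""
    num_segs = int(seg_mask.max().item()) + 1
    colors = torch.rand(num_segs, 3, requires_grad=True)      # one colour per segment
    opt = torch.optim.Adam([colors], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        img = colors[seg_mask]                                 # (H, W, 3) flat rendering
        emb = image_encoder(img.permute(2, 0, 1).unsqueeze(0)) # (1, D) pseudo-embedding
        loss = 1 - torch.cosine_similarity(emb, text_emb, dim=-1).mean()
        loss.backward()
        opt.step()
    return colors.detach().clamp(0, 1)

# Toy stand-ins: a 2-segment mask and an "encoder" that averages pixels.
mask = torch.zeros(32, 32, dtype=torch.long); mask[:, 16:] = 1
toy_encoder = lambda x: x.mean(dim=(2, 3))                     # (1, 3) pseudo-embedding
target = torch.tensor([[0.9, 0.1, 0.1]])                       # "reddish" style target
print(optimize_segment_colors(mask, target, toy_encoder))
```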
Chi_HDR_Imaging_With_Spatially_Varying_Signal-to-Noise_Ratios_CVPR_2023
Abstract
While today's high dynamic range (HDR) image fusion algorithms are capable of blending multiple exposures, the acquisition is often controlled so that the dynamic range within one exposure is narrow. For HDR imaging in photon-limited situations, the dynamic range can be enormous and the noise within one exposure is spatially varying. Existing image denoising algorithms and HDR fusion algorithms both fail to handle this situation, leading to severe limitations in low-light HDR imaging. This paper presents two contributions. Firstly, we identify the source of the problem. We find that the issue is associated with the co-existence of (1) spatially varying signal-to-noise ratio, especially the excessive noise due to very dark regions, and (2) a wide luminance range within each exposure. We show that while the issue can be handled by a bank of denoisers, the complexity is high. Secondly, we propose a new method called the spatially varying high dynamic range (SV-HDR) fusion network to simultaneously denoise and fuse images. We introduce a new exposure-shared block within our custom-designed multi-scale transformer framework. In a variety of testing conditions, the performance of the proposed SV-HDR is better than the existing methods.
1. Introduction
Today's high dynamic range (HDR) image fusion algorithms have demonstrated remarkable performances in blending images across a wide range of luminance levels. Many algorithms are able to handle an interior room with a sunlit view, of which the overall dynamic range is on the order of 100000:1 or more. However, most of these algorithms are designed for well-illuminated scenes. Even in the shortest exposure frame, the noise is maintained at a modest level so that the algorithm can focus on the blending task. The question we ask in this paper is: what if we push the shortest exposure to a photon-starving condition?

Figure 1. [Top] Real captures using a Sony ILCE-7M2 camera. Three low-dynamic range (LDR) images are captured. [Bottom] Image denoising and HDR fusion results of Kalantari [17], Wu [36], NHDRRNet [39], and the proposed method.

Such an extreme HDR problem arises in many low-light scenarios. Figure 1 is a real image captured by a Sony ILCE-7M2 camera. The imaging condition is a night-time scenario in front of a building. The challenge of the problem is the co-existence of heavy noise in the darkest spots of the image and the high dynamic range. We refer to this as the spatially varying signal-to-noise ratio (SNR) problem, where brighter pixels have higher SNR and darker pixels have lower SNR.

The goal of this paper is to articulate the spatially varying SNR problem. We emphasize the difficulty of the problem by referring to the performance of three state-of-the-art HDR fusion algorithms, namely Kalantari and Ramamoorthi [17], Wu et al. [36], and NHDRRNet [39]. As we can see from Figure 1, these methods produce disappointing results, mostly failing to denoise the dark regions.

The position of the paper can be visualized in Figure 2. While existing HDR fusion methods can handle the blending task, the individual exposures are sufficiently high so that the amount of noise is limited. Single image denoisers today seldom handle the dynamic range problem; they mostly focus on a tonemapped image normalized to [0,1]. Therefore, when facing a wide dynamic range scene, we need multiple denoisers to denoise the images before blending them. The proposed method solves both the noise problem and the dynamic range problem at once.

Figure 2. Traditional HDR algorithms can handle high SNR cases. Individual denoisers Dn1, Dn2, Dn3 have narrow operating regimes. SV-HDR offers a wide dynamic range coverage with denoising capability.

The main contribution of this paper is a new HDR fusion and denoising network called the spatially varying high dynamic range network (SV-HDR). SV-HDR simultaneously denoises the image and blends three exposures into a single HDR image. Our network is a transformer-based approach with three customized designs: (1) a multi-exposure transformer block to extract features, with transformers that are adaptive to the varying SNRs; (2) an exposure-share block to blend the features coming from the three exposures; and (3) a multi-scale blending strategy to capture the local and global variations.
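For context, the classical alternative that the paper argues against can be written as a noise-aware weighted merge of the exposures, where dark, low-SNR pixels receive small weights. The sketch below is this classical baseline under simple shot-plus-read-noise assumptions; it is not the proposed SV-HDR network.

```python
import numpy as np

def merge_exposures(images, exposure_times, read_noise=1e-3):
    """Classical noise-aware HDR merge (a baseline, not SV-HDR itself): each
    linear-domain exposure is scaled to radiance and averaged with weights
    proportional to its estimated per-pixel SNR, so dark, noisy pixels in the
    short exposure contribute less."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        radiance = img / t
        variance = img / t**2 + read_noise           # shot noise ~ signal, plus read noise
        w = 1.0 / variance                            # inverse-variance (SNR-aware) weight
        num += w * radiance
        den += w
    return num / den

# Toy usage: three simulated exposures of the same linear scene.
scene = np.random.rand(4, 4) * 10
times = (0.01, 0.1, 1.0)
exposures = [np.clip(scene * t + np.random.randn(4, 4) * 0.05, 0, None) for t in times]
hdr = merge_exposures(exposures, list(times))
print(hdr.round(2))
```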
Cherian_Are_Deep_Neural_Networks_SMARTer_Than_Second_Graders_CVPR_2023
Abstract
Recent times have witnessed an increasing number of applications of deep neural networks towards solving tasks that require superior cognitive abilities, e.g., playing Go, generating art, question answering (e.g., ChatGPT), etc. Such dramatic progress raises the question: how generalizable are neural networks in solving problems that demand broad skills? To answer this question, we propose SMART: a Simple Multimodal Algorithmic Reasoning Task, and the associated SMART-101 dataset (publicly available at https://doi.org/10.5281/zenodo.7761800), for evaluating the abstraction, deduction, and generalization abilities of neural networks in solving visuo-linguistic puzzles designed specifically for children in the 6–8 age group. Our dataset consists of 101 unique puzzles; each puzzle comprises a picture and a question, and their solution needs a mix of several elementary skills, including arithmetic, algebra, and spatial reasoning, among others. To scale our dataset towards training deep neural networks, we programmatically generate entirely new instances for each puzzle while retaining their solution algorithm. To benchmark the performance on the SMART-101 dataset, we propose a vision-and-language meta-learning model that can incorporate varied state-of-the-art neural backbones. Our experiments reveal that while powerful deep models offer reasonable performances on puzzles in a supervised setting, they are not better than random accuracy when analyzed for generalization; filling this gap may demand new multimodal learning approaches.
1. Introduction
"An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves." (The Dartmouth Summer Project on AI, 1956)

Deep learning powered AI systems have been increasing in their data modeling abilities with ever more vigor in recent times, with compelling applications emerging frequently, many of which may even seem to challenge human abilities. A few notable such feats include, but are not limited to, game playing (e.g., AlphaGo [60]), language-guided image generation (e.g., the recent DALLE-2 [54] and ImageGen [56]), creative story writing (e.g., using GPT-3 [10]), solving university level math problems [17], algorithmic inference [20], and general question-answering/dialog (e.g., ChatGPT [48] and variants). Such impressive performances have prompted an introspection into the foundation of what constitutes artificial intelligence and deriving novel tasks that could challenge deep models further [13, 37, 45, 55].

Figure 1. An example puzzle instance from our SMART-101 dataset generated using our programmatic augmentation method. Question: Bird Bobbie jumps on a fence from the post on the left end to the other end. Each jump takes him 4 seconds. He makes 4 jumps ahead and then 1 jump back. Then he again makes 4 jumps ahead and 1 jump back, and so on. In how many seconds can Bobbie get from one end to the other end? Answer options: A: 64, B: 48, C: 56, D: 68, E: 72. Solving this puzzle needs various skills such as counting the number of posts, spatially locating Bobbie, and using the details in the question to derive an algorithm for the solution. At a foundational level, a reasoning agent needs to recognize abstracted objects such as posts and identify the bird. (The answer is C.)

While deep neural networks offer compelling performances on specialized tasks on which they are trained, (i) how well do they model abstract data, attend to key entities, and transfer knowledge to solve new problems? (ii) how fluid are they in acquiring new skills? and (iii) how effective are they in the use of language for visual reasoning? We task ourselves to understand and seek a way to answer these questions for state-of-the-art (SOTA) vision and language deep learning models. An approach that has been taken several times in the past is to design specialized datasets that can measure the cognitive abilities of well-trained neural networks. For example, in CLEVR [34], a diagnostic dataset is proposed that comprises visuo-linguistic spatial reasoning problems. The abstraction abilities of neural networks have been explored towards solving types of Bongard problems [33, 47], and human IQ puzzles (e.g., Raven's progressive matrices) have been extended to evaluate neural reasoning abilities [7, 8, 31, 49, 64, 66, 72, 75].
However, while the puzzles in these prior works are often seemingly diverse, they are often confined to a common setting and may need only specialized skill sets, bringing in inductive biases that could be exploited by well-crafted deep learning models, thereby solving such puzzles with near perfect accuracy [59, 64].

In this paper, we take a look back at the foundations of intelligence by asking the question: are state-of-the-art deep neural networks capable of emulating the thinking process of even young children? To gain insights into answering this question, we introduce the Simple Multimodal Algorithmic Reasoning Task (SMART), a visuo-linguistic task, and the associated SMART-101 dataset built from 101 distinct children's puzzles. As this is the first step in this direction, we keep the puzzles simple; to ensure this, we took inspiration from the puzzles in the Math Kangaroo USA Olympiad [3], which has puzzle sets professionally designed for children in the age group of 6–8. Each puzzle in our dataset has a picture describing the problem setup and an associated natural language question. To solve the puzzle, one needs to use the question to gather details from the picture and infer a simple mathematical algorithm that leads to a solution to be matched against multiple answer options. In Figure 1, we illustrate our task with an example puzzle from our dataset. Unlike prior datasets with similar goals, each of the 101 puzzles in our dataset is distinct and needs a broad range of elementary mathematical skills for their solutions, including skills in algebra, basic arithmetic, geometry, ordering, as well as foundational skills to interpret abstract images and execute counting, spatial reasoning, pattern matching, and occlusion reasoning. To the best of our knowledge, this is the first dataset that offers such a richly diverse set of visuo-linguistic puzzles in an open-world setting, with a psychometric control on their difficulty levels against human performance.

To benchmark performances on the SMART-101 dataset, we propose an end-to-end meta-learning based neural network [21], where we use a SOTA pre-trained image encoder backbone (e.g., Transformers/ResNets) to embed the picture part of the puzzles, and a strong large language model (e.g., GPT/BERT) to model the questions. As each puzzle may have a different range for its answers (e.g., selection from a few choices, sequential answers, etc.), we propose to treat each puzzle as a separate task, with task-specific neural heads and training objectives, while a common vision-language backbone is used on all the puzzles. We provide experiments using our learning framework under various evaluation settings, analyzing the ability of SOTA vision and language backbones for: (i) in-distribution generalization, when training and test data are from the same distributions of puzzle instances, and out-of-distribution generalization, when training and test data are from (ii) distinct answer distributions or (iii) different puzzles. We find the backbones in our model performing poorly on (i) and (ii), while failing entirely on (iii), suggesting that solving our dataset would demand novel research directions into algorithmic reasoning.
We experiment with various settings, evaluating the ability of our model to (i) solve puzzles when trained and tested on the same distribution of instances, (ii) generalize out of distribution when training and testing data are disjoint at the answer level, and (iii) generalize out of distribution when the training and testing sets are disjoint at the puzzle level. We find that our model performs poorly on tasks (i) and (ii), while failing entirely on (iii), suggesting that solving our dataset would demand novel research directions into neural abstractions and algorithmic reasoning abilities.

We summarize below the key contributions of this paper.

1. With the goal of making progress towards improving the visuo-linguistic algorithmic reasoning abilities of neural networks, we introduce a novel task, SMART, and the associated large-scale SMART-101 dataset.
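The meta-learning setup described above, a shared vision-language backbone with a task-specific head per puzzle, can be sketched as follows. The encoders are replaced by plain feature tensors and the hyperparameters are made up for illustration; only the shared-backbone / per-puzzle-head structure reflects the description in the text.

```python
import torch
import torch.nn as nn

class PuzzleSolver(nn.Module):
    """Sketch of the shared-backbone / per-puzzle-head idea: one fused
    vision-language representation feeds a separate output head per puzzle,
    since each puzzle has its own answer format. Dimensions and layer choices
    are illustrative, not the exact backbones benchmarked in the paper."""
    def __init__(self, img_dim, txt_dim, hidden, heads_out):
        super().__init__()
        self.fuse = nn.Sequential(nn.Linear(img_dim + txt_dim, hidden), nn.ReLU())
        # one head per puzzle id, each with its own output size
        self.heads = nn.ModuleDict(
            {pid: nn.Linear(hidden, out) for pid, out in heads_out.items()})

    def forward(self, img_feat, txt_feat, puzzle_id):
        h = self.fuse(torch.cat([img_feat, txt_feat], dim=-1))
        return self.heads[puzzle_id](h)

# Toy usage: two puzzles, one choosing among 5 options, one regressing a number.
model = PuzzleSolver(img_dim=512, txt_dim=256, hidden=128,
                     heads_out={"puzzle_07": 5, "puzzle_42": 1})
img, txt = torch.randn(8, 512), torch.randn(8, 256)
print(model(img, txt, "puzzle_07").shape)   # torch.Size([8, 5])
```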
de_Jorge_Reliability_in_Semantic_Segmentation_Are_We_on_the_Right_Track_CVPR_2023
Abstract
Motivated by the increasing popularity of transformers in computer vision, in recent times there has been a rapid development of novel architectures. While in-domain performance follows a constant, upward trend, properties like robustness or uncertainty estimation are less explored, leaving doubts about advances in model reliability. Studies along these axes exist, but they are mainly limited to classification models. In contrast, we carry out a study on semantic segmentation, a relevant task for many real-world applications where model reliability is paramount. We analyze a broad variety of models, spanning from older ResNet-based architectures to novel transformers, and assess their reliability based on four metrics: robustness, calibration, misclassification detection and out-of-distribution (OOD) detection. We find that while recent models are significantly more robust, they are not overall more reliable in terms of uncertainty estimation. We further explore methods that can come to the rescue and show that improving calibration can also help with other uncertainty metrics such as misclassification or OOD detection. This is the first study on modern segmentation models focused on both robustness and uncertainty estimation, and we hope it will help practitioners and researchers interested in this fundamental vision task (code available at https://github.com/naver/relis).
1. Introduction
Humans tend to overestimate their abilities, a cognitive bias known as the Dunning-Kruger effect [27]. Unfortunately, so do deep neural networks. Despite impressive performance on a wide range of tasks, deep learning models tend to be overconfident, that is, they predict with high confidence even when they are wrong [19]. This effect is even more severe under domain shifts, where models tend to underperform in general [23, 40, 45].

Figure 1. Top: mIoU and ECE vs. domain shift. Errors are normalized with respect to the lowest error on the training distribution (Cityscapes). We compare recent segmentation models, both transformer-based (SETR [58], SegFormer [55] and Segmenter [48]) and convolution-based (ConvNeXt [30]), with ResNet baselines (UPerNet [54] and DLV3+ [6]). All recent models (both transformers and CNNs) are remarkably more robust than the ResNet baselines (whose lines in mIoU overlap); however, ECE increases sharply for all methods. Bottom: Sample images for each dataset (Cityscapes, IDD, ACDC, with increasing domain shift).

While these vulnerabilities affect deep models in general, they are often studied for classification models and are comparably less explored for semantic segmentation, a fundamental task in computer vision that is key to many critical applications such as autonomous driving and AI-assisted medical imaging. In those applications, domain shifts are more the rule than the exception (e.g., changes in weather for a self-driving car or differences across patients for a medical imaging system). Therefore, brittle performance and overconfidence under domain shifts are two important and challenging problems to address for a safe deployment of artificial intelligence systems in the real world.

With that in mind, we argue that a reliable model should i) be robust to domain shifts and ii) provide good uncertainty estimates. The core goal of this study is to provide an answer to the following, crucial question: are state-of-the-art semantic segmentation models improving in terms of robustness and uncertainty estimation?

To shed light on this, we evaluate a large body of segmentation models, assessing their in-domain (ID) vs. out-of-domain (OOD) prediction quality (robustness) together with their calibration, misclassification detection and OOD detection (uncertainty estimation).

We argue that a study of this kind is crucial to understand whether research on semantic segmentation is moving in the right direction. Following the rise of transformer architectures in computer vision [4, 15, 29, 50], several studies have compared recent self-attention and CNN-based classification models in terms of robustness [2, 3, 30, 32, 36, 43] and predictive uncertainty [33, 44]. Yet, when it comes to semantic segmentation, prior studies [55, 59] only focused on robustness, using synthetic corruptions as domain shifts (e.g., blur, noise) [25]. In contrast, we consider natural, realistic domain shifts and study segmentation models both in terms of robustness and uncertainty, leveraging datasets captured in different conditions, see Fig. 1 (bottom).

Task-specific studies are important, since task-specific architectures and learning algorithms may carry different behaviors, and some observations made for classification might not hold true when switching to segmentation. For instance, contrary to Minderer et al. [33], we observe that improvements in calibration are far behind those in robustness, see Fig. 1 (top). Furthermore, previous analyses only consider simple calibration approaches [19] while assessing model reliability; in contrast, we make a step forward and explore content-dependent calibration strategies [14, 17], which show promise to improve reliability out of domain.

Our analysis allows us to identify in which directions we are improving and in which we are lagging behind. This is the first work to systematically study robustness and uncertainty under domain shift for a large suite of segmentation models, and we believe it can help practitioners and researchers working on semantic segmentation. We summarize our main observations in the following.

i) Remarkable improvements in robustness, but poor in calibration. Under domain shifts, recent segmentation models perform significantly better (in terms of mIoU), with larger improvements for stronger shifts. Yet, OOD calibration error increases dramatically for all models.
ii) Content-dependent calibration [14] can improve OOD calibration, especially under strong domain shifts, where models are poorly calibrated.
iii) Misclassification detection shows different model rankings in and out of domain. When tested in domain, recent models underperform the ResNet baseline. As the domain shift increases, recent models take the lead.
iv) OOD detection is inversely correlated with performance. Indeed, a small ResNet-18 backbone performs best.
v) Content-dependent calibration [14] can improve OOD detection and misclassification out of domain. We observe a significant increase in misclassification detection under strong domain shifts after improving calibration. We also observe improvements for OOD detection, albeit milder.

Table 1. Studies of recent architectures, comparing prior work (Kamann et al. [25], Bhojanapalli et al. [3], Xie et al. [55], Naseer et al. [36], Bai et al. [2], Minderer et al. [33], Paul and Cheng [43], Mao et al. [32], Liu et al. [30], Zhou et al. [59], and Pinto et al. [44]) against ours along five axes: semantic segmentation, robust performance, uncertainty estimation, natural shifts, and OOD calibration methods; each prior work covers only a subset of these axes, while ours covers all five. While several prior works studied robustness and uncertainty of transformer- and CNN-based classifiers, studies on segmentation are limited to robustness. This is the first study assessing robustness and uncertainty of modern segmentation models. Moreover, we consider natural domain shifts and ours is the only analysis to include content-dependent methods [14, 17] to improve calibration in OOD settings.
Bertiche_Blowing_in_the_Wind_CycleNet_for_Human_Cinemagraphs_From_Still_CVPR_2023
Abstract Cinemagraphs are short looping videos created by adding subtle motions to a static image. This kind of media is popular and engaging. However, automatic generation of cinemagraphs is an underexplored area and current solu-tions require tedious low-level manual authoring by artists. In this paper, we present an automatic method that allows generating human cinemagraphs from single RGB images. We investigate the problem in the context of dressed humans under the wind. At the core of our method is a novel cyclic neural network that produces looping cinemagraphs for the target loop duration. To circumvent the problem of collect-ing real data, we demonstrate that it is possible, by working in the image normal space, to learn garment motion dynam-ics on synthetic data and generalize to real data. We evalu-ate our method on both synthetic and real data and demon-strate that it is possible to create compelling and plausible cinemagraphs from single RGB images.1. Introduction Cinemagraph, a term originally coined by Jamie Beck and Kevin Burg, refers to adding dynamism to still images by adding minor and repeated movements , forming a mo-tion loop, to a still image. Such media format is both en-gaging and intriguing, as adding a simple and subtle motion can bring images to life. Creating such content, however, is challenging as it would require an artist to first set up and capture a suitable video, typically using a tripod, and then carefully mask out most of the movements in a post-processing stage. We explore the problem of creating human cinemagraphs directly from a single RGB image of a person. Given a dataset of images and corresponding animated video pairs, a straightforward solution would be to train a fully super-vised network to learn to map an input image to a plausible animated sequence. However, collecting such a dataset is extremely challenging and costly, as it would require cap-turing hundreds or thousands of videos of people holding This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 459 a perfectly still pose under the influence of the wind from different known directions. While it is possible to simulate different wind force directions using oscillating fans in a lab setup [10], capturing the variability of garment geome-try and appearance types in such a controlled setting is far from trivial. Hence, we explore the alternative approach of using synthetic data where different wind effects can eas-ily be replicated using physically-based simulation. The challenge, then, is to close the synthetic-to-real gap, both in terms of garment dynamics and appearance variations. We address this generalization concern by operating in the gradient domain, i.e., using surface normal maps. Be-ing robust to lighting or appearance variations, surface nor-mals are arguably easier to generalize from synthetic to real, compared to RGB images. Moreover, surface nor-mals are indicative of the underlying garment geometry (i.e., folds and wrinkles) and hence provide a suitable repre-sentation to synthesize geometric and resultant appearance variations [21, 44] as the garment interacts with the wind. Further, we make the following technical contributions. First, we propose a novel cyclic neural network formulation that directly outputs looped videos, with target time periods, without suffering from any temporal jumps. 
Second, we demonstrate how to condition the model architecture using wind parameters (e.g., direction) to enable control at test time. Finally, we propose a normal-based shading approach that takes the intermediate normals under the target wind attributes to produce RGB image frames. In Figure 1, we show that our method is applicable to a variety of real test images of different clothing types. We evaluate our method on both synthetic and real im-ages and discuss ablation results to evaluate the various de-sign choices. We compare our approach against alternative approaches [27, 38] using various metrics as well as a user study to evaluate the plausibility of the generated methods. Our method achieves superior performance both in terms of quantitative metrics as well as the perceptual user study. 2. Related Work 2.1. Looping video generation In this work, we are interested in synthesizing cinema-graph style looping animations where only certain parts of a frame are in motion. A typical method for creating such looping clips is to leverage video as input. Many ap-proaches exist that solve an optimization problem to iden-tify segments and transition points in the input video that can be looped seamlessly [1, 4,7,14,23,24,32,35,42]. While we focus on generating such a looping clip from a static sin-gle image, we use a video based method [23] to ensure our training data is looped properly. In the context of animating a single image in a looping manner, one approach is to warp regions of the image us-ing Fourier methods in a stochastic manner which amounts to displacing the original texture [12]. Another approach is to transfer the phase patterns from an example video to the given input image [30]. Okabe et al. [28] also transfer the motion patterns from an example video to an input image of a fluid. Specifically, they map the example video to a constant flow and residual layers, which represent the high frequency motion patterns that are not explained by warp-ing a reference frame using constant flow. Such residual patterns are transferred to the input image. These meth-ods work best for natural phenomena such as water and fire where flow-based texture displacement and warping result in plausible animation. Halperin et al. [18] present another approach to animating a single image by focusing on repeat-ing patterns. While demonstrating impressive results, such a method is not suitable for our problem since the motion a garment undergoes blowing in the wind is fundamentally different than displacing repeating patterns. With the recent success of deep learning methods, sev-eral learning based approaches have been proposed to cre-ate looping animations from single images. While Endo et al. [16] predict a flow map to warp the input images di-rectly, Holynski et al. [19] first generate a constant flow map directly from a single image and then warp image fea-tures using the generated flow map to synthesize the RGB frames. In a follow-up work, Mahapatra et al. [27] extend this framework to provide additional control of the motion direction and region of the image to be animated. We com-pare our method to this state-of-the-art approach and show that the assumption of constant flow is not suitable for gar-ment motion and leads to unsatisfactory results. Recently, Fan et al. [17] present a method to animate fluids in a still image. 
Their method uses an additional depth map estimation to generate a surface mesh for the fluid region and thus utilizes physically based simulation priors to predict a motion field. While our approach of incorporating a surface normal map representation is similar, we focus on very different types of motions in our work.
2.2. Animating single images
With the success of deep learning, several methods have been recently proposed to animate a given image. One approach is based on using a driving video and focuses on synthesizing specific types of content and motion, such as time-lapse videos [11, 26] or facial and body animation [33, 38]. We compare our method to the most recent method of Wang et al. [38] and show that it is not suitable to capture the subtle motions observed in a human cinemagraph. Another line of work directly predicts video or future frames from a given single image [22, 40, 41, 43] or a semantic map [29]. Dorkenwald et al. [15] learn a generative model that encodes a latent residual representation and sample such a latent code to synthesize a video from a given image. Many of these methods, however, synthesize multiple frames at the same time and hence operate only at low resolution without providing control. To address the latter challenge, Blattmann et al. [8, 9] enable the user to provide a poke that determines the final location of a sparse point in the input image. The resulting videos, however, are not looped, in contrast to cinemagraphs. Another interesting direction is to train a single-image-based generator [31], which can then be utilized to generate animations by providing random walks of the appearance of the object of interest in the latent space. Arora et al. [2] extend this approach to work with an input GIF. While impressive, such approaches do not provide the controllability we aim to achieve with our approach.
Figure 2. Given an input image (top, left) and its predicted surface normal map (bottom, left), we present a network that synthesizes a set of surface normals that resemble the effect of the garment blowing in the wind with a given direction. We ensure a looped animation by encoding the time t with a cyclic positional encoding with respect to a predefined loop duration (150 frames in our experiments). We then synthesize the corresponding RGB images demonstrating plausible garment deformation using an intrinsic image decomposition technique.
3. Methodology
Given a single input RGB image of a person, I ∈ R^{W×H×3}, our goal is to generate a looped video sequence, V := {I_0, I_1, ..., I_t | I_0 = I_t}, where the loose garments worn by the person exhibit a plausible motion as if blown in the wind. We assume the direction of the wind can be provided by a unit vector w in the image plane to control the output animation. Hence, our goal is to learn the mapping F(I, w) → V^w. To more effectively represent the underlying garment geometry and the changes it undergoes due to the wind force, our method operates on the surface normal map N that corresponds to the input image I. Specifically, given an input image I, we first predict the surface normal map using an off-the-shelf normal estimator [3]. We then propose a novel cyclic network architecture that
maps N to a sequence of normal maps V^w_N := {N_0, N_1, ..., N_t | N_0 = N_t} that demonstrate plausible motion of the underlying garment under the influence of a wind force with a direction given by w. Finally, we synthesize back the corresponding RGB images given the original input image and the sequence of animated normal maps using a constrained reshading approach. We provide the overall pipeline in Figure 2 and next discuss the details of our approach.
3.1. Cyclic and Controllable Animation
Given an input normal map N and a wind direction w, our goal is to learn the mapping F_N(N, w) → V^w_N = {N_0, N_1, ..., N_t | N_0 = N_t}, where V^w_N demonstrates plausible garment animation. Our goal is to synthesize a cyclic animation sequence with a predefined period of T, so that t = T − 1. This amounts to synthesizing normal maps that satisfy the constraints N_t = N_{t+kT} ∀k ∈ Z. We tackle this problem as an image-to-image translation task where our goal is to learn f(N_t, ∆t, w) → N_{t+∆t}, where ∆t ∈ [−T/2, T/2]. Note that, since we are interested in looped animations, negative values for ∆t correspond to valid animation samples. We realize the function f as a UNet architecture that is conditioned on both the residual time ∆t and the wind direction w, as shown in Figure 3. To enforce a cyclic behaviour, we first encode ∆t using sinusoidal functions as:

φ_{∆t} = (2πn / T) ∆t,  n = 1, 2, 3, 4, 5,
x_{∆t} = {cos(φ_{∆t}), sin(φ_{∆t})}.   (1)

This formulation ensures that f(N_t, ∆t + kT, w) with k ∈ Z gives the same output, resulting in a looping animation sequence. Similar to common practice in positional encoding [37], we observe that using multiples of the data frequency (ω = 2πn/T) helps to learn higher-frequency motions while still enforcing a global cyclic behaviour with period T. Note then how the time encoding x_{∆t} consists of multiple circumferences parameterized by ∆t. We represent the wind direction as a unit vector w in the image plane. We concatenate w with x_{∆t}, resulting in the final conditioning code x := x_{∆t} ∥ w = (x_{∆t,0}, x_{∆t,1}, ..., x_{∆t,2n}, w_x, w_y). We condition the UNet by introducing x at each feature map extracted by the encoder at different scales. To do so, we first linearly transform x to the corresponding feature map dimensionality with learnable weights {W_i ∈ R^{F_i×D}}, where F_i is the number of channels of the i-th feature map and D is the dimensionality of x. We apply 1×1 convolutions to the feature maps before and after combining them with x.
Figure 3. Cyclic wind-conditioned UNet. Given an input normal map N_t, a delta time increment ∆t, and a wind direction w, we extend the standard UNet architecture to give it a cyclic behaviour. We encode the time using a cyclic positional encoding and concatenate it with the wind direction. We pass the concatenated features through different fully convolutional layers to extract features of varying dimensions. The resulting features are provided as skip connections to the UNet architecture, which synthesizes the final normal map N_{t+∆t}.
3.2. Normal Guided Synthesis
The final stage of our approach focuses on computing the final cinemagraph V given the original input RGB image I and the predicted normal map sequence V^w_N.
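Before moving on to the reshading stage, the cyclic conditioning of Eq. (1) can be made concrete with a short sketch. This is an illustrative re-implementation rather than the authors' released code: the function names, the NumPy dependency, and the default loop length T = 150 are our own choices; only the cos/sin formulation and the unit wind vector come from the text above.

```python
import numpy as np

def cyclic_time_encoding(delta_t: float, T: int = 150, n_freqs: int = 5) -> np.ndarray:
    """Eq. (1): phi_n = (2*pi*n / T) * delta_t for n = 1..n_freqs,
    x = [cos(phi_1..phi_n), sin(phi_1..phi_n)], periodic in T by construction."""
    n = np.arange(1, n_freqs + 1)
    phi = (2.0 * np.pi * n / T) * delta_t
    return np.concatenate([np.cos(phi), np.sin(phi)])

def conditioning_code(delta_t: float, wind_dir, T: int = 150) -> np.ndarray:
    """Concatenate the cyclic time code with the unit wind direction (w_x, w_y)."""
    w = np.asarray(wind_dir, dtype=float)
    w = w / np.linalg.norm(w)
    return np.concatenate([cyclic_time_encoding(delta_t, T), w])

if __name__ == "__main__":
    x_a = conditioning_code(10.0, [1.0, 0.0])
    x_b = conditioning_code(10.0 + 150.0, [1.0, 0.0])  # one full period later
    assert np.allclose(x_a, x_b), "the encoding must repeat with period T"
    print(x_a.shape)  # (12,): 2 * 5 frequencies + 2 wind components
```

In the full model, this code would be linearly projected to each encoder feature map's channel dimension before the 1×1 convolutions described above.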
To compute the final RGB frames, we rely on the concept of intrinsic image decomposition, which decomposes images into two layers, I = S · R: (i) the reflectance R ∈ R^{W×H×3}, which denotes the albedo invariant color of the materials, and (ii) the shading S ∈ R^{W×H}, which is the result of the interaction of the light with the underlying geometry of the garment. In particular, the shading layer is crucial in how we perceive the changes in the fold and wrinkle patterns of the garment as it is animated. Given this observation, we synthesize a new shading layer that is consistent with the animated surface normal maps. Then, when composited with the original reflectance map, it generates the intended animation. Given the input image I, we first run an off-the-shelf intrinsic image decomposition method [5] to obtain the reflectance map R and the shading map S. Assuming a simple lighting model composed of a directional and an ambient light, we optimize for the light parameters using the predicted surface normal map from the input image:

S = max(0, −N · l) + δ,   (2)

where l ∈ R^3 is the light direction and δ ∈ R^+ is the ambient light. Given the predicted animated surface normal map sequence V̂_N, we generate a new shading map sequence and composite it with the original reflectance map R to obtain the final RGB sequence V̂. At inference time, the user is required to provide a mask to denote the region of interest where motion is desired to be synthesized. Hence, we composite the original image and the synthesized RGB images based on this mask to provide the final output. While this approach changes only the shading without actually warping the texture of the garment, it is sufficient to provide the perception of a plausible animation.
Local vs Global. We design this methodology so it leans towards a local solution. The reasons for this are as follows. On one hand, cinemagraphs are characterized by subtle motions (local). On the other hand, local solutions generalize better, which is especially important for our approach to handle real test samples from a synthetic training set.
4. Experiments
In the following section, we describe the experimental setup and the qualitative and quantitative results. We detail the data used for training and evaluation, define the metrics, and briefly introduce the state-of-the-art baselines used for comparison. Finally, we provide a discussion of the results.
Datasets. In order to train our network, we generate a synthetic dataset that consists of different types of garments draped on human bodies with varying shape and pose. Specifically, we sample human body and garment pairs from the Cloth3D dataset [6], which is a large-scale dataset of clothed 3D humans. We select 1500 samples with skirts and dresses and 500 samples with other clothing types (e.g., trousers, t-shirts). Each sample in the original Cloth3D dataset is a motion sequence. We randomly choose one of the frames in each sequence as a random human body pose. The chosen frame, body and outfit, defines the initial conditions of our cloth simulation. We use Blender [13] to run the simulations.
To this end, we choose a random wind direction in the image plane with constant wind force, and simulate the cloth dynamics while the underlying body remains still. Each simulation output is rendered from a fixed viewpoint with a predefined lighting setup. We apply random checkerboard texture patterns to some garments and assign a uniform color material to others. In addition to RGB output, we also render the corresponding surface normal maps and segmentation masks (body, cloth and background). Figure 4 shows examples from our dataset.
Figure 4. We train our model on a synthetic dataset that consists of different garment types draped on bodies with varying shapes and poses acquired from the Cloth3D dataset [6]. We simulate the effect of wind and render the corresponding RGB and surface normal images.
We simulate each sample for 250 frames at 30 fps. We observe that the garment drapes on the body in roughly the first 50 frames of the sequence and later starts blowing in the wind. It is not trivial to guarantee that the resulting garment animation is cyclic in such a physically based simulation setup. Hence, we process the resulting animations with the method of Liao et al. [23], which detects loops in an input video. After this step, we obtain animation sequences of length 150 frames, which we use as the duration of loops, i.e., T = 150. In addition to synthetic data, we test our method on real samples from the DeepFashion dataset [25] as well as additional stock images to test generalization. To evaluate if the predictions obtained on real samples contain plausible cloth dynamics, we capture a small set of real examples. Specifically, we ask a human subject wearing different types of garments to hold a still pose next to an oscillating fan while we record a short video sequence with a fixed camera mounted on a tripod. We record 50 such videos demonstrating 8 different outfit types. Similar to the synthetic data, we process each video with the method of Liao et al. [23] to obtain looped animations. Figure 5 shows some real samples.
Figure 5. We capture a small real dataset where the subject keeps a still pose during the sequence while a fan generates wind. Different garment types show different dynamics.
Evaluation Metrics. We evaluate our method and baselines on synthetic data where we can access ground truth image and animation pairs. First, we adopt metrics that focus on pixel-level similarity. Specifically, we report per-pixel mean absolute error (MAE), mean squared error (MSE), root mean squared error (RMSE), and PSNR. In addition, we report metrics that focus on more structural (SSIM [39]) and perceptual (LPIPS [45]) similarities. For DeepFashion samples we do not have ground truth video data. Hence, in order to evaluate the plausibility of the generated animated sequences we use Frechet Video Distance (FVD [36]) against the real data we have captured.
Baselines. We compare our method to two baselines. First, we compare with the work of Mahapatra et al. [27], which extends the original Eulerian motion fields approach [20] to a controllable setup. Since this method is a flow-based approach and requires optical flow information to be provided in the dataset, we train it with the looped RGB videos in our synthetic dataset, where optical flow can be more reliably estimated using off-the-shelf methods [34]. For each looped sequence, we extract a mask denoting the region where motion is observed and a sparse set of motion directions from the estimated optical flow. We also compare our method to LIA [38], a state-of-the-art single-image-based controllable video generation framework. Since LIA requires a target video sequence to specify the desired animation, we provide the ground truth animation sequences as targets both during training and testing. While it is not possible to use this configuration in a real setup, it provides the best possible results. Outperforming LIA under this configuration
Bai_Learning_Personalized_High_Quality_Volumetric_Head_Avatars_From_Monocular_RGB_CVPR_2023
Abstract
We propose a method to learn a high-quality implicit 3D head avatar from a monocular RGB video captured in the wild. The learnt avatar is driven by a parametric face model to achieve user-controlled facial expressions and head poses. Our hybrid pipeline combines the geometry prior and dynamic tracking of a 3DMM with a neural radiance field to achieve fine-grained control and photorealism. To reduce over-smoothing and improve out-of-model expression synthesis, we propose to predict local features anchored on the 3DMM geometry. These learnt features are driven by 3DMM deformation and interpolated in 3D space to yield the volumetric radiance at a designated query point. We further show that using a Convolutional Neural Network in the UV space is critical in incorporating spatial context and producing representative local features. Extensive experiments show that we are able to reconstruct high-quality avatars, with more accurate expression-dependent details, good generalization to out-of-training expressions, and quantitatively superior renderings compared to other state-of-the-art approaches.
1. Introduction
Creating a controllable human avatar is a fundamental piece of technology for many downstream applications, such as AR/VR communication [20, 31], virtual try-on [37], virtual tourism [13], games [42], and visual effects for movies [12, 18]. Prior art in high-quality avatar generation typically requires extensive hardware configurations (i.e., camera arrays [6, 12, 31], light stages [18, 29], dedicated depth sensors [8]), or laborious manual intervention [1]. Alternatively, reconstructing avatars from monocular RGB videos significantly relaxes the dependency on equipment setup and broadens the application scenarios. However, monocular head avatar creation is highly ill-posed due to the dual problems of reconstructing and tracking highly articulated and deformable facial geometry, while modeling sophisticated facial appearance. (Work was conducted while Ziqian Bai was an intern at Google.) Traditionally, 3D Morphable Models (3DMM) [7, 24] have been used to model facial geometry and appearance for various applications including avatar generation [9, 16, 22]. However, 3DMMs do not fully capture subject-specific static details and dynamic variations, such as hair, glasses, and expression-dependent high-frequency details such as wrinkles, due to the limited capacity of the underlying linear model. Recent works [3, 15] have incorporated neural radiance fields in combination with 3DMMs for head avatar generation to achieve photorealistic renderings, especially improving challenging areas, such as hair, and adding view-dependent effects, such as reflections on glasses. The pioneering work of NerFACE [15] uses a neural radiance field parameterized by an MLP that is conditioned on 3DMM expression parameters and learnt per-frame latent codes. While they achieve photorealistic renderings, the reliance on an MLP to directly decode from the 3DMM parameter space leads to the loss of fine-grain control over geometry and articulation. Alternatively, RigNeRF [3] learns the radiance field in a canonical space by warping the target head geometry using a 3DMM fit, which is further corrected by a learnt dense deformation field parameterized by another MLP. While they demonstrate in-the-wild head pose and expression control, the use of two global MLPs to model canonical appearance and deformations for the full spatial-temporal space leads to a loss of high-frequency details, and an overall uncanny appearance of the avatar. Both of these works introduce new capabilities but suffer from lack of detail in both appearance and motion because they attempt to model the avatar's global appearance and deformation with an MLP network. In this paper, we propose a method to learn a neural head avatar from a monocular RGB video. The avatar can be controlled by an underlying 3DMM model and deliver high-quality rendering of arbitrary facial expressions, head poses, and viewpoints, which retain fine-grained details and accurate articulations. We achieve this by learning to predict expression-dependent spatially local features on the surface of the 3DMM mesh.
A radiance field for any given 3D point in the volume is then obtained by interpolating the features from K-nearest neighbor vertices on the deformed 3DMM mesh in target expression, and passing them through a lo-cal MLP to infer density and color. The local features and local MLP are trained jointly by supervising the radiance field through standard volumetric rendering on the train-ing sequence [30]. Note that our networks rely on the local features to model appearance and deformation details, and leverages the 3DMM to model only the global geometry. Learning local features is critical in achieving a high-quality head avatar. To this end, we train an image-to-image translation U-Net that transforms the 3DMM defor-mations in the UV space to such local features. These UV-space features are then attached to the corresponding ver-tices of the 3DMM mesh geometry. We show that learn-ing features from such explicit per-vertex local displace-ment of the 3DMM geometry makes the model retain high-frequency expression-dependent details and also general-izes better to out-of-training expressions, presumably be-cause of the spatial context between nearby vertices incor-porated by the convolutional neural network (CNN). An al-ternative approach is to feed the 3DMM parameters directly into a CNN decoder running on the UV space. However, we found this produces severe artifacts on out-of-training expressions, particularly given a limited amount of training data, e.g. for a lightweight, 1-minute data collection proce-dure during the avatar generation process. In summary, our contributions are as follows: we pro-pose a neural head avatar representation based on a 3DMM-anchored neural radiance field, which can model complex expression-dependent variations, but requires only monoc-ular RGB videos for training. We show that a convolutional neural network running on per-vertex displacement in UV space is effective in learning local expression-dependent features, and delivers favorable training stability and gen-eralization to out-of-training expressions. Experiments on real-world datasets show that our model provides compet-itive controllability and generates sharper and detail en-riched rendering compared to state-of-the-art approaches.
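To make the anchored-feature idea more tangible, here is a minimal, self-contained sketch of a K-nearest-neighbor interpolated radiance query in PyTorch. It is not the paper's implementation: the inverse-distance weighting, the layer widths, the use of the nearest-vertex offset as a local coordinate, and all names are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class AnchoredRadianceField(nn.Module):
    """Toy 3DMM-anchored field: per-vertex features are interpolated from the K
    nearest deformed mesh vertices and decoded by a small local MLP."""

    def __init__(self, feat_dim: int = 32, k: int = 8):
        super().__init__()
        self.k = k
        # local MLP maps (interpolated feature, offset to query point) -> (density, rgb)
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 4),
        )

    def forward(self, query_pts, vertices, vert_feats):
        # query_pts: (Q, 3); vertices: (V, 3) deformed 3DMM mesh; vert_feats: (V, F)
        d = torch.cdist(query_pts, vertices)                 # (Q, V)
        knn_d, knn_idx = d.topk(self.k, largest=False)       # (Q, K)
        w = 1.0 / (knn_d + 1e-6)
        w = w / w.sum(dim=-1, keepdim=True)                  # inverse-distance weights
        feats = vert_feats[knn_idx]                          # (Q, K, F)
        feat = (w.unsqueeze(-1) * feats).sum(dim=1)          # (Q, F)
        offset = query_pts - vertices[knn_idx[:, 0]]         # local offset to nearest vertex
        out = self.mlp(torch.cat([feat, offset], dim=-1))    # (Q, 4)
        sigma = torch.relu(out[:, :1])                       # volume density
        rgb = torch.sigmoid(out[:, 1:])                      # color
        return sigma, rgb

# usage: AnchoredRadianceField()(torch.rand(1024, 3), torch.rand(5023, 3), torch.rand(5023, 32))
```

A full system would additionally condition on view direction for view-dependent color and would produce `vert_feats` with the UV-space CNN described above.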
Cheng_Panoptic_Compositional_Feature_Field_for_Editable_Scene_Rendering_With_Network-Inferred_CVPR_2023
Abstract Despite neural implicit representations demonstrating impressive high-quality view synthesis capacity, decom-posing such representations into objects for instance-level editing is still challenging. Recent works learn object-compositional representations supervised by ground truth instance annotations and produce promising scene editing results. However, ground truth annotations are manually labeled and expensive in practice, which limits their usage in real-world scenes. In this work, we attempt to learn an object-compositional neural implicit representation for ed-itable scene rendering by leveraging labels inferred from the off-the-shelf 2D panoptic segmentation networks in-stead of the ground truth annotations. We propose a novel framework named Panoptic Compositional Feature Field (PCFF), which introduces an instance quadruplet metric learning to build a discriminating panoptic feature space for reliable scene editing. In addition, we propose semantic-related strategies to further exploit the correlations between semantic and appearance attributes for achieving better rendering results. Experiments on multiple scene datasets including ScanNet, Replica, and ToyDesk demonstrate that our proposed method achieves superior performance for novel view synthesis and produces convincing real-world scene editing results.
1. Introduction
Virtually editing real-world scenes (e.g., moving a chair in the room) in mixed reality applications on various devices is desired by users. Such an expectation requires an effective 3D scene representation with the capacity for photo-realistic view rendering and promising scene decomposition. Recently, emerging neural implicit representations with volumetric rendering, especially neural radiance fields (NeRF) [29] and its variants, show impressive results in novel view synthesis [1, 2] and scene reconstruction [13, 31, 43] tasks. (This work was supported in part by the Shenzhen General Research Project JCYJ20220531093215035. Corresponding author: Jian Zhang.) However, decomposing a neural implicit representation into objects for scene editing is challenging because the holistic scene is implicitly encoded as the weights of connectionist networks like multi-layer perceptrons (MLPs). To build object-compositional neural implicit representations for instance-level scene editing, several works [46, 47, 52] jointly encode the appearance and instance attributes with extra instance annotations.
Though existing object-compositional methods can extract convincing object representations from the scene representation for further editing, their successes rely heavily on ground truth instance annotations, which are labeled manually and expensive to obtain in real-world practice. An intuitive alternative solution is training object-compositional representations with labels inferred by 2D panoptic segmentation networks [5, 6, 19] instead of ground truth instance annotations. However, these methods struggle to leverage the network-inferred labels due to the significant 3D index inconsistency; a detailed discussion is shown in Fig. 1. We note that 3D index consistency means that the instance indices of a specific object are the same across multi-view labels. Because network-inferred labels are individually predicted on each view image and the object order is uncertain in each prediction, the instance indices of a specific object in different view labels are usually index-inconsistent from the perspective of 3D, e.g., the index of the target chair is purple in the label of view #1 and is red in the label of view #2. Therefore, how to learn object-compositional neural implicit representations by leveraging network-inferred labels is critical for real-world application.
Figure 1. Core challenge. Existing methods (e.g., ObjSDF [46]) require (a) manually-labeled ground truth annotations to train object-compositional representations, and the trained representation can extract the target object correctly. We note that (a) manually-labeled annotations are 3D index consistent, i.e., the instance indices of the target object are the same across multi-view labels. However, when (b) network-inferred labels predicted by 2D panoptic segmentation networks are utilized for training their representations, the corresponding object extraction result is obviously incorrect. Because (b) network-inferred labels are inferred by networks on each view image individually, these labels are usually index-inconsistent from the perspective of 3D and tough to be used by existing methods.
In this work, we propose a novel panoptic compositional feature field (PCFF), which integrates deep metric learning [18, 27] into the learning of object-compositional representations to overcome the challenge of using 2D network predictions. Concretely, we employ metric learning to constrain the distances among projected panoptic features of pixels in each view separately, which circumvents the requirement of 3D index consistent labels and builds a discriminating panoptic feature space. Combined with the feature spaces, we provide an easy query-based manner for scene decomposition, i.e., given a user-specified pixel query in an arbitrary view, our trained PCFF extracts the target object by measuring the similarity between the projected feature of the query pixel and the corresponding features of each 3D point. Furthermore, two semantic-related learning strategies are proposed based on our observation of the correlations between semantic and appearance attributes for improving our rendering performance. The semantic-appearance hierarchical learning enforces our framework to encode appearances and semantics with MLPs of different depths, and the semantic-guided regional refinement impels the framework to focus on inaccurate regions with the guidance of semantic information entropy maps.
We evaluate our method on multiple scene datasets including ScanNet [8], Replica [40], and ToyDesk [47]. Experiments demonstrate our method outperforms state-of-the-art methods in novel view synthesis. More importantly, PCFF successfully leverages 2D network-inferred labels to build object-compositional representations for object extraction and real-world scene editing, whereas existing methods fail to utilize labels without 3D index consistency. The main contributions of our work are summarized as:
• We propose a novel Panoptic Compositional Feature Field (PCFF) that learns object-compositional representations for editable scene rendering with network-inferred panoptic labels by building a discriminating feature space with the assistance of the introduced instance quadruplet metric learning.
• We propose strategies including semantic-appearance hierarchical learning and semantic-guided regional refinement to properly exploit the correlations between semantic and appearance attributes and improve the rendering capacity of our method.
• Our method achieves superior novel view synthesis performance compared to state-of-the-art methods and produces convincing scene edits by using network-inferred panoptic labels on real-world scenes.
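To illustrate how per-view metric learning sidesteps cross-view index inconsistency, the sketch below implements a simplified quadruplet-style loss over projected pixel features of a single view. The sampling scheme, margins, and squared-distance formulation are our own simplifications and may differ from the paper's instance quadruplet metric learning.

```python
import torch
import torch.nn.functional as F

def quadruplet_metric_loss(feats, inst_ids, margin1=0.5, margin2=0.25, n_samples=256):
    """feats: (N, D) projected panoptic features of pixels sampled from ONE view.
    inst_ids: (N,) instance indices predicted for that view by a 2D panoptic network.
    Pulls together pixels of the same (per-view) instance and pushes apart different
    instances, so no cross-view index consistency is ever required."""
    anchors = torch.randint(0, feats.shape[0], (n_samples,), device=feats.device)
    same = inst_ids[anchors][:, None] == inst_ids[None, :]   # (S, N) same-instance mask
    diff = ~same

    def sample(mask):
        # pick one column per row; rows with an all-False mask fall back to uniform sampling
        probs = mask.float() + 1e-12
        return torch.multinomial(probs, 1).squeeze(1)

    pos, neg1, neg2 = sample(same), sample(diff), sample(diff)
    dist = lambda i, j: (feats[i] - feats[j]).pow(2).sum(dim=-1)
    loss = (F.relu(dist(anchors, pos) - dist(anchors, neg1) + margin1)
            + F.relu(dist(anchors, pos) - dist(neg1, neg2) + margin2))
    return loss.mean()

# usage: quadruplet_metric_loss(torch.randn(1024, 16), torch.randint(0, 6, (1024,)))
```

Because the loss only compares pixels within one view, the same instance may carry a different index in another view without affecting the objective, which is exactly the property the method relies on.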
Jiang_LayoutFormer_Conditional_Graphic_Layout_Generation_via_Constraint_Serialization_and_Decoding_CVPR_2023
Abstract Conditional graphic layout generation, which generates realistic layouts according to user constraints, is a chal-lenging task that has not been well-studied yet. First, there is limited discussion about how to handle diverse user con-straints flexibly and uniformly. Second, to make the lay-outs conform to user constraints, existing work often sac-rifices generation quality significantly. In this work, we propose LayoutFormer++ to tackle the above problems. First, to flexibly handle diverse constraints, we propose a constraint serialization scheme, which represents different user constraints as sequences of tokens with a predefined format. Then, we formulate conditional layout generation as a sequence-to-sequence transformation, and leverage encoder-decoder framework with Transformer as the ba-sic architecture. Furthermore, to make the layout better meet user requirements without harming quality, we pro-pose a decoding space restriction strategy. Specifically, we prune the predicted distribution by ignoring the options that definitely violate user constraints and likely result in low-quality layouts, and make the model samples from the re-stricted distribution. Experiments demonstrate that Lay-outFormer++ outperforms existing approaches on all the tasks in terms of both better generation quality and less con-straint violation.
1. Introduction
Graphic designs greatly facilitate information communication in our daily life. During their creation, the layout, i.e., the positions and sizes of elements, plays a critical role. To assist layout design, conditional layout generation, which takes user constraints as input and generates layouts as output, attracts great attention (see Figure 1). (*Work done during an internship at Microsoft Research Asia.) It is different from unconditional layout generation, which generates layouts freely without constraints, in at least two aspects. First, the model should be able to handle diverse user constraints, called sufficient flexibility. Figure 2 shows 6 typical tasks of layout generation in real-world applications, including layout completion, layout refinement, and layout generation conditioned on element types, element types with sizes, element relationships, or any of their combinations. Second, the model should generate layouts conforming to as many user requirements (i.e., constraints) as possible without harming quality, called good controllability (see Figure 1).
Figure 1. Comparing with previous conditional layout generation approaches, LayoutFormer++ performs better on sufficient flexibility and good controllability. (Example serialized constraints from the figure: <sos> image|text|text <eos> and <sos> icon|text||icon top text|icon smaller text <eos>.)
However, existing work cannot meet the above two requirements. First, no existing work can support all the layout generation tasks with different user constraints. Most existing approaches simply focus on tackling a single conditional layout generation task without considering whether they can be applied to other tasks. For example,
We found although user needs are diverse, they are all about element types and five attributes including type, top coordinate, left coordi-nate, width and height. Thus, we can simply define a set of vocabularies to describe the attributes respectively and con-catenate descriptions of different attributes and elements to construct a sequence. Therefore, the conditional layout generation problem can be formulated as a sequence-to-sequence transformation problem. This enables us to leverage a simple yet effec-tive encoder-decoder framework with Transformer [26] as a basic model architecture. The encoder processes the user constraints in a bidirectional way. The decoder predicts the layout sequence autoregressively, where there are multiple decoding steps and the model samples one token from the predicted distribution at each decoding step. Furthermore, to achieve good controllability, we intro-duce a decoding space restriction strategy in the inference stage. Our key idea is to prune the infeasible options in the predicted distribution and make the model sample from the restricted distribution at each decoding step. Specifically, we leverage two kinds of information to prune the options. First, the options that definitely violate the user constraints are pruned. For example, if a user wants one image and two buttons, the option for putting one text box will not be acceptable. Second, the options with low probabilities in the predicted distribution, which will very likely result in low-quality layouts, are also pruned. As the feasible op-tion set may be empty after pruning, we further introduce a backtracking mechanism, in which the model goes back image, text, textElement Types:(1). Generation Conditioned on Element Typesimgtext 1text 2imgtext 1text 2image (36,36), text (60,20),text (60,20)Types with Sizes:(2). Generation Conditioned on Element Types and Sizestext 1 at the top of text 2; text 2 at the bottom of canvas.Types:Relationships:imgtext 1text 2image, text, text(3). Generation Conditioned on Element Relationshipsimgtext 1text 2img(5). Completionimgtext 1text 2imgtext 1text 2(4). Refinementimgtext 1text 2None(6). Unconstrained GenerationFigure 2. Typical tasks for conditional layout generation. to a certain decoding step and find a better solution. Note that the whole generation process of the proposed strategy still relies on the distribution learned from the training data. Thus, it is less likely to disturb a layout when making it better conform to user constraints. We conduct extensive experiments on two public datasets [15, 30] and six layout generation tasks with dif-ferent user constraints, to evaluate LayoutFormer++ and compare it with state-of-the-art approaches. Experimental results show that LayoutFormer++ can successfully tackle all six layout generation tasks that are handled separately by previous work, demonstrating that it is able to provide sufficient flexibility. Furthermore, LayoutFormer++ signif-icantly outperforms previous approaches in terms of both better generation quality and less constraint violation, indi-cating that it achieves good controllability.
Dong_DisWOT_Student_Architecture_Search_for_Distillation_WithOut_Training_CVPR_2023
Abstract
Knowledge distillation (KD) is an effective training strategy to improve lightweight student models under the guidance of cumbersome teachers. However, the large architecture difference across teacher-student pairs limits the distillation gains. In contrast to previous adaptive distillation methods that reduce the teacher-student gap, we explore a novel training-free framework to search for the best student architectures for a given teacher. Our work first empirically shows that the optimal model under vanilla training cannot be the winner in distillation. Secondly, we find that the similarity of feature semantics and sample relations between random-initialized teacher-student networks has good correlations with final distillation performances. Thus, we efficiently measure similarity matrices conditioned on the semantic activation maps to select the optimal student via an evolutionary algorithm without any training. In this way, our student architecture search for Distillation WithOut Training (DisWOT) significantly improves the performance of the model in the distillation stage with at least 180× training acceleration. Additionally, we extend similarity metrics in DisWOT as new distillers and KD-based zero-proxies. Our experiments on CIFAR, ImageNet and NAS-Bench-201 demonstrate that our technique achieves state-of-the-art results on different search spaces. Our project and code are available at https://lilujunai.github.io/DisWOT-CVPR2023/.
1. Introduction
Despite the remarkable achievements of Deep Neural Networks (DNNs) in numerous visual recognition tasks [52, 64–68], they usually lead to heavy costs of memory, computation, and power at model inference due to their large numbers of parameters. To address this issue, Knowledge Distillation (KD) has been proposed as a means of transferring knowledge from a high-capacity teacher model to a low-capacity target student model, providing a more op- (*Corresponding author, †equal contribution, PD conducted main experiments, LL proposed ideas and led the project & writing.)
Figure 1. Left: Ranking correlation (Kendall's Tau) of proxies in zero-cost NAS with vanilla and distillation accuracy. Right: Vanilla accuracy, distillation accuracy, and prediction scores of DisWOT for ResNet[7,1,3] and ResNet[3,3,3] on search space S0.
Figure 2. Left: KD [22], DisWOT, DisWOT† results for ResNet20 under different teachers. Right: Comparison of distill accuracy & training time cost (log scale), where DisWOT is about 180× faster.
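Although the excerpt above is interrupted by figure material, the abstract's core idea — scoring a candidate student by how well its randomly initialized features match the teacher's — can be sketched in a few lines. The two toy CNNs, the cosine-based sample-relation matrix, and the Frobenius-norm score below are our own stand-ins for DisWOT's actual similarity metrics and search space.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def tiny_cnn(width: int) -> nn.Module:
    """Stand-in for a candidate architecture from the search space."""
    return nn.Sequential(
        nn.Conv2d(3, width, 3, padding=1), nn.ReLU(),
        nn.Conv2d(width, width, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

def sample_relation(feats: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine similarities between samples in a batch (B x B)."""
    f = F.normalize(feats, dim=1)
    return f @ f.t()

@torch.no_grad()
def training_free_score(teacher: nn.Module, student: nn.Module, images: torch.Tensor) -> float:
    """Higher is better: prefer the student whose random-init sample relations best
    match the teacher's, without any gradient updates."""
    gap = sample_relation(teacher(images)) - sample_relation(student(images))
    return -torch.linalg.matrix_norm(gap, ord="fro").item()

if __name__ == "__main__":
    batch = torch.randn(16, 3, 32, 32)
    teacher, candidate = tiny_cnn(64), tiny_cnn(24)
    print(training_free_score(teacher, candidate, batch))
```

In the actual method, such a score would rank candidates inside an evolutionary search loop, and the best-scoring student would then be trained with distillation.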
Ahn_Neural_Kaleidoscopic_Space_Sculpting_CVPR_2023
Abstract
We introduce a method that recovers full-surround 3D reconstructions from a single kaleidoscopic image using a neural surface representation. Full-surround 3D reconstruction is critical for many applications, such as augmented and virtual reality. A kaleidoscope, which uses a single camera and multiple mirrors, is a convenient way of achieving full-surround coverage, as it redistributes light directions and thus captures multiple viewpoints in a single image. This enables single-shot and dynamic full-surround 3D reconstruction. However, using a kaleidoscopic image for multi-view stereo is challenging, as we need to decompose the image into multi-view images by identifying which pixel corresponds to which virtual camera, a process we call labeling. To address this challenge, our approach avoids the need to explicitly estimate labels, but instead "sculpts" a neural surface representation through the careful use of silhouette, background, foreground, and texture information present in the kaleidoscopic image. We demonstrate the advantages of our method in a range of simulated and real experiments, on both static and dynamic scenes.
1. Introduction
Generating digital replicas of real-world objects from image measurements is a hard problem. Multi-view reconstruction approaches require a diverse set of viewpoints that provide full-surround coverage. A single camera, even if it is moving, can be insufficient for this problem, for example when the object under consideration undergoes dynamic motion and has complex shape. To capture the shape of a dynamic object, we would need simultaneous captures from multiple viewpoints. While a multi-camera system can provide such information, its cost and complexity can be prohibitive when we need to acquire objects with very complex appearance, geometry, and self-occlusions, thus requiring very large numbers of viewpoints.
We use a kaleidoscope [6] to achieve single-shot full-surround 3D reconstruction for general dynamic objects. A kaleidoscope is a configuration of multiple interreflecting mirrors imaged by a camera, and dramatically increases the number of viewpoints, thereby enabling a virtual time-synchronized multi-view system. However, 3D reconstruction with a kaleidoscope requires identifying the specific sequence of mirrors encountered by light reaching each camera pixel; this is equivalent to identifying the specific virtual view corresponding to the pixel, commonly referred to as the labeling problem. This labeling problem can be solved using time-of-flight cameras [36] or structured light systems [3], but such active techniques require long scan times that make them unsuitable for dynamic objects. On the other hand, prior art with passive illumination first constructs the visual hull of the object, then uses it to estimate its label [29]. This two-stage process often produces erroneous results, especially when the visual hull differs significantly from the true shape.
Figure 1. 3D printing of shape reconstructions. The proposed neural kaleidoscopic space sculpting can generate replicas of real objects with a range of shapes and reflectances. Reconstructed meshes are available on the project webpage [2].
Contributions. We propose a technique for full-surround 3D reconstruction with a single kaleidoscopic image. Our key insight is that a single pixel in a kaleidoscopic image is equivalent to multiple such pixels in its multi-camera counterpart. For example, the pre-image of a background pixel in a kaleidoscope, which is the collection of 3D points that map to that pixel, does not intersect with the object; this implies that it is also a background pixel in all virtual views associated with it. Similarly, even a foreground pixel that intersects with the object can be used to carve out space, since all of the light path prior to a ray's intersection with
Despite this, it provides ro-bust single-shot full-surround 3D reconstructions. For dy-namic objects, we apply our technique separately on each frame of a kaleidoscopic video, to obtain full-surround 3D videos. Figure 1 shows a gallery of objects placed beside their 3D printed counterparts, obtained using neural kalei-doscopic space sculpting. Limitations. Our technique has a number of limitations, some of which are inherent in the use of a kaleidoscope. First, the size of objects we can scan is restricted by the kaleidoscope; for our lab setup, this constrains our tech-nique to objects that fit in a sphere of diameter 4 inches. Second, the total number of pixels that we have at our dis-posal is limited to that of a single image sensor; divvying this pixel budget across the many (virtual) views results in lower resolution imagery, especially when we consider multi-view alternatives where the total pixel count grows linearly with the number of cameras. Third, our proposed technique is sensitive to foreground-background masking. We observed that automatic masking techniques produce erroneous masks that significantly reduce the quality of the final result. For this reason, we manually correct such mis-takes prior to shape estimation.
Careil_Few-Shot_Semantic_Image_Synthesis_With_Class_Affinity_Transfer_CVPR_2023
Abstract Semantic image synthesis aims to generate photo re-alistic images given a semantic segmentation map. De-spite much recent progress, training them still requires large datasets of images annotated with per-pixel label maps that are extremely tedious to obtain. To alleviate the high an-notation cost, we propose a transfer method that leverages a model trained on a large source dataset to improve the learning ability on small target datasets via estimated pair-wise relations between source and target classes. The class affinity matrix is introduced as a first layer to the source model to make it compatible with the target label maps, and the source model is then further finetuned for the target do-main. To estimate the class affinities we consider different approaches to leverage prior knowledge: semantic segmen-tation on the source domain, textual label embeddings, and self-supervised vision features. We apply our approach to GAN-based and diffusion-based architectures for semantic synthesis. Our experiments show that the different ways to estimate class affinity can be effectively combined, and that our approach significantly improves over existing state-of-the-art transfer approaches for generative image models.
1. Introduction
Image synthesis with deep generative models has made remarkable progress in the last decade with the introduction of GANs [11], VAEs [17], and diffusion models [14]. Generated images can be conditioned on diverse types of inputs, such as class labels [3, 18], text [10, 25, 27], bounding boxes [35], or seed images [5]. In semantic image synthesis, the generation is conditioned on a semantic map that indicates the desired class label for every pixel. This task has been thoroughly explored with models such as SPADE [23] and OASIS [33], capable of generating high-quality and diverse images on complex datasets such as ADE20K [43] and COCO-Stuff [4]. However, these approaches heavily rely on the availability of large datasets with tens to hundreds of thousands of images annotated with pixel-precise label maps that are extremely costly to acquire. For the Cityscapes dataset [7], e.g., on average more than 1.5h per image was required for annotation and quality control.
Figure 1. Can we train a semantic image synthesis model from only 100 images? Our diffusion-based transfer results using a training set of 100 ADE20K images (2nd col.) compared to the same model trained from scratch on the full dataset (20k images, 3rd col.).
High annotation costs can be a barrier to the deployment of machine learning models in practice, and motivate the development of transfer learning strategies to alleviate the annotation requirements. These techniques allow training models on small target datasets via the use of models pre-trained on a source dataset with many available annotations. Transfer learning has been widely studied for classification tasks such as object recognition [2, 16, 26], but has received much less attention in the case of generation tasks. This task has been considered for unconditional and class-conditional generative models [19, 21, 22, 38, 39, 41], but to the best of our knowledge few-shot transfer learning has not yet been explored in the setting of semantic image synthesis.
We introduce CAT, a finetuning procedure that models Class Affinity to Transfer knowledge from pre-trained semantic image synthesis models. Our method takes advantage of prior knowledge to establish pairwise relations between source and target classes, and encodes them in a class affinity matrix. This solution considerably eases learning when few instances of the target classes are available at training time. The affinity matrix is prepended to the source model to make it compatible with the label space of the target domain. The model can then be further finetuned using the available data for the target domain. To illustrate the generality of the proposed approach, we integrate our transfer learning strategy in state-of-the-art adversarial and diffusion models. We explore different ways to extract similarities between source and target classes, using semantic segmentation models for the source data, self-supervised vision features, and text-based class embeddings. We conduct extensive experiments on the ADE20K, COCO-Stuff, and Cityscapes datasets, using target datasets with sizes ranging from as little as 25 up to 400 images.
Our experiments show that our approach significantly im-proves over state-of-the-art transfer methods. As illustrated in Figure 1, our approach allows realistic synthesis from no more than 100 target images, and achieves image quality close to standard training on the full target datasets. More-over, unlike previous transfer methods, our approach also enables non-trivial training-free transfer results, where we only prepend the class affinity matrix to the source model, without further finetuning it. In summary, our contributions are the following: • We introduce Class Affinity Transfer (CAT), the first transfer method for semantic image synthesis for small target datasets, and explore different methods to define class affinity, based on semantic segmentation, self-supervised features, and text-based similarity. • We integrate our approach in state-of-the-art adversar-ial and diffusion based semantic synthesis models. • We obtain excellent experimental transfer results, im-proving over existing state-of-the-art approaches.
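A minimal sketch of how a class-affinity matrix can be prepended to a pretrained generator is given below: the target one-hot label map is mapped into the source label space by a 1×1 convolution whose kernel is the affinity matrix. Whether CAT normalizes the matrix this way, keeps it learnable, or uses a different injection point are details we assume here for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffinityInputLayer(nn.Module):
    """Maps a one-hot target label map (B, C_tgt, H, W) to the source label space
    (B, C_src, H, W) with a class-affinity matrix, so a generator pretrained on the
    source classes can consume target segmentation maps."""
    def __init__(self, affinity: torch.Tensor, learnable: bool = True):
        super().__init__()
        # affinity[s, t] = similarity between source class s and target class t,
        # normalized so each target class distributes its mass over source classes
        A = affinity / affinity.sum(dim=0, keepdim=True).clamp_min(1e-8)
        self.weight = nn.Parameter(A.clone(), requires_grad=learnable)

    def forward(self, target_onehot: torch.Tensor) -> torch.Tensor:
        # a 1x1 convolution with the affinity matrix as its kernel
        w = self.weight[:, :, None, None]                  # (C_src, C_tgt, 1, 1)
        return F.conv2d(target_onehot, w)

# toy usage: 3 target classes mapped into a 5-class source label space
affinity = torch.rand(5, 3)
layer = AffinityInputLayer(affinity)
tgt = F.one_hot(torch.randint(0, 3, (1, 16, 16)), 3).permute(0, 3, 1, 2).float()
src_like = layer(tgt)   # (1, 5, 16, 16), fed to the frozen or finetuned source model
```

Because the mapping is linear and differentiable, it also supports the training-free variant mentioned above, where only the affinity matrix is prepended and the source model is left untouched.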
Huang_Implicit_Identity_Driven_Deepfake_Face_Swapping_Detection_CVPR_2023
Abstract In this paper, we consider face swapping detection from the perspective of face identity. Face swapping aims to replace the target face with the source face and generate a fake face that humans cannot distinguish between real and fake. We argue that the fake face contains an explicit identity and an implicit identity, which respectively correspond to the identity of the source face and the target face during face swapping. Note that the explicit identities of faces can be extracted by regular face recognizers. Particularly, the implicit identity of a real face is consistent with its explicit identity. Thus the difference between the explicit and implicit identity of a face facilitates face swapping detection. Following this idea, we propose a novel implicit identity driven framework for face swapping detection. Specifically, we design an explicit identity contrast (EIC) loss and an implicit identity exploration (IIE) loss, which supervise a CNN backbone to embed face images into the implicit identity space. Under the guidance of EIC, real samples are pulled closer to their explicit identities, while fake samples are pushed away from their explicit identities. Moreover, IIE is derived from the margin-based classification loss function, which encourages fake faces with known target identities to enjoy intra-class compactness and inter-class diversity. Extensive experiments and visualizations on several datasets demonstrate the generalization of our method against state-of-the-art counterparts.
1. Introduction The development of deep learning has promoted the continuous progress of face forgery technology [5, 16, 48]. Especially for face swapping, it can replace the target face with the source face to generate a fake face that is indistinguishable to the human eye. With this technology, attackers can easily forge high-quality videos of public celebrities and political figures to achieve illegal political or commercial purposes. To alleviate the abuse of face swapping, it is urgent to develop corresponding detection methods. Figure 1. Motivation of our approach. The target face is replaced by the source face through face swapping to generate a fake face. In appearance, the fake face looks like the source face instead of the target face. We resort to the general face recognition (FR) model CosFace [51] to obtain the explicit distance of these faces. Particularly, since the fake face is synthesized from the source face and the target face, we aim to explore an implicit face recognition (IFR) model that can mine the corresponding target face identity based on the fake face. With the similarity between explicit and implicit embeddings of the given face, we can significantly distinguish it as real or fake, which facilitates forgery detection. Early studies [1, 10, 37, 42] usually treat face swap detection as a binary image classification task. Specifically, face images are fed into an existing deep convolutional neural network (CNN) and then classified as real or fake. Such methods can learn the data distribution of the training set, resulting in considerable performance in intra-domain tests. However, the simple classification guidance cannot incorporate the connotation of face swapping, and thus the deep network lacks an understanding of forgery [50]. Recent works are devoted to exploring specific forgery patterns, such as noise analysis [27], local regions [7, 53] and frequency information [19, 41]. In this way, fake traces in fake faces can be better detected. Despite these benefits, they still revolve around certain manipulation methods and do not generalize well to unseen real-world scenarios. Therefore, in practice, many emerging forgery methods as well as unknown environmental factors bring serious performance degradation to existing face swapping detection methods. To address the above issues, we consider face swapping detection from the perspective of face identity. As shown in Figure 1, face swapping aims to replace the target face with the source face, generating a fake face that is indistinguishable even to human eyes. Here, we introduce two new concepts for fake faces: explicit identity and implicit identity. Specifically, the explicit identity represents what the fake face looks like, that is, the source face identity. Thus, the explicit distance between the fake face and the real face can be measured by existing general face recognition models [11, 22, 51]. For implicit identity, we believe that the fake face comes from both the source face and the target face.
Although it looks like the source face, it might contain more or less target face identity in-formation. We call this potential target face information the implicit identity of the fake face. It is worth noting that the implicit identities of the real face are consistent with its ex-plicit identities. Therefore, given a face image, we embed it into the explicit and implicit identity feature spaces, re-spectively. The distance between its explicit and implicit features is taken as the basis for judging real and fake. Pro-vided the distance is very close, the given image is real, otherwise it is a fake image. With the above considerations in mind, in this paper, we propose a novel implicit identity driven (IID) framework to detect face swapping. Our key motivation is to explore the implicit identity of the face, which guides deep networks to make more reasonable detection results. To this end, we first employ the generic face recognition model to ob-tain its explicit identity embedding. Subsequently, we pro-pose the explicit identity contrast (EIC) loss and the implicit identity exploration (IIE) loss to supervise the off-the-shelf CNN backbone, aiming to transform the face image into the implicit identity feature space. Specifically, under the guidance of EIC, real samples are pulled closer to their ex-plicit identities, while fake samples are pushed away from their explicit identities. In this way, the difference between the real and fake samples in the feature space is enlarged. It is worth noting that the real sample feature at this time denotes its implicit identity (close to the explicit identity). Moreover, to further explore the implicit identity of the fake sample, we label the identity of the fake face with its cor-responding target face identity. Particularly, for those fake faces whose target faces are unknown but come from the same video, we label their identities as extra and identical to ensure identity consistency. Inspired by general face recog-nition algorithms [11,51], our proposed IIE is derived from the margin-based classification loss function, which guidesfake faces with known target identities to have small intra-class distances and large inter-class distances. Besides, fake faces with unknown target identities originating from the same video have consistent identity embeddings. Thereby, implicit identities of fake faces can be mined comprehen-sively. Finally, we use the difference between the implicit identity and explicit identity of the face as the basis for dis-tinguishing real and fake. In brief, the main contributions are as follows: • From a completely new perspective, we propose the implicit identity driven framework for face swapping detection, which explores the implicit identity of fake faces. This enhances the deep network to distinguish fake faces with unknown manipulations. • We specially design explicit identity contrast (EIC) loss and the implicit identity exploration (IIE) loss. EIC aims to pull real samples closer to their explicit identities and push fake samples away from their ex-plicit identities. IIE is margin-based and guides fake faces with known target identities to have small intra-class distances and large inter-class distances. • Extensive experiments and visualizations demonstrate the superiority of our method over the state-of-the-art approaches.
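As a rough illustration of the explicit identity contrast idea described above, the sketch below pulls real samples toward their frozen face-recognition embeddings and pushes swapped faces away by a cosine margin; the exact loss form, margin value, and feature dimensions are assumptions rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F

def eic_loss(implicit_feat, explicit_feat, is_real, margin=0.3):
    """Toy explicit-identity-contrast loss (not the paper's exact form).

    implicit_feat: (B, D) embeddings from the trainable backbone
    explicit_feat: (B, D) embeddings from a frozen face recognizer
    is_real:       (B,) 1.0 for real faces, 0.0 for swapped faces
    Real samples are pulled toward their explicit identity; fake samples
    are pushed below a cosine-similarity margin.
    """
    sim = F.cosine_similarity(implicit_feat, explicit_feat, dim=-1)
    real_term = (1.0 - sim) * is_real                    # pull closer
    fake_term = F.relu(sim - margin) * (1.0 - is_real)   # push away
    return (real_term + fake_term).mean()

def predict_fake(implicit_feat, explicit_feat, threshold=0.5):
    """At test time, the explicit/implicit distance itself is the score."""
    sim = F.cosine_similarity(implicit_feat, explicit_feat, dim=-1)
    return sim < threshold   # low similarity -> likely face swap

# usage with random features standing in for backbone / CosFace outputs
feat, ref = torch.randn(8, 512), torch.randn(8, 512)
labels = torch.randint(0, 2, (8,)).float()
print(eic_loss(feat, ref, labels), predict_fake(feat, ref))
```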
Chen_DAA_A_Delta_Age_AdaIN_Operation_for_Age_Estimation_via_CVPR_2023
Abstract Naked-eye recognition of age is usually based on comparison with the ages of others. However, this idea is ignored by computer vision methods because it is difficult to obtain representative contrast images for each age. Inspired by transfer learning, we design the Delta Age AdaIN (DAA) operation to obtain the feature difference with respect to each age; it obtains the style map of each age through learned values representing the mean and standard deviation. We use the binary code of the age, a natural number, as the input of the transfer module to obtain continuous age feature information. The two groups of values learned by the binary code mapping correspond to the mean and standard deviation of the comparison ages. In summary, our method consists of four parts: the FaceEncoder, DAA operation, binary code mapping, and AgeDecoder modules. After obtaining the delta ages via the AgeDecoder, we take the average of all comparison ages and delta ages as the predicted age. Compared with state-of-the-art methods, our method achieves better performance with fewer parameters on multiple facial age datasets. Code is available at https://github.com/redcping/Delta_Age_AdaIN
1. Introduction Facial age estimation has been an active research topic in the computer version, for its important role in human-computer interaction [9,40], facial attribute analysis [2,29], market analysis [2], and so on. After the rise of deep learn-ing, many deep structures, such as VGG [41], ResNet [18], MobileNet [38], have been used as feature learning methods to solve the problem of facial age estimation [7, 45, 46]. In general, the methods for facial age estimation can be grouped into three categories: regression methods, classi-fication methods, and ranking methods [27, 33]. The age *Corresponding author. †These authors contributed equally to this work.regression methods consider labels as continuous numeri-cal values [15, 30]. Except for the universal regression, re-searchers also proposed hierarchical models [17] and the soft-margin mixture of regression [21] to handle the hetero-geneous data. Facial age classification approaches usually regard different ages or age groups as independent category labels [16], which can be divided into single-label learning and label distribution learning methods [7]. The single la-bel learning [16,37] treats each age independently, ignoring the fact that facial images of similar ages are very similar. Label distribution learning methods [12, 13, 20, 39] learn a label distribution that represents the relative importance of each label when describing an instance. This method is to compare the distance or similarity between the distribu-tion predicted by the model and the actual distribution [7]. Nevertheless, acquiring distributional labels for thousands of face images is a non-trivial task. The ranking approaches treat the age value as rank-ordered data and use multiple binary classifiers to determine the rank of the age [3–5]. Although the above methods study the problem of fa-cial age estimation from different emphases, they all be-long to the perspective of computer vision, which can be summarized as feature extraction and modeling to predict age. This is different from the mechanism of the naked hu-man eye recognizing age, which is obtained by comparing the current experience information with most humans. Be-cause it is difficult to get representative age images of dif-ferent races, computer tasks often ignore the idea of com-parative learning. The style image can also be a contrast in style transfer learning. [23, 24]. Inspired by this, we propose a Delta Age Adaptive Instance Normalization op-eration (DAA) to obtain representative results of each age through transfer learning. We want to transfer the current image into a style map of each comparative age. And then learn the feature difference between the current age and all the comparative ages. Finally, the predicted age is obtained based on the comparative age difference. Style images’ mean and standard deviation are the keys to style transfer, and the random value cannot reflect the process of aging. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 15836 Figure 1. The overall structure of our network. The Network contains two inputs: the facial age image and the 8-bit binary code of ages. The MLP is a perception with three FC layers. FaceEncoder is a feature extraction block. 
Continuous feature differences between each age from 0 to 99 and the age of the input image are obtained by DAA transfers with the binary code mapping module. In the AgeDecoder module, more robust age estimation is performed using the feature differences and their corresponding binary-coded age labels. We convert all ages into unique 8-bit binary codes and then learn the comparative ages' mean and standard deviation vectors through fully connected layers. The experimental results on four challenging age datasets demonstrate that our approach outperforms state-of-the-art methods. The main contributions of this paper are as follows:
• We design the Delta Age AdaIN (DAA) operation based on the idea of human-eye contrast learning.
• To ensure that the delta age after transfer reflects continuity, we convert the natural number of ages into binary code. Finally, 100 delta-age feature maps are generated for each content feature map.
• We design a network based on age transfer learning to realize robust age estimation, achieving excellent performance on four datasets.
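A minimal sketch of the binary-code-to-AdaIN pipeline described above follows; the MLP width, channel count, and feature-map size are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def age_to_binary(ages, bits=8):
    """Encode integer ages 0-99 as 8-bit binary vectors."""
    powers = 2 ** torch.arange(bits - 1, -1, -1)
    return ((ages.unsqueeze(-1) // powers) % 2).float()

class DeltaAgeAdaIN(nn.Module):
    """Schematic DAA: binary age codes predict per-age AdaIN statistics."""
    def __init__(self, channels=64, bits=8):
        super().__init__()
        # binary code mapping: per-channel (mean, std) "style" for each age
        self.mlp = nn.Sequential(nn.Linear(bits, 128), nn.ReLU(),
                                 nn.Linear(128, 128), nn.ReLU(),
                                 nn.Linear(128, 2 * channels))

    def forward(self, content, codes):
        # content: (B, C, H, W) face feature; codes: (A, bits) for A ages
        B, C, H, W = content.shape
        mu_c = content.mean(dim=(2, 3), keepdim=True)
        std_c = content.std(dim=(2, 3), keepdim=True) + 1e-5
        style = self.mlp(codes)                        # (A, 2C)
        mu_s = style[:, :C]
        std_s = F.softplus(style[:, C:])               # keep std positive
        normed = (content - mu_c) / std_c
        # AdaIN transfer to every comparison age -> (B, A, C, H, W)
        styled = normed.unsqueeze(1) * std_s.view(1, -1, C, 1, 1) \
                 + mu_s.view(1, -1, C, 1, 1)
        return styled - content.unsqueeze(1)           # delta-age features

daa = DeltaAgeAdaIN()
delta = daa(torch.randn(2, 64, 8, 8), age_to_binary(torch.arange(100)))
print(delta.shape)  # torch.Size([2, 100, 64, 8, 8])
```

A downstream decoder (the AgeDecoder in the paper) would map each of the 100 delta-feature maps to a scalar delta age before averaging the comparisons into the final prediction.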
Hu_Planning-Oriented_Autonomous_Driving_CVPR_2023
Abstract A modern autonomous driving system is characterized by modular tasks in sequential order, i.e., perception, prediction, and planning. In order to perform a wide diversity of tasks and achieve advanced-level intelligence, contemporary approaches either deploy standalone models for individual tasks, or design a multi-task paradigm with separate heads. However, they might suffer from accumulative errors or deficient task coordination. Instead, we argue that a favorable framework should be devised and optimized in pursuit of the ultimate goal, i.e., planning of the self-driving car. Oriented at this, we revisit the key components within perception and prediction, and prioritize the tasks such that all these tasks contribute to planning. We introduce Unified Autonomous Driving (UniAD), an up-to-date comprehensive framework that incorporates full-stack driving tasks in one network. It is exquisitely devised to leverage the advantages of each module, and provide complementary feature abstractions for agent interaction from a global perspective. Tasks are communicated with unified query interfaces to facilitate each other toward planning. We instantiate UniAD on the challenging nuScenes benchmark. With extensive ablations, the effectiveness of this philosophy is proven by substantially outperforming previous state-of-the-art results in all aspects. Code and models are public.
1. Introduction With the successful development of deep learning, au-tonomous driving algorithms are assembled with a series of tasks1, including detection, tracking, mapping in percep-tion; and motion and occupancy forecast in prediction. As depicted in Fig. 1(a), most industry solutions deploy stan-1In the following context, we interchangeably use task, module, com-ponent, unit and node to indicate a certain task ( e.g., detection). Figure 1. Comparison on the various designs of autonomous driving framework. (a)Most industrial solutions deploy separate models for different tasks. (b)The multi-task learning scheme shares a backbone with divided task heads. (c)The end-to-end paradigm unites modules in perception and prediction. Previous attempts either adopt a direct optimization on planning in (c.1) or devise the system with partial components in (c.2). Instead, we argue in (c.3) that a desirable system should be planning-oriented as well as properly organize preceding tasks to facilitate planning. dalone models for each task independently [38, 41], as long as the resource bandwidth of the onboard chip allows. Al-though such a design simplifies the R&D difficulty across teams, it bares the risk of information loss across modules, error accumulation and feature misalignment due to the iso-lation of optimization targets [32, 37, 47]. A more elegant design is to incorporate a wide span of tasks into a multi-task learning (MTL) paradigm, by plug-ging several task-specific heads into a shared feature extrac-tor as shown in Fig. 1(b). This is a popular practice in many domains, including general vision [46, 51, 61], autonomous driving2[8, 34, 57, 59], such as Transfuser [13], BEV-2In this paper, we refer to MTL in autonomous driving as tasks be-yond perception. There is plenty of work on MTL within perception, e.g., detection, depth, flow, etc. This kind of literature is out of scope. This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 17853 Design ApproachPerception PredictionPlanDet. Track Map Motion Occ. (b)NMP [57] ✓ ✓ ✓ NEAT [12] ✓ ✓ BEVerse [59] ✓ ✓ ✓ (c.1) [7, 9, 45, 54] ✓ (c.2)PnPNet†[32] ✓ ✓ ✓ ViP3D†[18] ✓ ✓ ✓ P3 [47] ✓ ✓ MP3 [6] ✓ ✓ ✓ ST-P3 [23] ✓ ✓ ✓ LA V [8] ✓ ✓ ✓ ✓ (c.3) UniAD (ours) ✓ ✓ ✓ ✓ ✓ ✓ Table 1. Tasks comparison and taxonomy. “Design” column is classified as in Fig. 1. “Det.” denotes 3D object detection, “Map” stands for online mapping, and “Occ.” is occupancy map predic-tion. †: these works are not proposed directly for planning, yet they still share the spirit of joint perception and prediction. UniAD conducts five essential driving tasks to facilitate planning. erse [59], and industrialized products, e.g., Mobileye [38], Tesla [49], Nvidia [41], etc. In MTL, the co-training strat-egy across tasks could leverage feature abstraction; it could effortlessly extend to additional tasks, and save computa-tion cost for onboard chips. However, such a scheme may cause undesirable “negative transfer” [16, 36]. By contrast, the emergence of end-to-end autonomous driving [6, 8, 12, 23, 54] unites all nodes from perception, prediction and planning as a whole . The choice and priority of preceding tasks should be determined in favor of plan-ning. 
The system should be planning-oriented, exquisitely designed with certain components involved, such that there are few accumulative error as in the standalone option or negative transfer as in the MTL scheme. Table 1 describes the task taxonomy of different framework designs. Following the end-to-end paradigm, one “tabula-rasa” practice is to directly predict the planned trajectory, with-out any explicit supervision of perception and prediction as shown in Fig. 1(c.1). Pioneering works [7, 9, 14, 15, 45, 53, 54, 60] verified this vanilla design in the closed-loop simu-lation [17]. While such a direction deserves further explo-ration, it is inadequate in safety guarantee and interpretabil-ity, especially for highly dynamic urban scenarios. In this paper, we lean toward another perspective and ask the fol-lowing question: Toward a reliable and planning-oriented autonomous driving system, how to design the pipeline in favor of planning? which preceding tasks are requisite? An intuitive resolution would be to perceive surrounding objects, predict future behaviors and plan a safe maneuver explicitly, as illustrated in Fig. 1(c.2). Contemporary ap-proaches [6,18,23,32,47] provide good insights and achieve impressive performance. However, we argue that the devil lies in the details; previous works more or less fail to con-sider certain components (see block (c.2) in Table 1), being reminiscent of the planning-oriented spirit. We elaborateon the detailed definition and terminology, the necessity of these modules in the Supplementary. To this end, we introduce UniAD , a Unified Autonomous Driving algorithm framework to leverage five essential tasks toward a safe and robust system as depicted in Fig. 1(c.3) and Table 1(c.3). UniAD is designed in a planning-oriented spirit. We argue that this is nota simple stack of tasks with mere engineering effort. A key component is the query-based design to connect all nodes. Compared to the classic bounding box representation, queries benefit from a larger receptive field to soften the compounding error from up-stream predictions. Moreover, queries are flexible to model and encode a variety of interactions, e.g., relations among multiple agents. To the best of our knowledge, UniAD is the first work to comprehensively investigate the joint co-operation of such a variety of tasks including perception, prediction and planning in the field of autonomous driving. The contributions are summarized as follows. (a)we embrace a new outlook of autonomous driving framework following a planning-oriented philosophy, and demonstrate the necessity of effective task coordination, rather than stan-dalone design or simple multi-task learning. (b)we present UniAD, a comprehensive end-to-end system that leverages a wide span of tasks. The key component to hit the ground running is the query design as interfaces connecting all nodes. As such, UniAD enjoys flexible intermediate rep-resentations and exchanging multi-task knowledge toward planning. (c)we instantiate UniAD on the challenging benchmark for realistic scenarios. Through extensive abla-tions, we verify the superiority of our method over previous state-of-the-arts in all aspects. We hope this work could shed some light on the target-driven design for the autonomous driving system, providing a starting point for coordinating various driving tasks.
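The following toy snippet only illustrates the idea of queries acting as interfaces between driving tasks, with each stage cross-attending to the queries produced upstream; the module sizes, number of queries, and two-stage layout are invented for the illustration and do not reflect UniAD's actual architecture.

```python
import torch
import torch.nn as nn

class QueryInterfaceStage(nn.Module):
    """Toy stage that refines its own queries by attending to upstream ones."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, own_queries, upstream_queries):
        attended, _ = self.cross_attn(own_queries, upstream_queries,
                                      upstream_queries)
        return own_queries + self.ffn(attended)

# toy pipeline: perception -> motion forecasting -> planning
dim = 256
track_queries = torch.randn(1, 50, dim)   # agent queries from detection/tracking
motion_stage, plan_stage = QueryInterfaceStage(dim), QueryInterfaceStage(dim)

motion_queries = motion_stage(torch.randn(1, 50, dim), track_queries)
plan_query = plan_stage(torch.randn(1, 1, dim), motion_queries)
print(plan_query.shape)  # torch.Size([1, 1, 256])
```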
Hsu_ReVISE_Self-Supervised_Speech_Resynthesis_With_Visual_Input_for_Universal_and_CVPR_2023
Abstract Prior works on improving speech quality with visual input typically study each type of auditory distortion separately (e.g., separation, inpainting, video-to-speech) and present tailored algorithms. This paper proposes to unify these subjects and study Generalized Speech Regeneration, where the goal is not to reconstruct the exact reference clean signal, but to focus on improving certain aspects of speech while not necessarily preserving the rest, such as voice. In particular, this paper concerns intelligibility, quality, and video synchronization. We cast the problem as audio-visual speech resynthesis, which is composed of two steps: pseudo audio-visual speech recognition (P-AVSR) and pseudo text-to-speech synthesis (P-TTS). P-AVSR and P-TTS are connected by discrete units derived from a self-supervised speech model. Moreover, we utilize a self-supervised audio-visual speech model to initialize P-AVSR. The proposed model is coined ReVISE. ReVISE is the first high-quality model for in-the-wild video-to-speech synthesis and achieves superior performance on all LRS3 audio-visual regeneration tasks with a single model. To demonstrate its applicability in the real world, ReVISE is also evaluated on EasyCom, an audio-visual benchmark collected under challenging acoustic conditions with only
1.6 hours of training data. Similarly, ReVISE greatly sup-presses noise and improves quality. Project page: https: //wnhsu.github.io/ReVISE/ . 1. Introduction Unlike anechoic studio recordings, speech in-the-wild is rarely clean: outdoor recordings are corrupted with all sorts of natural and non-natural sounds like wind and traf-fic noise [6]. Speech recorded indoor often contains rever-beration, mechanical noise, and overlapping speech from non-target speakers [54]. On top of those, recording de-vices and network may also introduce other types of dis-(a) Speech inpainting ∅Masked speechOverlapping speechNoisy speechNo speech(b) Video-to-speech(c) Speech separation(d) Speech denoising Figure 1. Illustration of A VSE with various distortion. tortion, such as amplitude clipping, band-pass filtering, and package loss [1]. Distortion makes it hard for both human and machines to comprehend speech [10, 31]. Improving the quality and the intelligibility of corrupted speech is es-sential for assistive listening and robust speech processing. Generating clean speech signal based on its corrupted ver-sion is herein referred to as speech enhancement. In speech enhancement, one line of research uses vi-sual speech to provide auxiliary information [14, 17, 19, 57], which is known as audio-visual speech enhancement. Audio-visual speech (e.g., talking-head videos) can be seen as a multimodal view of the speech. Since visual modality is immune to acoustic noise, combining both views enables more robust estimation of shared generating factors such as textual content. Meanwhile, despite sharing the same goal of recovering corrupted speech, prior work often treats en-hancement from each type of distortion as a separate prob-lem: speech denoising and dereverberation addresses addi-tive and convolutive non-speech noises [17], speech separa-tion focuses on speech noises that exhibit similar character-istics to the target speech [19], speech inpainting aims to re-cover dropped audio frames [38], and video-to-speech syn-thesis is the extreme case of inpainting where all the frames are dropped [15, 36]. As a result, algorithms designed for one type of distortion may not be effective for another. In this paper, we advocate a more holistic approach to audio-visual speech enhancement, where an algorithm 1 This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 18795 should be evaluated on all types of distortion, and a sin-gle model should also be effective on all types of corrupted data, shifting from building distortion-specific models to a universal model. Following [48], we coin the concept uni-versal speech enhancement . In turn, we also argue that exact reconstruction of the reference clean speech is not an appropriate objective especially when the level of distortion is high. To address the issue, we propose to relax the objec-tive and solve the generalized speech regeneration (GSR) problem: instead of focusing on exact reconstruction and measuring metrics like signal-to-noise ratios (SNRs), the goal of GSR is to enhance a predefined set of attributes, such as content intelligibility that can be measured by word error rates (WERs). In contrast, the model does not need to preserve other attributes. This paper focuses on recovering intelligibility, syn-chronicity and quality. 
The task of improving those could be broken down into two steps: predicting the frame-level content and synthesizing high-quality audio from it. Inspired by the resemblance to audio-visual speech recognition and speech synthesis, we propose ReVISE, short for Resynthesis with Visual Input for Speech REgeneration. ReVISE is composed of a pseudo audio-visual speech recognition model (P-AVSR) and a pseudo text-to-speech synthesis model (P-TTS); instead of using text as the output/input of the two models, self-supervised speech units that encode speech content [23, 44] are adopted to bridge them, making the system free of text supervision. Furthermore, observing the gain on speech recognition brought by self-supervised learning, we also initialize the P-AVSR with a self-supervised audio-visual speech model, AV-HuBERT [49], which significantly improves the performance, especially on low-resource setups. To demonstrate the universality and compare with the literature, we construct four types of corrupted speech using Lip-reading Sentences 3 (LRS3) [2] and AudioSet [20], including audio-visual denoising, separation, inpainting, and video-to-speech. Results suggest that ReVISE is the first model capable of high-quality in-the-wild video-to-speech synthesis, while prior models fail to produce intelligible content [21] or generate low-quality audio for in-the-wild videos [36]. Compared to a strong masking-based method [19] on denoising and separation, ReVISE achieves comparable performance on mid-/high-SNR conditions (0-20dB), and is significantly stronger on lower-SNR conditions, reducing WERs by up to 37.5% absolute and improving MOS by up to 1.09. Finally, we also show that a single ReVISE model can tackle all four types of distortion with similar performance to distortion-specific models. To further show the data efficiency and effectiveness of ReVISE on real data, we evaluate it on EasyCom [13], an audio-visual speech dataset addressing the cocktail party problem which contains clean close-talking recordings and noisy distant recordings with background noise, loud interfering speech, and room reverberation. Results show that ReVISE still shines in this challenging setup, reducing WER by up to 32% while other methods fail.
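A schematic of the two-stage P-AVSR to P-TTS pipeline is sketched below with toy modules; the fusion scheme, feature dimensions, unit vocabulary size, and waveform head are placeholders standing in for an AV-HuBERT-style encoder and a unit vocoder, not the actual ReVISE components.

```python
import torch
import torch.nn as nn

class PseudoAVSR(nn.Module):
    """Toy stand-in for P-AVSR: fuses audio and video features and predicts
    a self-supervised speech unit id (e.g. a cluster index) per frame."""
    def __init__(self, a_dim=80, v_dim=512, hidden=256, n_units=200):
        super().__init__()
        self.fuse = nn.Linear(a_dim + v_dim, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.unit_head = nn.Linear(hidden, n_units)

    def forward(self, audio_feat, video_feat):
        x = torch.relu(self.fuse(torch.cat([audio_feat, video_feat], dim=-1)))
        x, _ = self.encoder(x)
        return self.unit_head(x)            # (B, T, n_units) unit logits

class PseudoTTS(nn.Module):
    """Toy unit-to-waveform decoder standing in for the P-TTS vocoder."""
    def __init__(self, n_units=200, hidden=256, hop=160):
        super().__init__()
        self.embed = nn.Embedding(n_units, hidden)
        self.dec = nn.GRU(hidden, hidden, batch_first=True)
        self.to_wave = nn.Linear(hidden, hop)   # hop samples per unit frame

    def forward(self, units):
        x, _ = self.dec(self.embed(units))
        return self.to_wave(x).flatten(1)       # (B, T * hop) waveform

avsr, tts = PseudoAVSR(), PseudoTTS()
units = avsr(torch.randn(1, 50, 80), torch.randn(1, 50, 512)).argmax(-1)
wave = tts(units)
print(units.shape, wave.shape)  # torch.Size([1, 50]) torch.Size([1, 8000])
```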
Guo_From_Images_to_Textual_Prompts_Zero-Shot_Visual_Question_Answering_With_CVPR_2023
Abstract Large language models (LLMs) have demonstrated excellent zero-shot generalization to new language tasks. However, effective utilization of LLMs for zero-shot visual question-answering (VQA) remains challenging, primarily due to the modality disconnect and task disconnect between the LLM and VQA tasks. End-to-end training on multimodal data may bridge the disconnects, but is inflexible and computationally expensive. To address this issue, we propose Img2LLM, a plug-and-play module that provides LLM prompts to enable LLMs to perform zero-shot VQA tasks without end-to-end training. We develop LLM-agnostic models that describe image content as exemplar question-answer pairs, which prove to be effective LLM prompts. Img2LLM offers the following benefits: 1) It achieves comparable or better performance than methods relying on end-to-end training. For example, we outperform Flamingo [3] by 5.6% on VQAv2. On the challenging A-OKVQA dataset, our method outperforms few-shot methods by as much as 20%. 2) It flexibly interfaces with a wide range of LLMs to perform VQA. 3) It eliminates the need to specialize LLMs via end-to-end finetuning and to serve highly specialized LLMs to end users, thereby reducing cost. Code is available via the LAVIS [28] framework at https://github.com/salesforce/LAVIS/tree/main/projects/img2llm-vqa .
1. Introduction Visual question answering (VQA) [5] is a prominent vision-language task that finds a broad range of real-world applications, such as assisting blind individuals in under-standing their environments. A diverse set of VQA datasets have been proposed, some focusing on image recognition *Work done while Jiaxian Guo was an intern at Salesforce Research.[5, 17] and others on logical reasoning [39]. However, hu-man annotations are expensive to obtain and may introduce a variety of human biases [6, 10, 63], making the VQA sys-tem brittle towards new answer styles and question types [1, 21]. This has led researchers to zero-shot VQA meth-ods [6, 10, 21] that do not require ground-truth question-answer annotations, thereby facilitating more generalizable VQA systems. Recently, large language models (LLMs) (e.g., [8, 66]) have demonstrated excellent capabilities to perform tasks with zero in-domain data, conduct logical reasoning, and apply commonsense knowledge in NLP tasks [26, 55, 57]. As a result, recent approaches [3, 52, 61] have resorted to leverage LLMs in zero-shot VQA. However, applying LLMs to VQA tasks is less than straightforward, due to (1) the modality disconnect between vision and language and (2) the task disconnect between language modeling and question answering. A common technique is to finetune a vision encoder jointly with the LLM [3, 20, 52] to align the vision and language represen-tation spaces, but this can incur prohibitive computational and data cost. For example, Flamingo [3] finetunes on bil-lions of image-text pairs with thousands of TPUs. Further, the finetuning specializes and introduces strong interdepen-dence between the vision encoder and the LLM. If we need to upgrade the LLM as new versions emerge, the entire model needs to undergo expensive re-training. In contrast to the end-to-end integration of LLM into a VQA system, this paper proposes a modular VQA sys-tem built on top of frozen off-the-shelf LLMs. This brings two benefits. First, it can reduce the deployment cost and simplify the deployment. Second, upgrading the LLM is straightforward. However, it is challenging to bridge the modality disconnect and task disconnect without end-to-end training. PICa [61] converts images into captions, and pro-vides exemplar QA pairs from training data as prompt to the LLM. However, doing so assumes the existence of an-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 10867 notated training data and the performance is sensitive to the selection of few-shot exemplars. We propose Img2LLM , a plug-and-play module that en-ables off-the-shelf LLMs to perform zero-shot VQA. The central insight of Img2LLM is that we can utilize a vision-language model ( e.g.BLIP [30]) and a question-generation model to translate the image content into synthetic question-answer (QA) pairs, which are fed to the LLM as part of the prompt. These exemplar QA pairs tackle the modal-ity disconnect by describing the image content verbally, and tackle the task disconnect by demonstrating the QA task to the LLM. Notably, the exemplar QA pairs are con-structed entirely based on the test image and question, ob-viating the need for similar few-shot examples as required by PICa [61], which are not always available in practical zero-shot scenarios. 
When applied to the open-source OPT language models [66], Img2LLM achieves comparable or superior zero-shot VQA performance to methods that perform costly end-to-end training. With this paper, we make the following contributions.
• We propose Img2LLM, a plug-and-play module that converts an image into synthetic question-answer pairs based solely on the current test image and question. Img2LLM bridges the modality disconnect between language and vision as well as the task disconnect between language modeling and visual question-answering.
• Img2LLM enables off-the-shelf LLMs to perform zero-shot VQA without costly end-to-end training or specialized textual QA networks [40], thereby allowing low-cost and flexible model deployment and painless LLM upgrades (Table 3).
• Our experimental results show that the OPT models equipped with Img2LLM achieve zero-shot VQA performance that is competitive or superior to the end-to-end trained models. For example, we outperform Flamingo [3] by 5.6% on VQAv2. We even outperform many few-shot VQA methods.
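As a concrete reading of how image content and the QA task are both conveyed through text, the snippet below assembles a prompt from captions and synthetic QA pairs; the template wording is an assumption and may differ from the prompt format actually used by Img2LLM.

```python
def build_img2llm_prompt(captions, synthetic_qa, question):
    """Assemble a zero-shot VQA prompt in the spirit of Img2LLM.

    captions:     list of strings describing the test image
    synthetic_qa: list of (question, answer) pairs generated from the same
                  image by a captioner and a question-generation model
    question:     the actual VQA question
    """
    lines = ["Contexts: " + " ".join(captions), ""]
    for q, a in synthetic_qa:
        lines.append(f"Question: {q}")
        lines.append(f"Answer: {a}")
    lines.append(f"Question: {question}")
    lines.append("Answer:")
    return "\n".join(lines)

prompt = build_img2llm_prompt(
    captions=["a brown dog catching a red frisbee on a lawn"],
    synthetic_qa=[("what color is the frisbee?", "red"),
                  ("what animal is shown?", "dog")],
    question="what is the dog playing with?",
)
print(prompt)
# The prompt is then fed to a frozen LLM (e.g. an OPT model), and the
# greedy continuation after the final "Answer:" is taken as the prediction.
```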
He_FastInst_A_Simple_Query-Based_Model_for_Real-Time_Instance_Segmentation_CVPR_2023
Abstract Recent attention in instance segmentation has focused on query-based models. Despite being non-maximum suppression (NMS)-free and end-to-end, the superiority of these models on high-accuracy real-time benchmarks has not been well demonstrated. In this paper, we show the strong potential of query-based models for efficient instance segmentation algorithm design. We present FastInst, a simple, effective query-based framework for real-time instance segmentation. FastInst can execute at a real-time speed (i.e., 32.5 FPS) while yielding an AP of more than 40 (i.e., 40.5 AP) on COCO test-dev without bells and whistles. Specifically, FastInst follows the meta-architecture of the recently introduced Mask2Former. Its key designs include instance activation-guided queries, a dual-path update strategy, and ground truth mask-guided learning, which enable us to use lighter pixel decoders and fewer Transformer decoder layers while achieving better performance. The experiments show that FastInst outperforms most state-of-the-art real-time counterparts, including strong fully convolutional baselines, in both speed and accuracy. Code can be found at https://github.com/junjiehe96/FastInst .
1. Introduction Instance segmentation aims to segment all objects of in-terest in an image. The mainstream methods like Mask R-CNN [5, 15, 19, 28] follow the design of detection-then-segmentation. Despite being simple and intuitive, those methods generate a lot of duplicate region proposals that introduce redundant computations. To improve efficiency, many single-stage methods [2, 8, 23, 42] built upon Fully Convolutional Networks (FCNs) [29] appear. They segment objects end-to-end without region proposals. The inference speed of such methods is appealing, especially in real-time scenes. However, due to the dense predictions, the classical single-stage methods still rely on manually-designed post-processing steps like non-maximum suppression (NMS). MEInst-512CenterMask-600CondInst-800SOLOv2-448 PolarMask-600YOLACT-550OrienMask-544SparseInst-608Mask2Former-640Mask RCNN-800FastInst-D1-576FastInst-D2-640FastInst-D3-640FastInst-D6-640 25303540 102030405060COCO Mask APV100 batch 1 inference time (ms)Real-TimeFigure 1. Speed-performance trade-off on COCO test-dev . All models employ ResNet-50 [16] as the backbone except Orien-Mask with DarkNet-53 [33]. Our FastInst surpasses most state-of-the-art real-time instance segmentation algorithms in both speed and accuracy. To keep the speed and accuracy in a similar order, Mask2Former here takes the pyramid pooling module-based [48] FPN as the pixel decoder, the same as FastInst and SparseInst. Recently, with the success of DETR [4] in object detec-tion, query-based single-stage instance segmentation meth-ods [9, 10, 25, 43] have emerged. Instead of convolution, they exploit the versatile and powerful attention mecha-nism [39] combined with a sequence of learnable queries to infer the object class and segmentation mask. For exam-ple, Mask2Former [9] simplifies the workflow of instance segmentation by adding a pixel decoder and a masked-attention Transformer decoder on top of a backbone. Un-like previous methods [15, 42], Mask2Former does not re-quire additional handcrafted components, such as training target assignment and NMS post-processing. While being simple, Mask2Former has its own issues: (1) it requires a large number of decoder layers to decode the object queries since its queries are learned static and need a lengthy pro-cess to refine; (2) It relies upon a heavy pixel decoder, e.g., multi-scale deformable attention Transformer (MSDefor-mAttn) [50], because its object segmentation mask straight-forwardly depends on the output of the pixel decoder, which This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 23663 is used as a per-pixel embedding feature for distinguishing different objects; (3) masked attention restricts the recep-tive field of each query, which may cause the Transformer decoder to fall into a suboptimal query update process. Al-though Mask2Former achieves outstanding performance, its superiority on fast, efficient instance segmentation has not been well demonstrated, which yet is critical for many real-world applications such as self-driving cars and robotics. In fact, due to the lack of prior knowledge and the high com-putational complexity of the attention mechanism, the ef-ficiency of query-based models is generally unsatisfactory [9, 18, 25]. 
The efficient real-time instance segmentation benchmarks are still dominated by classical convolution-based models [11, 42]. In this paper, we fill this gap by proposing FastInst, a concise and effective query-based framework for real-time instance segmentation. We demonstrate that the query-based model can achieve outstanding performance on the instance segmentation task while maintaining a fast speed, showing great potential in efficient instance segmentation algorithm design. As an example, our designed fastest query-based model with ResNet-50 [16] backbone achieves 35.6 AP at 53.8 FPS (frames-per-second) on the COCO [27] test-dev , evaluated on a single V100 GPU (see Fig-ure 1); moreover, our best trade-off model can execute at a real-time speed, i.e., 32.5 FPS, while yielding an AP of more than 40, i.e., 40.5 AP, which to the best of our knowl-edge, has not yet been achieved in previous methods. Specifically, our model follows the meta-architecture of Mask2Former [9]. To achieve efficient real-time in-stance segmentation, we have proposed three key tech-niques. First, we use instance activation-guided queries, which dynamically pick the pixel embeddings with high semantics from the underlying feature map as the initial queries for the Transformer decoder. Compared with static zero [4] or learnable [9, 10] queries, these picked queries contain rich embedding information about potential objects and reduce the iteration update burden of the Transformer decoder. Second, we adopt a dual-path architecture in the Transformer decoder where the query features and the pixel features are updated alternately. Such a design enhances the representational ability of pixel features and saves us from the heavy pixel decoder design. Moreover, it makes a direct communication between query features and pixel features, which speeds up the iterative update convergence and effectively reduces the dependence on the number of decoder layers. Third, to prevent the masked attention from falling into a suboptimal query update process, we intro-duce ground truth mask-guided learning. We replace the mask used in the standard masked attention with the last-layer bipartite matched ground truth mask to forward the Transformer decoder again and use a fixed matching assign-ment to supervise the outputs. This guidance allows eachquery to see the whole region of its target predicted object during training and helps masked attention attend within a more appropriate foreground region. We evaluate FastInst on the challenging MS COCO dataset [27]. As shown in Figure 1, FastInst obtains strong performance on the COCO benchmark while staying fast, surpassing most of the previous state-of-the-art methods. We hope FastInst can serve as a new baseline for real-time instance segmentation and advance the development of query-based instance segmentation models.
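A rough sketch of instance activation-guided query selection is shown below: per-pixel class logits score each location, and the top-scoring pixel embeddings seed the decoder queries. The auxiliary head, class count, and plain top-k selection are simplified assumptions, not FastInst's exact procedure.

```python
import torch
import torch.nn as nn

def instance_activation_guided_queries(pixel_feat, cls_head, num_queries=100):
    """Pick initial queries from high-semantics pixels (toy version).

    pixel_feat: (B, C, H, W) feature map from the pixel decoder
    cls_head:   1x1 conv producing per-pixel class logits (+1 background)
    Returns (B, num_queries, C) pixel embeddings used to initialize the
    Transformer decoder queries.
    """
    B, C, H, W = pixel_feat.shape
    logits = cls_head(pixel_feat)                          # (B, K+1, H, W)
    fg_prob = logits.softmax(1)[:, :-1].max(1).values      # best foreground prob
    topk = fg_prob.flatten(1).topk(num_queries, dim=1).indices   # (B, Q)
    flat = pixel_feat.flatten(2).transpose(1, 2)           # (B, H*W, C)
    return torch.gather(flat, 1, topk.unsqueeze(-1).expand(-1, -1, C))

feat = torch.randn(2, 256, 64, 64)
head = nn.Conv2d(256, 81, kernel_size=1)    # e.g. 80 classes + background
queries = instance_activation_guided_queries(feat, head)
print(queries.shape)  # torch.Size([2, 100, 256])
```

Because the selected embeddings already carry information about likely objects, the decoder can start from meaningful queries instead of static learned ones, which is the motivation the section gives for needing fewer decoder layers.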
Cao_Observation-Centric_SORT_Rethinking_SORT_for_Robust_Multi-Object_Tracking_CVPR_2023
Abstract Kalman filter (KF) based methods for multi-object tracking (MOT) make the assumption that objects move linearly. While this assumption is acceptable for very short periods of occlusion, linear estimates of motion for prolonged time can be highly inaccurate. Moreover, when there is no measurement available to update Kalman filter parameters, the standard convention is to trust the a priori state estimations for the a posteriori update. This leads to the accumulation of errors during a period of occlusion. The error causes significant motion direction variance in practice. In this work, we show that a basic Kalman filter can still obtain state-of-the-art tracking performance if proper care is taken to fix the noise accumulated during occlusion. Instead of relying only on the linear state estimate (i.e., the estimation-centric approach), we use object observations (i.e., the measurements by the object detector) to compute a virtual trajectory over the occlusion period to fix the error accumulation of filter parameters. This allows more time steps to correct errors accumulated during occlusion. We name our method Observation-Centric SORT (OC-SORT). It remains Simple, Online, and Real-Time but improves robustness during occlusion and non-linear motion. Given off-the-shelf detections as input, OC-SORT runs at 700+ FPS on a single CPU. It achieves state-of-the-art results on multiple datasets, including MOT17, MOT20, KITTI, head tracking, and especially DanceTrack where the object motion is highly non-linear. The code and models are available at https://github.com/noahcao/OC_SORT .
1. Introduction We aim to develop a motion model-based multi-object tracking (MOT) method that is robust to occlusion and non-linear motion. Most existing motion model-based algo-rithms assume that the tracking targets have a constant ve-locity within a time interval, which is called the linear mo-tion assumption. This assumption breaks in many practical scenarios, but it still works because when the time interval is small enough, the object’s motion can be reasonably ap-(a)SORT (b)The proposed OC-SORT Figure 1. Samples from the results on DanceTrack [54]. SORT and OC-SORT use the same detection results. On the third frame, SORT encounters an ID switch for the backflip target while OC-SORT tracks it consistently. proximated as linear. In this work, we are motivated by the fact that most of the errors from motion model-based track-ing methods occur when occlusion and non-linear motion happen together. To mitigate the adverse effects caused, we first rethink current motion models and recognize some lim-itations. Then, we propose addressing them for more robust tracking performance, especially in occlusion. As the main branch of motion model-based tracking, filtering-based methods assume a transition function to pre-dict the state of objects on future time steps, which are called state “estimations”. Besides estimations, they lever-age an observation model, such as an object detector, to derive the state measurements of target objects, also called “observations”. Observations usually serve as auxiliary in-formation to help update the posteriori parameters of the filter. The trajectories are still extended by the state estima-tions. Among this line of work, the most widely used one is SORT [3], which uses a Kalman filter (KF) to estimate object states and a linear motion function as the transition This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 9686 function between time steps. However, SORT shows insuf-ficient tracking robustness when the object motion is non-linear, and no observations are available when updating the filter posteriori parameters. In this work, we recognize three limitations of SORT. First, although the high frame rate is the key to approximat-ing the object motion as linear, it also amplifies the model’s sensitivity to the noise of state estimations. Specifically, be-tween consecutive frames of a high frame-rate video, we demonstrate that the noise of displacement of the object can be of the same magnitude as the actual object displacement, leading to the estimated object velocity by KF suffering from a significant variance. Also, the noise in the veloc-ity estimate will accumulate into the position estimate by the transition process. Second, the noise of state estima-tions by KF is accumulated along the time when there is no observation available in the update stage of KF. We show that the error accumulates very fast with respect to the time of the target object’s being untracked. The noise’s influence on the velocity direction often makes the track lost again even after re-association. Last, given the development of modern detectors, the object state by detections usually has lower variance than the state estimations propagated along time steps by a fixed transition function in filters. However, SORT is designed to prolong the object trajectories by state estimations instead of observations. 
To relieve the negative effect of these limitations, we pro-pose two main innovations in this work. First, we design a module to use object state observations to reduce the accu-mulated error during the track’s being lost in a backcheck fashion. To be precise, besides the traditional stages of pre-dictandupdate , we add a stage of re-update to correct the accumulated error. The re-update is triggered when a track is re-activated by associating to an observation after a period of being untracked. The re-update uses virtual observations on the historical time steps to prevent error accumulation. The virtual observations come from a trajectory generated using the last-seen observation before untracked and the lat-est observation re-activating this track as anchors. We name itObservation-centric Re-Update (ORU) . Besides ORU, the assumption of linear motion provides the consistency of the object motion direction. But this cue is hard to be used in SORT’s association because of the heavy noise in direction estimation. But we propose an observation-centric manner to incorporate the direction consistency of tracks in the cost matrix for the association. We name it Observation-Centric Momentum (OCM) . We also provide analytical justification for the noise of veloc-ity direction estimation in practice. The proposed method, named as Observation-Centric SORT orOC-SORT in short, remains simple, online, real-time and significantly improves robustness over occlusion and non-linear motion. Our contributions are summarizedas the following: 1. We recognize, analytically and empirically, three lim-itations of SORT, i.e. sensitivity to the noise of state estimations, error accumulation over time, and being estimation-centric;
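The observation-centric re-update can be pictured with the small sketch below, which interpolates a virtual observation for every untracked step between the two anchor detections and replays them through a filter's predict/update cycle; the state layout and the `kf` interface are placeholders for illustration, not a specific library API or the paper's exact code.

```python
import numpy as np

def virtual_observations(z_last, z_new, t_last, t_new):
    """Virtual trajectory for ORU, sketched as a linear path.

    z_last: observation (e.g. [x, y, s, r]) at the last tracked step t_last
    z_new:  observation that re-activates the track at step t_new
    Returns one virtual observation per untracked step, interpolated
    between the two anchor observations.
    """
    steps = np.arange(t_last + 1, t_new)
    alphas = (steps - t_last) / (t_new - t_last)
    return [z_last + a * (z_new - z_last) for a in alphas]

def re_update(kf, z_last, z_new, t_last, t_new):
    """Roll an (assumed) Kalman filter forward again over the occlusion gap,
    updating with virtual observations instead of trusting pure predictions.
    `kf` is any object exposing predict() and update(z); it is a stand-in
    for a SORT-style track wrapper."""
    for z in virtual_observations(z_last, z_new, t_last, t_new):
        kf.predict()
        kf.update(z)
    kf.predict()
    kf.update(z_new)

print(virtual_observations(np.array([0., 0.]), np.array([6., 3.]), 10, 13))
# [array([2., 1.]), array([4., 2.])]
```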
Cao_Multi-View_Azimuth_Stereo_via_Tangent_Space_Consistency_CVPR_2023
Abstract We present a method for 3D reconstruction using only calibrated multi-view surface azimuth maps. Our method, multi-view azimuth stereo, is effective for textureless or specular surfaces, which are difficult for conventional multi-view stereo methods. We introduce the concept of tangent space consistency: multi-view azimuth observations of a surface point should be lifted to the same tangent space. Leveraging this consistency, we recover the shape by optimizing a neural implicit surface representation. Our method harnesses the robust azimuth estimation capabilities of photometric stereo methods or polarization imaging while bypassing potentially complex zenith angle estimation. Experiments using azimuth maps from various sources validate the accurate shape recovery with our method, even without zenith angles.
1. Introduction Recovering 3D shapes of real-world scenes is a fundamental problem in computer vision, and multi-view stereo (MVS) has emerged as a mature geometric method for reconstructing dense scene points. Using 2D images taken from different viewpoints, MVS finds dense correspondences between images based on the photo-consistency assumption, that a scene point's brightness should appear similar across different viewpoints [13, 37–39]. However, MVS struggles with textureless or specular surfaces, as the lack of texture leads to ambiguities in establishing correspondences, and the presence of specular reflections violates the photo-consistency assumption [11]. Photometric stereo (PS) offers an alternative approach for dealing with textureless and specular surfaces [32]. By estimating single-view surface normals using varying lighting conditions [40], PS enables high-fidelity 2.5D surface reconstruction [26]. However, extending PS to a multi-view setup, known as multi-view photometric stereo (MVPS) [15], significantly increases image acquisition costs, as it requires multi-view and multi-light images under highly controlled lighting conditions [21]. To mitigate image acquisition costs, simpler lighting setups such as circularly or symmetrically placed lights have been explored [2, 3, 23, 44]. With these lighting setups, estimating the surface normal's azimuth (the angle in the image plane) becomes considerably easier than estimating the zenith (the angle from the camera optical axis) [2, 3, 23]. The ease of azimuth estimation also appears in polarization imaging [29]. While azimuth can be determined up to a π-ambiguity using only polarization data, zenith estimation requires more complex steps [24, 34, 36]. In this paper, we introduce Multi-View Azimuth Stereo (MVAS), a method that effectively uses calibrated multi-view azimuth maps for shape recovery (Fig. 1). MVAS is particularly advantageous when working with accurate azimuth acquisition techniques. With circular-light photometric stereo [3], MVAS has the potential to be applied to surfaces with arbitrary isotropic materials. With polarization imaging [7], MVAS allows a passive image acquisition as simple as MVS while being more effective for textureless or specular surfaces. The key insight enabling MVAS is the concept of Tangent Space Consistency (TSC) for multi-view azimuth angles. We find that the azimuth can be transformed into a tangent using camera orientation. Therefore, multi-view azimuth observations of the same surface point should be lifted to the same tangent space (Fig. 2). TSC helps determine if a 3D point lies on the surface, similar to photo-consistency for finding image correspondences. Moreover, TSC can directly determine the surface normal as the vector orthogonal to the tangent space, enabling high-fidelity reconstruction comparable to MVPS methods. Notably, TSC is invariant to the π-ambiguity of the azimuth angle, making MVAS well-suited for polarization imaging. With TSC, we reconstruct the surface implicitly represented as a neural signed distance function (SDF), by constraining the surface normals (i.e., the gradients of the SDF).
Experimental results show that MVAS achieves comparable reconstruction performance to MVPS methods [18, 28, 41], even in the absence of zenith information. Further, MVAS outperforms MVS methods [31] on textureless or specular surfaces using azimuth maps from symmetric-light photometric stereo [23] or a snapshot polarization camera [7]. In summary, this paper's key contributions are:
• Multi-View Azimuth Stereo (MVAS), which enables accurate shape reconstruction even for textureless and specular surfaces;
• Tangent Space Consistency (TSC), which establishes the correspondence between multi-view azimuth observations, thereby facilitating the effective use of azimuth data in 3D reconstruction; and
• A comprehensive analysis of TSC, including its necessary conditions, degenerate scenarios, and the application to optimizing neural implicit representations.
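Tangent space consistency can be made concrete with the short sketch below: each azimuth is lifted to a world-space tangent using the camera rotation, and the shared normal is recovered as the common null direction of the stacked tangents. The camera convention (world-to-camera rotation, z as the optical axis) is an assumption for the illustration and may differ from the paper's.

```python
import numpy as np

def azimuth_to_world_tangent(phi, R):
    """Lift an azimuth observation to a world-space tangent direction.

    phi: azimuth angle of the projected normal in the image plane (radians)
    R:   3x3 world-to-camera rotation (camera orientation)
    (-sin phi, cos phi, 0) is orthogonal to the projected normal, hence
    tangent to the surface in camera coordinates; R^T moves it to world
    space. The pi-ambiguity only flips its sign, leaving the line unchanged.
    """
    t_cam = np.array([-np.sin(phi), np.cos(phi), 0.0])
    return R.T @ t_cam

def normal_from_tangents(tangents):
    """TSC: multi-view tangents of one surface point span a common plane;
    the surface normal is the unit null direction of the stacked tangents."""
    _, _, vh = np.linalg.svd(np.stack(tangents))
    n = vh[-1]                          # least-energy right-singular vector
    return n / np.linalg.norm(n)

# toy check: azimuths generated from a known normal are lifted back to it
def azimuth(R, n):
    nc = R @ n
    return np.arctan2(nc[1], nc[0])

n_true = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
R1 = np.eye(3)
R2 = np.array([[1., 0., 0.], [0., 0., -1.], [0., 1., 0.]])  # rotation about x
tangs = [azimuth_to_world_tangent(azimuth(R1, n_true), R1),
         azimuth_to_world_tangent(azimuth(R2, n_true), R2)]
print(normal_from_tangents(tangs), n_true)   # same direction up to sign
```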
Jain_VectorFusion_Text-to-SVG_by_Abstracting_Pixel-Based_Diffusion_Models_CVPR_2023
Figure 1. Text-to-SVG with VectorFusion. When (a) raster graphics sampled from Stable Diffusion are (b) auto-traced, they lose details that are hard to represent within the constraints of the abstraction. (c-d) VectorFusion improves fidelity and consistency with the caption by directly optimizing paths with a distillation-based diffusion loss. Find videos and more results at https://ajayj.com/vectorfusion. Abstract Diffusion models have shown impressive results in text-to-image synthesis. Using massive datasets of captioned images, diffusion models learn to generate raster images of highly diverse objects and scenes. However, designers frequently use vector representations of images like Scalable Vector Graphics (SVGs) for digital icons or art. Vector graphics can be scaled to any size, and are compact. We show that a text-conditioned diffusion model trained on pixel representations of images can be used to generate SVG-exportable vector graphics. We do so without access to large datasets of captioned SVGs. By optimizing a differentiable vector graphics rasterizer, our method, VectorFusion, distills abstract semantic knowledge out of a pretrained diffusion model. Inspired by recent text-to-3D work, we learn an SVG consistent with a caption using Score Distillation Sampling. To accelerate generation and improve fidelity, VectorFusion also initializes from an image sample. Experiments show greater quality than prior work, and demonstrate a range of styles including pixel art and sketches. 1. Introduction Graphic designers and artists often express concepts in an abstract manner, such as composing a few shapes and lines into a pattern that evokes the essence of a scene. Scalable Vector Graphics (SVGs) provide a declarative format for expressing visual concepts as a collection of primitives. Primitives include Bézier curves, polygons, circles, lines and background colors. SVGs are the de facto format for exporting graphic designs since they can be rendered at arbitrarily high resolution on user devices, yet are stored and transmitted with a compact size, often only tens of kilobytes. Still, designing vector graphics is difficult, requiring knowledge of professional design tools. Recently, large captioned datasets and breakthroughs in diffusion models have led to systems capable of generating diverse images from text, including DALL-E 2 [28], Imagen [33] and Latent Diffusion [31]. However, the vast majority of images available in web-scale datasets are rasterized, expressed at a finite resolution with no decomposition into primitive parts or layers. For this reason, existing diffusion models can only generate raster images. In theory,
Figure 2. Given a caption, VectorFusion generates abstract vector graphics in an SVG format. We use a pre-trained diffusion model trained only on rasterized images to guide a differentiable vector renderer. VectorFusion supports diverse objects and styles. To select a style such as flat polygonal vector icons, abstract line drawings or pixel art, we constrain the vector representation to a subset of possible primitive shapes and use different prompt modifiers to encourage an appropriate style: * ...minimal flat 2d vector icon. lineal color. on a white background. trending on artstation; ** ...pixel art. trending on artstation; † ...minimal 2d line drawing. trending on artstation. Please see videos of the optimization process on our project webpage.
In theory, diffusion models could be trained to directly model SVGs, but would need specialized architectures for variable-length hierarchical sequences, and significant data collection work. How can we use diffusion models pretrained on pixels to generate high-quality vector graphics? In this work, we provide a method for generating high quality abstract vector graphics from text captions, shown in Fig. 1. We start by evaluating a two phase text-to-image and image-to-vector baseline: generating a raster image with a pretrained diffusion model, then vectorizing it. Traditionally, designers manually convert simple rasterized images into a vector format by tracing shapes. Some ML-based tools [19] can automatically approximate a raster image with an SVG. Unfortunately, we find that text-to-image diffusion models frequently produce complex images that are hard to represent with simple vectors, or are incoherent with the caption (Fig. 1, Stable Diffusion + LIVE).
To improve quality of the SVG and coherence with the caption, we incorporate the pretrained text-to-image diffusion model in an optimization loop. Our approach, VectorFusion, combines a differentiable vector graphics renderer [16] and a recently proposed score distillation sampling (SDS) loss [26] to iteratively refine shape parameters. Intuitively, score distillation converts diffusion sampling into an optimization problem that allows the image to be represented by an arbitrary differentiable function. In our case, the differentiable function is the forward rasterization process, and the diffusion model provides a signal for improving the raster. To adapt SDS to text-to-SVG synthesis, we make the following contributions:
• We extend score distillation sampling to open source latent space diffusion models like Stable Diffusion,
• improve efficiency and quality by initializing near a raster image sample,
• propose SVG-specific regularization including path reinitialization,
• and evaluate different sets of shape primitives and their impact on style.
In experiments, VectorFusion generates iconography, pixel art and line drawings from diverse captions. VectorFusion also achieves greater quality than CLIP-based approaches that transfer a discriminative vision-language representation.
2. Related Work A few works have used pretrained vision-language models to guide vector graphic generation. VectorAscent [11] and CLIPDraw [4] optimize CLIP's image-text similarity metric [27] to generate vector graphics from text prompts, with a procedure similar to DeepDream [23] and CLIP feature visualization [5]. StyleCLIPDraw [35] extends CLIPDraw to condition on images with an auxiliary style loss with a pretrained VGG16 [36] model. Arnheim [3] parameterizes SVG paths with a neural network, and CLIP-CLOP [22] uses an evolutionary approach to create image collages. Though we also use pretrained vision-language models, we use a generative model rather than a discriminative model.
Recent work has shown the success of text-to-image generation. DALL-E 2 [28] learns an image diffusion model conditioned on CLIP's text embeddings. Our work uses Stable Diffusion [31] (SD), a text-to-image latent diffusion model. While these models produce high-fidelity images, they cannot be directly transformed into vector graphics. A number of works generate vector graphics from input images. We extend the work of Layer-wise Image Vectorization (LIVE) [19], which iteratively optimizes closed Bézier paths with a differentiable rasterizer, DiffVG [16].
We also take inspiration from inverse graphics with diffusion models. Diffusion models have been used in zero-shot for image-to-image tasks like inpainting [18]. DDPM-PnP [6] uses diffusion models as priors for conditional image generation, segmentation, and more. DreamFusion [26] uses 2D diffusion as an image prior for text-to-3D synthesis with a more efficient loss than DDPM-PnP, discussed in Section 3.3. Following [26], we use diffusion models as transferable priors for vector graphics. Concurrent work [20] also adapts the SDS loss for latent-space diffusion models.
3. Background
3.1. Vector representation and rendering pipeline Vector graphics are composed of primitives. For our work, we use paths of segments delineated by control points. We configure the control point positions, shape fill color, stroke width and stroke color. Most of our experiments use closed Bézier curves. Different artistic styles are accomplished with other primitives, such as square shapes for pixel-art synthesis and unclosed Bézier curves for line art. To render to pixel-based formats, we rasterize the primitives. While many primitives would be needed to express a realistic photograph, even a few can be combined into recognizable, visually pleasing objects. We use DiffVG [16], a differentiable rasterizer that can compute the gradient of the rendered image with respect to the parameters of the SVG paths. Many works, such as LIVE [19], use DiffVG to vectorize images, though such transformations are lossy.
3.2. Diffusion models Diffusion models are a flexible class of likelihood-based generative models that learn a distribution by denoising. A diffusion model generates data by learning to gradually map samples from a known prior like a Gaussian toward the data distribution. During training, a diffusion model optimizes a variational bound on the likelihood of real data samples [37], similar to a variational autoencoder [15].
This bound reduces to a weighted mixture of denoising objectives [9]:
L_DDPM(ϕ, x) = E_{t,ϵ}[ w(t) ‖ϵ_ϕ(α_t x + σ_t ϵ) − ϵ‖_2^2 ]   (1)
where x is a real data sample and t ∈ {1, 2, . . . , T} is a uniformly sampled timestep scalar that indexes noise schedules α_t, σ_t [14]. ϵ is noise of the same dimension as the image sampled from the known Gaussian prior. Noise is added by interpolation to preserve variance. ϵ_ϕ is a learned denoising autoencoder that predicts the noise content of its input. For images, ϵ_ϕ is commonly a U-Net [9, 32], and the weighting function w(t) = 1 [9].
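To make Eq. (1) concrete, a minimal PyTorch-style sketch of the denoising objective is given below. The `eps_model` interface and the `alphas`/`sigmas` schedule tensors are assumptions standing in for ϵ_ϕ and its noise schedule; this is an illustration, not the paper's training code.

```python
import torch

def ddpm_loss(eps_model, x, alphas, sigmas, T=1000):
    """Weighted denoising objective of Eq. (1) with w(t) = 1.

    eps_model: network predicting the noise added to its input (epsilon_phi).
    x:         batch of clean images, shape (B, C, H, W).
    alphas, sigmas: per-timestep noise schedule tensors of length T.
    """
    b = x.shape[0]
    t = torch.randint(0, T, (b,), device=x.device)   # uniformly sampled timestep per example
    eps = torch.randn_like(x)                         # Gaussian noise of the same shape as x
    a = alphas[t].view(b, 1, 1, 1)
    s = sigmas[t].view(b, 1, 1, 1)
    x_t = a * x + s * eps                             # variance-preserving interpolation
    eps_pred = eps_model(x_t, t)                      # predict the noise content of x_t
    return ((eps_pred - eps) ** 2).mean()             # L2 denoising loss
```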
Denoising diffusion models can be trained to predict any linear combination of x and ϵ, such as the clean, denoised image x, though an ϵ parameterization is simple and stable. At test time, a sampler starts with a draw from the prior x_T ∼ N(0, 1), then iteratively applies the denoiser to update the sample while decaying the noise level t to 0. For example, DDIM [38] samples with the update:
x̂ = (x_t − σ_t ϵ_ϕ(x_t)) / α_t                (predict clean image)
x_{t−1} = α_{t−1} x̂ + σ_{t−1} ϵ_ϕ(x_t)        (add back noise)   (2)
For text-to-image generation, the U-Net is conditioned on the caption y, ϵ_ϕ(x, y), usually via cross-attention layers and pooling of the features of a language model [24]. However, conditional diffusion models can produce results incoherent with the caption since datasets are weakly labeled and likelihood-based models try to explain all possible images. To increase the usage of a label or caption, classifier-free guidance [10] superconditions the model by scaling up conditional model outputs and guiding away from a generic unconditional prior that drops y:
ϵ̂_ϕ(x, y) = (1 + ω) · ϵ_ϕ(x, y) − ω · ϵ_ϕ(x)   (3)
CFG significantly improves coherence with a caption at the cost of an additional unconditional forward pass per step. High resolution image synthesis is expensive. Latent diffusion models [31] train on a reduced spatial resolution by compressing 512×512 images into a relatively compact 64×64, 4-channel latent space with a VQGAN-like autoencoder (E, D) [2]. The diffusion model ϵ_ϕ is trained to model the latent space, and the decoder D maps back to a high resolution raster image. We use Stable Diffusion, a popular open-source text-to-image model based on latent diffusion.
3.3. Score distillation sampling Diffusion models can be trained on arbitrary signals, but it is easier to train them in a space where data is abundant. Standard diffusion samplers like (2) operate in the same space that the diffusion model was trained. While samplers can be modified to solve many image-to-image tasks in zero-shot such as colorization and inpainting [37, 39], until recently, pretrained image diffusion models could only generate rasterized images. In contrast, image encoders like VGG16 trained on ImageNet and CLIP (Contrastive Language–Image Pre-training) [27] have been transferred to many modalities like mesh texture generation [23], 3D neural fields [12, 13], and vector graphics [4, 11]. Even though encoders are not generative, they can generate data with test time optimization: a loss function in the encoder's feature space is backpropagated to a learned image or function outputting images.
DreamFusion [26] proposed an approach to use a pretrained pixel-space text-to-image diffusion model as a loss function. Their proposed Score Distillation Sampling (SDS) loss provides a way to assess the similarity between an image and a caption:
L_SDS = E_{t,ϵ}[ (σ_t / α_t) w(t) KL( q(x_t | g(θ); y, t) ‖ p_ϕ(x_t; y, t) ) ].
p_ϕ is the distribution learned by the frozen, pretrained diffusion model. q is a unimodal Gaussian distribution centered at a learned mean image g(θ). In this manner, SDS turned sampling into an optimization problem: an image or a differentiable image parameterization (DIP) [23] can be optimized with L_SDS to bring it toward the conditional distribution of the teacher. This is inspired by probability density distillation [41]. Critically, SDS only needs access to a pixel-space prior p_ϕ, parameterized with the denoising autoencoder ϵ̂_ϕ. It does not require access to a prior over the parameter space θ.
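As a worked sketch of Eqs. (2)-(3), the snippet below combines one DDIM update with classifier-free guidance. The three-argument `eps_model(x, t, y)` call is an assumed interface (caption dropped by passing None), not any specific library's API.

```python
import torch

@torch.no_grad()
def ddim_step_cfg(eps_model, x_t, t, t_prev, y, alphas, sigmas, omega=7.5):
    """One DDIM update (Eq. 2) using the superconditioned noise estimate of Eq. (3)."""
    eps_cond = eps_model(x_t, t, y)       # conditioned on the caption y
    eps_uncond = eps_model(x_t, t, None)  # generic unconditional prior that drops y
    eps = (1 + omega) * eps_cond - omega * eps_uncond   # classifier-free guidance

    x0_hat = (x_t - sigmas[t] * eps) / alphas[t]        # predict the clean image
    return alphas[t_prev] * x0_hat + sigmas[t_prev] * eps  # add back noise at level t_prev
```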
DreamFusion [26] used SDS with the Imagen pixel space diffusion model to learn the parameters of a 3D Neural Radiance Field [21]. In practice, SDS gives access to loss gradients, not a scalar loss:
∇_θ L_SDS = E_{t,ϵ}[ w(t) (ϵ̂_ϕ(x_t; y, t) − ϵ) ∂x/∂θ ]   (4)
4. Method: VectorFusion In this section, we outline two methods for generating abstract vector representations from pretrained text-to-image diffusion models, including our full VectorFusion approach.
4.1. A baseline: text-to-image-to-vector We start by developing a two stage pipeline: sampling an image from Stable Diffusion, then vectorizing it automatically. Given text, we sample a raster image from Stable Diffusion with a Runge-Kutta solver [17] in 50 sampling steps with guidance scale ω = 7.5 (the default settings in the Diffusers library [43]). Naively, the diffusion model generates photographic styles and details that are very difficult to express with a few constant color SVG paths. To encourage image generations with an abstract, flat vector style, we append a suffix to the text: "minimal flat 2d vector icon. lineal color. on a white background. trending on artstation". This prompt was tuned qualitatively. Because samples can be inconsistent with captions, we sample K images and select the Stable Diffusion sample that is most consistent with the caption according to CLIP ViT-B/16 [27]. CLIP reranking was originally proposed by [29]. We choose K = 4.
Figure 3. VectorFusion generates SVGs in three stages. (a) First, we sample a rasterized image from a text-to-image diffusion model like Stable Diffusion with prompt engineering for iconographic aesthetics. (b) Since this image is finite resolution, we approximate it by optimizing randomly initialized vector paths with an L2 loss. The loss is backpropagated through DiffVG, a differentiable vector graphics renderer, to tune path coordinates and color parameters. Paths are added in stages at areas of high loss following [19]. (c) However, the diffusion sample often fails to express all the attributes of the caption, or loses detail when vectorized. VectorFusion finetunes the SVG with a latent score distillation sampling loss to improve quality and coherence.
Figure 4. Conceptual diagram motivating our approach. While vectorizing a rasterized diffusion sample is lossy, VectorFusion can either finetune the best approximation or optimize a random SVG from scratch to sample an SVG that is consistent with the caption.
Next, we automatically trace the raster sample to convert it to an SVG using the off-the-shelf Layer-wise Image Vectorization program (LIVE) [19]. LIVE produces relatively clean SVGs by initializing paths in stages, localized to poorly reconstructed, high loss regions. To encourage paths to explain only a single feature of the image, LIVE weights an L2 reconstruction loss by distance to the nearest path,
L_UDF = (1 / (3·w·h)) Σ_{i=1}^{w×h} d′_i Σ_{c=1}^{3} (I_{i,c} − Î_{i,c})²   (5)
where I is the target image, Î is the rendering, c indexes RGB channels in I, d′_i is the unsigned distance between pixel i and the nearest path boundary, and w, h are the width and height of the image.
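A minimal sketch of the distance-weighted reconstruction loss in Eq. (5) follows; the per-pixel unsigned distance map is assumed to be precomputed from the current paths (as the vectorization code would do), and everything else follows the equation directly.

```python
import torch

def live_udf_loss(render, target, dist_to_path):
    """Distance-weighted L2 loss of Eq. (5).

    render, target: (3, H, W) rendered and target images.
    dist_to_path:   (H, W) unsigned distance from each pixel to the nearest path boundary.
    """
    sq_err = (target - render) ** 2                 # per-channel squared error, (3, H, W)
    weighted = dist_to_path.unsqueeze(0) * sq_err   # weight every channel by the pixel's distance
    return weighted.mean()                          # averages over the 3 * H * W terms of Eq. (5)
```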
LIVE also optimizes a self-intersection regularizer L_Xing:
L_Xing = D_1 · ReLU(−D_2) + (1 − D_1) · ReLU(D_2),   (6)
where D_1 is the characteristic of the angle between two segments of a cubic Bézier path, and D_2 is the value of sin(α) of that angle. For further clarifications of notation, please refer to LIVE [19]. This results in a set of paths θ_LIVE = {p_1, p_2, . . . , p_k}. Figure 3(b) shows the process of optimizing vector parameters in stages that add 8-16 paths at a time. Figure 1 shows more automatic conversions. While simple, this pipeline often creates images unsuitable for vectorization.
4.2. Sampling vector graphics by optimization The pipeline in 4.1 is flawed since samples may not be easily representable by a set of paths. Figure 4 illustrates the problem. Conditioned on text, a diffusion model produces samples from the distribution p_ϕ(x|y). Vectorization with LIVE finds an SVG with a close L2 approximation to that image without using the caption y. This can lose information, and the resulting SVG graphic may no longer be coherent with the caption.
For VectorFusion, we adapt Score Distillation Sampling to support latent diffusion models (LDM) like the open source Stable Diffusion. We initialize an SVG with a set of paths θ = {p_1, p_2, . . . , p_k}. Every iteration, DiffVG renders a 600×600 image x. Like CLIPDraw [4], we augment with perspective transform and random crop to get a 512×512 image x_aug. Then, we propose to compute the SDS loss in latent space using the LDM encoder E_ϕ, predicting z = E_ϕ(x_aug). For each iteration of optimization, we diffuse the latents with random noise z_t = α_t z + σ_t ϵ, denoise with the teacher model ϵ̂_ϕ(z_t, y), and optimize the SDS loss using a latent-space modification of Equation 4:
∇_θ L_LSDS = E_{t,ϵ}[ w(t) (ϵ̂_ϕ(z_t, y) − ϵ) ∂z/∂x_aug · ∂x_aug/∂θ ]   (7)
Since Stable Diffusion is a discrete time model with T = 1000 timesteps, we sample t ∼ U(50, 950). For efficiency, we run the diffusion model ϵ̂_ϕ in half-precision. We found it important to compute the Jacobian of the encoder ∂z/∂x_aug in full FP32 precision for numerical stability. The term ∂x_aug/∂θ is computed with autodifferentiation through the augmentations and the differentiable vector graphics rasterizer, DiffVG. L_LSDS can be seen as an adaptation of L_SDS where the rasterizer, data augmentation and frozen LDM encoder are treated as a single image generator with optimizable parameters θ for the paths. During optimization, we also regularize self-intersections with (6).
Figure 5. An overview of VectorFusion's latent score distillation optimization procedure. We adapt Score Distillation Sampling [26] to support a vector graphics renderer and a latent-space diffusion prior for raster images. First, we rasterize the SVG given path parameters. We apply data augmentations, encode into a latent space, compute the Score Distillation loss on the latents, and backpropagate through the encoding, augmentation and rendering procedure to update paths.
4.3. Reinitializing paths In our most flexible setting, synthesizing flat iconographic vectors, we allow path control points, fill colors and SVG background color to be optimized. During the course of optimization, many paths learn low opacity or shrink to a small area and are unused. To encourage usage of paths and therefore more diverse and detailed images, we periodically reinitialize paths with fill-color opacity or area below a threshold. Reinitialized paths are removed
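A rough sketch of one optimization step under Eq. (7) is given below. The `render_paths`, `augment`, `encoder`, and `unet` callables are placeholders for the differentiable rasterizer, data augmentation, the LDM encoder E, and the frozen denoiser ϵ_ϕ; the guidance scale and the gradient-injection trick (passing the residual as the backward seed) are standard implementation choices assumed here, not quoted from the paper.

```python
import torch

def latent_sds_step(theta_params, optimizer, render_paths, augment, encoder, unet,
                    caption_emb, alphas, sigmas, guidance=7.5):
    """One update of SVG path parameters with a latent SDS gradient in the spirit of Eq. (7).

    theta_params: list of leaf tensors (control points, colors) with requires_grad=True,
                  registered in `optimizer`.
    """
    optimizer.zero_grad()

    x = render_paths(theta_params)            # e.g. 600x600 differentiable rendering
    x_aug = augment(x)                        # perspective transform + random crop to 512x512
    z = encoder(x_aug)                        # latents; kept in the autograd graph

    t = torch.randint(50, 950, (1,), device=z.device)
    eps = torch.randn_like(z)
    z_t = alphas[t] * z + sigmas[t] * eps     # diffuse the latents

    with torch.no_grad():                     # the teacher is frozen; no graph through it
        e_cond = unet(z_t, t, caption_emb)
        e_uncond = unet(z_t, t, None)
        eps_hat = (1 + guidance) * e_cond - guidance * e_uncond

    grad = eps_hat - eps                      # w(t) = 1 for simplicity
    z.backward(gradient=grad)                 # chain rule through encoder, augment, rasterizer
    optimizer.step()
```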
Gartner_Transformer-Based_Learned_Optimization_CVPR_2023
Abstract We propose a new approach to learned optimization where we represent the computation of an optimizer's update step using a neural network. The parameters of the optimizer are then learned by training on a set of optimization tasks with the objective to perform minimization efficiently. Our innovation is a new neural network architecture, Optimus, for the learned optimizer inspired by the classic BFGS algorithm. As in BFGS, we estimate a preconditioning matrix as a sum of rank-one updates but use a Transformer-based neural network to predict these updates jointly with the step length and direction. In contrast to several recent learned optimization-based approaches [24, 27], our formulation allows for conditioning across the dimensions of the parameter space of the target problem while remaining applicable to optimization tasks of variable dimensionality without retraining. We demonstrate the advantages of our approach on a benchmark composed of objective functions traditionally used for the evaluation of optimization algorithms, as well as on the real-world task of physics-based visual reconstruction of articulated 3d human motion.
1. Introduction This work focuses on a new learning-based optimization methodology. Our approach belongs to the category of learned optimization methods, which represent the update step of an optimizer by means of an expressive function such as a multi-layer perceptron. We then learn the parameters of this function on a set of training optimization tasks. Since the update function of the learned optimizers is estimated from data, it can in principle learn various desirable behaviors such as learning-rate schedules [22] or strategies for the exploration of multiple local minima [23]. This is in contrast to traditional optimizers such as Adam [15] or BFGS [11], in which updates are derived in terms of first principles. However, as these are general and hard-coded, they may not be able to take advantage of the regularities in the loss functions for specific classes of problems.
Learned optimizers are particularly appealing for applications that require repeatedly solving related optimization tasks. For example, 3d human pose estimation is often formulated as the minimization of a particular loss function [12, 19, 30, 46]. Such approaches estimate the 3d state (e.g. pose and shape) given image observations by repeatedly optimizing the same objective function for many closely related problems, including losses and state contexts. Traditional optimization treats each problem as independent, which is potentially suboptimal as it does not aggregate experience across multiple related optimization runs.
The main contribution of this paper is a novel neural network architecture for learned optimization. Our architecture is inspired by classical BFGS approaches that iteratively estimate the Hessian matrix to precondition the gradient. Similarly to BFGS, our approach iteratively updates the preconditioner using rank-one updates. In contrast to BFGS, we use a transformer-based [40] neural network to generate such updates from features encoding an optimization trajectory. We train the architecture using Persistent Evolution Strategies (PES) introduced in [41]. In contrast to prior work [4, 24, 27], which relies on updates over each target parameter independently (or coupled only via normalization), our approach allows for more complex inter-dimensional dependencies via self-attention while still showing good generalization to different target problem sizes than those used in training. We refer to our learned optimization approach as Optimus in the sequel.
We evaluate Optimus on classical optimization objectives used to benchmark optimization methods in the literature [17, 31, 37] (cf. fig. 1) as well as on a real-world task of physics-based human pose reconstruction.
Figure 1. Top row: Evaluation results showing average objective value reached by the optimizer for the corresponding objective function (y-axis) vs. dimensionality of the objective function (x-axis). Bottom row: examples of objective functions used for evaluation of our approach. From left to right: Rastrigin [28], Levy [17], Ackley [1] and Rosenbrock [31] functions. For each function, we visualize the surface of the 2d version.
In our experiments, we typically observe that Optimus is able to reach a lower objective value compared to popular off-the-shelf optimizers while taking fewer iterations to converge. For example, we observe at least a 10x reduction in the number of update steps for half of the classical optimization problems (see fig. 4). To evaluate Optimus in the context of physics-based human motion reconstruction, we apply it in conjunction with DiffPhy, which is a differentiable physics-based human model introduced in [12]. We experimentally demonstrate that Optimus generalizes well across diverse human motions (e.g. from training on walking to testing on dancing), is notably (5x) faster to meta-train compared to prior work [24], leads to reconstructions of better quality compared to BFGS, and is faster in minimizing the loss.
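As a rough illustration of the kind of update rule described above (not the paper's exact architecture or meta-training procedure), the sketch below lets a small network propose a rank-one update to a running preconditioner together with a step length; the trajectory features and the MLP standing in for the transformer are assumptions.

```python
import torch

class LearnedBFGSLikeOptimizer(torch.nn.Module):
    """Sketch of a learned optimizer that, like BFGS, builds a preconditioner from
    rank-one updates, but lets a network predict the update and the step length."""

    def __init__(self, dim, hidden=64):
        super().__init__()
        # Stand-in for the transformer in the paper: any network over trajectory features.
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2 * dim, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, dim + 1))
        self.register_buffer("P", torch.eye(dim))     # running preconditioner estimate

    def step(self, params, grad, prev_grad):
        feats = torch.cat([grad, prev_grad])          # crude trajectory features (assumption)
        out = self.net(feats)
        u, log_step = out[:-1], out[-1]               # rank-one vector and log step length
        self.P = self.P + torch.outer(u, u)           # BFGS-style rank-one update of P
        direction = self.P @ grad                     # preconditioned gradient direction
        return params - torch.exp(log_step) * direction
```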
Czolbe_Neuralizer_General_Neuroimage_Analysis_Without_Re-Training_CVPR_2023
Abstract Neuroimage processing tasks like segmentation, reconstruction, and registration are central to the study of neuroscience. Robust deep learning strategies and architectures used to solve these tasks are often similar. Yet, when presented with a new task or a dataset with different visual characteristics, practitioners most often need to train a new model, or fine-tune an existing one. This is a time-consuming process that poses a substantial barrier for the thousands of neuroscientists and clinical researchers who often lack the resources or machine-learning expertise to train deep learning models. In practice, this leads to a lack of adoption of deep learning, and neuroscience tools being dominated by classical frameworks.
We introduce Neuralizer, a single model that generalizes to previously unseen neuroimaging tasks and modalities without the need for re-training or fine-tuning. Tasks do not have to be known a priori, and generalization happens in a single forward pass during inference. The model can solve processing tasks across multiple image modalities, acquisition methods, and datasets, and generalize to tasks and modalities it has not been trained on. Our experiments on coronal slices show that when few annotated subjects are available, our multi-task network outperforms task-specific baselines without training on the task.
1. Introduction Computational methods for the processing and analysis of neuroimages have enabled a deep understanding of the human brain. The field has also led to advanced patient care by facilitating non-invasive methods of diagnosis and treatment. Recent deep learning research promises to substantially increase the accuracy and speed of neuroimaging analysis methods.
A drawback of most current deep-learning-based approaches is that each model is limited to solving the task it has been trained on, on the type of data it has been trained on. Generalization to new tasks and domains, such as different acquisition protocols or new segmentation, is a main barrier to adoption [62].
Figure 1. Neuralizer can solve a broad range of image processing tasks, including new ones not seen during training, with a single model by conditioning the prediction on a context set of examples. After training on a diverse set of tasks, the model can generalize to new tasks in a single forward pass without re-training or fine-tuning. The model is highly flexible, requiring no prior definition of the set of tasks, and can be conditioned with context sets of any length.
Performing neuroimaging tasks like segmentation, registration, reconstruction, or motion correction requires different models for each processing step, despite operating on the same input data and methods exhibiting strong similarities in network architecture [13, 45, 85]. Yet, designing and training models to solve these tasks on each dataset is prohibitively expensive. To train a deep learning model, a dataset needs to be compiled and often manually annotated, and the network, training, and data loading logic needs to be implemented. All these steps generally require machine learning and neuroimaging expertise. In addition, computational resources like specialized graphics processing hardware and software infrastructure need to be available. These requirements are particularly problematic in clinical research settings due to a high cost of annotation and a lack of machine learning expertise and hardware.
Figure 2. Example neuroimaging tasks and modalities included in our dataset (top: input images, bottom: output images): binary segmentation, skull stripping, motion correction, undersampled reconstruction, denoising & bias correction, inpainting, super resolution, and modality transfer.
Contribution We introduce Neuralizer, a general-purpose neuroimaging model that can solve a broad range of neuroimaging tasks on diverse image modalities (Fig. 2), without the need for task-specific training or fine-tuning. Neuralizer can solve new tasks, unseen during training, using a set of examples of the new task at inference (Fig. 1).
Neuralizer involves a convolutional architecture (Fig. 3) that takes as input a context set of examples that define the processing task, and thus does not require prior specification of the tasks. The method enables single-pass generalization during inference and can process any number of reference images in a single pass to inform the prediction.
As a first method tackling task generalization in neuroimaging, we focus on analyzing the capabilities of such a system and presenting general insights, and limit our experiments to 2D.
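The interface implied by this design can be sketched at a shape level as follows; the pairwise-interaction layers of the real model are replaced here by a simple averaging of context features, so this is only an illustration of context-set conditioning, not the published architecture.

```python
import torch
import torch.nn as nn

class ContextConditionedNet(nn.Module):
    """Toy model that maps a target image plus a context set of (input, output) example
    pairs to a prediction, in the spirit of single-pass task generalization."""

    def __init__(self, ch=16):
        super().__init__()
        self.embed_pair = nn.Conv2d(2, ch, 3, padding=1)    # encode one (input, output) pair
        self.embed_target = nn.Conv2d(1, ch, 3, padding=1)  # encode the image to be processed
        self.head = nn.Conv2d(2 * ch, 1, 3, padding=1)

    def forward(self, target, context_in, context_out):
        # target: (B, 1, H, W); context_in/out: (B, N, 1, H, W) with N examples of the task.
        b, n = context_in.shape[:2]
        pairs = torch.cat([context_in, context_out], dim=2).flatten(0, 1)   # (B*N, 2, H, W)
        ctx = self.embed_pair(pairs).view(b, n, -1, *pairs.shape[-2:]).mean(dim=1)
        tgt = self.embed_target(target)
        return self.head(torch.cat([tgt, ctx], dim=1))   # prediction informed by the context set
```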
We evaluate our model by comparing the single-pass generalization performance to task-specific baselines conditioned on an equivalent amount of data. We find that Neuralizer outperforms the baselines on tasks where ≤ 32 labeled examples are available, despite never training on the task. When generalizing to new segmentation protocols, Neuralizer matches the performance of baselines trained directly on the dataset.
Jia_DETRs_With_Hybrid_Matching_CVPR_2023
Abstract One-to-one set matching is a key design for DETR to establish its end-to-end capability, so that object detection does not require a hand-crafted NMS (non-maximum suppression) to remove duplicate detections. This end-to-end signature is important for the versatility of DETR, and it has been generalized to broader vision tasks. However, we note that there are few queries assigned as positive samples and the one-to-one set matching significantly reduces the training efficacy of positive samples. We propose a simple yet effective method based on a hybrid matching scheme that combines the original one-to-one matching branch with an auxiliary one-to-many matching branch during training. Our hybrid strategy has been shown to significantly improve accuracy. In inference, only the original one-to-one match branch is used, thus maintaining the end-to-end merit and the same inference efficiency of DETR. The method is named H-DETR, and it shows that a wide range of representative DETR methods can be consistently improved across a wide range of visual tasks, including Deformable-DETR, PETRv2, PETR, and TransTrack, among others. Code is available at: https://github.com/HDETR .
1. Introduction Since the success of the pioneering work DEtection TRansformer (DETR) [4] on object detection tasks, DETR-based approaches have achieved significant progress on various fundamental vision recognition tasks such as object detection [46, 53, 77, 83], instance segmentation [11, 12, 22, 73], panoptic segmentation [8, 28, 59, 70, 74], referring expression segmentation [64, 69], video instance segmentation [7, 60, 65], pose estimation [23, 54, 55], multi-object tracking [6, 45, 56], monocular depth estimation [16, 26], text detection & layout analysis [42, 50, 51, 80], line segment detection [66], 3D object detection based on point clouds or multi-view images [1, 27, 47, 62], visual question answering [19, 43], and so on.
Many follow-up efforts have improved DETR from various aspects, including redesigning more advanced transformer encoder [13, 83] or transformer decoder architectures [3, 14, 46, 76, 83] or query formulations [21, 33, 63, 77]. Different from most of these previous efforts, we focus on the training efficacy issues caused by one-to-one matching, which only assigns one query to each ground truth. For example, Deformable-DETR typically only selects less than 30 queries from a pool of 300 queries to match with the ground truth for each image, as nearly 99% of the COCO images consist of less than 30 bounding box annotations, while the remaining more than 270 queries will be assigned as ∅ and are supervised with only classification loss, thus suffering from very limited localization supervision.
To overcome the drawbacks of one-to-one matching and unleash the benefits of exploring more positive queries, we present a very simple yet effective hybrid matching scheme, which introduces an additional one-to-many matching branch that assigns multiple queries to each positive sample. In inference, we only use the original one-to-one decoder branch supervised with the one-to-one matching loss. We find that this simple approach can substantially improve the training efficacy, especially regarding the fitting of positive queries. Since only the original one-to-one matching branch is used in inference, the merits of the original DETR framework are almost all maintained, for example, avoiding NMS. Our approach also has no additional computation overhead compared to the original version.
We dub the hybrid matching approach as H-DETR, and extensively verify its effectiveness using a variety of vision tasks that adopt DETR methods or the variants, as well as different model sizes ranging from ResNet-50/Swin-T to Swin-L. The visual tasks and the corresponding DETR-based approaches include Deformable-DETR [83] for image object detection, PETRv2 [36] for 3D object detection from multi-view images, PETR [54] for multi-person pose estimation, and TransTrack [56] for multi-object tracking. The H-DETR achieves consistent gains over all of them, as shown in Figure 1.
Figure 1. Illustrating the improvements of our hybrid matching scheme across five challenging vision tasks including 2D object detection, 2D panoptic segmentation, 2D pose estimation, 3D object detection, and multi-object tracking (from left to right). Our hybrid matching scheme gains +1.7%, +1.1%, +1.5%, +1.6%, +1.7%, and +1.6% over various DETR-based approaches on 6× benchmarks respectively. All the improvements are obtained under the same training epochs and do not require any additional computation cost during evaluation. We choose VoVNetV2 [20]/ResNet50 for PETRv2/all other methods as the backbone following their original settings.
Specifically, our approach can improve the Deformable DETR framework (R50) on COCO object detection by +1.7% mAP (48.7% v.s. 47.0%), the PETR framework (R50) on COCO pose estimation by +1.6% mAP (70.9% v.s. 69.3%). In particular, we achieve 59.4% mAP on COCO object detection, which is the highest accuracy on COCO object detection among DETR-based methods that use the Swin-L model. We achieve 52.38% on nuScenes val, which is +1.7% higher than a very recent state-of-the-art approach of PETRv2.
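The hybrid matching scheme described above can be sketched as a training-time loss; the matcher and criterion callables below are placeholders for Hungarian one-to-one matching and a one-to-many assignment, and the repetition factor k is an illustrative choice rather than the exact recipe.

```python
def hybrid_matching_loss(out_one2one, out_one2many, targets,
                         hungarian_match, one2many_match, set_criterion, k=6):
    """Schematic training loss: the usual one-to-one (Hungarian) branch plus an
    auxiliary branch where every ground-truth object is matched to k queries.
    Only the one-to-one branch is kept at inference, so NMS is still avoided."""
    # One-to-one branch: one query per ground-truth object.
    idx = hungarian_match(out_one2one, targets)
    loss = set_criterion(out_one2one, targets, idx)

    # One-to-many branch: repeat each ground truth k times so that many more queries
    # receive box-regression and classification supervision instead of "no object".
    repeated = [{name: value.repeat_interleave(k, dim=0) for name, value in t.items()}
                for t in targets]
    idx_many = one2many_match(out_one2many, repeated)
    loss = loss + set_criterion(out_one2many, repeated, idx_many)
    return loss
```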
Guo_Dealing_With_Cross-Task_Class_Discrimination_in_Online_Continual_Learning_CVPR_2023
Abstract Existing continual learning (CL) research regards catastrophic forgetting (CF) as almost the only challenge. This paper argues for another challenge in class-incremental learning (CIL), which we call cross-task class discrimination (CTCD), i.e., how to establish decision boundaries between the classes of the new task and old tasks with no (or limited) access to the old task data. CTCD is implicitly and partially dealt with by replay-based methods. A replay method saves a small amount of data (replay data) from previous tasks. When a batch of current task data arrives, the system jointly trains the new data and some sampled replay data. The replay data enables the system to partially learn the decision boundaries between the new classes and the old classes as the amount of the saved data is small. However, this paper argues that the replay approach also has a dynamic training bias issue which reduces the effectiveness of the replay data in solving the CTCD problem. A novel optimization objective with a gradient-based adaptive method is proposed to dynamically deal with the problem in the online CL process. Experimental results show that the new method achieves much better results in online CL.
1. Introduction Continual learning (CL) learns a sequence of tasks incrementally. This work focuses on the class incremental learning (CIL) setting [32] in online CL. In CIL, each task consists of a set of unique classes, the sets of classes of any two different tasks are disjoint, and the system has no access to the task information in testing. In online CL, the data comes gradually from a data stream. Whenever a small batch of data arrives, it is trained in one iteration. Thus, the data for each task is effectively trained in one epoch.
Existing CL papers regard catastrophic forgetting (CF) as almost the only issue for CL. In fact, CIL also has another major challenge. When the system learns a new task, if no data from previous tasks is available, it has no way to establish decision boundaries between new classes and old classes in previous tasks. Even if there is no CF, the classification results will still be poor. We call this problem cross-task class discrimination (CTCD). Those approaches that do not save any previous data, e.g., regularization-based or orthogonal projection-based, do not deal with CTCD.
Replay-based methods implicitly deal with CTCD to some extent because such a method uses a memory buffer M to save a small amount of data (replay data) from old tasks. When a small batch of current task data X_new arrives, the system jointly trains X_new and some sampled replay data X_buf from M. X_buf enables the system to partially learn the decision boundaries between the new classes and the old classes because the amount of the saved data is very small. Due to the limited replay data, the training is biased, which reduces its ability to solve the CTCD problem. To make matters worse, the training bias also changes as more tasks are learned. This paper first shows that the problem is reflected as gradient imbalance (GI) on logits, i.e., higher positive gradients than negative gradients on the logits, and vice versa. It further shows that GI is caused by two main issues. The first is data imbalance. Since the memory buffer size, the batch size of the new data X_new, and the sampled data X_buf from the memory buffer are all fixed, if the system has learned many tasks, the average number of samples in each previous class in X_buf will be much smaller than that of each class in X_new. This results in higher positive gradients than negative gradients on the logits of the previous classes, leading to training bias and poor decision boundaries (or weak CTCD capability) between the classes of the new and old tasks. The second is CL imbalance, i.e., CL training focuses more on the new samples (which are harder to train as they are new) than the replayed samples (which have been seen and trained many times before). This causes further GI. This imbalance is involved (see Sec. 4.2 for details).
Some existing works [2, 42] have tried to deal with data imbalance in offline CL. For example, SSIL [2] separately calculates the cross-entropy loss of the new data and the replay data to mitigate data imbalance. But they are not from the gradient angle. The second issue of GI is more complex and has not been attempted before.
This paper proposes a novel method, called GSA (Gradient Self-Adaptation), to deal with GI (and CTCD) in online CL.
GSA includes a new training objective and a gradient-based self-adaptive loss to compensate for the GI. The loss is dynamically controlled by two gradient rates which automatically measure and adapt to the dynamic GI situation. The main contributions of this paper are:
(1) It deals with the CTCD problem in online CL and proposes a new optimization framework that decomposes the problem into cross-task classification and within-task classification (see Section 5). In [22], CTCD is called inter-task class separation, but it uses an out-of-distribution based approach to dealing with the problem in offline CL. The paper uses a replay-based approach for online CL.
(2) It analyzes the CTCD problem from the gradient imbalance (GI) angle and finds two kinds of gradient imbalance (data imbalance and CL imbalance) (see Section 4). Based on the analysis, it proposes a gradient-based self-adaptive loss to compensate for the GI.
(3) Experiments in both the disjoint and long-tail online CL settings show that GSA outperforms strong baselines by a large margin (see Section 6).
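To see where the gradient imbalance discussed above comes from, note that for softmax cross-entropy the gradient on logit k of one sample is simply p_k − y_k. The sketch below measures, for each class, the gradient mass its logit receives from its own samples versus from all other samples; this decomposition is our own illustration of GI, not the paper's exact gradient-rate definition.

```python
import torch
import torch.nn.functional as F

def logit_gradient_stats(logits, labels, num_classes):
    """Per-class split of accumulated logit gradients under softmax cross-entropy,
    where d(loss)/d(logit_k) = p_k - y_k for each sample."""
    p = F.softmax(logits, dim=1)                       # (B, C) predicted probabilities
    y = F.one_hot(labels, num_classes).float()         # (B, C) one-hot targets
    g = p - y                                          # per-sample logit gradients
    own = (g * y).sum(dim=0).abs()      # from each class's own samples (pulls its logit up)
    other = (g * (1 - y)).sum(dim=0)    # from all other samples (pushes its logit down)
    return own, other   # a large gap between the two signals gradient imbalance for that class
```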
He_A_Rotation-Translation-Decoupled_Solution_for_Robust_and_Efficient_Visual-Inertial_Initialization_CVPR_2023
Abstract We propose a novel visual-inertial odometry (VIO) initialization method, which decouples rotation and translation estimation, and achieves higher efficiency and better robustness. Existing loosely-coupled VIO-initialization methods suffer from poor stability of visual structure-from-motion (SfM), whereas those tightly-coupled methods often ignore the gyroscope bias in the closed-form solution, resulting in limited accuracy. Moreover, the aforementioned two classes of methods are computationally expensive, because 3D point clouds need to be reconstructed simultaneously. In contrast, our new method fully combines inertial and visual measurements for both rotational and translational initialization. First, a rotation-only solution is designed for gyroscope bias estimation, which tightly couples the gyroscope and camera observations. Second, the initial velocity and gravity vector are solved with linear translation constraints in a globally optimal fashion and without reconstructing 3D point clouds. Extensive experiments have demonstrated that our method is 8∼72 times faster (w.r.t. a 10-frame set) than the state-of-the-art methods, and also presents significantly higher robustness and accuracy. The source code is available at https://github.com/boxuLibrary/drt-vio-init .
1. Introduction Visual-inertial odometry (VIO) aims to estimate camera motion and recover 3D scene structure by fusing both image and IMU measurements. The low-cost and compactness of the camera module and IMU sensors make VIO widely used in virtual or augmented reality systems (VR/AR) and various autonomous navigation systems. Currently, most VIO systems track camera motion by minimizing nonlinear visual re-projection errors [14, 30], so the accuracy of the initial value will affect the convergence. In addition, the robustness and lower latency of the initialization are also very important for the downstream application, e.g. AR developers need accurate camera tracking within a few hundred milliseconds after launching VIO, regardless of the use case. For the sensor that has calibrated intrinsic and extrinsic parameters, the initial variables for VIO include the gravity vector, initial velocity, gyroscope and accelerometer biases.
Figure 1. Comparison of computational cost and scale factor errors on EuRoC dataset. Different colors indicate different types of methods. Our proposed initialization method for decoupling rotation and translation (DRT) is accurate and computationally efficient.
Many VIO systems are initialized by setting the initial velocity to zero, then calculating the gravity vector and gyroscope bias with IMU measurements [14, 20, 36]. However, this method only works when the system is strictly static. For sensors in motion, loosely-coupled and tightly-coupled initialization methods are widely studied. As shown in Fig. 2, the loosely-coupled methods [5, 28, 30] combine the camera poses estimated by visual SfM and the IMU measurements to estimate the initial state variables. However, visual SfM is prone to inaccuracy or failure when co-viewed frames are insufficient or the camera rotates rapidly. The motion information measured by IMU is not used to improve the robustness of visual SfM.
Figure 2. Comparison between our method and previous VIO initialization methods. Different colored arrows indicate different information flows for VI fusion. Our method takes full advantage of the complementary information between vision and IMU. In contrast, previous loosely-coupled methods do not incorporate IMU information into visual SfM, and previous tightly-coupled methods do not use visual observations to remove gyroscope bias, either of which affects the robustness and accuracy of VIO initialization.
The tightly-coupled methods [8, 9, 24, 25] firstly use gyroscope measurements and calibrated extrinsic parameters to estimate camera rotation, then use a closed-form solution constructed with vision and accelerometer observations to solve for the initial velocity and gravity vector. However, this type of method has poor accuracy on systems equipped with inexpensive and noisy IMUs (e.g. cell phones), because no visual observations are used to estimate the gyroscope bias. Moreover, the three-dimensional coordinates of point clouds are obtained with the closed-form solution, resulting in a large and time-consuming solution matrix. Both of the above two kinds of methods under-utilize the complementary advantages between visual and inertial sensors, resulting in limited accuracy and robustness.
According to [17, 18, 26, 38], image observations could be directly used to optimize frame-to-frame rotation, and camera poses could be efficiently solved with linear global translation constraints [3]. Inspired by this, we propose a novel rotation-translation-decoupled VIO initialization framework. Gyroscope measurements are directly integrated into the camera rotation estimation, which greatly improves the robustness of initialization, and the translation-related initial variables are solved efficiently without estimating the 3D structure. As shown in Fig. 1, our method achieves the lowest scale error and is significantly faster than previous methods. The scale factor error is one of the metrics for evaluating the initialization. Our main contributions are
- We propose a rotation-only solution to directly optimize gyroscope bias using image observations, which can obtain camera rotation more efficiently and more robustly compared to vision-only methods.
- We propose a globally optimal solution for estimating the initial velocity and gravity vector based on linear translation constraints. Its linearity and independence of scene structure significantly benefit computational efficiency.
- Our proposed initialization framework outperforms the state-of-the-art in both accuracy and robustness on public datasets while being 8∼72 times faster in calculation time for a 10-frame set. We published our code to facilitate communication.
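To make the rotation side of the decoupling above concrete, relative rotation between frames can be obtained purely from bias-corrected gyroscope readings; the bias itself is what the paper estimates from image observations. A minimal sketch of the rotation integration step:

```python
import numpy as np

def so3_exp(w):
    """Exponential map from an angle-axis vector to a rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def integrate_gyro(omegas, dts, bias):
    """Integrate bias-corrected angular velocities into a relative rotation.

    omegas: (N, 3) gyroscope readings, dts: (N,) sample intervals, bias: (3,) gyroscope bias.
    """
    R = np.eye(3)
    for w, dt in zip(omegas, dts):
        R = R @ so3_exp((w - bias) * dt)   # compose small rotations between IMU samples
    return R
```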
Cui_Multi-Modal_Gait_Recognition_via_Effective_Spatial-Temporal_Feature_Fusion_CVPR_2023
Abstract Gait recognition is a biometric technology that identifies people by their walking patterns. The silhouettes-based method and the skeletons-based method are the two most popular approaches. However, the silhouette data are easily affected by clothing occlusion, and the skeleton data lack body shape information. To obtain a more robust and comprehensive gait representation for recognition, we propose a transformer-based gait recognition framework called MMGaitFormer, which effectively fuses and aggregates the spatial-temporal information from the skeletons and silhouettes. Specifically, a Spatial Fusion Module (SFM) and a Temporal Fusion Module (TFM) are proposed for effective spatial-level and temporal-level feature fusion, respectively. The SFM performs fine-grained body parts spatial fusion and guides the alignment of each part of the silhouette and each joint of the skeleton through the attention mechanism. The TFM performs temporal modeling through Cycle Position Embedding (CPE) and fuses temporal information of two modalities. Experiments demonstrate that our MMGaitFormer achieves state-of-the-art performance on popular gait datasets. For the most challenging "CL" (i.e., walking in different clothes) condition in CASIA-B, our method achieves a rank-1 accuracy of 94.8%, which outperforms the state-of-the-art single-modal methods by a large margin.
1. Introduction Gait recognition is a biometric technology that identifies people by their walking patterns, which is one of the most promising video-based biometric technologies in the long-distance recognition system. However, it is still challenging to perform reliable gait recognition, as its performance is severely affected by many complex factors, including clothing, carrying conditions, cross-view, etc. To alleviate these issues, various methods have been proposed. The appearance-based and model-based methods are the two most popular approaches for video-based gait recognition.
Figure 1. Comparison of different gait representations of a subject from the CASIA-B gait dataset at different timesteps of normal walks (a) and walking in different clothes (b). Each row depicts the same frames as silhouette image, 2D skeleton pose, and the combination of skeletons and silhouettes, respectively, from top to bottom. Combining the complementary strengths of silhouette and skeleton, it is expected to be a more comprehensive representation for gait.
The appearance-based (i.e., silhouettes-based) methods [5, 9, 14, 19, 27] rely on binary human silhouette images segmented from the original video frame to eliminate the influence of external factors. They utilized convolutional neural networks (CNN) to extract spatio-temporal features and achieved state-of-the-art performance. The model-based methods [2, 16, 17, 23] consider the underlying physical structure of the body and express the gait in a more comprehensible model. The most recent model-based approaches are skeletons-based, in which they represent gait with the skeletons obtained from videos through pose estimation models. With clear and robust skeleton representation, recent skeletons-based methods could even show competitive results compared to appearance-based methods.
Although both silhouette-based and skeletons-based methods have their advantages, we argue that the incompleteness of both input representations of the gait information limits further improvement of these methods. As shown in Fig. 1(a), although the silhouettes retain most body shape information, the self-obscuring problem occurs when body areas overlap. Moreover, when the clothing condition changes, as shown in Fig. 1(b), the external body shape is significantly changed by clothing obscuration.
Precisely, the proposed framework consists of four main modules at three stages. Firstly, the silhou-ette sequence and skeleton sequence are extracted from the original RGB video by segmentation and pose estimation methods, respectively. After that, we feed the silhouettes and skeletons into independent encoding modules to ex-tract unique spatio-temporal feature maps for each modal. Finally, we propose a Spatial Fusion Module (SFM) and a Temporal Fusion Module (TFM) for spatial and tempo-ral feature fusion, respectively. As a video-based recog-nition task, how to effectively extract discriminative gait features from spatio-temporal information is the most crit-ical issue. In this work, we consider both fine-grained fu-sion at the spatial level and fine-aligned fusion at the tem-poral level. In the SFM, we design a co-attention mod-ule to enable the interactions between the silhouettes and skeletons. Specifically, we construct strategies called Fine-grained Body Parts Fusion (FBPF) to guide SFM for fine-grained feature fusion learning based on prior positional re-lationships between joints in the skeleton and corresponding parts in the silhouette. In the TFM, we introduced an em-bedding modeling operation for fine-aligned temporal mod-eling, in which we design the Cycle Position Embedding (CPE) to efficiently capture gait cycle features and better model the temporal information for gait sequences. The main contributions of the proposed method are sum-marized as follows: (1) We propose an effective and novel multi-modal gait recognition framework called MMGait-Former, which utilizes a more comprehensive gait represen-tation constructed from silhouettes and skeletons for better recognition. (2) A co-attention-based Spatial Fusion Mod-ule is proposed to perform a fine-grained body parts fusion (FBPF) of spatial gait features by using the prior positional relationships of each skeleton joint and each silhouette part. (3) We propose a novel Temporal Fusion Module for feature fusion at the temporal level, in which we design the Cycle Position Embedding (CPE) to model temporal relationships for gait sequences of arbitrary length. Experiments demon-strate that our MMGaitFormer achieves state-of-the-art per-formance on popular gait datasets. For the most challeng-ing condition ( i.e., walking in different clothes) in CASIA-B [26], our method achieves a rank-1 accuracy of 94.8%, which outperforms the state-of-the-art Single-modal meth-ods by a large margin (+11.2% accuracy improvement ).
Jiang_Hierarchical_Discriminative_Learning_Improves_Visual_Representations_of_Biomedical_Microscopy_CVPR_2023
Abstract Learning high-quality, self-supervised, visual representations is essential to advance the role of computer vision in biomedical microscopy and clinical medicine. Previous work has focused on self-supervised representation learning (SSL) methods developed for instance discrimination and applied them directly to image patches, or fields-of-view, sampled from gigapixel whole-slide images (WSIs) used for cancer diagnosis. However, this strategy is limited because it (1) assumes patches from the same patient are independent, (2) neglects the patient-slide-patch hierarchy of clinical biomedical microscopy, and (3) requires strong data augmentations that can degrade downstream performance. Importantly, sampled patches from WSIs of a patient's tumor are a diverse set of image examples that capture the same underlying cancer diagnosis. This motivated HiDisc, a data-driven method that leverages the inherent patient-slide-patch hierarchy of clinical biomedical microscopy to define a hierarchical discriminative learning task that implicitly learns features of the underlying diagnosis. HiDisc uses a self-supervised contrastive learning framework in which positive patch pairs are defined based on a common ancestry in the data hierarchy, and a unified patch, slide, and patient discriminative learning objective is used for visual SSL. We benchmark HiDisc visual representations on two vision tasks using two biomedical microscopy datasets, and demonstrate that (1) HiDisc pretraining outperforms current state-of-the-art self-supervised pretraining methods for cancer diagnosis and genetic mutation prediction, and (2) HiDisc learns high-quality visual representations using natural patch diversity without strong data augmentations.
Figure 1. Hierarchical self-supervised discriminative learning for visual representations. Clinical biomedical microscopy has a hierarchical patch-slide-patient data structure. HiDisc combines patch, slide, and patient discrimination into a unified self-supervised learning task.
1. Introduction Biomedical microscopy is an essential imaging method and diagnostic modality in biomedical research and clini-cal medicine. The rise of digital pathology and whole-slide images (WSIs) has increased the role of computer vision and machine learning-based approaches for analyzing mi-croscopy data [ 51]. Improving the quality of visual repre-sentation learning of biomedical microscopy is critical to This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 19798 introducing decision support systems and automated diag-nostic tools into clinical and laboratory medicine. Biomedical microscopy and WSIs present several unique computer vision challenges, including that image resolu-tions can be large (10K ⇥10K pixels) and annotations are of-ten limited to weak slide-level or patient-level labels. More-over, even weak annotations are challenging to obtain in or-der to protect patient health information and ensure patient privacy [ 54]. Additionally, data that predates newly devel-oped or future clinical testing methods, such as genomic or methylation assays, also lack associated weak annota-tions. Because of large WSI sizes and weak annotations, the majority of computer vision research in biomedical mi-croscopy has focused on WSI classification using a weakly supervised, patch-based, multiple instance learning (MIL) framework [ 2,7,20,37,38,48]. Patches are arbitrarily de-fined fields-of-view (e.g., 256 ⇥256 pixels) that can be used for model input. The classification tasks include identify-ing the presence of cancerous tissue, such as breast can-cer metastases in lymph node biopsies [ 13], differentiating specific cancer types [ 7,11,18], predicting genetic muta-tions [ 11,26,32], and patient prognostication [ 8,29]. A limitation of end-to-end MIL frameworks for WSI classi-fication is the reliance on weak annotations to train a patch feature extractor and achieve high-quality patch-level repre-sentation learning. This limitation, combined with the chal-lenge of obtaining fully annotated, high-quality WSIs, ne-cessitates better methods for self-supervised representation learning (SSL) of biomedical microscopy. To date, research into improving the quality and effi-ciency of patch-level representation learning with outan-notations has been limited. Previous studies have focused on using known SSL methods, such as contrastive learn-ing [35,47,50], and applying them directly to WSI patches for visual pretraining. These SSL methods are not optimal because the majority use instance (i.e., patch) discrimina-tion as the pretext learning task [ 5,9,10,15,55]. Patches belonging to the same slide or patient are correlated, which can decrease the learning efficiency. Instance discrimina-tion alone does not account for patches from a common slide or patient being different and diverse views of the same underlying pathology. Moreover, previous SSL methods ne-glect the inherent patient-slide-patch data hierarchy of clin-ical biomedical microscopy as shown in Figure 1. This hi-erarchical data structure is not used to improve representa-tion learning when training via a standard SSL objective. Lastly, most SSL methods require strong data augmenta-tions for instance discrimination tasks [ 9]. 
However, strong and domain-agnostic augmentations can worsen representa-tion learning in microscopy images by corrupting semanti-cally important and discriminative features [ 21,50]. Here, we introduce a method that leverages the in-herent patient-slide-patch hierarchy of clinical biomedi-cal microscopy to define a self-supervised hi erarchical discriminative learning task, called HiDisc. HiDisc uses a self-supervised contrastive learning framework such that positive patch pairs are defined based on a common ances-try in the data hierarchy, and a combined patch, slide, and patient discriminative learning objective is used for visual SSL. By sampling patches across the data hierarchy, we in-troduce increased diversity between the positive examples, allowing for better visual representation learning and by-passing the need for strong, out-of-domain data augmenta-tions. While we examine the HiDisc learning objective in the context of contrastive learning, it can be generalized to any siamese representation learning method [ 10]. We benchmark HiDisc self-supervised pretraining on two computer vision tasks using two diverse biomedical mi-croscopy datasets: (1) multiclass histopathologic cancer di-agnosis using stimulated Raman scattering microscopy [ 41] and (2) molecular genetic mutation prediction using light microscopy of hematoxylin and eosin (H&E)-stained can-cer specimens [ 30]. These tasks are selected because of their clinical importance and they represent examples of how deep learning-based computer vision methods can push the limits of what is achievable through biomedical mi-croscopy [ 18,24,26,31]. We benchmark HiDisc in com-parison to several state-of-the-art SSL methods, including SimCLR [ 9], BYOL [ 15], and VICReg [ 1]. We demon-strate that HiDisc has superior performance compared to other SSL methods across both datasets and computer vi-sion tasks. Our results demonstrate how hierarchical dis-criminative learning can improve self-supervised visual rep-resentations of biomedical microscopy.
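As a rough illustration of the unified patch-slide-patient objective described above, the sketch below implements a hierarchy-aware contrastive loss in PyTorch. It is not the authors' released code: the function name, the positive-pair bookkeeping, and the equal weighting of the slide- and patient-level terms are simplifying assumptions, and the patch-level term (two augmented views of the same patch) is omitted.

```python
import torch
import torch.nn.functional as F

def hierarchical_contrastive_loss(feats, slide_ids, patient_ids, temperature=0.1):
    """Sketch of hierarchy-aware discrimination.

    feats:       (N, D) patch embeddings from one mini-batch
    slide_ids:   (N,)   integer slide label of each patch
    patient_ids: (N,)   integer patient label of each patch
    Positives at a given level are any two patches sharing that ancestor.
    """
    feats = F.normalize(feats, dim=1)
    n = feats.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=feats.device)
    sim = (feats @ feats.t() / temperature).masked_fill(self_mask, float("-inf"))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    def level_loss(ids):
        pos = (ids[:, None] == ids[None, :]) & ~self_mask      # same-ancestor pairs
        has_pos = pos.any(dim=1)
        per_anchor = (log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)
        return -per_anchor[has_pos].mean()

    return level_loss(slide_ids) + level_loss(patient_ids)
```

In practice the two terms would be combined with a patch-level instance-discrimination term and whatever level weighting the paper's experiments actually use.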
Han_Clothing-Change_Feature_Augmentation_for_Person_Re-Identification_CVPR_2023
Abstract Clothing-change person re-identification (CC Re-ID) aims to match the same person who changes clothes across cameras. Current methods are usually limited by the insuffi-cient number and variation of clothing in training data, e.g. each person only has 2 outfits in the PRCC dataset. In this work, we propose a novel Clothing-Change Feature Aug-mentation (CCFA) model for CC Re-ID to largely expand clothing-change data in the feature space rather than visual image space. It automatically models the feature distribu-tion expansion that reflects a person’s clothing colour and texture variations to augment model training. Specifically, to formulate meaningful clothing variations in the feature space, our method first estimates a clothing-change normal distribution with intra-ID cross-clothing variances. Then an augmentation generator learns to follow the estimat-ed distribution to augment plausible clothing-change fea-tures. The augmented features are guaranteed to maximise the change of clothing and minimise the change of identity properties by adversarial learning to assure the effective-ness. Such augmentation is performed iteratively with an ID-correlated augmentation strategy to increase intra-ID clothing variations and reduce inter-ID clothing variation-s, enforcing the Re-ID model to learn clothing-independent features inherently. Extensive experiments demonstrate the effectiveness of our method with state-of-the-art results on CC Re-ID datasets.
1. Introduction Person re-identification (Re-ID) aims to match images of the same person across different locations over time. In early Re-ID methods, the target person was assumed to move within a short span of time and space, wearing the same clothes appearing in different camera views. Therefore most methods [20, 26, 28, 40, 41] leverage clothing information to identify persons. However, they cannot cope with the clothing-change situations, e.g. when a person changes clothes significantly over a few days. Figure 1. Transforming features towards specific directions can change specific semantics [37], e.g. clothing colours or textures, background, viewpoints and identities. To overcome this problem, more recent research has made an effort to consider clothing-change in model training and test, known as clothing-change person re-identification (CC Re-ID). Current methods for CC Re-ID focus mostly on modeling clothing-independent identity information. They can be broadly divided into two categories: one-modality and multi-modality based methods. The one-modality methods learn clothing-independent representations solely from RGB images [7, 15]. The multi-modality methods exploit other auxiliary information, such as human silhouettes [13, 17], parsing masks [22, 27], keypoints [3, 25], 3D shape [2, 9], to help capture clothing-independent information like facial features and/or body shape characteristics. However, existing methods' robustness to clothing variations is limited by the quite limited number and diversity of clothing in training data. For example, each person only has 2 outfits in the PRCC [39] dataset, which is insufficient to train a very robust Re-ID model. One direct approach is to synthesise more images of different clothes for each person using a generative model to augment training data. However, such direct data augmentation in the image space dramatically increases computational time and storage space, and its effectiveness on model generalisation is also not directly measurable. Moreover, due to the complexity of image synthesis, current generative Re-ID methods [21, 42] typically only model the exchange of clothes between two persons, and cannot generate plausible new clothes to expand the clothing-change library more freely. To tackle the above problems, we propose a novel Clothing-Change Feature Augmentation (CCFA) model for CC Re-ID by augmenting implicitly clothing-change data in the feature space rather than image space. It aims to explore the plausible feature distribution expansion that reflects meaningful clothing colour and texture variations on a person's appearance.
Such feature augmentation is not only computationally more efficient, but also can expand signifi-cantly more new clothes that do not exist in the dataset in or-der to increase clothing-change variations in model training. Our work is motivated by recent findings that there exist many semantic directions in the deep feature space [36,37]. Transforming a feature representation along specific direc-tions can result in a representation corresponding to another image data sample of different semantics. For example in Fig.1, features can be transformed towards some direction-s to change information of clothing colours and textures, such as the blue shirt and shorts. We wish to explore such characteristics in CC Re-ID model training. However, it remains challenging to properly implemen-t clothing-change augmentation in the feature space. First, there are many semantic directions of feature expansion ir-relevant to clothing, e.g. viewpoints and background in Fig.1, and there are also many semantically meaningless di-rections. It is nontrivial to find out meaningful clothing-change directions in order to maximise the diversity of clothing-change to benefit CC Re-ID model training. Criti-cally, we do not have annotations for these feature augmen-tation directions. Second, changing clothes may damage the identity property, i.e.person-specific unique character-istics. For example, the red direction in Fig.1 changes both clothes and identity properties like the face and hair style, and makes the man changed to a different woman, causing a meaningless augmentation for the man. It is significant that a person’s intrinsic identity property is maintained during feature augmentation in order to make meaningful clothing-change augmentation. To address these challenges, we formulate clothing-change ID-unchange feature augmentation learning in our model. Specifically, our model first includes a clothing-change covariance estimation method to discover semanti-cally meaningful clothing-change directions in the feature space. It statistically aggregates the intra-ID cross-clothing variances into a zero-mean multi-variate normal distribu-tion, from which new plausible clothing variations can beformulated automatically. Then an augmentation generator is trained to generate feature augmentation. It is guaran-teed not only to satisfy the estimated clothing-change direc-tions (normal distribution), but also to maximise clothing-change and meanwhile minimise identity-change by adver-sarial learning to assure the effectiveness of augmentation. Given this generator, we can iteratively augment cloth-ing information on person features of each sample to ex-pand clothing-change data in model training. To exploit the augmentation more efficiently, we further propose an ID-correlated augmentation strategy. Instead of augment-ing each sample independently, we perform different aug-mentation for the samples of the same person, and the same augmentation for the samples of different persons in each mini-batch. This increases intra-ID clothing vari-ations and reduces inter-ID clothing variations, enforcing the Re-ID model to automatically discover each person’s clothing-independent unique (implicit identity) information more fully. Extensive experiments demonstrate that our method improves the model’s accuracy significantly on CC Re-ID datasets PRCC [39] and LTCC [25]. We summarise the contributions of this work as follows. 
(1) For the first time we propose a CCFA model for CC Re-ID to implicitly augment clothing-change data in the feature space, by maximising clothing-change whilst minimising identity-change for person features. (2) We present a clothing-change covariance estimation method to formulate clothing-change semantic directions of feature distribution expansion, and introduce an augmentation generator to implement the clothing-change ID-unchange augmentation. (3) An ID-correlated augmentation strategy is proposed to increase intra-ID clothing variations and simultaneously to reduce inter-ID clothing variations, explicitly enforcing the Re-ID model to explore clothing-independent information more fully. (4) Our method improves the model's robustness to clothing variations and achieves state-of-the-art results.
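A minimal sketch of the covariance-estimation-plus-sampling idea is given below, assuming features and identity labels have already been extracted. It reduces the paper's clothing-change normal distribution to a diagonal Gaussian and omits the adversarial augmentation generator and the ID-correlated batching strategy, so it should be read as an illustration rather than the CCFA implementation.

```python
import torch

def estimate_clothing_change_variance(feats, person_ids):
    """Sketch: pool intra-ID (cross-clothing) variance into a zero-mean Gaussian.

    feats:      (N, D) person features
    person_ids: (N,)   identity label per feature
    Returns a per-dimension variance vector, i.e. a diagonal surrogate of the
    clothing-change covariance; the full CCFA also learns an adversarial
    augmentation generator, which is omitted here.
    """
    var_sum = torch.zeros(feats.size(1), device=feats.device)
    count = 0
    for pid in person_ids.unique():
        group = feats[person_ids == pid]
        if group.size(0) > 1:
            var_sum += group.var(dim=0, unbiased=False) * group.size(0)
            count += group.size(0)
    return var_sum / max(count, 1)

def augment_clothing_change(feats, var, strength=1.0):
    """Add offsets sampled from N(0, strength * diag(var)) to simulate new outfits."""
    return feats + torch.randn_like(feats) * (strength * var).sqrt()
```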
Bowman_A-La-Carte_Prompt_Tuning_APT_Combining_Distinct_Data_via_Composable_Prompting_CVPR_2023
Abstract We introduce À-la-carte Prompt Tuning (APT), a transformer-based scheme to tune prompts on distinct data so that they can be arbitrarily composed at inference time. The individual prompts can be trained in isolation, possibly on different devices, at different times, and on different distributions or domains. Furthermore, each prompt only contains information about the subset of data it was exposed to during training. During inference, models can be assembled based on arbitrary selections of data sources, which we call à-la-carte learning. À-la-carte learning enables constructing bespoke models specific to each user's individual access rights and preferences. We can add or remove information from the model by simply adding or removing the corresponding prompts without retraining from scratch. We demonstrate that à-la-carte built models achieve accuracy within 5% of models trained on the union of the respective sources, with comparable cost in terms of training and inference time. For the continual learning benchmarks Split CIFAR-100 and CORe50, we achieve state-of-the-art performance.
1. Introduction As large neural network models make their way into com-mercial applications, the basic paradigm of training them on a monolithic dataset leads to a number of challenges. First, as new data become available, updating the whole model can be prohibitively expensive. Even when training time is not an issue, some users may still require access and main-tenance of previous versions of the model to avoid disrup-tions of their downstream workflows. Second, owners of the training data may modify their sharing preferences at any time, leading to datasets that shrink over time (machine unlearning) or to different subsets of the training data being *Work done during an internship at AWS AI Labs.usable by different users (compartmentalization). Finally, the users themselves may want to use custom subsets of the data to better tailor their model to their use cases (model customization). These challenges are well known and addressed separately in different fields such as continual learning, forgetting, and model adaption. However, in order for a commercial sys-tem to be viable at scale, these issues have to be tackled concurrently. Ideally, one would have a large model that each user can run, trained using only data the specific user wants and has rights to, that can evolve without the need for fine-tuning as new data becomes available, or as individ-ual data owners exercise their right to have their data erased (“the right to be forgotten”). We refer to the problem of building such a model as `a-la-carte learning since, depending on the data availability and the user, the service may need to select and use different data chunks from a menu of available training data. More specifically, let D={D1,...,D n}be a variable collection of data sources (a data pool ). In `a-la-carte learning a user at inference time can specify a subset S⇢D of training data together with an input sample xto receive a personalized `a-la-carte output f(x, S)from the model f. Critically, the output f(x, S)must not depend on any data source Di/2S. `A-la-carte learning can be na ¨ıvely tackled in two ways. The service could pre-train one model for each possible sub-set of the data pool, and serve each user the most power-ful model they have rights to. While optimal from the user view-point, this requires a prohibitive exponential complex-ityO(2|D|)in both training time and storage. On the other extreme, the service could train a separate model on each data source individually and, at inference time, ensemble all models obtained from the sources in S. This requires only linear O(|D|)training time complexity to pre-train each model, but still has a significant storage cost. Fur-thermore due to the ensembling inference time is signifi-This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 14984 Figure 1. `A-la-carte Learning and APT. Given a pool of multiple data sources, the goal of `A-la-carte Learning is to allow the user to select – at inference time – an arbitrary subset S⇢Dof sources to use. The performance of the `a-la-carte model should be comparable to the performance of a model trained on S.(A)APT enables efficient `A-la-carte Learning by converting each source into a prompt, and composing together the relevant prompts at inference time. 
(B)To perform inference, APT uses a modified attention mechanism that prevents the prompts from interfering with each other and ensembles the individual outputs to construct the final prediction. cantly increased while also potentially suffering from lower performance than the ideal “paragon” model trained on the union of sources in S. The goal of `a-la-carte learning is to achieve performance as close as possible to the paragon without significantly increasing inference or training time. To address these key issues, we propose `A-la-carte Prompt Tuning (APT) . APT leverages vision transformers and prompt tuning to solve the `a-la-carte learning problem. First, APT converts each dataset Diinto a learned prompt pi, thus transforming the data pool into a prompt pool . Then at inference time, given a subset of sources Sto use, APT retrieves all corresponding prompts and concatenates them together with the input. Surprisingly, we show that in most cases APT has performance comparable to the paragon of joint learning with all data in S. Moreover, since each prompt is trained on an individual dataset, information is naturally compartmentalized. Thanks to the small size of prompts and an efficient forwarding method, APT is sig-nificantly cheaper (in both storage and inference time) than ensembling models. Importantly however, we note that simply concatenating different prompts that were trained separately leads to de-structive interference in the attention block which corrupts the representations (see Table 2). To address this problem, we introduce a modified attention mechanism that elimi-nates such interference, while also significantly reducing the inference time when multiple prompts are concatenated. A priori, this change comes with a small reduction in ex-pressive power and in the ability to capture synergistic in-formation between data sources. However, one of our main contributions is to show that the resulting drop in accuracy is generally modest, while providing far more valuable ben-efits to scalability, maintainability, and privacy. We empirically demonstrate the advantage of APT-based `a-la-carte learning for forgetting and continual learning (both domain-incremental and class-incremental). We observe that in most cases the performance of APT is within 5% of the performance of the paragon at a fraction of the cost. We also show that APT outperforms all comparable base-lines with the advantage of computational scalability from the structured attention mechanism. Summary of our contributions. 1.We introduce the `A-la-carte Learning problem to ad-dress continual learning, machine unlearning, and model customization concurrently.
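The snippet below sketches how à-la-carte inference could look once per-source prompts have been trained. The call signature `backbone(image, prompt)` and the per-source classifier heads are hypothetical placeholders, not APT's API; running each prompt in isolation and ensembling the per-prompt outputs only approximates the effect of APT's structured attention, which forwards all selected prompts in one pass while preventing them from attending to each other.

```python
import torch

@torch.no_grad()
def a_la_carte_predict(backbone, prompts, heads, image, selected):
    """Hypothetical à-la-carte inference by composing independently trained prompts.

    backbone: frozen ViT-style model; `backbone(image, prompt)` is assumed to
              return a feature vector with the prompt tokens prepended.
    prompts:  dict source_name -> learned prompt tensor
    heads:    dict source_name -> classifier head trained alongside that prompt
    selected: subset S of data sources the user may use at inference time
    """
    logits = []
    for name in selected:
        feats = backbone(image, prompts[name])   # per-prompt forward pass
        logits.append(heads[name](feats))
    return torch.stack(logits).mean(dim=0)       # output-level ensemble over S
```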
Chen_ViLEM_Visual-Language_Error_Modeling_for_Image-Text_Retrieval_CVPR_2023
Abstract Dominant pre-training works for image-text retrieval adopt “dual-encoder” architecture to enable high effi-ciency, where two encoders are used to extract image and text representations and contrastive learning is employed for global alignment. However, coarse-grained global alignment ignores detailed semantic associations between image and text. In this work, we propose a novel proxy task, named Visual-Language ErrorModeling ( ViLEM ), to inject detailed image-text association into “dual-encoder” model by “proofreading” each word in the text against the corresponding image. Specifically, we first edit the image-paired text to automatically generate diverse plausible neg-ative texts with pre-trained language models. ViLEM then enforces the model to discriminate the correctness of each word in the plausible negative texts and further correct the wrong words via resorting to image information. Further-more, we propose a multi-granularity interaction frame-work to perform ViLEM via interacting text features with both global and local image features, which associates lo-cal text semantics with both high-level visual context and multi-level local visual information. Our method surpasses state-of-the-art “dual-encoder” methods by a large margin on the image-text retrieval task and significantly improves discriminativeness to local textual semantics. Our model can also generalize well to video-text retrieval.
1. Introduction Pre-training vision-language models on massive image-text pairs to learn transferable representations for image-text retrieval has attracted a lot of attention in recent * Equal contribution. †Corresponding author. two dogs are sitting in the snow with its ownerthree dogs are sitting in the snow with its owner two deer are sitting in the snow with its owner two dogs are sleeping in the snow with its owner two dogs are sitting in the yard with its owner two bears are sitting in the snow watching its owner a baker is working in the kitchen rolling dough a bowl of potatoes next to a cup of carrots a green street sign sitting on the side of a pole a little girl riding a cart while in the airportVisual -Language Error Modeling Image -Text Contrastive Learning Image -paired text & texts of other imagesImagePlausible negative texts Automatic Text Editionsnowsitting with dogstwo dogs Global AlignmentDetailed AssociationFigure 1. Illustration of image-text contrastive learning (ITC) and visual-language error modeling (ViLEM). ITC learns image-text global alignment by distinguishing paired data from unpaired data. ViLEM establishes detailed image-text association via discrimi-nating and correcting wrong words in plausible negative texts. years. Previous dominant methods [11, 29, 38] adopt “dual-encoder” architecture to enable efficient retrieval, where two separate encoders are used to extract image and text representations. They learn a joint image-text embedding space via constraining the coarse-grained alignment be-tween global image and text features. However, the coarse-grained alignment constraint ignores the capture of detailed image and text semantics, and associations between them, impeding the performance improvement of image-text re-trieval. Humans achieve accurate image-text matching by care-fully discriminating whether there exists semantic diver-gence between image and text, i.e., determining whether each word can be precisely grounded to the image, which requires a comprehensive perception of each modality and well association between them. Humans can also elimi-nate semantic divergence effortlessly by correcting text er-rors through their powerful semantic association capability. Inspired by these, we propose a novel proxy task, named This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 11018 Visual-Language ErrorModeling ( ViLEM ), for image-text retrieval. As shown in Figure 1, compared with image-text contrastive learning for global alignment, ViLEM enforces the model to discriminate and eliminate the local seman-tic divergence by “proofreading” plausible negative texts against image information, which enhances fine-grained se-mantic perception and establishes detailed image-text asso-ciation. Collaborating with image-text contrastive learning, ViLEM significantly improves the retrieval performance of “dual-encoder” architecture. ViLEM is divided into two sub-tasks: text error detec-tion and text error correction. Given an image and a plau-sible negative text, the goal of error detection is training the model to exhaustively discriminate the correctness of each word in the form of binary classification. Meanwhile, error correction enforces the model to predict the correct words for the wrong ones from a fixed vocabulary under the condition of image information. 
However, finding plau-sible negative text for images and obtaining corresponding labels of error detection and correction requires high human annotation costs. Thus, we propose to automatically con-struct plausible negative texts and corresponding labels with a pre-trained language model BERT [12], where we exploit its rich linguistic knowledge to edit the image-paired texts and generate local text errors. The generated errors can be related to objects, actions, scenes, relationships, etc. (as shown in Figure 1), with which the model can learn various fine-grained semantics. The detection and correction labels can also be obtained by comparing generated negative texts with image-paired texts. To further leverage ViLEM’s ability to establish seman-tics associations, we propose a multi-granularity interaction framework to enable effective interaction between visual and textual encoders while maintaining high retrieval effi-ciency. Specifically, global visual features and local visual features are both fully exploited for text error detection and correction. For global visual features, we inject them into the local text representations to provide visual conditions for discriminating and correcting text errors, which asso-ciates local text information with high-level visual context and enhances the discriminativeness to fine-grained text se-mantics. For local visual features, we employ additional cross-attention modules to adaptively aggregate them into word-related visual concepts for error detection and cor-rection, which establishes the association between detailed text semantics with multi-level local visual information and facilitates fine-grained image-text alignment. The cross-attention modules will be removed in the inference, intro-ducing no additional computation cost and parameters com-pared with vanilla “dual-encoder”. The contributions of this work are listed as follows: (1) We introduce a novel proxy task, Visual-Language Error Modeling (ViLEM), to inject detailed seman-tic association between images and texts into “dual-encoder” architecture. (2) We propose a multi-granularity interaction framework to further leverage the ability of ViLEM while main-taining the high retrieval efficiency, which enhances the capture of fine-grained semantics and associates lo-cal text semantics with both high-level visual context and multi-level local visual information. (3) The extensive experimental results show that our method surpasses previous state-of-the-art “dual-encoder” methods by a large margin on the image-text retrieval task and significantly improves the dis-criminativeness to local text semantics. Moreover, our model can also generalize well to video-text retrieval.
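As a hedged illustration of the automatic text-editing step, the sketch below uses a pre-trained masked language model from the Hugging Face transformers library to replace one word of an image-paired caption and derive word-level detection/correction labels. The sampling policy (a single random position, the top non-identical prediction) is a simplification; the paper's editing procedure may select positions and candidates differently.

```python
import random
from transformers import pipeline

# Masked-LM editing of an image-paired caption; one word is swapped for a
# plausible alternative predicted by BERT, and labels are derived by comparison.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def make_negative_text(caption: str):
    words = caption.split()
    idx = random.randrange(len(words))
    original = words[idx]
    masked = " ".join(words[:idx] + [fill_mask.tokenizer.mask_token] + words[idx + 1:])
    for cand in fill_mask(masked):                       # candidates sorted by score
        token = cand["token_str"].strip()
        if token.lower() != original.lower():
            words[idx] = token
            break
    negative = " ".join(words)
    # Detection label: 1 for the edited (wrong) word, 0 elsewhere;
    # correction label: the original word at that position.
    detection = [int(i == idx and words[i].lower() != original.lower())
                 for i in range(len(words))]
    return negative, detection, (idx, original)

neg, det, corr = make_negative_text("two dogs are sitting in the snow with its owner")
```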
Chen_Cascaded_Local_Implicit_Transformer_for_Arbitrary-Scale_Super-Resolution_CVPR_2023
Abstract Implicit neural representation has recently shown a promising ability in representing images with arbitrary res-olutions. In this paper, we present a Local Implicit Trans-former (LIT), which integrates the attention mechanism and frequency encoding technique into a local implicit image function. We design a cross-scale local attention block to ef-fectively aggregate local features and a local frequency en-coding block to combine positional encoding with Fourier domain information for constructing high-resolution im-ages. To further improve representative power, we pro-pose a Cascaded LIT (CLIT) that exploits multi-scale fea-tures, along with a cumulative training strategy that grad-ually increases the upsampling scales during training. We have conducted extensive experiments to validate the effec-tiveness of these components and analyze various training strategies. The qualitative and quantitative results demon-strate that LIT and CLIT achieve favorable results and out-perform the prior works in arbitrary super-resolution tasks.
1. Introduction Single Image Super-Resolution (SISR) is the process of reconstructing high-resolution (HR) images from their corresponding low-resolution (LR) counterparts. SISR has long been recognized as a challenging task in the low-level vision domain due to its ill-posed nature, and has attracted a number of researchers dedicated to this field of study over the past decade [1–21]. A line of SISR research referred to as ‘fixed-scale SR ’ [1–15] focuses on extracting feature em-beddings from LR images and leveraging these embeddings to upsample images with a predefined factor through learn-able deconvolutions [3] or sub-pixel convolutions [4]. De-spite their success, many of the proposed approaches neces-sitate a distinct deep neural network model for each upsam-pling scale, which is usually restricted to a limited selection of integers ( e.g.,2,3,4). Such a limitation constrains the potential applications and deployment options of SISR models. To overcome this limitation, approaches for up-sampling LR images in a continuous manner via a single model emerge and attracted considerable attention recently. Over the past few years, arbitrary-scale SR has emerged and attracted considerable attention from researchers [16– 21]. Apart from the pioneering work Meta-SR [16], recent *Equal contribution LR HR LIIF Ours Attention MapLocal Ensemble LR HR LIIF Local Ensemble Ours Attention Map High LowHigh Low(a) (b)Figure 1. An illustration and comparison of different approaches that take into account nearby pixels for continuous upsampling: (a) the local ensem-ble method used in [17], and (b) our proposed local attention mechanism. endeavors [17–21] have achieved arbitrary-scale SR by re-placing the upsampling layers commonly adopted by pre-vious approaches with local implicit image functions, and have demonstrated favorable performance. These local im-plicit functions employ multi-layer perceptrons (MLPs) to map 2D coordinates and corresponding latent representa-tions to RGB values. Fig. 1 illustrates how different ap-proaches sample latent representations based on the queried coordinates (depicted as the red dots). Fig. 1 (a) illus-trates the local ensemble technique adopted by contempo-rary mainstream methods [17–21]. It calculates the RGB value of the queried coordinate by taking the weighted av-erage of those of the surrounding four pixels based on their relative distances to the queried coordinate. This approach, however, does not consider contextual information and re-lies solely on distance. For instance, in Fig. 1, the queried coordinates are intentionally designed to lie on edges. How-ever, merely calculating the weighted average of pixels fails to reflect the contextual information about the image con-tent, thereby preventing the accurate capture of the neces-sary features for performing SR. As a result, although pixel distance plays a vital role in SR tasks, it is essential to con-centrate more on the contextual information in an image. In light of the above observations, we propose a Lo-cal Implicit Transformer (LIT), which expands the num-bers of referenced latent vectors and accounts for the fea-ture correlation in the context by exploiting the attention mechanism [22]. LIT comprises a Cross-Scale Local Atten-tion Block (CSLAB), a Local Frequency Encoding Block This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. 
Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 18257 (LFEB), and a decoder. CSLAB generates attention maps based on the bilinearly interpolated latent vectors at the queried coordinates and the key latent vectors sampled from a grid of coordinates with relative positional bias [23, 24]. The first and second columns of Fig. 1 (b) visualize the at-tention maps generated by LIT, where the attention areas align closely with the edges. By applying attention maps to feature embeddings, the RGB values of the queried co-ordinates can be contextually predicted. Moreover, inspired by [21, 25, 26], we introduce LFEB, which projects rela-tive coordinates into latent space to address the spectral bias problem [27] of an implicit neural function. Specifically, relative coordinates are encoded into relative positional en-coding, which are multiplied with the frequency encoding extracted from the feature embedding in the Fourier domain to generate the frequency embeddings. This design enables the frequency embedding to integrate the relative positional encoding with texture information, thereby augmenting the expressivity of relative coordinates. Finally, a decoder is adopted to produce RGB values by taking advantage of the attention feature embedding and the frequency embedding. In order to address the issue of diverse scaling factors and achieve arbitrary-scale super-resolution, it is crucial to consider the role of upsampling factors in constructing high-resolution images within the local implicit image func-tion. However, simultaneously training a local implicit im-age function with a wide range of upsampling factors (e.g., 1 30) poses significant challenges. As a result, we propose a cumulative training strategy to incrementally en-hance the fuction’s its representative power. The strategy initially trains the local implicit image function with small upsampling factors and then finetunes it with alternatively sampled small and large ones. Furthermore, we present Cascaded LIT (CLIT) to harness the advantages of multi-scale feature embeddings, complementing missing details and information during one-step upsampling. The combi-nation of the cumulative training strategy and CLIT enables efficient and effective handling of arbitrary-scale SR tasks. The main contributions of our work are summarized as follows: (1) We introduce the LIT architecture, which incor-porates the local attention mechanism into arbitrary-scale SR (2) We further develop a cumulative training strategy and the cascaded framework CLIT to effectively handle large-scale upsampling. (3) We carry out comprehensive analyses of the performance impacts for LIT and CLIT. The extensive experimental findings demonstrate that the pro-posed LIT and CLIT are able to yield remarkable or com-parable results across a wide range of benchmark datasets. The paper is organized as follows. Section 2 reviews the related work. Section 3 walks through the proposed LIT and CLIT frameworks and the implementation details. Section 4 presents the experimental results. Section 5 concludes.2. Related Work Implicit neural representation. Implicit neural repre-sentation is a technique for representing continuous-domain signals via coordinate-based multi-layer percep-trons (MLPs). Its concept has been adopted in various 3D tasks, e.g., 3D object shape modeling [28–32], 3D scene re-construction [33–36], and 3D structure rendering [25, 37– 39]. 
For example, NeRF [25] employs implicit neural rep-resentation to perform novel view synthesis, which maps coordinates to RGB colors for a specific scene. In the past few years, 2D applications of implicit neural representa-tion have been attempted as well, such as image represen-tation [40, 41] and super-resolution [17–21]. Our work is related to a technique called ‘ local implicit neural repre-sentation ’ [17, 21], which encodes LR images to feature embeddings such that similar information could be shared within local regions. Such local implicit neural representa-tions are exploited to upscale LR images to HR ones. Single image super-resolution. In the past several years, various deep neural network (DNN) based architectures [1– 15] have been proposed for SISR. Among these works, SR-CNN [1] pioneered the use of convolutional neural net-works (CNNs) to achieve SISR in an end-to-end manner. It is later followed by several subsequent works that incorpo-rated more complicated model architectures, such as resid-ual blocks [6, 7], dense connections [8, 9], attention based mechanisms [10–12], or cascaded frameworks [5, 42, 43], to extract more effective feature representations for SISR. Recently, transformer-based methods [13–15] were intro-duced to SISR and achieved promising performance. Arbitrary-scale super-resolution. As discussed in Sec-tion 1, most of the contemporary SISR works limit their up-sampling scales to specific integer values, and are required to train a distinct model for each upsampling scale. To over-come such a limitation, several approaches [16–21] were proposed to train a unified model for arbitrary upsampling scales. Meta-SR [16] proposed a meta-upscale module for predicting the weights of their convolutional filters from co-ordinates and scales. The predicted weights are then utilized to perform convolutions to generate HR images. In contrast to Meta-SR, LIIF [17] employs an MLP as a local implicit function, which takes a queried coordinate in an HR im-age, its nearby feature representations extracted from the corresponding LR image, as well as a cell size to predict an RGB value for that coordinate. UltraSR [18] and IPE [19] extended LIIF by replacing coordinates with the embedded ones to deal with the spectral bias issue [25, 27, 41, 44, 45] inherent in MLPs. LTE [21] further introduced a local tex-ture estimator that transforms coordinates into Fourier do-main information to enrich the representational capability of its local implicit function. Different from the above ap-18258 HRC Encoder EθDecoder Dϕ(a) Overview Cross-Scale Local Attention Local Frequency Encodingconv conv conv convconv(b) Local Implicit Transformer Element-wise multiplication C SoftmaxElement-wise addition Inner productChannel-wise concatenation Local sampling Bilinear samplingS S Bilinear upsamplingSS S SS (c) Cross-Scale Local Attention (d) Local Frequency Encoding Positional Encoding Positional EncodingFFTFC Figure 2. The proposed LIT framework. The local sampling operation samples input embeddings based on a grid of coordinates. proaches, our proposed methodology exploits a novel local attention mechanism and a cascaded framework to deal with the arbitrary-scale SR. In order to fairly compare with the above approaches, we similarly adopt EDSR [7], RDN [9] and SwinIR [14] as the encoders for our LIT and CLIT. 3. 
Methodology In this section, we first provide an overview of the proposed LIT framework, followed by the implementation details of it and its main modules. We then discuss our cumulative training strategy, as well as the framework of CLIT. 3.1. Overview of the LIT Framework LIT is a framework that employs a novel cross-scale local attention mechanism and a local frequency encoding technique to perform arbitrary-scale SR tasks. Fig. 2 (a) provides an overview of the proposed framework, which aims at producing an HR image I_HR ∈ R^{r_h H × r_w W × 3} at 2D HR coordinates x_HR ∈ X from a given LR image I_LR ∈ R^{H × W × 3} at 2D LR coordinates x_LR ∈ X, based on an arbitrary upsampling scale r = {r_h, r_w}, where X is the 2D coordinate space that is used to represent an image in the continuous domain. An encoder E_θ first extracts a feature embedding Z ∈ R^{H × W × C} from I_LR. The extracted Z is then forwarded into LIT along with the 2D coordinates of I_HR and a cell = (2/s_h, 2/s_w) to generate the
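To ground the notation above, the following is a minimal sketch of the basic local implicit query path in the spirit of LIIF, on which LIT builds: the latent Z is sampled at each HR query coordinate, a relative coordinate to the nearest latent centre is formed, and an MLP decoder maps the concatenation to RGB. The cross-scale local attention block, the local frequency encoding block, and the cascaded multi-scale design of CLIT are intentionally omitted, and the helper names are not the paper's.

```python
import torch
import torch.nn.functional as F

def make_coord(h, w):
    """Centres of an h x w grid of latent codes, in [-1, 1], (x, y) order."""
    ys = (torch.arange(h) + 0.5) / h * 2 - 1
    xs = (torch.arange(w) + 0.5) / w * 2 - 1
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    return torch.stack([gx, gy], dim=-1)                               # (h, w, 2)

def query_rgb(feat, coords, cell, decoder):
    """Basic local-implicit query path (LIIF-style), on which LIT builds.

    feat:    (1, C, H, W) latent feature map Z from the LR encoder
    coords:  (M, 2) HR query coordinates in [-1, 1], (x, y) order
    cell:    (M, 2) cell sizes (2/s_h, 2/s_w) of the target resolution
    decoder: MLP mapping [latent, relative coord, cell] -> RGB
    """
    grid = coords.view(1, -1, 1, 2)
    latent = F.grid_sample(feat, grid, mode="nearest",
                           align_corners=False)[0, :, :, 0].t()        # (M, C)
    centers = make_coord(*feat.shape[-2:]).permute(2, 0, 1).unsqueeze(0)
    nearest = F.grid_sample(centers, grid, mode="nearest",
                            align_corners=False)[0, :, :, 0].t()       # (M, 2)
    rel = coords - nearest                                             # relative coordinate
    return decoder(torch.cat([latent, rel, cell], dim=-1))             # (M, 3)
```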
Cai_Open-World_Multi-Task_Control_Through_Goal-Aware_Representation_Learning_and_Adaptive_Horizon_CVPR_2023
Abstract We study the problem of learning goal-conditioned poli-cies in Minecraft, a popular, widely accessible yet challeng-ing open-ended environment for developing human-level multi-task agents. We first identify two main challenges of learning such policies: 1) the indistinguishability of tasks from the state distribution, due to the vast scene diversity, and 2) the non-stationary nature of environment dynamics caused by partial observability. To tackle the first challenge, we propose Goal-Sensitive Backbone (GSB) for the policy to encourage the emergence of goal-relevant visual state representations. To tackle the second challenge, the pol-icy is further fueled by an adaptive horizon prediction mod-ule that helps alleviate the learning uncertainty brought by the non-stationary dynamics. Experiments on 20 Minecraft tasks show that our method significantly outperforms the best baseline so far; in many of them, we double the perfor-mance. Our ablation and exploratory studies then explain how our approach beat the counterparts and also unveil the surprising bonus of zero-shot generalization to new scenes (biomes). We hope our agent could help shed some light on learning goal-conditioned, multi-task agents in challeng-ing, open-ended environments like Minecraft. The code is released at https://github.com/CraftJarvis/ MC-Controller .
1. Introduction Building agents that can accomplish a vast and diverse suite of tasks in an open-ended world is considered a key challenge towards devising generally capable artificial intelligence [2, 3, 6, 35]. In recent years, environments like Minecraft have drawn much attention from the related research communities [16, 18-20, 26], since they are not only popular, and widely accessible, but also offer an open-ended universe with myriad of tasks, making them great platforms for developing human-level multi-task agents. Figure 1. Comparison of states between Meta-world [49] (left) and Minecraft [24] (right) based on t-SNE visualization. The points with the same color represent states from the trajectories that complete the same task. It can be seen that the states are much more distinguishable in terms of tasks in Meta-world than in Minecraft, implying the higher diversity of states and tasks in open worlds like Minecraft over traditional multi-task agent learning environments like Meta-world. Although groundbreaking successes have been observed in many challenging sequential decision-making problems such as Atari [32], Go [39], and MOBA games [13, 44, 45], such successes have not been transferred to those open worlds. To understand the gap and design corresponding solutions, we need to first understand the distinct challenges brought by these environments. Let's take Minecraft [24] as an example: there are over twenty types of landscapes ranging from flat lands like Savannah and desert to rough mountains with forests and caves. These diverse landscapes also enable countless tasks that could be achieved by the agents: mining, harvesting, farming, combating, constructing, etc. Compared to canonical agent learning environments like Go [39], Atari [32], and robotic control suite [41, 43, 48], Minecraft provides a substantially more diverse distribution of states thanks to the rich scenes and tasks built with the game, making it exceptionally difficult to extract the pivotal task-relevant visual state representations for goal-conditioned policies. To help our readers understand the significance of this challenge, we visualize the states from trajectories that complete some tasks in Minecraft and Meta-world [48] (a popular multi-task learning environment but with fewer states and tasks) in Fig. 1. States of different tasks are annotated with different colors. Clearly, the states in Minecraft are much less distinguishable in terms of tasks than in Meta-world. Therefore goal-conditioned policies are more likely to struggle in mapping those states and tasks (served as goals) to actions.
A plague embedded with such an environment is non-stationary dy-namics , which makes it almost impossible to predict what will happen next. Therefore, the distances from states to the current goal become much less clear due to the world un-certainty, leading to less distinguishable states in terms of goal completeness and more faulty decisions emitted by the goal-conditioned policies. This paper aims at mitigating both aforementioned chal-lenges that emerge from most open-world environments. First, we observe that the architecture of the policy network is crucial to learning goal-relevant visual state representa-tions that allow goal-conditioned actions in domains with low inter-goal state diversity (cf. Fig. 1). To this end, we propose Goal-Sensitive Backbone (GSB), which enables ef-fective learning goal-conditioned policies over 20 tasks in the Minecraft domain. Next, to mitigate the challenge posed by the partially observed and non-stationary environment, we introduce horizon as an extra condition for the policy and a corresponding horizon prediction module. Specifi-cally, the policy is also explicitly conditioned on the remain-ing time steps till achieving certain goals (i.e., distance-to-goal). We find it significantly boosts the performance of our agents in open-world multi-task domains. However, the ground-truth distance-to-goal is unavailable during evalu-ation. To fix this problem, we train a horizon prediction module and feed the estimated distance-to-goal to the hori-zon commanding policy in evaluation. This leads to a 27% gain in average success rate under the multi-task settings. We evaluate the proposed approaches based on the sim-ple yet effective behavior cloning algorithm [ 10]. The ex-periments are conducted in three common biomes. In multi-task settings, our proposed method outperforms the base-line in terms of success rate and precision by a large mar-gin. It also achieves consistent improvement in single-task settings. Our ablation and exploratory studies then explain how our approach beat the counterparts and also unveil the surprising bonus of zero-shot generalization to new scenes (biomes). To summarize, targeting two identified challenges dis-tinct to open worlds, our contributions are threefold: •We propose Goal-Sensitive Backbone (GSB), a neural network that enables effective learning goal-relevant vi-sual state representations at multiple levels for goal-conditioned policies, aiming at addressing the challenge of diverse state distribution in open-ended environments. •We further introduce adaptive horizon prediction to ex-plicitly condition the policy on the distance from the cur-rent state to the goal, yielding much better performances in a partially observable open-ended environment with non-stationary dynamics. •We conduct extensive studies on the popular yet challeng-ing Minecraft domain with baselines and our proposed method. The results demonstrate superior advantages of our approach over the counterparts in terms of both suc-cess rate and precision of task completion.
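The sketch below illustrates how a goal- and horizon-conditioned policy head could be wired up. The layer sizes, the number of horizon bins, and the action-space size are placeholder assumptions rather than the paper's architecture, and the Goal-Sensitive Backbone itself is abstracted into a precomputed state feature.

```python
import torch
import torch.nn as nn

class HorizonConditionedPolicy(nn.Module):
    """Goal- and horizon-conditioned policy head; sizes are placeholders and the
    visual backbone (e.g. a goal-sensitive one) is abstracted into `state_feat`."""

    def __init__(self, state_dim=512, goal_dim=512, horizon_bins=32, n_actions=90):
        super().__init__()
        self.horizon_head = nn.Linear(state_dim + goal_dim, horizon_bins)  # distance-to-goal
        self.horizon_embed = nn.Embedding(horizon_bins, 64)
        self.policy_head = nn.Sequential(
            nn.Linear(state_dim + goal_dim + 64, 512), nn.ReLU(),
            nn.Linear(512, n_actions),
        )

    def forward(self, state_feat, goal_feat, horizon=None):
        fused = torch.cat([state_feat, goal_feat], dim=-1)
        horizon_logits = self.horizon_head(fused)
        if horizon is None:                      # evaluation: fall back to the prediction
            horizon = horizon_logits.argmax(dim=-1)
        h = self.horizon_embed(horizon)
        return self.policy_head(torch.cat([fused, h], dim=-1)), horizon_logits
```

During behavior cloning, the bucketed ground-truth remaining steps would supervise `horizon_logits` and condition the policy; at test time the predicted horizon is fed back in, mirroring the adaptive horizon prediction described above.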
Feng_Network-Free_Unsupervised_Semantic_Segmentation_With_Synthetic_Images_CVPR_2023
Abstract We derive a method that yields highly accurate semantic segmentation maps without the use of any additional neu-ral network, layers, manually annotated training data, or supervised training. Our method is based on the observa-tion that the correlation of a set of pixels belonging to the same semantic segment do not change when generating syn-thetic variants of an image using the style mixing approach in GANs. We show how we can use GAN inversion to ac-curately semantically segment synthetic and real photos as well as generate large training image-semantic segmenta-tion mask pairs for downstream tasks.1. Introduction Semantic segmentation is a computer vision prob-lem with countless important applications, including self-driving cars, medical image analysis, and image content generation and editing [5, 11, 12, 19]. Yet, attaining accu-rate semantic segmentation masks remains an open prob-lem [18, 28]. A recent proposed solution is to synthesize large training data-sets of photo-realistic images and their masks using generative models like Generative Adversarial Networks (GANs) [1, 6, 18, 19, 28]. However, these meth-ods require 1. adding and training an extra neural network to synthesize the mask, increasing model and training com-plexity, and 2. very costly pixel-wise human annotations on a large set of training images for every type of object and This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore. 23602 scene of interest [19, 28]. Here, we propose a new algorithm that does not require the addition of any extra network, costly pixel-wise human annotations, or supervised training. Our key observation is that the correlation of a set of pixels belonging to the same semantic segment do not change when generating synthetic variants of an image using the style mixing approach [14]. This allows us to derive an unsupervised algorithm to gen-erate highly accurate semantic segmentation masks without the need to incorporate new nets or layers to existing ones or the need to re-train them. We show how our algorithm can be used to semantically segment real photos, generate synthetic data to successfully train semantic segmentation algorithms, and create semantic segmentation masks for ap-plications like style mixing Fig. 1. In recent years, a number of works [1,4,6,19,21,24,28] have emerged to address the problem of semantic segmenta-tion with synthetic images. The difference of this solution, compared to a classical semantic segmentation methods on photos, is that we can take advantage of the rich semantic structured in models like StyleGAN2. This, combined with cheap photo-realistic image synthesis at scale, provides the possibility to synthesize large training sets with their se-mantic masks to train semantic segmentation algorithms at low cost while attaining better or state-of-the-art results [4]. 2. Related Works The works most relevant to this study include supervised and unsupervised method that do semantic segmentation on synthetic images (fine-grained semantic masks and/or fore-ground vs. background extraction). One of the first attempts to do semantic segmentation on synthetic images is DatasetGAN [18, 28]. 
DatasetGAN is a few-shot fully supervised solution, where a small MLP network is trained on the activations of a StyleGAN synthesis network to regress a fully annotated fine-grained segmentation mask. DatasetGAN still requires pixel-wise human annotations though. A number of efforts have been made to remove this human annotation requirement, e.g., Labels4Free [1] and FurryGAN [4]. Both of these approaches use an independent masking network that is trained unsupervisedly to discriminate foreground vs background. Unfortunately, extensions to a full, fine-grained semantic map are not available, and it is unclear how to achieve them. A potential solution is to use unsupervised clustering on a CLIP-based map [21], but this leads to inaccurate segmentations. Another characteristic of existing methods is that the algorithms operate on intermediate features of the StyleGAN generators, introducing additional dependencies on the generator architectures and the trained weights. Thus, whenever the pre-trained generator is updated, either with new weights or new architectures, it is often necessary to re-configure the masking network branch correspondingly, followed by re-training. Figure 2. The key insight of our paper is to use generative models' editing techniques like style mixing in StyleGAN2 to identify image segments that co-vary vs segments that do not (mixing cutoff c = 8). Notice that across style-mixed images, pixels vary consistently within the same semantic segment but differently across them. The high dimensionality of the intermediate features also poses significant computational demand, which has to be resolved by often more costly machines, or segmentation at lower resolution. In contrast, the method we derive below achieves highly accurate semantic segmentation maps in a fully unsupervised way without the need of adding any new nets/layers, re-training any components of the existing models, or the use of human annotations. As our method operates on raw pixels, it also gives flexibility and adaptability to new models, reducing computational and operational cost. Figure 3. Overview of our method. Given a photo (and its synthetic version obtained with GAN inversion) or a synthetic image generated by StyleGAN2, we first construct a style summary tensor by concatenating style-mixed images. Unsupervised pixel-wise clustering is then applied on the style summary tensor, yielding semantic-specific masks. These can be further combined to create the desirable semantic segment. 3. Method This section provides a detailed derivation of the proposed algorithm, Fig. 2 and Fig. 3. 3.1. Preliminaries on style mixing Style mixing is a technique first proposed in StyleGAN [15] as a regularization during training, but was later adopted as a method for synthesizing synthetic image variants, Fig. 2(a). Figure 4. Style mixing process of StyleGAN2. More formally, a StyleGAN generator G(·) = G_s(G_m(·)) is composed of two sub-networks: G_m, the mapping network, and G_s, the synthesis network. G_m maps from an input latent z to an intermediate latent w ∈ R^{d×l}, where d and l are the latent dimension and the number of modulated layers in G_s, respectively. G_s then maps w to the image space X ∈ R^{w×h×3}. As shown in Fig. 4, style mixing operates on two latent codes z_0, z_1 for a trained generator. Given a layer cutoff c ∈ {0, ..., l}, a new code w_01 is generated by concatenating w_0 before layer c and w_1 after layer c. The style-mixed image is then given by X_01 = G_s(w_01).
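For readers who want to reproduce the style-mixing step, the sketch below assumes a StyleGAN2-ADA-style PyTorch generator whose mapping network returns per-layer latents of shape (N, num_ws, w_dim); with a different codebase the indexing would change accordingly.

```python
import torch

@torch.no_grad()
def style_mix(G, z0, z1, cutoff=8):
    """Style mixing, assuming a StyleGAN2-ADA-style generator whose mapping
    network returns per-layer latents of shape (N, num_ws, w_dim).

    Layers < cutoff keep the structure latent w0; layers >= cutoff take the
    style latent w1, so X01 preserves the layout of X0 with the style of X1.
    """
    w0 = G.mapping(z0, None)          # structure latent
    w1 = G.mapping(z1, None)          # style latent
    w01 = w0.clone()
    w01[:, cutoff:] = w1[:, cutoff:]
    return G.synthesis(w01)           # X01
```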
This process is called style mixing, as the image $X_{01}$ is a mix of $X_0 = G_s(w_0)$ and $X_1 = G_s(w_1)$. The level of combination depends on the cutoff $c$: as $c$ goes from $0$ to $l$, the style-mixed image $X_{01}$ changes from $X_1$ to $X_0$. As illustrated in Fig. 5, when $c$ increases, $X_{01}$ becomes closer to $X_0$, with the mixing happening in the order [high-level (pose, identity, etc.)] $\rightarrow$ [low-level (texture, color style, color shift, etc.)]. In the rest of the paper, $w_0$ is referred to as the structure latent, $w_1, w_2, \dots, w_n$ as the style latent(s), and $X_{01}, X_{02}, \dots, X_{0n}$ as the style-mixed images.

3.2. Semantic clustering through Style Mixing

Given a synthetic image $X_0 \in \mathbb{R}^{w \times h \times 3}$ generated from its latent code $z_0$ by a trained StyleGAN generator, and a content of interest $o$, we wish to find a binary mask $Y_0 \in \mathbb{R}^{w \times h}$ for $o$ such that each element $y_{ij}$ follows

$$y_{ij} = \begin{cases} 1 & \text{if pixel } [i, j] \text{ belongs to } o, \\ 0 & \text{otherwise.} \end{cases} \quad (1)$$

We first address the case where $o$ is the image foreground, and extend to object-wise masking in Sec. 3.3.

To generate a high-precision object mask without training, the key question is which pixels in $X_0$ belong to the same semantic segment. We note that this can be readily determined by leveraging the properties of style mixing in StyleGAN generators.

How can style mixing help cluster pixels by objects? We observe that, with a properly selected $c$, style-mixed images generally maintain all the semantic structure of the original image (Fig. 5). This allows the pixel-wise color correlation across different style-mixed images to serve as a surrogate for their semantic categories.

Figure 5. Changing $c$ changes the level of style mixing.

Specifically, for a given query image $X_0$, $n$ style-mixed images $X_1, \dots, X_n$ are generated. First, we construct a style summary tensor $X_s \in \mathbb{R}^{w \times h \times 3n}$ by concatenating the style-mixed images at each pixel location (Fig. 3, columns 1-2). Second, K-means clustering is applied on $X_s$ across pixels, each described by its $3n$-dimensional style summary feature. That is,

$$Y' = \mathrm{kmeans}(X_s, k), \quad (2)$$

where $k$ is the number of clusters and $Y' \in \{1, \dots, k\}^{w \times h}$ is the $w \times h$ cluster assignment map of $X_0$. This creates pixel sets in which within-cluster pixels change their color similarly, while across-cluster pixels display much wider, random changes (Fig. 2(b)). Hence, $Y'$ is a surrogate of the desired semantic segmentation map of $X_0$ (Fig. 3, column 3).

At this stage, the cluster map $Y'$ is intermediate, as we do not know which clusters correspond to the content of interest $o$. Thus, foreground identification has to be performed to map $Y'$ to $Y$. Many algorithms can be used to decide the foreground cluster; in this paper, we give examples of two methods: 1. corner minority, 2. saliency.

Corner minority approach. For a given bounding box around an object and the intermediate mask $Y'$, we can generally assume that the object is located at the center of the bounding box and that the four corners are mostly occupied by background pixels. Thus, we examine a $b \times b$ area in the four corners of $Y'$. For a pre-defined threshold $\theta_{\mathrm{corner}}$, if a cluster occupies at least $\mathrm{round}(\theta_{\mathrm{corner}} b^2)$ pixels, it is a background cluster. We iterate through all $k$ clusters. Pixels within background clusters are assigned $0$, otherwise $1$, yielding the final binary mask $Y$. This method is simple, but particularly effective for rigid, convex objects without substantial shape variation across images.
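As a concrete illustration of Eq. (2) and the corner-minority heuristic, the sketch below builds the style summary tensor, clusters pixels with K-means, and marks as background any cluster that dominates a corner patch. It is a minimal sketch under stated assumptions: scikit-learn's KMeans stands in for the faiss implementation used in the paper, the default values of b and theta_corner are illustrative, and reading the corner rule as "in any of the four corner patches" is one possible interpretation of the text.

```python
import numpy as np
from sklearn.cluster import KMeans  # the paper uses faiss; sklearn is used here only for brevity


def style_summary_tensor(style_mixed_images):
    """Stack n style-mixed images (each H x W x 3, sharing the structure latent w0)
    into a per-pixel style summary tensor Xs of shape (H, W, 3n)."""
    return np.concatenate(style_mixed_images, axis=-1)


def cluster_pixels(Xs, k):
    """Eq. (2): K-means over pixels, each described by its 3n-dim style summary feature.
    Returns the cluster assignment map Y' of shape (H, W) with labels in {0, ..., k-1}."""
    H, W, C = Xs.shape
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(Xs.reshape(-1, C))
    return labels.reshape(H, W)


def corner_minority_mask(Y_prime, b=8, theta_corner=0.5):
    """Corner-minority heuristic: a cluster occupying >= round(theta_corner * b^2) pixels
    in any b x b corner patch of Y' is treated as background (b, theta_corner illustrative)."""
    k = Y_prime.max() + 1
    corners = [Y_prime[:b, :b], Y_prime[:b, -b:], Y_prime[-b:, :b], Y_prime[-b:, -b:]]
    background = set()
    for m in range(k):
        for patch in corners:
            if (patch == m).sum() >= round(theta_corner * b * b):
                background.add(m)
                break
    # Pixels of background clusters -> 0, all other pixels -> 1.
    return (~np.isin(Y_prime, list(background))).astype(np.uint8)
```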
Saliency approach. An alternative foreground detection algorithm utilizes a saliency map $S \in \mathbb{R}^{w \times h}$, where each element $s_{ij} \in [0, 1]$ approximates how likely it is that a pixel belongs to $o$ in $Y'$. Given a predefined threshold $\theta_{\mathrm{saliency}}$ and a cluster index $m \in \{1, \dots, k\}$, the foreground clusters can be identified by examining the average saliency of all the pixels within cluster $m$. Specifically, cluster $m$ belongs to $o$ if

$$\frac{1}{N} \sum_{(i, j) \in \{Y' = m\}} S(i, j) > \theta_{\mathrm{saliency}}, \quad (3)$$

where $N$ is the number of pixels belonging to cluster $m$. For most single convex objects, a pre-defined Gaussian heat map peaked at the center of the image is sufficiently good as the saliency map; this is what we use in our experiments on human and animal faces.
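Below is a small sketch of this saliency-based rule, assuming the centered Gaussian heat map mentioned above. The width parameter sigma_frac and the default theta_saliency are illustrative choices, not values taken from the paper.

```python
import numpy as np


def gaussian_saliency(H, W, sigma_frac=0.25):
    """Centered Gaussian heat map with values in (0, 1]; sigma_frac is an illustrative choice."""
    ys, xs = np.mgrid[0:H, 0:W]
    cy, cx = (H - 1) / 2.0, (W - 1) / 2.0
    sigma2 = (sigma_frac * min(H, W)) ** 2
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma2))


def saliency_mask(Y_prime, S, theta_saliency=0.5):
    """Eq. (3): cluster m is foreground if the mean saliency of its pixels exceeds theta_saliency."""
    Y = np.zeros_like(Y_prime, dtype=np.uint8)
    for m in range(Y_prime.max() + 1):
        members = (Y_prime == m)
        if S[members].mean() > theta_saliency:
            Y[members] = 1
    return Y
```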
3.3. Object and instance segmentation

In this section, we extend our method to the object level. This is crucial for complex scenes, where the foreground is not always consistently defined by a single type of object, nor is its appearance and alignment [12]. This complexity poses special challenges for existing unsupervised segmentation algorithms on synthetic images, as most of them rely on a stable foreground-background decomposition.

To extend our method to the object or even instance level, one only needs to apply the aforementioned algorithm to the pixels within the bounding box of the object/instance ($X_o$ is an image crop instead of the full image). With the significant recent progress on pre-trained object detectors [10, 25] and zero-shot object detectors [26], one can obtain a high-performance model for a wide range of object categories. In this study, we use GLIP [26] for its accuracy on zero-shot object detection.

3.4. Assumptions and limitations

Not relying on an additional network branch to perform segmentation makes the assumptions and limitations of our method clearer, and makes it easier to interpret when it fails. Here, we analyze our algorithm to provide an initial applicability assessment and troubleshooting directions.

Our method currently uses images from StyleGAN2 models. Since our algorithm relies on the properties of style mixing, directly applying it to synthetic images generated by architectures that differ significantly from StyleGAN is not straightforward. This is especially true for generative models where style mixing (or a variant of it) cannot be used. It can be resolved indirectly by using a StyleGAN model of the same domain together with an accurate GAN inversion algorithm like [3, 11, 22]. On the other hand, note that our method only relies on the style-mixing property of a set of images; StyleGAN is simply a straightforward source of such images. With recent developments such as ControlNet [27], one could generalize our method to diffusion models as well.

Our foreground modeling only works when the corner-background assumption is met or when the saliency map approximately reflects the actual content of interest. When the object of interest violates these assumptions, our foreground identification fails and a custom foreground heuristic has to be used. We have not seen this in any of our experimental results, but one can always compose a scene where these assumptions are violated. We provide more analysis of the failure modes of our method in Sec. 5.

4. Experiments

We report the results of our approach in three different applications and provide comparative results against state-of-the-art methods.

4.1. Implementation details

Due to its simplicity and memory efficiency, we run our algorithm at the native resolution of the pre-trained StyleGAN2 generators, that is, 1024×1024 pixels for faces and scenes and 512×512 for horses and the faces of other animals. In all experiments, we set $n = 50$ for the style summary tensor described above. For facial images, the number of clusters $k$ in K-means is set to 3 for foreground segmentation and 8 for eye and mouth segmentation, both using the saliency approach. For horses, animal faces, and scenes, $k$ is set to 2 and we use the corner approach. We use the K-means implementation of faiss [13]. Each image takes less than 1 second to process, using 4 GB of GPU memory on a single A100.

4.2. Synthetic Image Segmentation

We test our algorithm's accuracy at extracting object masks on the FFHQ [14], LSUN-Horses, AFHQ [8], and DeepRooms [12] datasets. Since synthetic images do not have golden ground-truth segmentations from humans, we use an off-the-shelf, state-of-the-art semantic segmentation algorithm as pseudo ground truth, similar to the testing procedure used in prior art like Labels4Free [1]. For experiments on FFHQ and LSUN-Horses, we use the DeepLabV3+ [7] model trained on the augmented PASCAL-VOC12 dataset. For DeepRooms, we use a SwinTransformer [20] pre-trained on the ADE20K dataset [29].
The foreground class is obtained by selecting the masks of the appropriate semantic classes (sofa and table in our experiments). All the trained models are taken from [9]. For LSUN-Horses and DeepRooms, where multiple foreground objects may appear in an image, we first apply the GLIP zero-shot object detector [26] and then run our algorithm within the detected bounding boxes, as described in Sec. 3.3. We use "horses" and "sofa, coffee table, lamp, side table, rug, in a livingroom" as the GLIP text captions for the LSUN-Horse and DeepRoom images, respectively.

We report comparative results using mIOU (mean Intersection Over Union). For foreground segmentation, we report foreground IOU, background IOU, and their average as mIOU. For object-wise semantic segmentation, we report mIOU over object categories as well as individual object IOUs.

We compare the synthetic semantic segmentation performance of our algorithm with DatasetGAN [28], Labels4Free (L4F) [1], and Semantic in Style (SiS) [21]. For DatasetGAN, we train the model on the stylegan2-ffhq-config-f generator with the authors' provided annotations on 16 facial images, using the official optimization-based inversion algorithm of StyleGAN2 at 512×512 resolution. For both L4F and SiS, we re-train the models at 1024×1024 for a fair comparison to ours. L4F requires around 10k synthetic images to train its Alpha Network, while SiS requires 50 images for clustering and 15k images for training its masking branch (Tab. 1).

Table 1. Image segmentation performance on FFHQ (i.e., on synthetic data) and CelebAMask-HQ (i.e., on real data). IOU (fg/bg) is the IOU for foreground/background segmentation. mIOU is the average of IOU (fg) and IOU (bg).

Methods         | Training data | Supervision | Additional network | FFHQ IOU (fg/bg) | FFHQ mIOU | CelebAMask-HQ IOU (fg/bg) | CelebAMask-HQ mIOU
DatasetGAN [28] | 16            | ✓           | ✓                  | 0.83/0.73        | 0.78      | 0.87/0.73                 | 0.80
L4F [1]         | 10k           | ×           | ✓                  | 0.92/0.85        | 0.88      | 0.92/0.80                 | 0.86
SiS [21]        | 50+15k        | ×           | ✓                  | 0.89/0.77        | 0.83      | 0.92/0.81                 | 0.87
Ours            | 0             | ×           | ×                  | 0.87/0.73        | 0.80      | 0.91/0.81                 | 0.86

Table 2. Semantic segmentation performance on the LSUN-Horses and DeepRoom-livingroom datasets, all with synthetic images and DeepLabV3 as pseudo ground truth. ×: method not easily extendable to segment the target class.

Methods  | LSUN-Horse IOU (horse-fg/bg) | LSUN-Horse mIOU | DeepRoom IOU (sofa-fg/bg) | DeepRoom mIOU | DeepRoom IOU (table-fg/bg) | DeepRoom mIOU
L4F [1]  | 0.51/0.73                    | 0.62            | ×                         | ×             | ×                          | ×
SiS [21] | 0.44/0.78                    | 0.61            | ×                         | ×             | ×                          | ×
Ours     | 0.64/0.89                    | 0.77            | 0.88/0.97                 | 0.93          | 0.14/0.96                  | 0.55

As shown in Tab. 1 and Tab. 2, our method outperforms the supervised baseline DatasetGAN on synthetic human faces in terms of mIOU. Compared to state-of-the-art unsupervised foreground segmentation algorithms, even though we use no training data to extract cross-sample information, we achieve similar or better performance.

4.3. Semantic segmentation of real photos

We compare our results against the golden ground truth given by human annotations. We perform real-photo semantic segmentation on the CelebAMask-HQ [17] dataset. To use our method and other synthetic segmentation algorithms on photos, we first perform GAN inversion with the ReStyle encoder [2] and then apply our algorithm to compute the semantic segmentation mask. As in our previous experiments, mIOU is used as the main evaluation metric (see the CelebAMask-HQ columns of Tab. 1).

4.4. Synthetic image and segmentation masks as training data

As mentioned above, an important application of synthetic semantic segmentation is to generate training data for semantic segmentation algorithms. We use an evaluation framework similar to [21, 28].
For this experiment, we generate a synthetic segmentation dataset for faces using the FFHQ generator [15]. Using the images and pixel-wise labels, we train a U-Net [23] from scratch for 40K iterations and evaluate on photos from the test partition of the CelebAMask-HQ dataset. Models are trained using the public codebase from [9]. In addition to the standard mIOU computed on entire images, we also report mIOU computed on a Trimap of width 3 pixels, following [16]. This metric focuses on performance along the boundary pixels: the more precise the boundary, the higher the Trimap mIOU. We report our results in Tab. 3.

Table 3. Using synthetic data as training data for image segmentation. Trained on images generated from the FFHQ model, tested on CelebAMask-HQ (real data). The supervised segmentation method is DeepLabV3. All synthetic-data performances are from models trained from scratch using synthetic data only. Trimap width is 3 pixels.

Methods            | # manual gt | IOU fg | IOU bg | mIOU (fg/bg) | Trimap IOU fg | Trimap IOU bg | Trimap mIOU (fg/bg)
U-net [23]         | 1000        | 0.95   | 0.87   | 0.91         | 0.53          | 0.45          | 0.49
w/ DatasetGAN [28] | 16          | 0.90   | 0.79   | 0.84         | 0.43          | 0.39          | 0.41
w/ L4F [1]         | 0           | 0.92   | 0.82   | 0.87         | 0.43          | 0.38          | 0.41
w/ SiS [21]        | 0           | 0.92   | 0.80   | 0.86         | 0.45          | 0.33          | 0.39
w/ Ours            | 0           | 0.92   | 0.82   | 0.87         | 0.42          | 0.43          | 0.42

4.5. Qualitative results

In this section, we provide qualitative results of our segmentation. Fig. 6 shows facial foreground segmentation in fine detail. Fig. 8 shows alpha compositions between the original images and distinct style-mixed images, with our masks as the alpha channel. We directly use the hard binary mask without any feathering or Gaussian blur. These results demonstrate the quality of our masks, since an inaccurate mask leads to obvious artifacts in the image composition that can be readily detected by humans. We provide these visualizations on FFHQ, AFHQ-wild, and DeepRooms in Fig. 8.

Figure 6. Segmentation details of people from CelebAMask-HQ photos and FFHQ synthetic images. Details are zoomed in to compare the segmentation precision.

Table 4. Ablation study on 500 randomly selected FFHQ images, measured in mIOU.

c  | RGB k=2 | RGB k=4 | RGB k=10 | LAB k=2 | LAB k=4 | LAB k=10
4  | 0.66    | 0.73    | 0.67     | 0.70    | 0.70    | 0.66
6  | 0.74    | 0.75    | 0.68     | 0.74    | 0.79    | 0.69
8  | 0.76    | 0.76    | 0.67     | 0.76    | 0.78    | 0.69
12 | 0.44    | 0.58    | 0.68     | 0.45    | 0.61    | 0.69
14 | 0.45    | 0.59    | 0.66     | 0.51    | 0.60    | 0.62

4.6. Ablation Study

To test the effect of the parameters of our algorithm, we perform a series of ablation studies. These cover the layer cutoff $c$, the color space of the style summary tensor, and the number of clusters $k$.

Effect of cutoff c. As described in Sec. 3.1, the higher the $c$, the closer the style-mixed image is to the original $X_0$. Thus, if $c$ is too high, the style summary tensor may lack the diversity and the style disentanglement between objects needed to yield accurate results. If $c$ is too low, the tensor may contain images with very different structures, losing their spatial and semantic correspondence. In both cases, the clustering performance is negatively affected. In Tab. 4, we compare segmentation performance with $c$ ranging from 4 to 14. The mIOU is generally worst when $c = 4$ or $c \geq 12$, while holding $k$ constant.

Color space. The choice of color space also affects the performance of the clustering algorithm. We observe that when constructing the style summary tensor in RGB space, the K-means clustering can be overly focused on the highlights of the objects, especially when reflective surfaces are present. This is because highlights mostly vary in conjunction with the scene illumination, not with the semantic segments (objects). Changing the color space to LAB addresses this problem, as we show in Tab. 4.
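If one wants to reproduce the LAB variant, the change is confined to how the style summary tensor is built. The sketch below, using skimage's rgb2lab as one readily available conversion, is an illustrative implementation choice rather than the paper's code.

```python
import numpy as np
from skimage.color import rgb2lab  # one readily available RGB -> LAB conversion


def style_summary_tensor_lab(style_mixed_images):
    """Variant of the style summary tensor built in LAB space.

    Each style-mixed image (H x W x 3, RGB floats in [0, 1]) is converted to LAB
    before stacking, so that K-means is less dominated by illumination highlights.
    """
    return np.concatenate([rgb2lab(img) for img in style_mixed_images], axis=-1)
```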
Number of clusters. The number of clusters $k$ is another parameter of importance. Generally, when the input $X_0$ is an image cropped from a tight bounding box, $k = 2$ yields very good semantic segmentations. However, when estimating the foreground of a main object class (e.g., a person or a cat) on an entire image, foreground pixels are likely to be clustered together with some background pixels. We therefore want a slightly larger value of $k$ that allows for shading and textural changes within the same semantic segment. For this reason, we selected $k = 4$. We test choices of $k$ from 2 to 10 on faces. Tab. 4 shows that both $k = 2$ and $k = 10$ give low performance, and a better mIOU is indeed achieved with $k = 4$.

5. Failure Cases

There are two potential modes of failure in our method:
Fei_Generative_Diffusion_Prior_for_Unified_Image_Restoration_and_Enhancement_CVPR_2023
Abstract

Existing image restoration methods mostly leverage the posterior distribution of natural images. However, they often assume known degradation and also require supervised training, which restricts their adaptation to complex real applications. In this work, we propose the Generative Diffusion Prior (GDP) to effectively model posterior distributions in an unsupervised sampling manner. GDP utilizes a pre-trained denoising diffusion probabilistic model (DDPM) for solving linear inverse, non-linear, or blind problems. Specifically, GDP systematically explores a protocol of conditional guidance, which is verified to be more practical than the commonly used guidance approach. Furthermore, GDP is effective at optimizing the parameters of the degradation model during the denoising process, achieving blind image restoration. In addition, we devise hierarchical guidance and patch-based methods, enabling GDP to generate images of arbitrary resolution. Experimentally, we demonstrate GDP's versatility on several image datasets for linear problems, such as super-resolution, deblurring, inpainting, and colorization, as well as non-linear and blind problems, such as low-light enhancement and HDR image recovery. GDP outperforms the current leading unsupervised methods on diverse benchmarks in reconstruction quality and perceptual quality. Moreover, GDP also generalizes well to natural or synthesized images of arbitrary size from various tasks outside the distribution of the ImageNet training set. The project page is available at https://generativediffusionprior.github.io/
1. Introduction

Image quality often degrades during capture, storage, transmission, and rendering. Image restoration and enhancement [42] aim to invert the degradation and improve image quality. Typically, restoration and enhancement tasks can be divided into two main categories: 1) linear inverse problems, such as image super-resolution (SR) [24, 38], deblurring [36, 75], inpainting [88], and colorization [37, 94], where the degradation model is usually linear and known; 2) non-linear or blind problems [1], such as low-light image enhancement [39] and HDR image recovery [10, 79], where the degradation model is non-linear and unknown. For a specific linear degradation model, image restoration can be tackled through end-to-end supervised training of neural networks [16, 94]. Nonetheless, corrupted images in the real world often have multiple complex degradations [57], on which fully supervised approaches struggle to generalize.

There is a surge of interest in seeking more general image priors through generative models [1, 21, 69] and tackling image restoration in an unsupervised setting [8, 19], where multiple restoration tasks with different degradation models can be addressed at inference time without re-training. For instance, Generative Adversarial Networks (GANs) [20] trained on large datasets of clean images learn rich knowledge of real-world scenes and have succeeded in various linear inverse problems through GAN inversion [21, 51, 58]. In parallel, Denoising Diffusion Probabilistic Models (DDPMs) [2, 7, 35, 67, 72, 77] have demonstrated impressive generative capability, level of detail, and diversity on top of GANs [26, 61, 62, 71, 73, 76]. As an early attempt, Kawar et al. [31] explore pre-trained DDPMs with variational inference and achieve satisfactory results on multiple restoration tasks, but their Denoising Diffusion Restoration Model (DDRM) leverages the singular value decomposition (SVD) of a known linear degradation matrix, which keeps it limited to linear inverse problems.

In this study, we take a step further and propose an efficient approach named Generative Diffusion Prior (GDP). It exploits a well-trained DDPM as an effective prior for general-purpose image restoration and enhancement, using the degraded image as guidance. As a unified framework, GDP not only works on various linear inverse problems, but also generalizes to non-linear and blind image restoration and enhancement tasks for the first time. However, solving the blind inverse problem is not trivial, as one needs to concurrently estimate the degradation model and recover the clean image with high fidelity. Thanks to the generative prior in a pre-trained DDPM, denoising within the DDPM manifold naturally regularizes the realness and fidelity of the recovered image. Therefore, we adopt a blind degradation estimation strategy, in which the degradation model parameters of GDP are randomly initialized and optimized during the denoising process. Moreover, to further improve photorealism and image quality, we systematically investigate an effective way to guide diffusion models. Specifically, in the sampling process, the pre-trained DDPM first predicts a clean image $\tilde{x}_0$ from the noisy image $x_t$ by estimating the noise in $x_t$.
We can then add guidance on this intermediate variable $\tilde{x}_0$ to control the generation process of the DDPM. In addition, with the help of the proposed hierarchical guidance and patch-based generation strategy, GDP is able to recover images of arbitrary resolution, where low-resolution images and degradation models are first predicted to guide the generation of high-resolution images.

We demonstrate the empirical effectiveness of GDP by comparing it with various competitive unsupervised methods on linear and multi-linear inverse problems on the ImageNet [14], LSUN [89], and CelebA [30] datasets in terms of consistency and FID. On the low-light [39] and NTIRE [59] datasets, we further show GDP results on non-linear and blind problems, including low-light enhancement and HDR recovery, that are superior to other zero-shot baselines both qualitatively and quantitatively, demonstrating that GDP trained on ImageNet also works on images outside its training distribution.

Our contributions are fourfold: (1) To the best of our knowledge, GDP is the first unified problem solver that can effectively use a single unconditional DDPM pre-trained on ImageNet, provided by [15], to produce diverse and high-fidelity outputs for unified image restoration and enhancement in an unsupervised manner. (2) GDP is capable of optimizing randomly initialized parameters of an unknown degradation model, resulting in a powerful framework that can tackle any blind image restoration problem. (3) Further, to achieve arbitrary-size image generation, we propose hierarchical guidance and patch-based methods, greatly promoting GDP on natural image enhancement. (4) Moreover, comprehensive experiments are carried out. Different from the conventional guidance scheme, GDP directly predicts the temporary output given the noisy image at every step, which is then leveraged to guide the generation of the image in the next step.
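To make the guidance mechanism described above concrete, the following is a schematic sketch of one way guidance on the predicted $\tilde{x}_0$ could be folded into a DDPM sampling step. It is an illustration under simplifying assumptions (a generic noise-prediction network eps_model, a differentiable degradation operator degrade, and a single guidance scale s), not the authors' exact algorithm; the standard DDPM relations for the predicted clean image and posterior mean are used.

```python
import torch


def guided_ddpm_step(eps_model, degrade, x_t, y, t, alphas_cumprod, betas, s=1.0):
    """One reverse-diffusion step with guidance applied through the predicted clean image.

    eps_model(x_t, t) -> predicted noise; degrade(x) -> simulated degraded image
    (its parameters can themselves be optimized for blind restoration); y is the
    observed degraded image used as guidance. All names here are illustrative.
    """
    a_bar = alphas_cumprod[t]
    a_bar_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0)
    beta = betas[t]

    x_t = x_t.detach().requires_grad_(True)
    eps = eps_model(x_t, t)

    # Predict the clean image x0_hat from x_t and the estimated noise (standard DDPM relation).
    x0_hat = (x_t - torch.sqrt(1.0 - a_bar) * eps) / torch.sqrt(a_bar)

    # Guidance: encourage the degraded version of x0_hat to match the observation y.
    loss = torch.nn.functional.mse_loss(degrade(x0_hat), y)
    grad = torch.autograd.grad(loss, x_t)[0]

    # Standard DDPM posterior mean, shifted against the guidance gradient.
    mean = (x_t - beta / torch.sqrt(1.0 - a_bar) * eps) / torch.sqrt(1.0 - beta) - s * grad
    if t == 0:
        return mean
    noise = torch.randn_like(x_t)
    var = beta * (1.0 - a_bar_prev) / (1.0 - a_bar)
    return mean + torch.sqrt(var) * noise
```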