
MMD-Regularized Unbalanced Optimal Transport

Piyushi Manupriya cs18m20p100002@iith.ac.in Department of Computer Science and Engineering, IIT Hyderabad, INDIA.

J. SakethaNath saketha@cse.iith.ac.in Department of Computer Science and Engineering, IIT Hyderabad, INDIA.

Pratik Jawanpuria pratik.jawanpuria@microsoft.com Microsoft, INDIA.

Reviewed on OpenReview: https://openreview.net/forum?id=eN9CjU3h1b

Abstract

We study the unbalanced optimal transport (UOT) problem, where the marginal constraints are enforced using Maximum Mean Discrepancy (MMD) regularization. Our work is motivated by the observation that the literature on UOT is focused on regularization based on ϕ-divergence (e.g., KL divergence). Despite the popularity of MMD, its role as a regularizer in the context of UOT seems less understood. We begin by deriving a specific dual of MMD-regularized UOT (MMD-UOT), which helps us prove several useful properties. One interesting outcome of this duality result is that MMD-UOT induces novel metrics, which not only lift the ground metric like the Wasserstein but are also sample-wise efficient to estimate like the MMD. Further, for real-world applications involving non-discrete measures, we present an estimator for the transport plan that is supported only on the given (m) samples. Under certain conditions, we prove that the estimation error with this finitely-supported transport plan is also O(1/√m). As far as we know, such error bounds that are free from the curse of dimensionality are not known for ϕ-divergence-regularized UOT.

Finally, we discuss how the proposed estimator can be computed efficiently using accelerated gradient descent. Our experiments show that MMD-UOT consistently outperforms popular baselines, including KL-regularized UOT and MMD, in diverse machine learning applications.

1 Introduction

Optimal transport (OT) is a popular tool for comparing probability measures while incorporating geometry over their support. OT has witnessed a lot of success in machine learning applications (Peyré & Cuturi, 2019), where distributions play a central role. Kantorovich's formulation for OT aims to find an optimal plan for the transport of mass between the source and the target distributions that incurs the least expected cost of transportation. While classical OT strictly enforces the marginals of the transport plan to be the source and target, one would want to relax this constraint when the measures are noisy (Frogner et al., 2015) or when the source and target are un-normalized (Chizat, 2017; Liero et al., 2018). Unbalanced optimal transport (UOT) (Chizat, 2017), a variant of OT, is employed in such cases; it performs a regularization-based soft-matching of the transport plan's marginals with the source and the target distributions.

Unbalanced optimal transport with Kullback-Leibler (KL) divergence and, in general, with ϕ-divergence (Csiszar, 1967) based regularization is well-explored in the literature (Liero et al., 2016; 2018).

Entropy-regularized UOT with KL divergence (Chizat et al., 2017; 2018) has been employed in applications such as domain adaptation (Fatras et al., 2021), natural language processing (Chen et al., 2020b), and computer vision (De Plaen et al., 2023). Existing works (Piccoli & Rossi, 2014; 2016; Hanin, 1992; Georgiou et al., 2009) have also studied total variation (TV)-regularization-based UOT formulations. While MMD-based methods have been popularly employed in several machine learning (ML) applications (Gretton et al., 2012; Li et al., 2017; 2021; Nguyen et al., 2021), the applicability of MMD-based regularization for UOT is not well-understood. To the best of our knowledge, interesting questions like the following have not been answered in prior works:

  • Will MMD regularization for UOT also lead to novel metrics over measures, analogous to the ones obtained with the KL divergence (Liero et al., 2018) or the TV distance (Piccoli & Rossi, 2014)?

  • What will be the statistical estimation properties of these?

  • How can such MMD regularized UOT metrics be estimated in practice such that they are suitable for large-scale applications?

In order to bridge this gap, we study MMD-based regularization for matching the marginals of the transport plan in the UOT formulation (henceforth termed MMD-UOT).

We first derive a specific dual of the MMD-UOT formulation (Theorem 4.1), which helps further analyze its properties. One interesting consequence of this duality result is that the optimal objective of MMD-UOT is a valid distance between the source and target measures (Corollary 4.2) whenever the transport cost is a valid (ground) metric over the data points. Popularly, this is known as the phenomenon of lifting metrics to measures. This result is significant as it shows that MMD-regularization in UOT can parallel the metricity preservation that happens with KL-regularization (Liero et al., 2018) and TV-regularization (Piccoli & Rossi, 2014). Furthermore, our duality result shows that this induced metric is a novel metric belonging to the family of integral probability metrics (IPMs), with a generating set that is the intersection of the generating sets of MMD and the Kantorovich-Wasserstein metric. Because of this important relation, the proposed distance is always smaller than the MMD distance, and hence estimating MMD-UOT from samples is at least as efficient as estimating MMD (Corollary 4.6). This is interesting as minimax estimation rates for MMD can be completely dimension-free. As far as we know, there are no such results showing that estimation with KL/TV-regularized UOT can be as efficient sample-wise. Thus, the proposed metrics not only lift the ground metrics to measures, like the Wasserstein, but are also sample-wise efficient to estimate, like MMD.

However, like any formulation of optimal transport, the computation of MMD-UOT involves optimization over all possible joint measures. This may be challenging, especially when the measures are continuous. Hence, we present a convex-program-based estimator, which only involves a search over joints supported at the samples. We prove that the proposed estimator is statistically consistent and converges to MMD-UOT between the true measures at a rate O(1/√m), where m is the number of samples. Such efficient estimators are particularly useful in machine learning applications, where typically only samples from the underlying measures are available. Such applications include hypothesis testing, domain adaptation, and model interpolation, to name a few. In contrast, the minimax estimation rate for the Wasserstein distance is itself O(m^{-1/d}), where d is the dimensionality of the samples (Niles-Weed & Rigollet, 2019). That is, even if a search over all possible joints is performed, estimating Wasserstein may be challenging. Since MMD-UOT can approximate Wasserstein arbitrarily closely (as the regularization hyperparameter goes to ∞), our result can also be understood as a way of alleviating the curse of dimensionality in Wasserstein. We summarize the comparison between MMD-UOT and relevant OT variants in Table 1. Finally, our result that MMD-UOT is a metric facilitates its application whenever the metric properties of OT are desired, for example, while computing the barycenter-based interpolation for single-cell RNA sequencing (Tong et al., 2020). Accordingly, we also present a finite-dimensional convex-program-based estimator for the barycenter with MMD-UOT. We prove that this estimator is also consistent with an efficient sample complexity. We discuss how the formulations for estimating MMD-UOT (and its barycenter) can be solved efficiently using accelerated (projected) gradient descent. This solver helps us scale well to large datasets. We empirically show the utility of MMD-UOT in several applications, including two-sample hypothesis testing, single-cell RNA sequencing, domain adaptation, and prompt learning for few-shot classification. In particular, we observe that MMD-UOT outperforms popular baselines such as KL-regularized UOT and MMD in our experiments.

Table 1: Summarizing interesting properties of MMD and several OT/UOT approaches. ϵOT (Cuturi, 2013) and ϵKL-UOT (Chizat, 2017) denote the entropy-regularized scalable variants of OT and KL-UOT (Liero et al., 2018), respectively. MMD and the proposed MMD-UOT are shown with characteristic kernels. By 'finite-parameterization bounds' we mean results similar to Theorem 4.10.

(Table 1 compares MMD, OT, ϵOT, TV-UOT, KL-UOT, ϵKL-UOT, and MMD-UOT on four properties: metricity, lifting of the ground metric, no curse of dimensionality, and finite-parametrization bounds.)

We summarize our main contributions below:
  • Dual of MMD-UOT and its analysis. We prove that MMD-UOT induces novel metrics that not only lift ground metrics like the Wasserstein but also are sample-wise efficient to estimate like the MMD.

  • Finite-dimensional convex-program-based estimators for MMD-UOT and the corresponding barycenter. We prove that the estimators are both statistically and computationally efficient.

  • We illustrate the efficacy of MMD-UOT in several real-world applications. Empirically, we observe that MMD-UOT consistently outperforms popular baseline approaches.

We present proofs for all our theory results in Appendix B. As a side-remark, we note that most of our results not only hold for MMD-UOT but also for a UOT formulation where a general IPM replaces MMD.

Proofs in the appendix are hence written for general IPM-based regularization and then specialized to the case when the IPM is MMD. This generalization to IPMs may itself be of independent interest.

2 Preliminaries

Notations. Let X be a set (domain) that forms a compact Hausdorff space. Let R+(X), R(X) denote the sets of all non-negative and signed (finite) Radon measures defined over X, respectively, while the set of all probability measures is denoted by R₁⁺(X). For a measure on the product space, π ∈ R+(X × X), let π1, π2 denote the first and second marginals, respectively (i.e., they are the push-forwards under the canonical projection maps onto X). Let L(X), C(X) denote the set of all real-valued measurable functions and all real-valued continuous functions, respectively, over X.

Integral Probability Metric (IPM): Given a set G ⊂ L(X), the integral probability metric (IPM) (Muller, 1997; Sriperumbudur et al., 2009; Agrawal & Horel, 2020) associated with G is defined by:

$$\gamma_{\mathcal{G}}(s_{0},t_{0})\equiv\max_{f\in\mathcal{G}}\left|\int_{\mathcal{X}}f\ \mathrm{d}s_{0}-\int_{\mathcal{X}}f\ \mathrm{d}t_{0}\right|\quad\forall\ s_{0},t_{0}\in\mathcal{R}^{+}(\mathcal{X}).$$

G is called the generating set of the IPM, γG.

Maximum Mean Discrepancy (MMD). Let k be a characteristic kernel (Sriperumbudur et al., 2011) over the domain X, and let ∥f∥k denote the norm of f in the canonical reproducing kernel Hilbert space (RKHS) Hk corresponding to k. MMDk is the IPM associated with the generating set Gk ≡ {f ∈ Hk | ∥f∥k ≤ 1}.

Using a characteristic kernel k, the MMD metric between s0, t0 ∈ R+(X) is defined as:

$$\begin{array}{ll}\mathrm{MMD}_{k}\left(s_{0},t_{0}\right)&\equiv\max_{f\in\mathcal{G}_{k}}\left|\int_{\mathcal{X}}f\;\mathrm{d}s_{0}-\int_{\mathcal{X}}f\;\mathrm{d}t_{0}\right|\qquad(1)\\ &=\left\|\mu_{k}\left(s_{0}\right)-\mu_{k}\left(t_{0}\right)\right\|_{k},\qquad(2)\end{array}$$

where µk(s) ≡ ∫ ϕk(x) ds(x) is the kernel mean embedding of s (Muandet et al., 2017), and ϕk is the canonical feature map of k. A kernel k is called a characteristic kernel if the map µk is injective. MMD can be computed analytically using evaluations of the kernel k. MMDk is a metric when the kernel k is characteristic. A continuous positive-definite kernel k on X is called c-universal if the RKHS Hk is dense in C(X) w.r.t. the


sup-norm, i.e., for every function g ∈ C(X ) and all ϵ > 0, there exists an f ∈ Hk such that ∥f − g∥∞ ≤ ϵ.

Universal kernels are also characteristic. The Gaussian (RBF) kernel is an example of a universal kernel over a continuous domain, and the Dirac delta kernel is an example of a universal kernel over a discrete domain.

Optimal Transport (OT). Optimal transport provides a tool to compare distributions while incorporating the underlying geometry of their support points. Given a cost function c : X × X → R and two probability measures s0, t0 ∈ R₁⁺(X), the p-Wasserstein Kantorovich OT formulation is given by:

$$\bar{W}_{p}^{p}(s_{0},t_{0})\equiv\min_{\pi\in\mathcal{R}_{1}^{+}(\mathcal{X}\times\mathcal{X})}\int c^{p}\ \mathrm{d}\pi,\quad\mathrm{s.t.}\ \ \pi_{1}=s_{0},\ \pi_{2}=t_{0},\tag{3}$$

where p ≥ 1. An optimal solution of (3) is called an optimal transport plan. Whenever the cost is a metric d over X × X (a ground metric), W̄p defines a metric over measures, known as the p-Wasserstein metric, over R₁⁺(X) × R₁⁺(X).

Kantorovich metric (Kc). The Kantorovich metric also belongs to the family of integral probability metrics, associated with the generating set Wc ≡ {f : X → R | max_{x≠y∈X} |f(x) − f(y)|/c(x, y) ≤ 1}, where c is a metric over X × X. The Kantorovich-Rubinstein duality result shows that the 1-Wasserstein metric is the same as the Kantorovich metric when restricted to probability measures (see, e.g., (5.11) in Villani (2009)):

$$\bar{W}_{1}(s_{0},t_{0})\equiv\min_{\substack{\pi\in\mathcal{R}_{1}^{+}(\mathcal{X}\times\mathcal{X})\\ \pi_{1}=s_{0},\ \pi_{2}=t_{0}}}\int c\ \mathrm{d}\pi\;=\;\max_{f\in\mathcal{W}_{c}}\left|\int_{\mathcal{X}}f\ \mathrm{d}s_{0}-\int_{\mathcal{X}}f\ \mathrm{d}t_{0}\right|\;\equiv\;\mathcal{K}_{c}(s_{0},t_{0}),$$

where s0, t0 ∈ R₁⁺(X).
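Since Section 2 notes that MMD can be computed analytically from kernel evaluations, the following is a minimal NumPy sketch (ours, not part of the paper) of Equations (1)-(2) for discrete, possibly un-normalized, measures; the helper names rbf_kernel and mmd are illustrative assumptions.

```python
# Illustrative sketch: closed-form MMD_k between two discrete measures
# s = sum_i a_i delta_{x_i} and t = sum_j b_j delta_{y_j}, with an RBF kernel.
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2 * X @ Y.T
    return np.exp(-np.maximum(sq, 0.0) / (2 * sigma**2))

def mmd(X, a, Y, b, sigma=1.0):
    # Expanding Eq. (2) for discrete measures:
    # MMD_k(s, t)^2 = a^T K_XX a - 2 a^T K_XY b + b^T K_YY b
    Kxx = rbf_kernel(X, X, sigma)
    Kyy = rbf_kernel(Y, Y, sigma)
    Kxy = rbf_kernel(X, Y, sigma)
    val = a @ Kxx @ a - 2 * a @ Kxy @ b + b @ Kyy @ b
    return np.sqrt(max(val, 0.0))  # guard against tiny negative round-off

# Example: uniform-weight empirical measures from two point clouds.
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(50, 2)), rng.normal(loc=1.0, size=(60, 2))
print(mmd(X, np.full(50, 1 / 50), Y, np.full(60, 1 / 60)))
```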

3 Related Work

Given the source and target measures s0 ∈ R+(X) and t0 ∈ R+(X), respectively, the unbalanced optimal transport (UOT) approach (Liero et al., 2018; Chizat et al., 2018) aims to learn the transport plan by replacing the mass-conservation marginal constraints (enforced strictly in the 'balanced' OT setting) with a soft regularization/penalization on the marginals. KL-divergence and, more generally, ϕ-divergence (Csiszar, 1967; Sriperumbudur et al., 2009) based regularizations have been the most popularly studied in the UOT setting. The ϕ-divergence-regularized UOT formulation may be written as (Frogner et al., 2015; Chizat, 2017):

minπR+(X×X) c dπ+λDϕ(π1,s0)+λDϕ(π2,t0),(4)\min_{\pi\in{\cal R}^{+}(X\times X)}\ \int c\ {\rm d}\pi+\lambda D_{\phi}(\pi_{1},s_{0})+\lambda D_{\phi}(\pi_{2},t_{0}),\tag{4}

where c is the ground cost metric and Dϕ(·, ·) denotes the ϕ-divergence (Csiszar, 1967; Sriperumbudur et al., 2009) between two measures. Since the measures s0, t0 may be un-normalized in UOT settings, following (Chizat, 2017; Liero et al., 2018), the transport plan is also allowed to be un-normalized. UOT with KL-divergence-based regularization induces the so-called Gaussian Hellinger-Kantorovich metric (Liero et al., 2018) between the measures whenever 0 < λ ≤ 1 and the ground cost c is the squared-Euclidean distance. Similar to the balanced OT setup (Cuturi, 2013), an additional entropy regularization in the KL-UOT formulation facilitates an efficient Sinkhorn-iteration-based (Knight, 2008) solver for KL-UOT (Chizat et al., 2017) and has been popularly employed in several machine learning applications (Fatras et al., 2021; Chen et al., 2020b; Arase et al., 2023; De Plaen et al., 2023). The total variation (TV) distance is another popular metric between measures and is the only common member of the ϕ-divergence family and the IPM family. The UOT formulation with TV regularization (denoted by |·|TV) has been studied in (Piccoli & Rossi, 2014):

$$\operatorname*{min}_{\pi\in\mathcal{R}^{+}(\mathcal{X}\times\mathcal{X})}\;\int c\;\mathrm{d}\pi+\lambda|\pi_{1}-s_{0}|_{\mathrm{TV}}+\lambda|\pi_{2}-t_{0}|_{\mathrm{TV}}.\tag{5}$$

UOT with TV-divergence-based regularization induces the so-called Generalized Wasserstein metric (Piccoli & Rossi, 2014) between the measures whenever λ > 0 and the ground cost c is a valid metric.


As far as we know, none of the existing works study the sample complexity of estimating these metrics from samples.

More importantly, algorithms for solving (5) with empirical measures that computationally scale well to ML applications seem to be absent in the literature. Besides the family of ϕ-divergences, the family of integral probability metrics is popularly used for comparing measures. An important member of the IPM family is the MMD metric, which also incorporates the geometry over supports through the underlying kernel. Due to its attractive statistical properties (Gretton et al., 2006), MMD has been successfully applied in a diverse set of applications including hypothesis testing (Gretton et al., 2012), generative modelling (Li et al., 2017), self-supervised learning (Li et al., 2021), etc.

Recently, (Nath & Jawanpuria, 2020) explored learning the transport plan's kernel mean embeddings in the balanced OT setup. They proposed learning the kernel mean embedding of a joint distribution with the least expected cost and whose marginal embeddings are close to the given sample-based estimates of the marginal embeddings. As kernel mean embedding induces the MMD distance, MMD-based regularization features in the balanced OT formulation of (Nath & Jawanpuria, 2020) as a means to control overfitting. To ensure that valid conditional embeddings are obtained from the learned joint embeddings, (Nath & Jawanpuria, 2020) required additional feasibility constraints that restrict their solvers from scaling well to machine learning applications. We also note that (Nath & Jawanpuria, 2020) neither analyze the dual of their formulation nor study its metric-related properties, and their sample complexity result of O(m^{-1/2}) does not apply to our MMD-UOT estimator as their formulation is different from the proposed MMD-UOT formulation (6).

In contrast, we bypass the issues related to the validity of conditional embeddings as our formulation involves directly learning the transport plan and avoids kernel mean embeddings of the transport plan. We perform a detailed study of MMD regularization for UOT, which includes analyzing its dual and proving metric properties that are crucial for optimal transport formulations. To the best of our knowledge, the metricity of MMD-regularized UOT formulations has not been studied previously. The proposed algorithm scales well to large-scale machine learning applications. While we also obtain an O(m^{-1/2}) estimation error rate, we require a different proof strategy than (Nath & Jawanpuria, 2020). Finally, as discussed in Appendix B, most of our theoretical results apply to a general IPM-regularized UOT formulation and are not limited to the MMD-regularized UOT formulation. This generalization does not hold for (Nath & Jawanpuria, 2020).

Wasserstein auto-encoders (WAE) also employ MMD for regularization. However, there are some important differences. The regularization in WAEs is only performed for one of the marginals, and the other marginal is matched exactly. This not only breaks the symmetry (and hence the metric properties) but also brings back the curse of dimensionality in estimation (for the same reasons as with unregularized OT). Further, their work does not attempt to study any theoretical properties with MMD regularization and merely employs it as a practical tool for matching marginals. Our goal is to theoretically study the metric and estimation properties with MMD regularization. We present more details in Appendix B.18.

We end this section by noting key differences between MMD and OT-based approaches (including MMD-UOT). A distinguishing feature of OT-based approaches is the phenomenon of lifting the ground-metric geometry to that over distributions. One such result is visualized in Figure 2(b), where the MMD-based interpolant of the two unimodal distributions comes out to be bimodal. This is because MMD's interpolation is the (literal) average of the source and the target densities, irrespective of the kernel. This has been well-established in the literature (Bottou et al., 2017). On the other hand, OT-based approaches obtain a unimodal barycenter. This is a 'geometric' interpolation that captures the characteristic aspects of the source and the target distributions. Another feature of OT-based methods is that we obtain a transport plan between the source and the target points, which can be used for various alignment-based applications, e.g., cross-lingual word mapping (Alvarez-Melis & Jaakkola, 2018; Jawanpuria et al., 2020), domain adaptation (Courty et al., 2017; Courty et al., 2017; Gurumoorthy et al., 2021), etc. On the other hand, it is unclear how MMD can be used to align the source and target data points.

4 MMD Regularization For UOT

We propose to study the following UOT formulation, where the marginal constraints are enforced using MMD regularization.

$$\begin{array}{ll}\mathcal{U}_{k,c,\lambda_{1},\lambda_{2}}\left(s_{0},t_{0}\right)&\equiv\min_{\pi\in\mathcal{R}^{+}\left(\mathcal{X}\times\mathcal{X}\right)}\int c\;\mathrm{d}\pi+\lambda_{1}\mathrm{MMD}_{k}(\pi_{1},s_{0})+\lambda_{2}\mathrm{MMD}_{k}(\pi_{2},t_{0})\\ &=\min_{\pi\in\mathcal{R}^{+}\left(\mathcal{X}\times\mathcal{X}\right)}\int c\;\mathrm{d}\pi+\lambda_{1}\|\mu_{k}\left(\pi_{1}\right)-\mu_{k}\left(s_{0}\right)\|_{k}+\lambda_{2}\|\mu_{k}\left(\pi_{2}\right)-\mu_{k}\left(t_{0}\right)\|_{k},\end{array}\tag{6}$$

where µk(s) is the kernel mean embedding of s (defined in Section 2) induced by the characteristic kernel k used in the generating set Gk ≡ {f ∈ Hk | ∥f∥k ≤ 1}, and λ1, λ2 > 0 are the regularization hyper-parameters.

We begin by presenting a key duality result.

Theorem 4.1. (Duality) Whenever c, k ∈ C(X × X ) and X is compact, we have that:

$$\mathcal{U}_{k,c,\lambda_{1},\lambda_{2}}\left(s_{0},t_{0}\right)=\max_{\substack{f\in\mathcal{G}_{k}(\lambda_{1}),\,g\in\mathcal{G}_{k}(\lambda_{2})\\ \mathrm{s.t.}\ f(x)+g(y)\leq c(x,y)\ \forall\,(x,y)\in\mathcal{X}\times\mathcal{X}}}\int_{\mathcal{X}}f\ \mathrm{d}s_{0}+\int_{\mathcal{X}}g\ \mathrm{d}t_{0}.\tag{7}$$

Here, Gk(λ) ≡ {g ∈ Hk | ∥g∥k ≤ λ}.

The duality result helps us to study several properties of the MMD-UOT (6), discussed in the corollaries below. The proof of Theorem 4.1 is based on an application of Sion's minimax exchange theorem (Sion, 1958) and is detailed in Appendix B.1. Applications in machine learning often involve comparing distributions for which the Wasserstein metric is a popular choice. While prior works have shown metric-preservation happens under KL-regularization (Liero et al., 2018) and TV-regularization (Piccoli & Rossi, 2016), it is an open question if MMD-regularization in UOT can also lead to valid metrics. The following result answers this affirmatively.

Corollary 4.2. (Metricity) In addition to the assumptions in Theorem 4.1, whenever c is a metric, Uk,c,λ,λ belongs to the family of integral probability metrics (IPMs). Also, the generating set of this IPM is the intersection of the generating set of the Kantorovich metric and the generating set of MMD. Finally, Uk,c,λ,λ is a valid norm-induced metric over measures whenever k is characteristic. Thus, U lifts the ground metric c to that over measures.

The proof of Corollary 4.2 is detailed in Appendix B.2. This result also reveals interesting relationships between Uk,c,λ,λ, the Kantorovich metric Kc, and the MMD metric used for regularization. These are summarized in the following two results.

Corollary 4.3. (Interpolant) In addition to the assumptions in Corollary 4.2, if the kernel is c-universal (continuous and universal), then ∀ s0, t0 ∈ R+(X), limλ→∞ Uk,c,λ,λ(s0, t0) = Kc(s0, t0). Further, if the cost metric c dominates the metric induced by the characteristic kernel k, i.e., c(x, y) ≥ √(k(x, x) + k(y, y) − 2k(x, y)) ∀ x, y ∈ X, then Uk,c,λ,λ(s0, t0) = λMMDk(s0, t0) whenever 0 < λ ≤ 1.

Finally, when λ ∈ (0, 1), MMD-UOT interpolates between the scaled MMD and the Kantorovich metric. The nature of this interpolation is already described in terms of generating sets in Corollary 4.2.

We illustrate this interpolation result in Figure 1. Our proof of Corollary 4.3, presented in Appendix B.3, also shows that the Euclidean distance satisfies such a dominating cost assumption when the kernel employed is the Gaussian kernel and the inputs lie on a unit-norm ball. The next result presents another relationship between the metrics in the discussion.

Corollary 4.4. Uk,c,λ,λ(s, t) ≤ min (λMMDk(s, t), Kc(s, t)).

The proof of Corollary 4.4 is straightforward and is presented in Appendix B.5. This result enables us to show properties like weak metrization and sample efficiency with MMD-UOT. For a sequence sn ∈ R₁⁺(X), n ≥ 1, we say that sn weakly converges to s ∈ R₁⁺(X) (denoted as sn ⇀ s) if and only if EX∼sn[f(X)] → EX∼s[f(X)] for all bounded continuous functions over X. It is natural to ask when convergence in a metric over measures is equivalent to weak convergence of measures. The metric is then said to metrize the weak convergence of measures, or equivalently, to weakly metrize measures. The weak metrization properties of the Wasserstein metric and MMD are well understood (e.g., refer to Theorem 6.9 in (Villani, 2009) and Theorem 7 in (Simon-Gabriel et al., 2020)). The weak metrization property of Uk,c,λ,λ follows from Corollary 4.4.

Figure 1: For illustration, the generating set of Kantorovich-Wasserstein is depicted as a triangle, and the scaled generating set of MMD is depicted as a disc. The intersection represents the generating set of the IPM metric induced by MMD-UOT. (a) shows the special case when our MMD-UOT metric recovers the sample-efficient MMD metric, (b) shows the special case when our MMD-UOT metric reduces to the Kantorovich-Wasserstein metric that lifts the ground metric to measures, and (c) shows the resulting family of new UOT metrics, which are both sample-efficient and lift ground metrics to measures.

Corollary 4.5. (Weak Metrization) Uk,c,λ,λ metrizes the weak convergence of normalized measures.

The proof is presented in Appendix B.6. We now show that the metric induced by MMD-UOT inherits the attractive statistical efficiency of the MMD metric. In typical machine learning applications, only finite samples are given from the measures. Hence, it is important to study statistically efficient metrics that alleviate the curse of dimensionality problem prevalent in OT (Niles-Weed & Rigollet, 2019). The sample complexity result for the metric induced by MMD-UOT is as follows.

Corollary 4.6. (Sample Complexity) Let us denote Uk,c,λ,λ, defined in (6), by Ū. Let ŝm, t̂m denote the empirical estimates of s0, t0 ∈ R+(X), respectively, with m samples each. Then, Ū(ŝm, t̂m) → Ū(s0, t0) at the same rate (up to constants) as MMDk(ŝm, s0) → 0.

Since the sample complexity of MMD with a normalized characteristic kernel is O(m^{-1/2}) (Smola et al., 2007), the same complexity bound holds for the corresponding MMD-UOT. The proof of Corollary 4.6 is presented in Appendix B.7. This is interesting because, though MMD-UOT can approximate the Wasserstein metric arbitrarily well (as λ → ∞), its estimation can be far more efficient than O(m^{-1/d}), which is the minimax estimation rate for the Wasserstein (Niles-Weed & Rigollet, 2019). Here, d is the dimensionality of the samples. Further, in Lemma B4, we show that even when MMD_k^q (q ∈ N, q ≥ 2) is used for regularization, the sample complexity again comes out to be O(m^{-1/2}). We conclude this section with a couple of remarks.
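One way to see this rate, consistent with Corollaries 4.2 and 4.4 (a sketch of the idea, not a substitute for the proof in Appendix B.7): with c a metric and λ1 = λ2 = λ, Ū obeys the triangle inequality, so

$$\left|\bar{\mathcal{U}}(\hat{s}_{m},\hat{t}_{m})-\bar{\mathcal{U}}(s_{0},t_{0})\right|\;\leq\;\bar{\mathcal{U}}(\hat{s}_{m},s_{0})+\bar{\mathcal{U}}(\hat{t}_{m},t_{0})\;\leq\;\lambda\left(\mathrm{MMD}_{k}(\hat{s}_{m},s_{0})+\mathrm{MMD}_{k}(\hat{t}_{m},t_{0})\right),$$

and each MMD term on the right decays as O(1/√m) for a normalized characteristic kernel.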

Remark 4.7. As a side result, we prove the following theorem (Appendix B.8) that relates our MMD-UOT to the MMD-regularized Kantorovich metric. We believe this connection is interesting as it generalizes the popular Kantorovich-Rubinstein duality result relating (unregularized) OT to the (unregularized) Kantorovich metric.

Theorem 4.8. In addition to the assumptions in Theorem 4.1, if c is a valid metric, then

Uk,c,λ1,λ2(s0,t0)=mins,tR(X) Kc(s,t)+λ1MMDk(s,s0)+λ2MMDk(t,t0).{\mathcal{U}}_{k,c,\lambda_{1},\lambda_{2}}\left(s_{0},t_{0}\right)=\operatorname*{min}_{s,t\in{\mathcal{R}}({\mathcal{X}})}\ {\mathcal{K}}_{c}(s,t)+\lambda_{1}\mathrm{MMD}_{k}(s,s_{0})+\lambda_{2}\mathrm{MMD}_{k}(t,t_{0}).

Remark 4.9. It is noteworthy that most of our theoretical results presented in this section not only hold for the MMD-UOT formulation (6) but also for a general IPM-regularized UOT formulation, which we discuss in Appendix B. This generalization may be of independent interest for future work.

Finally, minor results on robustness and connections with spectral normalized GAN (Miyato et al., 2018) are discussed in Appendix B.16 and Appendix B.17, respectively.


4.1 Finite-Sample-Based Estimation

As noted in Corollary 4.6, MMD-UOT can be efficiently estimated from samples of the source and target. However, one needs to solve an optimization problem over all possible joint (un-normalized) measures. This can be computationally expensive¹ (for example, optimization over the set of all joint density functions).

Hence, in this section, we propose a simple estimator where the optimization is only over the joint measures supported at sample-based points. We show that our estimator is statistically consistent and that the estimation is free from the curse of dimensionality.

Let m samples be given from each of the source and target measures s0, t0 ∈ R+(X), respectively². We denote by Di = {xi1, · · · , xim}, i = 1, 2, the set of samples given from s0, t0, respectively. Let ŝm, t̂m denote the empirical measures using samples D1, D2. Let us denote the Gram matrix of Di by Gii. Let C12 be the m × m cost matrix with entries given by evaluations of the cost function over D1 × D2. Following the common practice in the OT literature (Chizat et al., 2017; Cuturi, 2013; Damodaran et al., 2018; Fatras et al., 2021; Le et al., 2021; Balaji et al., 2020; Nath & Jawanpuria, 2020; Peyré & Cuturi, 2019), we restrict the transport plan to be supported on the finite samples from each of the measures in order to avoid the computational issues in optimizing over all possible joint densities. More specifically, let α be the m × m (parameter/variable) matrix with entries αij ≡ π(x1i, x2j), where i, j ∈ {1, · · · , m}. With these notations and the mentioned restricted feasibility set, Problem (6) simplifies to the following, denoted by Ûm(ŝm, t̂m):

$$\min_{\alpha\geq0\in\mathbb{R}^{m\times m}}\operatorname{Tr}\left(\alpha C_{12}^{\top}\right)+\lambda_{1}\left\|\alpha\mathbf{1}-\frac{\sigma_{1}}{m}\mathbf{1}\right\|_{G_{11}}+\lambda_{2}\left\|\alpha^{\top}\mathbf{1}-\frac{\sigma_{2}}{m}\mathbf{1}\right\|_{G_{22}},\tag{9}$$

where Tr(M) denotes the trace of matrix M, ∥x∥M ≡ √(x⊤Mx), and σ1, σ2 are the masses of the source and target measures s0, t0, respectively. Since this is a convex program over a finite-dimensional variable, it can be solved in a computationally efficient manner (see Section 4.2).
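For concreteness, the following minimal sketch (ours, not the authors' released code) assembles the data of Problem (9) from samples with a squared-Euclidean cost and an RBF kernel, and evaluates the objective at a candidate α; the helper names sq_euclidean_cost, rbf_gram, and mmd_uot_objective are illustrative assumptions.

```python
# Illustrative sketch: build C12, G11, G22 from sample sets and evaluate (9).
import numpy as np

def sq_euclidean_cost(D1, D2):
    # c(x, y) = ||x - y||^2, evaluated on D1 x D2
    return np.sum(D1**2, axis=1)[:, None] + np.sum(D2**2, axis=1)[None, :] - 2 * D1 @ D2.T

def rbf_gram(D, sigma=1.0):
    # Gram matrix of the RBF kernel on the sample set D
    return np.exp(-np.maximum(sq_euclidean_cost(D, D), 0.0) / (2 * sigma**2))

def mmd_uot_objective(alpha, C12, G11, G22, lam1, lam2, sigma1=1.0, sigma2=1.0):
    # Tr(alpha C12^T) + lam1 ||alpha 1 - (sigma1/m) 1||_{G11} + lam2 ||alpha^T 1 - (sigma2/m) 1||_{G22}
    m1, m2 = C12.shape
    r1 = alpha @ np.ones(m2) - (sigma1 / m1) * np.ones(m1)
    r2 = alpha.T @ np.ones(m1) - (sigma2 / m2) * np.ones(m2)
    mmd1 = np.sqrt(max(r1 @ G11 @ r1, 0.0))
    mmd2 = np.sqrt(max(r2 @ G22 @ r2, 0.0))
    return np.sum(alpha * C12) + lam1 * mmd1 + lam2 * mmd2
```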

However, as the transport plan is now supported on the given samples alone, Corollary 4.6 does not apply. The following result shows that our estimator (9) is consistent, and the estimation error decays at a favourable rate.

Theorem 4.10. (Consistency of the proposed estimator) Let us denote Uk,c,λ1,λ2, defined in (6), by Ū. Assume the domain X is compact, the ground cost is continuous, c ∈ C(X × X), and the kernel k is c-universal and normalized. Let the source measure (s0), the target measure (t0), as well as the corresponding MMD-UOT transport plan be absolutely continuous. Also assume s0(x), t0(x) > 0 ∀ x ∈ X. Then, w.h.p. and for any (arbitrarily small) ϵ > 0,

$$\hat{\mathcal{U}}_{m}(\hat{s}_{m},\hat{t}_{m})-\bar{\mathcal{U}}(s_{0},t_{0})\;\leq\;O\!\left(\frac{\lambda_{1}+\lambda_{2}}{\sqrt{m}}+\frac{g(\epsilon)}{m}+\epsilon\,\sigma\right).$$

Here, g(ϵ) ≡ min_{v∈Hk⊗Hk} ∥v∥k s.t. ∥v − c∥∞ ≤ ϵ, and σ is the mass of the optimal MMD-UOT transport plan. Further, if c belongs to Hk ⊗ Hk, then w.h.p. Ûm(ŝm, t̂m) − Ū(s0, t0) ≤ O((λ1 + λ2)/√m).

We discuss the proof of the above theorem in Appendix B.9. Because k is universal, g(ϵ) < ∞ ∀ ϵ > 0. The consistency of our estimator as m → ∞ can be realized if, for example, one employs the scheme λ1 = λ2 = O(m^{1/4}) and lets ϵ → 0 at a slow enough rate such that g(ϵ)/m → 0. In Appendix B.9.1, we show that even if ϵ decays as fast as O(1/m^{2/3}), then g(ϵ) blows up at most as O(m^{1/3}). Hence, overall, the estimation error still decays as O(1/m^{1/4}). To the best of our knowledge, such consistency results have not been studied in the context of KL-regularized UOT.
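Spelling out the arithmetic behind this schedule (a sanity check on the statement above, nothing new): with λ1 = λ2 = O(m^{1/4}), ϵ = O(m^{-2/3}) and g(ϵ) = O(m^{1/3}),

$$\frac{\lambda_{1}+\lambda_{2}}{\sqrt{m}}=O\left(m^{-1/4}\right),\qquad\frac{g(\epsilon)}{m}=O\left(m^{1/3-1}\right)=O\left(m^{-2/3}\right),\qquad\epsilon\,\sigma=O\left(m^{-2/3}\right),$$

so the first term dominates, matching the overall O(1/m^{1/4}) rate.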

4.2 Computational Aspects

Problem (9) is an instance of a convex program and can be solved using the mirror descent algorithm detailed in Appendix B.10. In the following, we propose to solve an equivalent optimization problem which helps us leverage faster solvers for MMD-UOT:

$$\min_{\alpha\geq0\in\mathbb{R}^{m\times m}}\mathrm{Tr}\left(\alpha C_{12}^{\top}\right)+\lambda_{1}\left\|\alpha\mathbf{1}-\frac{\sigma_{1}}{m}\mathbf{1}\right\|_{G_{11}}^{2}+\lambda_{2}\left\|\alpha^{\top}\mathbf{1}-\frac{\sigma_{2}}{m}\mathbf{1}\right\|_{G_{22}}^{2}.\tag{10}$$

¹Note that this challenge is inherent to OT (and all its variants). It is not a consequence of our choice of MMD regularization.
²The number of samples from the source and the target need not be the same, in general.

Algorithm 1 Accelerated Projected Gradient Descent for solving Problem (10).

Require: Lipschitz constant L, initial α0 ≥ 0 ∈ R^{m×m}.
f(α) = Tr(αC12⊤) + λ1∥α1 − (σ1/m)1∥²_{G11} + λ2∥α⊤1 − (σ2/m)1∥²_{G22}.
γ1 = 1, y1 = α0, i = 1.
while not converged do
    αi = Project≥0(yi − (1/L)∇f(yi)).
    γi+1 = (1 + √(1 + 4γi²))/2.
    yi+1 = αi + ((γi − 1)/γi+1)(αi − αi−1).
    i = i + 1.
end while
return αi.

The equivalence between (9) and (10) follows from standard arguments and is detailed in Appendix B.11.

Our next result shows that the objective in (10) is L-smooth (proof provided in Appendix B.12).

Lemma 4.11. The objective in Problem (10) is L-smooth with

$$L=2\sqrt{(\lambda_{1}m)^{2}\|G_{11}\|_{F}^{2}+(\lambda_{2}m)^{2}\|G_{22}\|_{F}^{2}+2\lambda_{1}\lambda_{2}\left(\mathbf{1}_{m}^{\top}G_{11}\mathbf{1}_{m}+\mathbf{1}_{m}^{\top}G_{22}\mathbf{1}_{m}\right)}.$$

The above result enables us to use the accelerated projected gradient descent (APGD) algorithm (Nesterov, 2003; Beck & Teboulle, 2009) with a fixed step-size τ = 1/L for solving (10). The detailed steps are presented in Algorithm 1. The overall computation cost for solving MMD-UOT (10) is O(m²/√ϵ), where ϵ is the optimality gap. In Section 5, we empirically observe that the APGD-based solver for MMD-UOT is indeed computationally efficient.
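Below is a minimal NumPy sketch of Algorithm 1 for Problem (10) (ours, not the implementation released at the repository cited in Section 5). For simplicity it uses a conservative spectral-norm bound on the smoothness constant instead of the exact expression in Lemma 4.11; any upper bound on L preserves convergence, at the cost of a smaller step size. The function name mmd_uot_apgd and its arguments are illustrative assumptions.

```python
# Illustrative APGD sketch for Problem (10): min over alpha >= 0 of
# Tr(alpha C12^T) + lam1 ||alpha 1 - a||_{G11}^2 + lam2 ||alpha^T 1 - b||_{G22}^2.
import numpy as np

def mmd_uot_apgd(C12, G11, G22, lam1, lam2, sigma1=1.0, sigma2=1.0,
                 max_iter=1000, tol=1e-6):
    m1, m2 = C12.shape
    one1, one2 = np.ones(m1), np.ones(m2)
    a, b = (sigma1 / m1) * one1, (sigma2 / m2) * one2  # target marginal weights

    def grad(alpha):
        # Gradient of the smooth objective above.
        r1 = alpha @ one2 - a          # first-marginal residual
        r2 = alpha.T @ one1 - b        # second-marginal residual
        return C12 + 2 * lam1 * np.outer(G11 @ r1, one2) + 2 * lam2 * np.outer(one1, G22 @ r2)

    # Simple spectral-norm upper bound on the smoothness constant (assumption:
    # any valid upper bound may replace the exact constant of Lemma 4.11).
    L = 2 * (lam1 * m2 * np.linalg.norm(G11, 2) + lam2 * m1 * np.linalg.norm(G22, 2)) + 1e-12

    alpha = np.zeros((m1, m2))
    alpha_prev, y, gamma = alpha.copy(), alpha.copy(), 1.0
    for _ in range(max_iter):
        alpha = np.maximum(y - grad(y) / L, 0.0)      # projected gradient step
        gamma_next = (1.0 + np.sqrt(1.0 + 4.0 * gamma**2)) / 2.0
        y = alpha + ((gamma - 1.0) / gamma_next) * (alpha - alpha_prev)
        if np.linalg.norm(alpha - alpha_prev) <= tol:
            break
        alpha_prev, gamma = alpha.copy(), gamma_next
    return alpha

# Toy usage with squared-Euclidean cost and RBF Gram matrices.
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(40, 2)), rng.normal(loc=1.0, size=(40, 2))
sq = lambda A, B: np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
C12, G11, G22 = sq(X, Y), np.exp(-sq(X, X) / 2), np.exp(-sq(Y, Y) / 2)
plan = mmd_uot_apgd(C12, G11, G22, lam1=1.0, lam2=1.0)
print(plan.shape, plan.sum())
```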

4.3 Barycenter

A related problem is that of barycenter interpolation of measures (Agueh & Carlier, 2011), which has interesting applications (Solomon et al., 2014; 2015; Gramfort et al., 2015). Given measures s1, . . . , sn with total masses σ1, . . . , σn, respectively, and interpolation weights ρ1, . . . , ρn, the barycenter s ∈ R+(X) is defined as the solution of

$$\bar{B}(s_{1},\cdots,s_{n})\equiv\min_{s\in\mathcal{R}^{+}(\mathcal{X})}\sum_{i=1}^{n}\rho_{i}\,\mathcal{U}_{k,c,\lambda_{1},\lambda_{2}}(s_{i},s).$$

In typical applications, only sample sets Di from si are available instead of si themselves. Let us denote the corresponding empirical measures by ŝ1, . . . , ŝn. One way to estimate the barycenter is to consider B̄(ŝ1, · · · , ŝn). However, this may be computationally challenging to optimize, especially when the measures involved are continuous. So we propose estimating the barycenter with the restriction that the transport plan πi corresponding to Uk,c,λ1,λ2(ŝi, s) is supported on Di × (∪_{i=1}^{n} Di), and we let αi ≥ 0 ∈ R^{mi×m} denote the corresponding probabilities. Following (Cuturi & Doucet, 2014), we also assume that the barycenter s is supported on ∪_{i=1}^{n} Di. Let us denote the barycenter problem with this support restriction on the transport plans and the barycenter by B̂m(ŝ1, · · · , ŝn). Let G be the Gram matrix of ∪_{i=1}^{n} Di and Ci be the mi × m matrix with entries given by evaluations of the cost function.

Lemma 4.12. The barycenter problem B̂m(ŝ1, · · · , ŝn) can be equivalently written as:

$$\min_{\alpha_{1},\cdots,\alpha_{n}\geq0}\ \sum_{i=1}^{n}\rho_{i}\Big(\mathrm{Tr}\left(\alpha_{i}C_{i}^{\top}\right)+\lambda_{1}\Big\|\alpha_{i}\mathbf{1}-\frac{\sigma_{i}}{m_{i}}\mathbf{1}\Big\|_{G_{ii}}^{2}+\lambda_{2}\Big\|\alpha_{i}^{\top}\mathbf{1}-\sum_{j=1}^{n}\rho_{j}\alpha_{j}^{\top}\mathbf{1}\Big\|_{G}^{2}\Big).\tag{11}$$

We present the proof in Appendix B.14.1. Similar to Problem (10), the objective in Problem (11) is a smooth quadratic program in each αi and is jointly convex in the αi's. In Appendix B.14.2, we present the details for solving Problem (11) using APGD, and its statistical consistency is shown in Appendix B.14.3.


Figure 2: (a) Optimal transport plans of ϵKL-UOT and MMD-UOT; (b) Barycenter interpolating between Gaussian measures. For the chosen hyperparameters, the barycenters of ϵKL-UOT and MMD-UOT overlap and can be viewed as smooth approximations of the OT barycenter; (c) Objective-vs-time plot comparing ϵKL-UOT solved using the popular Sinkhorn algorithm (Chizat et al., 2017; Pham et al., 2020) and MMD-UOT (10) solved using APGD. A plot showing ϵKL-UOT's progress in the initial phase is given in Figure 4.

5 Experiments

In Section 4, we examined the theoretical properties of the proposed MMD-UOT formulation. In this section, we show that MMD-UOT is a good practical alternative to the popular entropy-regularized ϵKL-UOT. We emphasize that our purpose is not to benchmark state-of-the-art performance. Our codes are publicly available at https://github.com/Piyushi-0/MMD-reg-OT.

5.1 Synthetic Experiments

We present some synthetic experiments to visualize the quality of our solution. Please refer to Appendix C.1 for more details.

Transport Plan and Barycenter We perform synthetic experiments with the source and target as Gaussian measures. We compare the OT plan of ϵKL-UOT and MMD-UOT in Figure 2(a). We observe that the MMD-UOT plan is sparser compared to the ϵKL-UOT plan. In Figure 2(b), we visualize the barycenter interpolating between the source and target, obtained with MMD, ϵKL-UOT and MMD-UOT.

While the MMD barycenter is the empirical average of the measures and hence has two modes, both the ϵKL-UOT and MMD-UOT formulations take the geometry of the measures into account. The barycenters obtained by these methods have the same number of modes (one) as the source and the target. Moreover, they appear to smoothly approximate the barycenter obtained with OT (solved using a linear program).

Visualizing the Level Sets Applications like generative modeling involve optimizing over the parameter (θ) of the source distribution to match the target distribution. In such cases, it is desirable that the level sets of the distance function over the measures show fewer stationary points that are not global optima (Bottou et al., 2017). Similar to (Bottou et al., 2017), we consider a model family of source distributions F = {Pθ = ½(δθ + δ−θ) : θ ∈ [−1, 1] × [−1, 1]} and a fixed target distribution Q = P(2,2) ∉ F. We compute the distances between Pθ and Q according to various divergences. Figure 3 presents level sets showing the set of distances {d(Pθ, Q) : θ ∈ [−1, 1] × [−1, 1]}, where the distance d(·, ·) is measured using MMD, the Kantorovich metric, ϵKL-UOT, and MMD-UOT (9), respectively. While all methods correctly identify the global minima (green arrow), the level sets of MMD-UOT and ϵKL-UOT show no local minima (encircled in red for MMD) and have fewer non-optimal stationary points (marked with black arrows) compared to the Kantorovich metric in Figure 3(b).

Computation Time In Figure 2(c), we present the objective-versus-time plot. The source and target measures are chosen to be the same, in which case the optimal objective is 0.


Figure 3: Level sets of the distance function between a family of source distributions and a fixed target distribution, with the task of finding the source distribution closest to the target distribution, using (a) MMD, (b) W̄2, (c) ϵKL-UOT, and (d) MMD-UOT. While all methods correctly identify the global minima (green arrows), the level sets of MMD-UOT and ϵKL-UOT show no local minima (encircled in red for MMD) and have fewer non-optimal stationary points (marked with black arrows) compared to (b).

Table 2: Average Test Power (between 0 and 1; higher is better) on MNIST. MMD-UOT obtains the highest average test power at all timesteps.

| N | MMD | ϵKL-UOT | MMD-UOT |
|---|-----|---------|---------|
| 100 | 0.137 | 0.099 | 0.154 |
| 200 | 0.258 | 0.197 | 0.333 |
| 300 | 0.467 | 0.242 | 0.588 |
| 400 | 0.656 | 0.324 | 0.762 |
| 500 | 0.792 | 0.357 | 0.873 |
| 1000 | 0.909 | 0.506 | 0.909 |

MMD-UOT (10) solved using APGD (described in Section 4.2) gives a much faster rate of decrease in the objective compared to the Sinkhorn algorithm used for solving ϵKL-UOT.

5.2 Two-Sample Hypothesis Test

Given two sets of samples {x1, . . . , xm} ∼ s0 and {y1, . . . , ym} ∼ t0, the two-sample test aims to determine whether the two sets of samples are drawn from the same distribution, viz., to predict whether s0 = t0. The performance evaluation in the two-sample test relies on two types of errors. A Type-I error occurs when s0 = t0, but the algorithm predicts otherwise. A Type-II error occurs when s0 ≠ t0, but the algorithm incorrectly predicts s0 = t0. The probability of a Type-I error is called the significance level, which can be controlled using permutation-test-based setups (Ernst, 2004; Liu et al., 2020). Algorithms are typically compared based on the empirical estimate of their test power (higher is better), defined as the probability of not making a Type-II error, and on their average Type-I error (lower is better).
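As a concrete illustration of the protocol (a generic permutation-test sketch under our own naming, not the exact setup of Liu et al. (2020)): the observed statistic is compared with its permutation distribution to control the Type-I error at the chosen level.

```python
# Generic permutation two-sample test: reject H0 (s0 = t0) when the observed
# statistic exceeds the (1 - level) quantile of its permutation distribution.
import numpy as np

def permutation_test(X, Y, statistic, n_perm=200, level=0.05, seed=0):
    rng = np.random.default_rng(seed)
    Z = np.concatenate([X, Y])
    n = len(X)
    observed = statistic(X, Y)
    null_stats = np.empty(n_perm)
    for p in range(n_perm):
        idx = rng.permutation(len(Z))                      # reshuffle pooled samples
        null_stats[p] = statistic(Z[idx[:n]], Z[idx[n:]])
    return observed > np.quantile(null_stats, 1 - level)   # True = "reject H0"
```

Test power is then estimated as the rejection rate over many pairs of sample sets with s0 ≠ t0, where the statistic can be MMD, ϵKL-UOT, or the MMD-UOT estimate (9).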

Dataset and experimental setup. Following (Liu et al., 2020), we consider two sets of samples, one from the true MNIST (LeCun & Cortes, 2010) and another from fake MNIST generated by a DCGAN (Bian et al., 2019). The data lies in 1024 dimensions. We take an increasing number of samples (N) and compute the average test power over 100 pairs of sets for each value of N. We repeat the experiment 10 times and report the average test power in Table 2 for the significance level α = 0.05. By the design of the test, the average Type-I error is upper-bounded by α, and we track the Type-II error in our experiment. We detail the procedure for choosing the hyperparameters and the list of chosen hyperparameters for each method in Appendix C.2.

Results. In Table 2, we observe that MMD-UOT obtains the highest test power for all values of N. The average test power of MMD-UOT is 1.5−2.4 times better than that of ϵKL-UOT across N. MMD-UOT also outperforms EMD and 2-Wasserstein, which suffer from the curse of dimensionality, for all values of N. Our results match those of the sample-efficient MMD metric when N increases to 1000, and for smaller sample sizes, MMD-UOT is always better than MMD.

Table 3: MMD distance (lower is better) between computed barycenter and the ground truth distribution.

An RBF kernel with the sigma heuristic is used to compute the MMD distance. We observe that MMD-UOT's results are closer to the ground truth than the baselines' results at all timesteps.

| Timestep | MMD | ϵKL-UOT | MMD-UOT |
|----------|-----|---------|---------|
| t1 | 0.375 | 0.391 | 0.334 |
| t2 | 0.190 | 0.184 | 0.179 |
| t3 | 0.125 | 0.138 | 0.116 |
| Avg. | 0.230 | 0.238 | 0.210 |

5.3 Single-Cell RNA Sequencing

We empirically evaluate the quality of our barycenter in the single-cell RNA sequencing experiment. The single-cell RNA sequencing (scRNA-seq) technique helps us understand how the expression profile of cells changes (Schiebinger et al., 2019). Barycenter estimation in the OT framework offers a principled approach to estimate the trajectory of a measure at an intermediate timestep t (ti < t < tj) when we have measurements available only at the ti (source) and tj (target) timesteps.

Dataset and experimental setup. We perform experiments on the Embryoid Body (EB) single-cell dataset (Moon et al., 2019). The dataset has samples available at five timesteps (tj with j = 0, . . . , 4), which were collected during a 25-day period of development of the human embryo. Following (Tong et al., 2020), we project the data onto a two-dimensional space and associate uniform measures to the source and target samples given at different timesteps. We consider the samples at timesteps ti and ti+2 as the samples from the source and target measures, where 0 ≤ i ≤ 2, and aim at estimating the measure at timestep ti+1 as their barycenter with equal interpolation weights ρ1 = ρ2 = 0.5.

We compute the barycenters using MMD-UOT (11) and the ϵKL-UOT (Chizat et al., 2018; Liero et al., 2018) approaches. For both, a simplex constraint is used to cater to the case of uniform measures. We also compare against the empirical average of the source and target measures, which is the barycenter obtained with the MMD metric. The computed barycenter is evaluated against the measure corresponding to the ground truth samples available at the corresponding timestep. We compute the distance between the two using the MMD metric with an RBF kernel (Gretton et al., 2012). The hyperparameters are chosen based on the leave-one-out validation protocol. More details and some additional results are in Appendix C.3.

Results. Table 3 shows that MMD-UOT achieves the lowest distance from the ground truth for all the timesteps, illustrating its superior interpolation quality.

5.4 Domain Adaptation In JUMBOT Framework

OT has been widely employed in domain adaptation problems (Courty et al., 2017; Courty et al., 2017; Seguy et al., 2018; Damodaran et al., 2018). JUMBOT (Fatras et al., 2021) is a popular domain adaptation method based on ϵKL-UOT that outperforms OT-based baselines. JUMBOT's loss function involves a cross-entropy term and an ϵKL-UOT discrepancy term between the source and target distributions. We showcase the utility of MMD-UOT (10) in the JUMBOT (Fatras et al., 2021) framework.

Dataset and experimental setup: We perform the domain adaptation experiment with the Digits datasets comprising MNIST (LeCun & Cortes, 2010), M-MNIST (Ganin et al., 2016), SVHN (Netzer et al., 2011), and USPS (Hull, 1994). We replace the ϵKL-UOT-based loss with the MMD-UOT loss (10), keeping the other experimental setup the same as JUMBOT. We obtain JUMBOT's result with ϵKL-UOT with the best-reported hyperparameters (Fatras et al., 2021). Following JUMBOT, we tune the hyperparameters of MMD-UOT for the Digits experiment on the USPS to MNIST (U→M) domain adaptation task and use the same hyperparameters for the rest of the domain adaptation tasks on Digits. More details are in Appendix C.4.

Table 4: Target domain accuracy (higher is better) obtained in domain adaptation experiments. Results for ϵKL-UOT are reproduced from the code open-sourced for JUMBOT in (Fatras et al., 2021). MMD-UOT outperforms ϵKL-UOT in all the domain adaptation tasks considered.

| Source | Target | ϵKL-UOT | MMD-UOT |
|--------|--------|---------|---------|
| M-MNIST | USPS | 91.53 | 94.97 |
| M-MNIST | MNIST | 99.35 | 99.50 |
| MNIST | M-MNIST | 96.51 | 96.96 |
| MNIST | USPS | 96.51 | 97.01 |
| SVHN | M-MNIST | 94.26 | 95.35 |
| SVHN | MNIST | 98.68 | 98.98 |
| SVHN | USPS | 92.78 | 93.22 |
| USPS | MNIST | 96.76 | 98.53 |
| Avg. | | 95.80 | 96.82 |

Results: Table 4 reports the accuracy obtained on target datasets. We observe that MMD-UOT-based loss performs better than ϵKL-UOT-based loss for all the domain adaptation tasks. In Figure 8 (appendix), we also compare the t-SNE plot of the embeddings learned with the MMD-UOT and the ϵKL-UOT-based loss functions. The clusters learned with MMD-UOT are better separated (e.g., red- and cyan-colored clusters).

5.5 More Results On Domain Adaptation

In Section 5.4, we compared the proposed MMD-UOT-based loss function with the ϵKL-UOT-based loss function in the JUMBOT framework (Fatras et al., 2021). It should be noted that JUMBOT has a ResNet-50 backbone. Hence, in this section, we also compare with popular domain adaptation baselines having a ResNet-50 backbone. These include DANN (Ganin et al., 2015), CDAN-E (Long et al., 2017), DEEPJDOT (Damodaran et al., 2018), ALDA (Chen et al., 2020a), ROT (Balaji et al., 2020), and BombOT (Nguyen et al., 2022). BombOT is a recent state-of-the-art OT-based method for unsupervised domain adaptation (UDA). As in JUMBOT (Fatras et al., 2021), BombOT also employs an ϵKL-UOT-based loss function. We also include the results of the baseline ResNet-50 model, where the model is trained on the source and evaluated on the target without employing any adaptation techniques.

Office-Home dataset: We evaluate the proposed method on the Office-Home dataset (Venkateswara et al., 2017), popular for unsupervised domain adaptation. As with the baselines above, we use a ResNet-50 backbone network. The Office-Home dataset has 15,500 images from four domains: Artistic images (A), Clip Art (C), Product images (P), and Real-World (R). The dataset contains images of 65 object categories common in office and home scenarios for each domain. Following (Fatras et al., 2021; Nguyen et al., 2022), evaluation is done on 12 adaptation tasks. Following JUMBOT, we validate the proposed method on the A→C task and use the chosen hyperparameters for the rest of the tasks.

Table 5 reports the target accuracies obtained by different methods. The results of the BombOT method are quoted from (Nguyen et al., 2022), and the results of other baselines are quoted from (Fatras et al., 2021). We observe that the proposed MMD-UOT-based method achieves the best target accuracy in 11 out of 12 adaptation tasks.

VisDA-2017 dataset: We next consider the domain adaptation task between the training and validation sets of the VisDA-2017 (Recht et al., 2018) dataset. We follow the experimental setup detailed in (Fatras et al., 2021). The source domain of VisDA has 152,397 synthetic images, while the target domain has 55,388 real-world images. Both domains have 12 object categories.

Table 6 compares the performance of different methods. The results of the BombOT method are quoted from (Nguyen et al., 2022), and the results of the other baselines are quoted from (Fatras et al., 2021). The proposed MMD-UOT method achieves the best performance, improving the accuracy obtained by the ϵKL-UOT-based JUMBOT and BombOT methods by 4.5% and 2.4%, respectively.

Table 5: Target accuracies (higher is better) on the Office-Home dataset in the UDA setting. The letters denote different domains: 'A' for Artistic images, 'P' for Product images, 'C' for Clip Art and 'R' for Real-World images. The proposed method achieves the highest accuracy on almost all the domain adaptation tasks and achieves the best accuracy averaged across the tasks.

| Method | A→C | A→P | A→R | C→A | C→P | C→R | P→A | P→C | P→R | R→A | R→C | R→P | Avg |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ResNet-50 | 34.9 | 50.0 | 58.0 | 37.4 | 41.9 | 46.2 | 38.5 | 31.2 | 60.4 | 53.9 | 41.2 | 59.9 | 46.1 |
| DANN (Ganin et al., 2015) | 44.3 | 59.8 | 69.8 | 48.0 | 58.3 | 63.0 | 49.7 | 42.7 | 70.6 | 64.0 | 51.7 | 78.3 | 58.3 |
| CDAN-E (Long et al., 2017) | 52.5 | 71.4 | 76.1 | 59.7 | 69.9 | 71.5 | 58.7 | 50.3 | 77.5 | 70.5 | 57.9 | 83.5 | 66.6 |
| DEEPJDOT (Damodaran et al., 2018) | 50.7 | 68.7 | 74.4 | 59.9 | 65.8 | 68.1 | 55.2 | 46.3 | 73.8 | 66.0 | 54.9 | 78.3 | 63.5 |
| ALDA (Chen et al., 2020a) | 52.2 | 69.3 | 76.4 | 58.7 | 68.2 | 71.1 | 57.4 | 49.6 | 76.8 | 70.6 | 57.3 | 82.5 | 65.8 |
| ROT (Balaji et al., 2020) | 47.2 | 71.8 | 76.4 | 58.6 | 68.1 | 70.2 | 56.5 | 45.0 | 75.8 | 69.4 | 52.1 | 80.6 | 64.3 |
| ϵKL-UOT (JUMBOT) (Fatras et al., 2021) | 55.2 | 75.5 | 80.8 | 65.5 | 74.4 | 74.9 | 65.2 | 52.7 | 79.2 | 73.0 | 59.9 | 83.4 | 70.0 |
| BombOT (Nguyen et al., 2022) | 56.2 | 75.2 | 80.5 | 65.8 | 74.6 | 75.4 | 66.2 | 53.2 | 80.0 | 74.2 | 60.1 | 83.3 | 70.4 |
| Proposed | 56.5 | 77.2 | 82.0 | 70.0 | 77.1 | 77.8 | 69.3 | 55.1 | 82.0 | 75.5 | 59.3 | 84.0 | 72.2 |

Table 6: Target accuracy (higher is better) on the VisDA-2017 dataset. The proposed MMD-UOT method achieves the highest accuracy.

| Dataset | CDAN-E | ALDA | DEEPJDOT | ROT | ϵKL-UOT (JUMBOT) | BombOT | Proposed |
|---|---|---|---|---|---|---|---|
| VisDA-2017 | 70.1 | 70.5 | 68.0 | 66.3 | 72.5 | 74.6 | 77.0 |

5.6 Prompt Learning For Few-Shot Classification

The task of learning prompts (e.g., "a tall bird of [class]") for vision-language models has emerged as a promising approach to adapt large pre-trained models like CLIP (Radford et al., 2021) for downstream tasks. The similarity between prompt features (which are class-specific) and the visual features of a given image can help us classify the image. A recent OT-based prompt learning approach, PLOT (Chen et al., 2023), obtained state-of-the-art results on the K-shot recognition task, in which only K images per class are available during training. We evaluate the performance of MMD-UOT following the setup of (Chen et al., 2023) on the benchmark EuroSAT (Helber et al., 2018) dataset consisting of satellite images, the DTD (Cimpoi et al., 2014) dataset having images of textures, and the Oxford-Pets (Parkhi et al., 2012) dataset having images of pets.

Results. With the same evaluation protocol as in (Chen et al., 2023), we report the classification accuracy averaged over three seeds in Table 7. We note that MMD-UOT-based prompt learning achieves better results than PLOT, especially when K is small (the more challenging case due to less training data). With the EuroSAT dataset, the improvement is as high as 4% for the challenging case of K=1. More details are in Appendix C.5.

Table 7: Average and standard deviation (over 3 runs) of accuracy (higher is better) on the k-shot classification task, shown for different values of shots (k) in the state-of-the-art PLOT framework. The proposed method replaces OT with MMD-UOT in PLOT, keeping all other hyperparameters the same. The results of PLOT are taken from their paper (Chen et al., 2023).

| Dataset | Method | 1 | 2 | 4 | 8 | 16 |
|---|---|---|---|---|---|---|
| EuroSAT | PLOT | 54.05 ± 5.95 | 64.21 ± 1.90 | 72.36 ± 2.29 | 78.15 ± 2.65 | 82.23 ± 0.91 |
| EuroSAT | Proposed | 58.47 ± 1.37 | 66.0 ± 0.93 | 71.97 ± 2.21 | 79.03 ± 1.91 | 83.23 ± 0.24 |
| DTD | PLOT | 46.55 ± 2.62 | 51.24 ± 1.95 | 56.03 ± 0.43 | 61.70 ± 0.35 | 65.60 ± 0.82 |
| DTD | Proposed | 47.27 ± 1.46 | 51.0 ± 1.71 | 56.40 ± 0.73 | 63.17 ± 0.69 | 65.90 ± 0.29 |

6 Conclusion

The literature on unbalanced optimal transport (UOT) has largely focused on ϕ-divergence-based regularization. Our work provides a comprehensive analysis of MMD-regularization in UOT, answering many open questions. We prove novel results on the metricity and the sample efficiency of MMD-UOT, propose consistent estimators which can be computed efficiently, and illustrate its empirical effectiveness on several machine learning applications. Our theoretical and empirical contributions for MMD-UOT and its corresponding barycenter demonstrate the potential of MMD-regularization in UOT as an effective alternative to ϕ-divergence-based regularization. Interesting directions of future work include exploring applications of IPM-regularized UOT (Remark 4.9) and the generalization of Kantorovich-Rubinstein duality (Remark 4.7).

7 Funding Disclosure And Acknowledgements

We thank Kilian Fatras for the discussions on the JUMBOT baseline, and Bharath Sriperumbudur (PSU) and G. Ramesh (IITH) for discussions related to Appendix B.9.1. We are grateful to Rudraram Siddhi Vinayaka. We also thank the anonymous reviewers for constructive feedback. PM and JSN acknowledge the support of Google PhD Fellowship and Fujitsu Limited (Japan), respectively.

References

Rohit Agrawal and Thibaut Horel. Optimal bounds between f-divergences and integral probability metrics. In ICML, 2020.

Martial Agueh and Guillaume Carlier. Barycenters in the wasserstein space. SIAM Journal on Mathematical Analysis, 43(2):904–924, 2011.

David Alvarez-Melis and Tommi Jaakkola. Gromov-Wasserstein alignment of word embedding spaces. In EMNLP, 2018.

Yuki Arase, Han Bao, and Sho Yokoi. Unbalanced optimal transport for unbalanced word alignment. In ACL, 2023.

Yogesh Balaji, Rama Chellappa, and Soheil Feizi. Robust optimal transport with applications in generative modeling and domain adaptation. In NeurIPS, 2020.

Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.

A. Ben-Tal and A. Nemirovski. Lectures On Modern Convex Optimization, 2021.

Yuemin Bian, Junmei Wang, Jaden Jungho Jun, and Xiang-Qun Xie. Deep convolutional generative adversarial network (dcgan) models for screening and design of small molecules targeting cannabinoid receptors. Molecular Pharmaceutics, 16(11):4451–4460, 2019.

Alberto Bietti and Julien Mairal. Group invariance, stability to deformations, and complexity of deep convolutional representations. Journal of Machine Learning Research, 20:25:1–25:49, 2017.

Alberto Bietti, Grégoire Mialon, Dexiong Chen, and Julien Mairal. A kernel perspective for regularizing deep neural networks. In ICML, 2019.

Leon Bottou, Martin Arjovsky, David Lopez-Paz, and Maxime Oquab. Geometrical insights for implicit generative modeling. Braverman Readings in Machine Learning 2017, pp. 229–268, 2017.

Guangyi Chen, Weiran Yao, Xiangchen Song, Xinyue Li, Yongming Rao, and Kun Zhang. Prompt learning with optimal transport for vision-language models. In ICLR, 2023.

Minghao Chen, Shuai Zhao, Haifeng Liu, and Deng Cai. Adversarial-learned loss for domain adaptation. In AAAI, 2020a.

Yimeng Chen, Yanyan Lan, Ruinbin Xiong, Liang Pang, Zhiming Ma, and Xueqi Cheng. Evaluating natural language generation via unbalanced optimal transport. In IJCAI, 2020b.

Xiuyuan Cheng and Alexander Cloninger. Classification logit two-sample testing by neural networks for differentiating near manifold densities. IEEE Transactions on Information Theory, 68:6631–6662, 2019.

L. Chizat, G. Peyre, B. Schmitzer, and F.-X. Vialard. Unbalanced optimal transport: Dynamic and kantorovich formulations. Journal of Functional Analysis, 274(11):3090–3123, 2018.

Lénaïc Chizat, Gabriel Peyré, Bernhard Schmitzer, and François-Xavier Vialard. Scaling algorithms for unbalanced optimal transport problems. Math. Comput., 87:2563–2609, 2017.

Lenaïc Chizat. Unbalanced optimal transport : Models, numerical methods, applications. Technical report, Universite Paris sciences et lettres, 2017.

Kacper P. Chwialkowski, Aaditya Ramdas, D. Sejdinovic, and Arthur Gretton. Fast two-sample testing with analytic representations of probability measures. In NIPS, 2015.

M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, , and A. Vedaldi. Describing textures in the wild. In CVPR, 2014.

Samuel Cohen, Michael Arbel, and Marc Peter Deisenroth. Estimating barycenters of measures in high dimensions. arXiv preprint arXiv:2007.07105, 2020.

N. Courty, R. Flamary, D. Tuia, and A. Rakotomamonjy. Optimal transport for domain adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(9):1853–1865, 2017.

Nicolas Courty, Rémi Flamary, Amaury Habrard, and Alain Rakotomamonjy. Joint distribution optimal transportation for domain adaptation. In NIPS, 2017.

I. Csiszar. Information-type measures of difference of probability distributions and indirect observations. Studia Scientiarum Mathematicarum Hungarica, 2:299–318, 1967.

M. Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In NIPS, 2013.

Marco Cuturi and Arnaud Doucet. Fast computation of wasserstein barycenters. In ICML, 2014.

Bharath Bhushan Damodaran, Benjamin Kellenberger, Rémi Flamary, Devis Tuia, and Nicolas Courty. DeepJDOT: Deep Joint Distribution Optimal Transport for Unsupervised Domain Adaptation. In ECCV, 2018.

Henri De Plaen, Pierre-François De Plaen, Johan A. K. Suykens, Marc Proesmans, Tinne Tuytelaars, and Luc Van Gool. Unbalanced optimal transport: A unified framework for object detection. In CVPR, 2023.

Michael D. Ernst. Permutation Methods: A Basis for Exact Inference. Statistical Science, 19(4):676 - 685, 2004.

Kilian Fatras, Thibault Séjourné, Nicolas Courty, and Rémi Flamary. Unbalanced minibatch optimal transport; applications to domain adaptation. In ICML, 2021.

Rémi Flamary, Nicolas Courty, Alexandre Gramfort, Mokhtar Z. Alaya, Aurélie Boisbunon, Stanislas Chambon, Laetitia Chapel, Adrien Corenflos, Kilian Fatras, Nemo Fournier, Léo Gautheron, Nathalie T.H. Gayraud, Hicham Janati, Alain Rakotomamonjy, Ievgen Redko, Antoine Rolet, Antony Schutz, Vivien Seguy, Danica J. Sutherland, Romain Tavenard, Alexander Tong, and Titouan Vayer. Pot: Python optimal transport. Journal of Machine Learning Research, 22(78):1–8, 2021.

Charlie Frogner, Chiyuan Zhang, Hossein Mobahi, Mauricio Araya-Polo, and Tomaso Poggio. Learning with a wasserstein loss. In NIPS, 2015.

Yaroslav Ganin, E. Ustinova, Hana Ajakan, Pascal Germain, H. Larochelle, François Laviolette, Mario Marchand, and Victor S. Lempitsky. Domain-adversarial training of neural networks. In Journal of Machine Learning Research, 2015.

Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(1):2096–2030, 2016.

Tryphon T. Georgiou, Johan Karlsson, and Mir Shahrouz Takyar. Metrics for power spectra: An axiomatic approach. IEEE Transactions on Signal Processing, 57(3):859–867, 2009.

Alexandre Gramfort, Gabriel Peyré, and Marco Cuturi. Fast optimal transport averaging of neuroimaging data. In Proceedings of 24th International Conference on Information Processing in Medical Imaging, 2015.

Arthur Gretton. A simpler condition for consistency of a kernel independence test. arXiv: Machine Learning, 2015.

Arthur Gretton, Karsten M. Borgwardt, Malte Rasch, Bernhard Schölkopf, and Alexander J. Smola. A kernel method for the two-sample-problem. In NIPS, 2006.

Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. Journal of Machine Learning Research, 13(25):723–773, 2012.

Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of wasserstein gans. In NIPS, 2017.

K. Gurumoorthy, P. Jawanpuria, and B. Mishra. SPOT: A framework for selection of prototypes using optimal transport. In European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD), 2021.

Leonid G. Hanin. Kantorovich-rubinstein norm and its application in the theory of lipschitz spaces. In Proceedings of the Americal Mathematical Society, volume 115, 1992.

Patrick Helber, Benjamin Bischke, Andreas Dengel, and Damian Borth. Introducing eurosat: A novel dataset and deep learning benchmark for land use and land cover classification. In IGARSS 2018-2018 IEEE International Geoscience and Remote Sensing Symposium, pp. 204–207. IEEE, 2018.

J.J. Hull. A database for handwritten text recognition research. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(5):550–554, 1994.

P. Jawanpuria, M. Meghwanshi, and B. Mishra. Geometry-aware domain adaptation for unsupervised alignment of word embeddings. In Annual Meeting of the Association for Computational Linguistics, 2020.

Wittawat Jitkrittum, Zoltán Szabó, Kacper P. Chwialkowski, and Arthur Gretton. Interpretable distribution features with maximum testing power. In NIPS, 2016.

Philip A. Knight. The sinkhorn–knopp algorithm: Convergence and applications. SIAM Journal on Matrix Analysis and Applications, 30(1):261–275, 2008.

Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009.

Khang Le, Huy Nguyen, Quang M Nguyen, Tung Pham, Hung Bui, and Nhat Ho. On robust optimal transport: Computational complexity and barycenter computation. In NeurIPS, 2021.

Yann LeCun and Corinna Cortes. MNIST handwritten digit database. http://yann.lecun.com/exdb/mnist/, 2010.

Chun-Liang Li, Wei-Cheng Chang, Yu Cheng, Yiming Yang, and Barnabás Póczos. MMD GAN: Towards Deeper Understanding of Moment Matching Network. In NIPS, 2017.

Yazhe Li, Roman Pogodin, Danica J. Sutherland, and Arthur Gretton. Self-supervised learning with kernel dependence maximization. In NeurIPS, 2021.

Matthias Liero, Alexander Mielke, and Giuseppe Savaré. Optimal transport in competition with reaction: The hellinger-kantorovich distance and geodesic curves. SIAM J. Math. Anal., 48:2869–2911, 2016.

Matthias Liero, Alexander Mielke, and Giuseppe Savaré. Optimal entropy-transport problems and a new hellinger–kantorovich distance between positive measures. Inventiones mathematicae, 211(3):969–1117, 2018.

Feng Liu, Wenkai Xu, Jie Lu, Guangquan Zhang, Arthur Gretton, and Danica J. Sutherland. Learning deep kernels for non-parametric two-sample tests. In ICML, 2020.

Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I. Jordan. Conditional adversarial domain adaptation. In NIPS, 2017.

David Lopez-Paz and Maxime Oquab. Revisiting classifier two-sample tests. In ICLR, 2017.

Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In ICLR, 2018.

Kevin R. Moon, David van Dijk, Zheng Wang, Scott Gigante, Daniel B. Burkhardt, William S. Chen, Kristina Yim, Antonia van den Elzen, Matthew J. Hirn, Ronald R. Coifman, Natalia B. Ivanova, Guy Wolf, and Smita Krishnaswamy. Visualizing structure and transitions for biological data exploration. Nature Biotechnology, 37(12):1482–1492, 2019.

Krikamol Muandet, Kenji Fukumizu, Bharath Sriperumbudur, and Bernhard Schölkopf. Kernel mean embedding of distributions: A review and beyond. Foundations and Trends® in Machine Learning, 10(1–2): 1–141, 2017.

Alfred Muller. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, 29:429–443, 1997.

J. Saketha Nath and Pratik Kumar Jawanpuria. Statistical optimal transport posed as learning kernel embedding. In NeurIPS, 2020.

Yurii Nesterov. Introductory lectures on convex optimization: A basic course, volume 87. Springer Science & Business Media, 2003.

Yuval Netzer, Tiejie Wang, Adam Coates, A. Bissacco, Bo Wu, and A. Ng. Reading digits in natural images with unsupervised feature learning. In NeurIPS, 2011.

Khai Nguyen, Dang Nguyen, Quoc Nguyen, Tung Pham, Hung Bui, Dinh Phung, Trung Le, and Nhat Ho. On transportation of mini-batches: A hierarchical approach. In ICML, 2022.

Thanh Tang Nguyen, Sunil Gupta, and Svetha Venkatesh. Distributional reinforcement learning via moment matching. In AAAI, 2021.

Jonathan Niles-Weed and Philippe Rigollet. Estimation of Wasserstein distances in the spiked transport model. In Bernoulli, 2019.

O. M. Parkhi, A. Vedaldi, A. Zisserman, and C. V. Jawahar. Cats and dogs. In CVPR, 2012.

Gabriel Peyré and Marco Cuturi. Computational optimal transport. Foundations and Trends® in Machine Learning, 11(5-6):355–607, 2019.

Khiem Pham, Khang Le, Nhat Ho, Tung Pham, and Hung Bui. On unbalanced optimal transport: An analysis of sinkhorn algorithm. In ICML, 2020.

Benedetto Piccoli and Francesco Rossi. Generalized wasserstein distance and its application to transport equations with source. Archive for Rational Mechanics and Analysis, 211:335–358, 2014.

Benedetto Piccoli and Francesco Rossi. On properties of the generalized wasserstein distance. Archive for Rational Mechanics and Analysis, 222, 12 2016.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In ICML, 2021.

Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do CIFAR-10 classifiers generalize to CIFAR-10? arXiv, 2018.

Geoffrey Schiebinger, Jian Shu, Marcin Tabaka, Brian Cleary, Vidya Subramanian, Aryeh Solomon, Joshua Gould, Siyan Liu, Stacie Lin, Peter Berube, Lia Lee, Jenny Chen, Justin Brumbaugh, Philippe Rigollet, Konrad Hochedlinger, Rudolf Jaenisch, Aviv Regev, and Eric S. Lander. Optimal-transport analysis of single-cell gene expression identifies developmental trajectories in reprogramming. Cell, 176(4):928– 943.e22, 2019.

Vivien Seguy, Bharath B. Damodaran, Remi Flamary, Nicolas Courty, Antoine Rolet, and Mathieu Blondel. Large-scale optimal transport and mapping estimation. In ICLR, 2018.

Carl-Johann Simon-Gabriel, Alessandro Barp, Bernhard Schölkopf, and Lester Mackey. Metrizing weak convergence with maximum mean discrepancies. arXiv, 2020.

Maurice Sion. On general minimax theorems. Pacific Journal of Mathematics, 8(1):171 - 176, 1958.

Alexander J. Smola, Arthur Gretton, Le Song, and Bernhard Schölkopf. A hilbert space embedding for distributions. In ALT, 2007.

Justin Solomon, Raif Rustamov, Leonidas Guibas, and Adrian Butscher. Wasserstein propagation for semisupervised learning. In ICML, 2014.

Justin Solomon, Fernando de Goes, Gabriel Peyré, Marco Cuturi, Adrian Butscher, Andy Nguyen, Tao Du, and Leonidas Guibas. Convolutional wasserstein distances: Efficient optimal transportation on geometric domains. ACM Trans. Graph., 34(4), 2015.

L. Song. Learning via hilbert space embedding of distributions. In PhD Thesis, 2008.

Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. UCF101: A dataset of 101 human actions classes from videos in the wild. CoRR, 2012.

Bharath K. Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Bernhard Schölkopf, and Gert R. G. Lanckriet. On integral probability metrics, phi-divergences and binary classification. arXiv, 2009.

Bharath K. Sriperumbudur, Kenji Fukumizu, and Gert R. G. Lanckriet. Universality, characteristic kernels and RKHS embedding of measures. Journal of Machine Learning Research, 12:2389–2410, 2011.

Ilya O. Tolstikhin, Olivier Bousquet, Sylvain Gelly, and Bernhard Schölkopf. Wasserstein auto-encoders. In ICLR, 2018.

Alexander Tong, Jessie Huang, Guy Wolf, David Van Dijk, and Smita Krishnaswamy. TrajectoryNet: A dynamic optimal transport network for modeling cellular dynamics. In ICML, 2020.

Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In CVPR, 2017.

Cédric Villani. Optimal Transport: Old and New. A series of Comprehensive Studies in Mathematics. Springer, 2009.

A Preliminaries

A.1 Integral Probability Metric (IPM)

Given a set G ⊂ L(X ), the integral probability metric (IPM) (Muller, 1997; Sriperumbudur et al., 2009; Agrawal & Horel, 2020) associated with G, is defined by:

$$\gamma_{\mathcal{G}}(s_{0},t_{0})\equiv\max_{f\in\mathcal{G}}\left|\int_{\mathcal{X}}f\ \mathrm{d}s_{0}-\int_{\mathcal{X}}f\ \mathrm{d}t_{0}\right|\ \forall\ s_{0},t_{0}\in\mathcal{R}^{+}(\mathcal{X}).\tag{12}$$

G is called the generating set of the IPM, γG.

In order that the IPM metrizes weak convergence, we assume the following (Muller, 1997):

Assumption A.1. G ⊆ C(X) and is compact.

Since the IPM generated by G and its absolute convex hull is the same (without loss of generality), we additionally assume the following:

Assumption A.2. G is absolutely convex.

Remark A.3. We note that Assumptions A.1 and A.2 are needed only to generalize our theoretical results to an IPM-regularized UOT formulation (Formulation 13). These assumptions are satisfied whenever the IPM employed for regularization is the MMD (Formulation 6) with a kernel that is continuous and universal (i.e., c-universal).

A.2 Classical Examples Of IPMs

  • Maximum Mean Discrepancy (MMD): Let k be a characteristic kernel (Sriperumbudur et al., 2011) over the domain X, and let ∥f∥k denote the norm of f in the canonical reproducing kernel Hilbert space (RKHS), Hk, corresponding to k. MMDk is the IPM associated with the generating set Gk ≡ {f ∈ Hk | ∥f∥k ≤ 1}:

$$\mathrm{MMD}_{k}(s_{0},t_{0})\equiv\max_{f\in{\mathcal{G}}_{k}}\left|\int_{{\mathcal{X}}}f\ \mathrm{d}s_{0}-\int_{{\mathcal{X}}}f\ \mathrm{d}t_{0}\right|.$$

  (A small numerical sketch of MMDk between discrete measures appears after this list.)

  • **Kantorovich metric** (Kc): The Kantorovich metric also belongs to the family of integral probability metrics, with the generating set $\mathcal{W}_c \equiv \left\{f:\mathcal{X}\to\mathbb{R} \;\middle|\; \max_{x\neq y\in\mathcal{X}} \frac{|f(x)-f(y)|}{c(x,y)} \le 1\right\}$, where c is a metric over X. The Kantorovich-Fenchel duality result shows that the 1-Wasserstein metric is the same as the Kantorovich metric when restricted to probability measures.

  • Dudley: This is the IPM associated with the generating set $\mathcal{D}_d \equiv \{f:\mathcal{X}\to\mathbb{R} \mid \|f\|_\infty + \|f\|_d \le 1\}$, where d is a ground metric over X × X. The so-called Flat metric is related to the Dudley metric. Its generating set is $\mathcal{F}_d \equiv \{f:\mathcal{X}\to\mathbb{R} \mid \|f\|_\infty \le 1,\ \|f\|_d \le 1\}$.

  • Kolmogorov: Let X = $\mathbb{R}^n$. Then, the Kolmogorov metric is the IPM associated with the generating set $\mathcal{K} \equiv \{\mathbf{1}_{(-\infty,x)} \mid x\in\mathbb{R}^n\}$.

  • Total Variation (TV): This is the IPM associated with the generating set $\mathcal{T}\equiv\{f:\mathcal{X}\to\mathbb{R} \mid \|f\|_\infty \le 1\}$, where $\|f\|_\infty \equiv \max_{x\in\mathcal{X}} |f(x)|$. The Total Variation metric over measures $s_0,t_0\in\mathcal{R}^+(\mathcal{X})$ is defined as $\mathrm{TV}(s,t)\equiv\int_{\mathcal{X}} \mathrm{d}|s-t|(y)$, where $|s-t|(y)\equiv \begin{cases} s(y)-t(y) & \text{if } s(y)\ge t(y),\\ t(y)-s(y) & \text{otherwise.}\end{cases}$
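As referenced in the MMD item above, the following is a minimal numerical sketch (assuming a Gaussian kernel; all names are illustrative and not from the paper) of the closed-form MMD between two discrete, possibly un-normalized measures. This Gram-matrix expression is essentially what the $\|\cdot\|_{G}$ terms in Problems (9) and (10) compute on samples.

```python
import numpy as np

def gaussian_gram(X, Y, bandwidth=1.0):
    """Gram matrix k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2))."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mmd(a, X, b, Y, bandwidth=1.0):
    """MMD_k between s0 = sum_i a_i delta_{X_i} and t0 = sum_j b_j delta_{Y_j}.

    Uses MMD_k^2(s0, t0) = a^T K_XX a + b^T K_YY b - 2 a^T K_XY b.
    """
    K_xx = gaussian_gram(X, X, bandwidth)
    K_yy = gaussian_gram(Y, Y, bandwidth)
    K_xy = gaussian_gram(X, Y, bandwidth)
    mmd_sq = a @ K_xx @ a + b @ K_yy @ b - 2.0 * a @ K_xy @ b
    return np.sqrt(max(mmd_sq, 0.0))

# Toy usage: two un-normalized empirical measures in R^2.
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(5, 2)), rng.normal(loc=1.0, size=(7, 2))
a, b = np.full(5, 0.9 / 5), np.full(7, 1.2 / 7)   # total masses 0.9 and 1.2
print(mmd(a, X, b, Y))
```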

B Proofs And Additional Theory Results

As mentioned in the main paper and Remark 4.9, most of our proofs hold even with a general IPM-regularized UOT formulation (13) under mild assumptions. We restate such results and give a general proof that holds for IPM-regularized UOT (Formulation 13), of which MMD-regularized UOT (Formulation 6) is a special case.

The proposed IPM-regularized UOT formulation is presented as follows.

$${\mathcal{U}}_{\mathcal{G},c,\lambda_{1},\lambda_{2}}(s_{0},t_{0})\equiv\min_{\pi\in{\mathcal{R}}^{+}({\mathcal{X}}\times{\mathcal{X}})}\int c\ \mathrm{d}\pi+\lambda_{1}\gamma_{\mathcal{G}}(\pi_{1},s_{0})+\lambda_{2}\gamma_{\mathcal{G}}(\pi_{2},t_{0}),\tag{13}$$

where γG is defined in equation (12).

We now present the theoretical results and proofs with IPM-regularized UOT (Formulation 13), of which MMD-regularized UOT (Formulation 6) is a special case. To the best of our knowledge, such an analysis for IPM-regularized UOT has not been done before.

B.1 Proof Of Theorem 4.1

Theorem 4.1. (Duality) Whenever G satisfies Assumptions A.1 and A.2, c, k ∈ C(X × X ) and X is compact, we have that:

$$\mathcal{U}_{\mathcal{G},c,\lambda_{1},\lambda_{2}}\left(s_{0},t_{0}\right)=\max_{f\in{\mathcal{G}}(\lambda_{1}),\,g\in{\mathcal{G}}(\lambda_{2})}\int_{\mathcal{X}}f\ \mathrm{d}s_{0}+\int_{\mathcal{X}}g\ \mathrm{d}t_{0},\quad\text{s.t.}\ f(x)+g(y)\leq c(x,y)\ \forall\ x,y\in{\mathcal{X}}.\tag{14}$$

Proof. We begin by re-writing the RHS of (13) using the definition of IPMs given in (12):

$$\begin{aligned}
\mathcal{U}_{\mathcal{G},c,\lambda_{1},\lambda_{2}}(s_{0},t_{0})&\equiv\min_{\pi\in\mathcal{R}^{+}(\mathcal{X}\times\mathcal{X})}\int_{\mathcal{X}\times\mathcal{X}}c\ \mathrm{d}\pi+\lambda_{1}\max_{f\in\mathcal{G}}\left|\int_{\mathcal{X}}f\ \mathrm{d}s_{0}-\int_{\mathcal{X}}f\ \mathrm{d}\pi_{1}\right|+\lambda_{2}\max_{g\in\mathcal{G}}\left|\int_{\mathcal{X}}g\ \mathrm{d}t_{0}-\int_{\mathcal{X}}g\ \mathrm{d}\pi_{2}\right|\\
&\overset{(\because\ \mathrm{A.2})}{=}\min_{\pi\in\mathcal{R}^{+}(\mathcal{X}\times\mathcal{X})}\int_{\mathcal{X}\times\mathcal{X}}c\ \mathrm{d}\pi+\lambda_{1}\max_{f\in\mathcal{G}}\left(\int_{\mathcal{X}}f\ \mathrm{d}s_{0}-\int_{\mathcal{X}}f\ \mathrm{d}\pi_{1}\right)+\lambda_{2}\max_{g\in\mathcal{G}}\left(\int_{\mathcal{X}}g\ \mathrm{d}t_{0}-\int_{\mathcal{X}}g\ \mathrm{d}\pi_{2}\right)\\
&=\min_{\pi\in\mathcal{R}^{+}(\mathcal{X}\times\mathcal{X})}\int_{\mathcal{X}\times\mathcal{X}}c\ \mathrm{d}\pi+\max_{f\in\mathcal{G}(\lambda_{1})}\left(\int_{\mathcal{X}}f\ \mathrm{d}s_{0}-\int_{\mathcal{X}}f\ \mathrm{d}\pi_{1}\right)+\max_{g\in\mathcal{G}(\lambda_{2})}\left(\int_{\mathcal{X}}g\ \mathrm{d}t_{0}-\int_{\mathcal{X}}g\ \mathrm{d}\pi_{2}\right)\\
&=\max_{f\in\mathcal{G}(\lambda_{1}),\,g\in\mathcal{G}(\lambda_{2})}\int_{\mathcal{X}}f\ \mathrm{d}s_{0}+\int_{\mathcal{X}}g\ \mathrm{d}t_{0}+\min_{\pi\in\mathcal{R}^{+}(\mathcal{X}\times\mathcal{X})}\int_{\mathcal{X}\times\mathcal{X}}c\ \mathrm{d}\pi-\int_{\mathcal{X}}f\ \mathrm{d}\pi_{1}-\int_{\mathcal{X}}g\ \mathrm{d}\pi_{2}\\
&=\max_{f\in\mathcal{G}(\lambda_{1}),\,g\in\mathcal{G}(\lambda_{2})}\int_{\mathcal{X}}f\ \mathrm{d}s_{0}+\int_{\mathcal{X}}g\ \mathrm{d}t_{0}+\min_{\pi\in\mathcal{R}^{+}(\mathcal{X}\times\mathcal{X})}\int_{\mathcal{X}\times\mathcal{X}}\left(c-\bar{f}-\bar{g}\right)\mathrm{d}\pi\\
&=\max_{f\in\mathcal{G}(\lambda_{1}),\,g\in\mathcal{G}(\lambda_{2})}\int_{\mathcal{X}}f\ \mathrm{d}s_{0}+\int_{\mathcal{X}}g\ \mathrm{d}t_{0}+\begin{cases}0&\text{if }f(x)+g(y)\leq c(x,y)\ \forall\ x,y\in\mathcal{X},\\-\infty&\text{otherwise}\end{cases}\\
&=\max_{f\in\mathcal{G}(\lambda_{1}),\,g\in\mathcal{G}(\lambda_{2})}\int_{\mathcal{X}}f\ \mathrm{d}s_{0}+\int_{\mathcal{X}}g\ \mathrm{d}t_{0},\quad\text{s.t.}\ f(x)+g(y)\leq c(x,y)\ \forall\ x,y\in\mathcal{X}.\qquad(15)
\end{aligned}$$

Here, $\bar{f}(x,y)\equiv f(x)$ and $\bar{g}(x,y)\equiv g(y)$. The min-max interchange in the third equation is due to Sion's minimax theorem: (i) since $\mathcal{R}(\mathcal{X})$ is a topological dual of $C(\mathcal{X})$ whenever $\mathcal{X}$ is compact, the objective is bilinear (inner-product in this duality) whenever $c,f,g$ are continuous, which is true from Assumption A.1 and $c\in C(\mathcal{X}\times\mathcal{X})$; (ii) one of the feasibility sets involves $\mathcal{G}$, which is convex compact by Assumptions A.1, A.2.

The other feasibility set is convex (the closed conic set of non-negative measures).

Remark B.1. Whenever the kernel, k, employed is continuous, the generating set of the corresponding MMD satisfies Assumption A.2 and Gk ⊆ C(X). Hence, the above proof also works in our case of MMD-regularized UOT (i.e., to prove Theorem 4.1 in the main paper).

We first derive an equivalent re-formulation of 13, which will be used in our proof.

Lemma B1.
$$\mathcal{U}_{\mathcal{G},c,\lambda_{1},\lambda_{2}}\left(s_{0},t_{0}\right)\equiv\min_{s,t\in\mathcal{R}^{+}(\mathcal{X})}\ |s|W_{1}(s,t)+\lambda_{1}\gamma_{\mathcal{G}}(s,s_{0})+\lambda_{2}\gamma_{\mathcal{G}}(t,t_{0}),\tag{16}$$
where $W_{1}(s,t)\equiv\begin{cases}\bar{W}_{1}\left(\frac{s}{|s|},\frac{t}{|t|}\right)&\text{if }|s|=|t|,\text{ with }\bar{W}_{1}\text{ as the 1-Wasserstein metric},\\ \infty&\text{otherwise.}\end{cases}$

Proof.
$$\begin{aligned}
&\min_{s,t\in\mathcal{R}^{+}(\mathcal{X})}|s|W_{1}(s,t)+\lambda_{1}\gamma_{\mathcal{G}}(s,s_{0})+\lambda_{2}\gamma_{\mathcal{G}}(t,t_{0})\\
&=\min_{s,t\in\mathcal{R}^{+}(\mathcal{X});\,|s|=|t|}|s|\min_{\bar{\pi}\in\mathcal{R}^{+}_{1}(\mathcal{X}\times\mathcal{X})}\int c\ \mathrm{d}\bar{\pi}+\lambda_{1}\gamma_{\mathcal{G}}(s,s_{0})+\lambda_{2}\gamma_{\mathcal{G}}(t,t_{0})\quad\text{s.t. }\bar{\pi}_{1}=\frac{s}{|s|},\ \bar{\pi}_{2}=\frac{t}{|t|}\\
&=\min_{\eta>0}\eta\min_{\bar{\pi}\in\mathcal{R}^{+}_{1}(\mathcal{X}\times\mathcal{X})}\int c\ \mathrm{d}\bar{\pi}+\lambda_{1}\gamma_{\mathcal{G}}(\eta\bar{\pi}_{1},s_{0})+\lambda_{2}\gamma_{\mathcal{G}}(\eta\bar{\pi}_{2},t_{0})\\
&=\min_{\eta>0}\min_{\bar{\pi}\in\mathcal{R}^{+}_{1}(\mathcal{X}\times\mathcal{X})}\int c\,\eta\ \mathrm{d}\bar{\pi}+\lambda_{1}\gamma_{\mathcal{G}}(\eta\bar{\pi}_{1},s_{0})+\lambda_{2}\gamma_{\mathcal{G}}(\eta\bar{\pi}_{2},t_{0})\\
&=\min_{\pi\in\mathcal{R}^{+}(\mathcal{X}\times\mathcal{X})}\int c\ \mathrm{d}\pi+\lambda_{1}\gamma_{\mathcal{G}}(\pi_{1},s_{0})+\lambda_{2}\gamma_{\mathcal{G}}(\pi_{2},t_{0}).
\end{aligned}$$

The first equality holds from the definition of $W_{1}$. Eliminating the normalized versions of s and t using the equality constraints and introducing $\eta$ to denote their common mass gives the second equality. The last equality comes after changing the variable of optimization to $\pi\equiv\eta\bar{\pi}\in\mathcal{R}^{+}(\mathcal{X}\times\mathcal{X})$. Recall that $\mathcal{R}^{+}(\mathcal{X})$ denotes the set of all non-negative Radon measures defined over $\mathcal{X}$, while the set of all probability measures is denoted by $\mathcal{R}^{+}_{1}(\mathcal{X})$.

Corollary 4.2 in the main paper is restated below with the IPM-regularized UOT formulation (13), followed by its proof.

Corollary 4.2. (Metricity) In addition to the assumptions in Theorem (4.1), whenever c is a metric, $\mathcal{U}_{\mathcal{G},c,\lambda,\lambda}$ belongs to the family of integral probability metrics (IPMs). Also, the generating set of this IPM is the intersection of the generating set of the Kantorovich metric and the generating set of the IPM used for regularization. Finally, $\mathcal{U}_{\mathcal{G},c,\lambda,\lambda}$ is a valid norm-induced metric over measures whenever the IPM used for regularization is norm-induced (e.g. MMD with a characteristic kernel). Thus, $\mathcal{U}$ lifts the ground metric c to that over measures.

Proof. The constraints in the dual, (7), are equivalent to: $g(y) \le \min_{x\in\mathcal{X}} c(x,y) - f(x)\ \forall\ y\in\mathcal{X}$. The RHS is nothing but the c-conjugate (c-transform) of f. From Proposition 6.1 in (Peyré & Cuturi, 2019), whenever c is a metric we have:
$$\min_{x\in\mathcal{X}} c(x,y) - f(x) = \begin{cases} -f(y) & \text{if } f\in\mathcal{W}_c,\\ -\infty & \text{otherwise.}\end{cases}$$
Here, $\mathcal{W}_c$ is the generating set of the Kantorovich metric lifting c. Thus the constraints are equivalent to: $g(y) \le -f(y)\ \forall\ y\in\mathcal{X},\ f\in\mathcal{W}_c$.

Now, since the dual, (7), seeks to maximize the objective with respect to g, and monotonically increases with values of g; at optimality, we have that g(y) = −f(y) ∀ y ∈ X . Note that this equality is possible to achieve as both g, −f ∈ G(λ) ∩ Wc (these sets are absolutely convex). Eliminating g, one obtains:

$${\mathcal{U}}_{{\mathcal{G}},c,\lambda,\lambda}\left(s_{0},t_{0}\right)=\max_{f\in{\mathcal{G}}(\lambda)\cap{\mathcal{W}}_{c}}\int_{{\mathcal{X}}}f\,\mathrm{d}s_{0}-\int_{{\mathcal{X}}}f\,\mathrm{d}t_{0}.$$

Comparing this with the definition of IPMs (12), we have that $\mathcal{U}_{\mathcal{G},c,\lambda,\lambda}$ belongs to the family of IPMs. Since any IPM is a pseudo-metric (induced by a semi-norm) over measures (Muller, 1997), the only condition left to be proved is the positive definiteness of $\mathcal{U}_{\mathcal{G},c,\lambda,\lambda}(s_0,t_0)$. Following Lemma B1, we have that for optimal $s^*,t^*$ in (16), $\mathcal{U}_{\mathcal{G},c,\lambda,\lambda}(s_0,t_0)=0 \iff$ (i) $W_1(s^*,t^*)=0$, (ii) $\gamma_{\mathcal{G}}(s^*,s_0)=0$, (iii) $\gamma_{\mathcal{G}}(t^*,t_0)=0$, as each term in the RHS is non-negative. When the IPM used for regularization is a norm-induced metric (e.g. the MMD metric or the Dudley metric), conditions (i), (ii), (iii) $\iff s^*=t^*=s_0=t_0$, which proves the positive definiteness. Hence, we have proved that $\mathcal{U}_{\mathcal{G},c,\lambda,\lambda}$ is a norm-induced metric over measures whenever the IPM used for regularization is a norm-induced metric.

Remark B.2. Recall that MMD is a valid norm-induced IPM metric whenever the kernel employed is characteristic. Hence, our proof above also shows the metricity of the MMD-regularized UOT (as per corollary 4.2 in the main paper).

Remark B.3. If G is the unit uniform-norm ball (corresponding to TV), our result specializes to that in (Piccoli & Rossi, 2016), which proves that UG,c,λ,λ coincides with the so-called Flat metric (or the bounded Lipschitz distance).

Remark B.4. If the regularizer is the Kantorovich metric³, i.e., G = Wc, and λ1 = λ2 = λ ≥ 1, then $\mathcal{U}_{\mathcal{W}_c,c,\lambda,\lambda}$ coincides with the Kantorovich metric. In other words, Kantorovich-regularized OT is the same as the Kantorovich metric, hence providing an OT interpretation of the Kantorovich metric that is valid for potentially un-normalized measures in R+(X).

Proof. As discussed in Theorem 4.1 and Corollary 4.2, the MMD-regularized UOT (Formulation 6) is an IPM with the generating set as an intersection of the generating sets of the MMD and the KantorovichWasserstein metrics. We now present special cases when MMD-regularized UOT (Formulation 6) recovers back the Kantorovich-Wasserstein metric and the MMD metric.

Recovering Kantorovich. Recall that Gk(λ) = {λg | g ∈ Gk}. From the definition of Gk(λ), f ∈ Gk(λ) =⇒ f ∈ Hk, ∥f∥k ≤ λ. Hence, as λ → ∞, Gk(λ) = Hk. Using this in the duality result of Theorem 4.1, we have the following.

$$\lim_{\lambda\to\infty}\mathcal{U}_{k,c,\lambda,\lambda}(s_{0},t_{0})=\lim_{\lambda\to\infty}\max_{f\in\mathcal{G}_{k}(\lambda)\cap\mathcal{W}_{c}}\int f\,\mathrm{d}s_{0}-\int f\,\mathrm{d}t_{0}=\max_{f\in\mathcal{H}_{k}\cap\mathcal{W}_{c}}\int f\,\mathrm{d}s_{0}-\int f\,\mathrm{d}t_{0}\overset{(1)}{=}\max_{f\in C(\mathcal{X})\cap\mathcal{W}_{c}}\int f\,\mathrm{d}s_{0}-\int f\,\mathrm{d}t_{0}\overset{(2)}{=}\max_{f\in\mathcal{W}_{c}}\int f\,\mathrm{d}s_{0}-\int f\,\mathrm{d}t_{0}.$$

Equality (1) holds because Hk is dense in the set of continuous functions, C(X ). For equality (2), we use that Wc consists of only 1-Lipschitz continuous functions. Thus, ∀s0, t0 ∈ R+(X ), limλ→∞ Uk,c,λ,λ(s0, t0) = Kc(s0, t0).

Recovering MMD. We next show that when 0 < λ1 = λ2 = λ ≤ 1 and the cost metric c is such that $c(x,y) \ge \sqrt{k(x,x)+k(y,y)-2k(x,y)} = \|\phi(x)-\phi(y)\|_k\ \forall x,y$ (the Dominating cost assumption discussed in B.4), then ∀s0, t0 ∈ R+(X), $\mathcal{U}_{k,c,\lambda,\lambda}(s_0,t_0)=\lambda\,\mathrm{MMD}_k(s_0,t_0)$.

³The ground metric in $\mathcal{U}_{\mathcal{G},c,\lambda,\lambda}$ must be the same as that defining the Kantorovich regularizer.

Let f ∈ Gk(λ) =⇒ f = λg where g ∈ Hk, ∥g∥k ≤ 1. This also implies that λg ∈ Hk as λ ∈ (0, 1].

$$\begin{aligned}
|f(x)-f(y)|&=|\langle\lambda g,\phi(x)-\phi(y)\rangle|\quad\text{(RKHS property)}\\
&\le|\langle g,\phi(x)-\phi(y)\rangle|\quad(\because 0<\lambda\le1)\\
&\le\|g\|_{k}\|\phi(x)-\phi(y)\|_{k}\quad\text{(Cauchy-Schwarz)}\\
&\le\|\phi(x)-\phi(y)\|_{k}\quad(\because\|g\|_{k}\le1)\\
&\le c(x,y)\quad\text{(Dominating cost assumption, discussed in B.4)}\\
&\implies f\in\mathcal{W}_{c}.
\end{aligned}$$

Therefore, Gk(λ) ⊆ Wc and hence Gk(λ) ∩ Wc = Gk(λ). This relation, together with the metricity result shown in Corollary 4.2, implies that $\mathcal{U}_{k,c,\lambda,\lambda}(s_0,t_0)=\lambda\,\mathrm{MMD}_k(s_0,t_0)$. In B.4, we show that the Euclidean distance satisfies the dominating cost assumption when the kernel employed is the Gaussian kernel and the inputs lie on a unit-norm ball.

B.4 Dominating Cost Assumption With Euclidean Cost And Gaussian Kernel

We present a sufficient condition for the Dominating cost assumption (used in Corollary 4.3) to be satisfied while using a Euclidean cost and a Gaussian-kernel-based MMD. We consider the characteristic RBF kernel, $k(x,y)=\exp(-s\|x-y\|^2)$, and show that for the hyper-parameter 0 < s ≤ 0.5, the Euclidean cost is greater than the kernel cost when the inputs are normalized, i.e., ∥x∥ = ∥y∥ = 1.

$$\begin{aligned}
\|x-y\|^{2}\geq k(x,x)+k(y,y)-2k(x,y)
&\Longleftrightarrow\|x\|^{2}+\|y\|^{2}-2\langle x,y\rangle\geq2-2k(x,y)\\
&\Longleftrightarrow\langle x,y\rangle\leq\exp\left(-2s(1-\langle x,y\rangle)\right)\quad\text{(assuming normalized inputs)}.
\end{aligned}\tag{17}$$

From the Cauchy-Schwarz inequality, −∥x∥∥y∥ ≤ ⟨x, y⟩ ≤ ∥x∥∥y∥. With the assumption of normalized inputs, we have that −1 ≤ ⟨x, y⟩ ≤ 1. We consider two cases based on this.

Case 1: ⟨x, y⟩ ∈ [−1, 0]. In this case, condition (17) is satisfied ∀ s ≥ 0 because k(x, y) ≥ 0 ∀ x, y with a Gaussian kernel.

Case 2: ⟨x, y⟩ ∈ (0, 1]. In this case, our problem in condition (17) is to find s ≥ 0 such that ln⟨x, y⟩ ≤ −2s(1 − ⟨x, y⟩). We further consider two sub-cases and derive the required condition as follows.

Case 2A: ⟨x, y⟩ ∈ (0, 1/e]. We re-parameterize ⟨x, y⟩ = $e^{-n}$ for n ≥ 1. With this, we need to find s ≥ 0 such that $-n \le -2s(1-e^{-n}) \Longleftrightarrow n \ge 2s(1-e^{-n})$. This is satisfied when 0 < s ≤ 0.5 because $e^{-n} \ge 1-n$.

Case 2B: ⟨x, y⟩ ∈ (1/e, ∞). We re-parameterize ⟨x, y⟩ = $e^{-1/n}$ for n > 1. With this, we need to find s ≥ 0 such that $\frac{1}{n\left(1-e^{-1/n}\right)} \ge 2s$. We consider the function $f(n)=n\left(1-e^{-1/n}\right)$ for n ≥ 1. We now show that f is an increasing function by showing that the gradient $\frac{\mathrm{d}f}{\mathrm{d}n}=1-\left(1+\frac{1}{n}\right)e^{-1/n}$ is always non-negative.

$$\begin{aligned}
\frac{\mathrm{d}f}{\mathrm{d}n}\geq0
&\Longleftrightarrow e^{\frac{1}{n}}\geq1+\frac{1}{n}\\
&\Longleftrightarrow\frac{1}{n}-\ln\left(1+\frac{1}{n}\right)\geq0\\
&\Longleftrightarrow\frac{1}{n}-\left(\ln(n+1)-\ln(n)\right)\geq0.
\end{aligned}$$

Applying the Mean Value Theorem on g(n) = ln n, we get

$$\ln(n+1)-\ln(n)=(n+1-n)\frac{1}{z},\ \text{where }n\leq z\leq n+1\implies\ln\left(1+\frac{1}{n}\right)=\frac{1}{z}\leq\frac{1}{n}\implies\frac{\mathrm{d}f}{\mathrm{d}n}=\frac{1}{n}-\ln\left(1+\frac{1}{n}\right)\geq0.$$

The above shows that f is an increasing function of n. We note that $\lim_{n\to\infty}f(n)=1$; hence, $\frac{1}{f(n)}=\frac{1}{n\left(1-e^{-1/n}\right)}\geq1$, which implies that condition (17) is satisfied by taking 0 < s ≤ 0.5.
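As a quick numerical sanity check of the above (an illustration, not part of the paper's argument), the following sketch samples unit-norm vectors and verifies that the squared Euclidean cost dominates the squared kernel distance for the Gaussian kernel with s = 0.5; the dimension, sample count, and names are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
s = 0.5  # RBF hyper-parameter; the derivation above requires 0 < s <= 0.5

def kernel(x, y):
    """Gaussian kernel k(x, y) = exp(-s * ||x - y||^2)."""
    return np.exp(-s * np.sum((x - y) ** 2))

violations = 0
for _ in range(10000):
    # Unit-norm inputs, as assumed in B.4.
    x = rng.normal(size=5); x /= np.linalg.norm(x)
    y = rng.normal(size=5); y /= np.linalg.norm(y)
    euclid_sq = np.sum((x - y) ** 2)
    kernel_dist_sq = kernel(x, x) + kernel(y, y) - 2 * kernel(x, y)
    if euclid_sq < kernel_dist_sq - 1e-12:
        violations += 1

print("violations of the dominating-cost condition:", violations)  # expected: 0
```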

Corollary 4.4 in the main paper is restated below with the IPM-regularized UOT formulation (13), followed by its proof.

Corollary 4.4. $\mathcal{U}_{\mathcal{G},c,\lambda,\lambda}(s,t) \le \min\left(\lambda\gamma_{\mathcal{G}}(s,t),\ \mathcal{K}_c(s,t)\right)$.

Proof. Theorem 4.1 shows that $\mathcal{U}_{\mathcal{G},c,\lambda,\lambda}$ is an IPM whose generating set is the intersection of the generating sets of the Kantorovich metric and the scaled version of the IPM used for regularization. Thus, from the definition of max, we have that $\mathcal{U}_{\mathcal{G},c,\lambda,\lambda}(s,t) \le \lambda\gamma_{\mathcal{G}}(s,t)$ and $\mathcal{U}_{\mathcal{G},c,\lambda,\lambda}(s,t) \le \mathcal{K}_c(s,t)$. This implies that $\mathcal{U}_{\mathcal{G},c,\lambda,\lambda}(s,t) \le \min\left(\lambda\gamma_{\mathcal{G}}(s,t),\mathcal{K}_c(s,t)\right)$. As a special case, $\mathcal{U}_{k,c,\lambda,\lambda}(s,t) \le \min\left(\lambda\mathrm{MMD}_k(s,t),\mathcal{K}_c(s,t)\right)$.

Corollary 4.5 in the main paper is restated below with the IPM-regularized UOT formulation (13), followed by its proof.

Corollary 4.5. (Weak Metrization) $\mathcal{U}_{\mathcal{G},c,\lambda,\lambda}$ metrizes the weak convergence of normalized measures.

Proof. For convenience of notation, we denote $\mathcal{U}_{\mathcal{G},c,\lambda,\lambda}$ by $\mathcal{U}$. From Corollary 4.4 in the main paper, $0 \le \mathcal{U}(\beta_n,\beta) \le \mathcal{K}_c(\beta_n,\beta)$. By the Sandwich theorem, $\mathcal{U}(\beta_n,\beta)\to 0$ whenever $\beta_n\rightharpoonup\beta$, since $\mathcal{K}_c(\beta_n,\beta)\to 0$ as $\beta_n\rightharpoonup\beta$ by Theorem 6.9 in (Villani, 2009).

Corollary 4.6 in the main paper is restated below with the IPM-regularized UOT formulation (13), followed by its proof.

Corollary 4.6. (Sample Complexity) Let us denote $\mathcal{U}_{\mathcal{G},c,\lambda,\lambda}$, defined in (13), by $\bar{\mathcal{U}}$. Let $\hat{s}_m,\hat{t}_m$ denote the empirical estimates of $s_0,t_0\in\mathcal{R}^+(\mathcal{X})$ respectively with m samples. Then, $\bar{\mathcal{U}}(\hat{s}_m,\hat{t}_m)\to\bar{\mathcal{U}}(s_0,t_0)$ at a rate (apart from constants) same as that of $\gamma_{\mathcal{G}}(\hat{s}_m,s_0)\to 0$.

Proof. We use the metricity of $\bar{\mathcal{U}}$ proved in Corollary 4.2. From the triangle inequality for the metric $\bar{\mathcal{U}}$ and Corollary 4.4 in the main paper, we have that $0 \le |\bar{\mathcal{U}}(\hat{s}_m,\hat{t}_m)-\bar{\mathcal{U}}(s_0,t_0)| \le \bar{\mathcal{U}}(\hat{s}_m,s_0)+\bar{\mathcal{U}}(t_0,\hat{t}_m) \le \lambda\gamma_{\mathcal{G}}(\hat{s}_m,s_0)+\lambda\gamma_{\mathcal{G}}(\hat{t}_m,t_0)$.

Hence, by the Sandwich theorem, $\bar{\mathcal{U}}(\hat{s}_m,\hat{t}_m)\to\bar{\mathcal{U}}(s_0,t_0)$ at the rate at which $\gamma_{\mathcal{G}}(\hat{s}_m,s_0)\to 0$ and $\gamma_{\mathcal{G}}(\hat{t}_m,t_0)\to 0$. If the IPM used for regularization is MMD with a normalized kernel, then $\mathrm{MMD}_k(s_0,\hat{s}_m)\le\sqrt{\frac{1}{m}}+\sqrt{\frac{2\log(1/\delta)}{m}}$ with probability at least $1-\delta$ (Smola et al., 2007).

From the union bound, with probability at least $1-\delta$, $|\bar{\mathcal{U}}(\hat{s}_m,\hat{t}_m)-\bar{\mathcal{U}}(s_0,t_0)| \le 2\lambda\left(\sqrt{\frac{1}{m}}+\sqrt{\frac{2\log(2/\delta)}{m}}\right)$. □
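The following toy simulation is an illustration of this rate (not part of the proof): with a Gaussian kernel, the MMD estimate between two independent m-sample empirical measures of the same distribution roughly halves each time m quadruples, consistent with an $O(1/\sqrt{m})$ decay. The distribution, bandwidth, and names are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mmd_gauss(X, Y, bw=1.0):
    """Biased MMD estimate between uniform empirical measures on X and Y (Gaussian kernel, 1-D)."""
    def gram(A, B):
        return np.exp(-((A[:, None] - B[None, :]) ** 2) / (2 * bw ** 2))
    mmd_sq = gram(X, X).mean() + gram(Y, Y).mean() - 2 * gram(X, Y).mean()
    return np.sqrt(max(mmd_sq, 0.0))

for m in [50, 200, 800, 3200]:
    vals = [mmd_gauss(rng.normal(size=m), rng.normal(size=m)) for _ in range(20)]
    print(m, np.mean(vals))   # should shrink roughly like 1/sqrt(m)
```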

B.8 Proof Of Theorem 4.8

We first restate the standard Moreau-Rockafellar theorem, which we refer to in this discussion.

Theorem B2. Let X be a real Banach space and f, g : X → R ∪ {∞} be closed convex functions such that dom(f) ∩ dom(g) is not empty. Then: $(f+g)^*(y)=\min_{x_1+x_2=y}f^*(x_1)+g^*(x_2)\ \forall y\in X^*$. Here, $f^*$ is the Fenchel conjugate of f, and $X^*$ is the topological dual space of X.

Theorem 4.8 in the main paper is restated below with the IPM-regularized UOT formulation 13, followed by its proof.

Theorem 4.8. In addition to the assumptions in Theorem 4.1, if c is a valid metric, then

$$\mathcal{U}_{\mathcal{G},c,\lambda_{1},\lambda_{2}}\left(s_{0},t_{0}\right)=\min_{s,t\in\mathcal{R}(\mathcal{X})}\ \mathcal{K}_{c}(s,t)+\lambda_{1}\gamma_{\mathcal{G}}(s,s_{0})+\lambda_{2}\gamma_{\mathcal{G}}(t,t_{0}).\tag{18}$$

Proof. Firstly, the result in the theorem is not straightforward and is not a consequence of Kantorovich-Rubinstein duality. This is because the regularization terms in our original formulation (13, 16) enforce closeness to the marginals of a transport plan, which hence necessarily must be of the same mass and must belong to R+(X). Whereas in the RHS of (18), the regularization terms enforce closeness to marginals that belong to R(X) and, more importantly, they could be of different masses.

We begin the proof by considering indicator functions $F_c$ and $F_{\mathcal{G}}$ defined over $C(\mathcal{X})\times C(\mathcal{X})$ as:
$$F_c(f,g)=\begin{cases}0&\text{if }f(x)+g(y)\le c(x,y)\ \forall\ x,y\in\mathcal{X},\\ \infty&\text{otherwise,}\end{cases}\qquad F_{\mathcal{G},\lambda_1,\lambda_2}(f,g)=\begin{cases}0&\text{if }f\in\mathcal{G}(\lambda_1),\ g\in\mathcal{G}(\lambda_2),\\ \infty&\text{otherwise.}\end{cases}$$

Recall that the topological dual of $C(\mathcal{X})$ is the set of regular Radon measures $\mathcal{R}(\mathcal{X})$, with the duality product $\langle f,s\rangle\equiv\int f\,\mathrm{d}s\ \forall\ f\in C(\mathcal{X}),\ s\in\mathcal{R}(\mathcal{X})$. Now, from the definition of the Fenchel conjugate in the (direct sum) space $C(\mathcal{X})\oplus C(\mathcal{X})$, we have: $F_c^*(s,t)=\max_{f\in C(\mathcal{X}),g\in C(\mathcal{X})}\int f\,\mathrm{d}s+\int g\,\mathrm{d}t$, s.t. $f(x)+g(y)\le c(x,y)\ \forall\ x,y\in\mathcal{X}$, where $s,t\in\mathcal{R}(\mathcal{X})$. Under the assumptions that $\mathcal{X}$ is compact and c is a continuous metric, Proposition 6.1 in (Peyré & Cuturi, 2019) shows that $F_c^*(s,t)=\max_{f\in\mathcal{W}_c}\int f\,\mathrm{d}s-\int f\,\mathrm{d}t=\mathcal{K}_c(s,t)$.

On the other hand, $F_{\mathcal{G},\lambda_1,\lambda_2}^*(s,t)=\max_{f\in\mathcal{G}(\lambda_1)}\int f\,\mathrm{d}s+\max_{g\in\mathcal{G}(\lambda_2)}\int g\,\mathrm{d}t=\lambda_1\gamma_{\mathcal{G}}(s,0)+\lambda_2\gamma_{\mathcal{G}}(t,0)$. Now, we have that the RHS of (18) is $\min_{s,t,s_1,t_1\in\mathcal{R}(\mathcal{X}):(s,t)+(s_1,t_1)=(s_0,t_0)}F_c^*(s,t)+F_{\mathcal{G},\lambda_1,\lambda_2}^*(s_1,t_1)$. This is because $\gamma_{\mathcal{G}}(s_0-s,0)=\gamma_{\mathcal{G}}(s_0,s)$. Now, observe that the indicator functions $F_{\mathcal{G},\lambda_1,\lambda_2},F_c$ are closed, convex functions because their domains are closed, convex sets. Indeed, $\mathcal{G}$ is a closed, convex set by Assumptions A.1, A.2. Also, it is simple to verify that the set $\{(f,g)\mid f(x)+g(y)\le c(x,y)\ \forall\ x,y\in\mathcal{X}\}$ is closed and convex. Hence, by applying the Moreau-Rockafellar formula (Theorem B2), we have that the RHS of (18) is equal to $(F_c+F_{\mathcal{G},\lambda_1,\lambda_2})^*(s_0,t_0)$. But from the definition of the conjugate, we have that $(F_c+F_{\mathcal{G},\lambda_1,\lambda_2})^*(s_0,t_0)\equiv\max_{f\in C(\mathcal{X}),g\in C(\mathcal{X})}\int_{\mathcal{X}}f\,\mathrm{d}s_0+\int_{\mathcal{X}}g\,\mathrm{d}t_0-F_c(f,g)-F_{\mathcal{G},\lambda_1,\lambda_2}(f,g)$. Finally, from the definition of the indicator functions $F_c,F_{\mathcal{G},\lambda_1,\lambda_2}$, this is the same as the final RHS in (15). Hence proved.

Remark B.5. Whenever the kernel, k, employed is continuous, the generating set of the corresponding MMD satisfies Assumptions A.1, A.2 and Gk ⊆ C(X). Hence, the above proof also works in our case of MMD-UOT.

B.9 Proof Of Theorem 4.10: Consistency Of The Proposed Estimator

Proof. From the triangle inequality,
$$|\hat{\mathcal{U}}_m(\hat{s}_m,\hat{t}_m)-\bar{\mathcal{U}}(s_0,t_0)|\le|\hat{\mathcal{U}}_m(\hat{s}_m,\hat{t}_m)-\hat{\mathcal{U}}_m(s_0,t_0)|+|\hat{\mathcal{U}}_m(s_0,t_0)-\bar{\mathcal{U}}(s_0,t_0)|,\tag{19}$$
where $\hat{\mathcal{U}}_m(s_0,t_0)$ is the same as $\bar{\mathcal{U}}(s_0,t_0)$ except that it employs the restricted feasibility set, $\mathcal{F}(\hat{s}_m,\hat{t}_m)$, for the transport plan: the set of all joints supported on the samples in $\hat{s}_m,\hat{t}_m$ alone, i.e., $\mathcal{F}(\hat{s}_m,\hat{t}_m)\equiv\left\{\sum_{i=1}^{m}\sum_{j=1}^{m}\alpha_{ij}\delta_{(x_{1i},x_{2j})}\mid\alpha_{ij}\ge0\ \forall\ i,j=1,\dots,m\right\}$. Here, $\delta_z$ is the Dirac measure at z. We begin by bounding the first term in the RHS of (19).

We denote the (common) objective in Uˆm(·, ·), U¯(·, ·) as a function of the transport plan, π, by h(π, ·, ·).

Then,

$$\begin{aligned}
\hat{\mathcal{U}}_m(\hat{s}_m,\hat{t}_m)-\hat{\mathcal{U}}_m(s_0,t_0)&=\min_{\pi\in\mathcal{F}(\hat{s}_m,\hat{t}_m)}h(\pi,\hat{s}_m,\hat{t}_m)-\min_{\pi\in\mathcal{F}(\hat{s}_m,\hat{t}_m)}h(\pi,s_0,t_0)\\
&\le h(\pi^{0*},\hat{s}_m,\hat{t}_m)-h(\pi^{0*},s_0,t_0)\quad\left(\text{where }\pi^{0*}=\arg\min_{\pi\in\mathcal{F}(\hat{s}_m,\hat{t}_m)}h(\pi,s_0,t_0)\right)\\
&=\lambda_1\left(\mathrm{MMD}_k(\pi_1^{0*},\hat{s}_m)-\mathrm{MMD}_k(\pi_1^{0*},s_0)\right)+\lambda_2\left(\mathrm{MMD}_k(\pi_2^{0*},\hat{t}_m)-\mathrm{MMD}_k(\pi_2^{0*},t_0)\right)\\
&\le\lambda_1\mathrm{MMD}_k(s_0,\hat{s}_m)+\lambda_2\mathrm{MMD}_k(t_0,\hat{t}_m)\quad(\because\mathrm{MMD}_k\text{ satisfies the triangle inequality}).
\end{aligned}$$

Similarly, one can show that $\hat{\mathcal{U}}_m(s_0,t_0)-\hat{\mathcal{U}}_m(\hat{s}_m,\hat{t}_m)\le\lambda_1\mathrm{MMD}_k(s_0,\hat{s}_m)+\lambda_2\mathrm{MMD}_k(t_0,\hat{t}_m)$. Now, (Muandet et al., 2017, Theorem 3.4) shows that, with probability at least $1-\delta$, $\mathrm{MMD}_k(s_0,\hat{s}_m)\le\sqrt{\frac{1}{m}}+\sqrt{\frac{2\log(1/\delta)}{m}}$, where k is a normalized kernel. Hence, the first term in inequality (19) is upper-bounded by $(\lambda_1+\lambda_2)\left(\sqrt{\frac{1}{m}}+\sqrt{\frac{2\log(2/\delta)}{m}}\right)$, with probability at least $1-\delta$.

We next look at the second term in inequality (19): $|\hat{\mathcal{U}}_m(s_0,t_0)-\bar{\mathcal{U}}(s_0,t_0)|$. Let $\bar{\pi}^m$ be the optimal transport plan in the definition of $\hat{\mathcal{U}}_m(s_0,t_0)$. Let $\pi^*$ be the optimal transport plan in the definition of $\bar{\mathcal{U}}(s_0,t_0)$. Consider another transport plan $\hat{\pi}^m\in\mathcal{F}(\hat{s}_m,\hat{t}_m)$ such that $\hat{\pi}^m(x_i,y_j)=\frac{\eta(x_i,y_j)}{m^2}$ where $\eta(x_i,y_j)=\frac{\pi^*(x_i,y_j)}{s_0(x_i)t_0(y_j)}$, for $i,j\in[1,m]$.

$$\begin{aligned}
|\hat{\mathcal{U}}_m(s_0,t_0)-\bar{\mathcal{U}}(s_0,t_0)|&=\hat{\mathcal{U}}_m(s_0,t_0)-\bar{\mathcal{U}}(s_0,t_0)=h(\bar{\pi}^m,s_0,t_0)-h(\pi^*,s_0,t_0)\\
&\le h(\hat{\pi}^m,s_0,t_0)-h(\pi^*,s_0,t_0)\quad(\because\bar{\pi}^m\text{ is optimal})\\
&\le\int c\,\mathrm{d}\hat{\pi}^m-\int c\,\mathrm{d}\pi^*+\lambda_1\|\mu_k(\hat{\pi}^m_1)-\mu_k(\pi^*_1)\|_k+\lambda_2\|\mu_k(\hat{\pi}^m_2)-\mu_k(\pi^*_2)\|_k\quad(\because\text{triangle inequality}).
\end{aligned}$$

To upper bound these terms, we utilize the fact that the RKHS, $\mathcal{H}_k$, corresponding to a c-universal kernel, k, is dense in $C(\mathcal{X})$ wrt. the sup-norm (Sriperumbudur et al., 2011), and likewise the direct-product space, $\mathcal{H}_k\otimes\mathcal{H}_k$, is dense in $C(\mathcal{X}\times\mathcal{X})$ (Gretton, 2015). Given any $f\in C(\mathcal{X})\times C(\mathcal{X})$ and arbitrarily small $\epsilon>0$, we denote by $f_\epsilon,f_{-\epsilon}$ the functions in $\mathcal{H}_k\otimes\mathcal{H}_k$ that satisfy the condition:

fϵ/2fϵffϵf+ϵ/2.f-\epsilon/2\leq f_{-\epsilon}\leq f\leq f_{\epsilon}\leq f+\epsilon/2.

Such an fϵ ∈ Hk ⊗ Hk will exist because: i) f + ϵ/4 ∈ C(X ) × C(X ) and ii) Hk ⊗ Hk ⊆ C(X ) × C(X ) is dense. So there must exist some fϵ ∈ Hk ⊗ Hk such that |f(x, y) + ϵ/4 − fϵ(x, y)| ≤ ϵ/4 ∀ x, y ∈ X ⇐⇒ f(x, y) ≤ fϵ(x, y) ≤ f(x, y) + ϵ/2 ∀ x, y ∈ X . Analogously, f−ϵ exists. In other words, fϵ, f−ϵ ∈ Hk ⊗ Hk are arbitrarily close upper-bound (majorant), lower-bound (minorant) of f ∈ C(X ) × C(X ).

We now upper-bound the first of the set of terms (denote s0(x)t0(y) by ξ(x, y) and ˆξ m(x, y) is the corresponding empirical measure):

$$\begin{aligned}
\int c\,\mathrm{d}\hat{\pi}^m-\int c\,\mathrm{d}\pi^*&\le\int c_\epsilon\,\mathrm{d}\hat{\pi}^m-\int c_{-\epsilon}\,\mathrm{d}\pi^*=\langle c_\epsilon,\mu_k(\hat{\pi}^m)\rangle-\langle c_{-\epsilon},\mu_k(\pi^*)\rangle\\
&=\langle c_\epsilon,\mu_k(\hat{\pi}^m)\rangle-\langle c_\epsilon,\mu_k(\pi^*)\rangle+\langle c_\epsilon,\mu_k(\pi^*)\rangle-\langle c_{-\epsilon},\mu_k(\pi^*)\rangle\\
&=\langle c_\epsilon,\mu_k(\hat{\pi}^m)-\mu_k(\pi^*)\rangle+\langle c_\epsilon-c_{-\epsilon},\mu_k(\pi^*)\rangle\\
&\le\langle c_\epsilon,\mu_k(\hat{\pi}^m)-\mu_k(\pi^*)\rangle+\epsilon\sigma_{\pi^*}\quad(\because\|c_\epsilon-c_{-\epsilon}\|_\infty\le\epsilon;\ \sigma_s\text{ denotes the mass of measure }s)\\
&\le\|c_\epsilon\|_k\|\mu_k(\hat{\pi}^m)-\mu_k(\pi^*)\|_k+\epsilon\sigma_{\pi^*}.
\end{aligned}$$

One can obtain the tightest upper bound by choosing $c_\epsilon\equiv\arg\min_{v\in\mathcal{H}_k\otimes\mathcal{H}_k}\|v\|_k$ s.t. $c\le v\le c+\epsilon/2$. Accordingly, we replace $\|c\|_k$ by $g(\epsilon)$ in the theorem statement⁴. Further, we have:

$$\begin{aligned}
\|\mu_k(\hat{\pi}^m)-\mu_k(\pi^*)\|_k^2&=\left\|\int\phi_k(x)\otimes\phi_k(y)\,\mathrm{d}\hat{\pi}^m(x,y)-\int\phi_k(x)\otimes\phi_k(y)\,\mathrm{d}\pi^*(x,y)\right\|_k^2\\
&=\left\|\int\phi_k(x)\otimes\phi_k(y)\,\mathrm{d}\left(\hat{\pi}^m(x,y)-\pi^*(x,y)\right)\right\|_k^2\\
&=\left\langle\int\phi_k(x)\otimes\phi_k(y)\,\mathrm{d}\left(\hat{\pi}^m(x,y)-\pi^*(x,y)\right),\int\phi_k(x')\otimes\phi_k(y')\,\mathrm{d}\left(\hat{\pi}^m(x',y')-\pi^*(x',y')\right)\right\rangle\\
&=\left\langle\int\phi_k(x)\otimes\phi_k(y)\,\eta(x,y)\,\mathrm{d}\left(\hat{\xi}^m(x,y)-\xi(x,y)\right),\int\phi_k(x')\otimes\phi_k(y')\,\eta(x',y')\,\mathrm{d}\left(\hat{\xi}^m(x',y')-\xi(x',y')\right)\right\rangle\\
&=\int\int\langle\phi_k(x)\otimes\phi_k(y),\phi_k(x')\otimes\phi_k(y')\rangle\,\eta(x,y)\eta(x',y')\,\mathrm{d}\left(\hat{\xi}^m(x,y)-\xi(x,y)\right)\mathrm{d}\left(\hat{\xi}^m(x',y')-\xi(x',y')\right)\\
&=\int\int\langle\phi_k(x),\phi_k(x')\rangle\langle\phi_k(y),\phi_k(y')\rangle\,\eta(x,y)\eta(x',y')\,\mathrm{d}\left(\hat{\xi}^m(x,y)-\xi(x,y)\right)\mathrm{d}\left(\hat{\xi}^m(x',y')-\xi(x',y')\right)\\
&=\int\int k(x,x')k(y,y')\,\eta(x,y)\eta(x',y')\,\mathrm{d}\left(\hat{\xi}^m(x,y)-\xi(x,y)\right)\mathrm{d}\left(\hat{\xi}^m(x',y')-\xi(x',y')\right).
\end{aligned}$$

Now, observe that $\tilde{k}:\mathcal{X}\times\mathcal{X}\times\mathcal{X}\times\mathcal{X}$ defined by $\tilde{k}((x,y),(x',y'))\equiv k(x,x')k(y,y')\eta(x,y)\eta(x',y')$ is a valid kernel. This is because $\tilde{k}=k_ak_bk_c$, where $k_a((x,y),(x',y'))\equiv k(x,x')$ is a kernel, $k_b((x,y),(x',y'))\equiv k(y,y')$ is a kernel, and $k_c((x,y),(x',y'))\equiv\eta(x,y)\eta(x',y')$ is a kernel (the unit-rank kernel), and a product of kernels is indeed a kernel. Let $\psi(x,y)$ be the feature map corresponding to $\tilde{k}$. Then, the final RHS in the above set of equations is:

$$=\int\int\langle\psi(x,y),\psi(x',y')\rangle\,\mathrm{d}\left(\hat{\xi}^{m}(x,y)-\xi(x,y)\right)\mathrm{d}\left(\hat{\xi}^{m}(x',y')-\xi(x',y')\right)=\left\langle\int\psi(x,y)\,\mathrm{d}\left(\hat{\xi}^{m}(x,y)-\xi(x,y)\right),\int\psi(x',y')\,\mathrm{d}\left(\hat{\xi}^{m}(x',y')-\xi(x',y')\right)\right\rangle.$$

Hence, we have that $\|\mu_k(\hat{\pi}^m)-\mu_k(\pi^*)\|_k=\left\|\mu_{\tilde{k}}(\hat{\xi}^m)-\mu_{\tilde{k}}(\xi)\right\|_{\tilde{k}}$. Again, using (Muandet et al., 2017, Theorem 3.4), with probability at least $1-\delta$, $\left\|\mu_{\tilde{k}}(\hat{\xi}^m)-\mu_{\tilde{k}}(\xi)\right\|_{\tilde{k}}\le\sqrt{\frac{C_{\tilde{k}}}{m}}+\sqrt{\frac{2C_{\tilde{k}}\log(1/\delta)}{m}}$, where $C_{\tilde{k}}=\max_{x,y,x',y'\in\mathcal{X}}\tilde{k}((x,y),(x',y'))$. Note that $C_{\tilde{k}}<\infty$ as $\mathcal{X}$ is compact, $s_0,t_0$ are assumed to be positive measures, and k is normalized. Now the MMD-regularizer terms can be bounded using a similar strategy. Recall that $\hat{\pi}^m_1(x_i)=\sum_{j=1}^{m}\frac{\pi^*(x_i,y_j)}{m^2s_0(x_i)t_0(y_j)}$, so we have the following.

$$\begin{aligned}
\|\mu_k(\hat{\pi}^m_1)-\mu_k(\pi^*_1)\|_k^2&=\left\|\int\phi_k(x)\,\mathrm{d}\hat{\pi}^m_1(x)-\int\phi_k(x)\,\mathrm{d}\pi^*_1(x)\right\|_k^2=\left\|\int\phi_k(x)\,\mathrm{d}\left(\hat{\pi}^m_1(x)-\pi^*_1(x)\right)\right\|_k^2\\
&=\left\langle\int\phi_k(x)\,\mathrm{d}\left(\hat{\pi}^m_1(x)-\pi^*_1(x)\right),\int\phi_k(x')\,\mathrm{d}\left(\hat{\pi}^m_1(x')-\pi^*_1(x')\right)\right\rangle\\
&=\left\langle\int\phi_k(x)\,\eta(x,y)\,\mathrm{d}\left(\hat{\xi}^m(x,y)-\xi(x,y)\right),\int\phi_k(x')\,\eta(x',y')\,\mathrm{d}\left(\hat{\xi}^m(x',y')-\xi(x',y')\right)\right\rangle\\
&=\int\int\langle\phi_k(x),\phi_k(x')\rangle\,\eta(x,y)\eta(x',y')\,\mathrm{d}\left(\hat{\xi}^m(x,y)-\xi(x,y)\right)\mathrm{d}\left(\hat{\xi}^m(x',y')-\xi(x',y')\right)\\
&=\int\int k(x,x')\,\eta(x,y)\eta(x',y')\,\mathrm{d}\left(\hat{\xi}^m(x,y)-\xi(x,y)\right)\mathrm{d}\left(\hat{\xi}^m(x',y')-\xi(x',y')\right).
\end{aligned}$$

Now, observe that $\bar{k}:\mathcal{X}\times\mathcal{X}\times\mathcal{X}\times\mathcal{X}$ defined by $\bar{k}((x,y),(x',y'))\equiv k(x,x')\eta(x,y)\eta(x',y')$ is a valid kernel. This is because $\bar{k}=k_1k_2$, where $k_1((x,y),(x',y'))\equiv k(x,x')$ is a kernel and $k_2((x,y),(x',y'))\equiv\eta(x,y)\eta(x',y')$ is a kernel (the unit-rank kernel), and a product of kernels is indeed a kernel. Hence, we have that $\|\mu_k(\hat{\pi}^m_1)-\mu_k(\pi^*_1)\|_k=\left\|\mu_{\bar{k}}(\hat{\xi}^m)-\mu_{\bar{k}}(\xi)\right\|_{\bar{k}}$. Similarly, we have $\|\mu_k(\hat{\pi}^m_2)-\mu_k(\pi^*_2)\|_k=\left\|\mu_{\bar{k}}(\hat{\xi}^m)-\mu_{\bar{k}}(\xi)\right\|_{\bar{k}}$. Again, using (Muandet et al., 2017, Theorem 3.4), with probability at least $1-\delta$,
$$\left\|\mu_{\bar{k}}(\hat{\xi}^m)-\mu_{\bar{k}}(\xi)\right\|_{\bar{k}}\le\sqrt{\frac{C_{\bar{k}}}{m}}+\sqrt{\frac{2C_{\bar{k}}\log(1/\delta)}{m}},$$
where $C_{\bar{k}}=\max_{x,y,x',y'\in\mathcal{X}}\bar{k}((x,y),(x',y'))$. Note that $C_{\bar{k}}<\infty$ as $\mathcal{X}$ is compact, $s_0,t_0$ are assumed to be positive measures, and k is normalized. From the union bound, we have, with probability at least $1-\delta$:
$$\hat{\mathcal{U}}_m(\hat{s}_m,\hat{t}_m)-\bar{\mathcal{U}}(s_0,t_0)\le(\lambda_1+\lambda_2)\left(\sqrt{\frac{1}{m}}+\sqrt{\frac{2\log(5/\delta)}{m}}+\sqrt{\frac{C_{\bar{k}}}{m}}+\sqrt{\frac{2C_{\bar{k}}\log(5/\delta)}{m}}\right)+g(\epsilon)\left(\sqrt{\frac{C_{\tilde{k}}}{m}}+\sqrt{\frac{2C_{\tilde{k}}\log(5/\delta)}{m}}\right)+\epsilon\sigma_{\pi^*}.$$
In other words, w.h.p. we have $\hat{\mathcal{U}}_m(\hat{s}_m,\hat{t}_m)-\bar{\mathcal{U}}(s_0,t_0)\le O\left(\frac{\lambda_1+\lambda_2}{\sqrt{m}}+\frac{g(\epsilon)}{m}+\epsilon\sigma_{\pi^*}\right)$ for any $\epsilon>0$. Hence proved.

⁴This leads to a slightly weaker bound, but we prefer it for ease of presentation.

B.9.1 Bounding g(ϵ)

Let the target function to be approximated be $h^*\in C(\mathcal{X})\subset\mathcal{L}^2(\mathcal{X})$, the set of square-integrable functions (wrt. some measure). Since $\mathcal{X}$ is compact and k is c-universal, k is also $\mathcal{L}^2$-universal.

Consider the inclusion map $\iota:\mathcal{H}_k\to\mathcal{L}^2(\mathcal{X})$, defined by $\iota g=g$. Let us denote the adjoint of $\iota$ by $\iota^*$. Consider the regularized least-squares approximation of $h^*$ defined by $h_t\equiv(\iota^*\iota+t)^{-1}\iota^*h^*\in\mathcal{H}_k$, where $t>0$. Now, using standard results, we have:

$$\begin{aligned}
\|\iota h_{t}-h^{*}\|_{\mathcal{L}^{2}}&=\left\|\left(\iota(\iota^{*}\iota+t)^{-1}\iota^{*}-I\right)h^{*}\right\|_{\mathcal{L}^{2}}\\
&=\left\|\left(\iota\,\iota^{*}(\iota\,\iota^{*}+t)^{-1}-I\right)h^{*}\right\|_{\mathcal{L}^{2}}\\
&=\left\|\left(\iota\,\iota^{*}(\iota\,\iota^{*}+t)^{-1}-(\iota\,\iota^{*}+t)(\iota\,\iota^{*}+t)^{-1}\right)h^{*}\right\|_{\mathcal{L}^{2}}\\
&=t\left\|\left(\iota\,\iota^{*}+t\right)^{-1}h^{*}\right\|_{\mathcal{L}^{2}}\\
&\leq t\left\|\left(\iota\,\iota^{*}\right)^{-1}h^{*}\right\|_{\mathcal{L}^{2}}.
\end{aligned}$$

The last inequality is true because the operator $\iota\,\iota^*$ is PD and $t>0$. Thus, if $t\equiv\hat{t}=\frac{\epsilon}{\|(\iota\,\iota^*)^{-1}h^*\|_{\mathcal{L}^2}}$, then $\|\iota h_{\hat{t}}-h^*\|_\infty\le\|\iota h_{\hat{t}}-h^*\|_{\mathcal{L}^2}\le\epsilon$. Clearly,

$$\begin{aligned}
g(\epsilon)\le\|h_{\hat{t}}\|_{\mathcal{H}_k}&=\sqrt{\langle h_{\hat{t}},h_{\hat{t}}\rangle_{\mathcal{H}_k}}=\sqrt{\langle(\iota^*\iota+\hat{t})^{-1}\iota^*h^*,(\iota^*\iota+\hat{t})^{-1}\iota^*h^*\rangle_{\mathcal{H}_k}}\\
&=\sqrt{\langle\iota^*(\iota\,\iota^*+\hat{t})^{-1}h^*,\iota^*(\iota\,\iota^*+\hat{t})^{-1}h^*\rangle_{\mathcal{H}_k}}=\sqrt{\langle(\iota\,\iota^*+\hat{t})^{-1}\iota\,\iota^*(\iota\,\iota^*+\hat{t})^{-1}h^*,h^*\rangle_{\mathcal{L}^2}}\\
&=\sqrt{\langle(\iota\,\iota^*)^{\frac{1}{2}}(\iota\,\iota^*+\hat{t})^{-1}h^*,(\iota\,\iota^*)^{\frac{1}{2}}(\iota\,\iota^*+\hat{t})^{-1}h^*\rangle_{\mathcal{L}^2}}=\left\|(\iota\,\iota^*)^{\frac{1}{2}}(\iota\,\iota^*+\hat{t})^{-1}h^*\right\|_{\mathcal{L}^2}.
\end{aligned}$$

Now, consider the spectral function $f(\lambda)=\frac{\lambda^{\frac{1}{2}}}{\lambda+\hat{t}}$. This is maximized when $\lambda=\hat{t}$. Hence, $f(\lambda)\le\frac{1}{2\sqrt{\hat{t}}}$. Thus, $g(\epsilon)\le\frac{\|h^*\|_{\mathcal{L}^2}\sqrt{\|(\iota\,\iota^*)^{-1}h^*\|_{\mathcal{L}^2}}}{2\sqrt{\epsilon}}$. Therefore, if $\epsilon$ decays as $\frac{1}{m^{2/3}}$, then $\frac{g(\epsilon)}{m}\le O\left(\frac{1}{m^{2/3}}\right)$.

B.10 Solving Problem (9) Using Mirror Descent

Problem (9) is an instance of a convex program and can be solved using Mirror Descent (Ben-Tal & Nemirovski, 2021), presented in Algorithm 2.

Algorithm 2 Mirror Descent for solving Problem (9)

Require: Initial α1 ≥ 0, max iterations N, and the objective
$$f(\alpha)=\mathrm{Tr}\left(\alpha C_{12}^{\top}\right)+\lambda_{1}\left\|\alpha\mathbf{1}-\frac{\sigma_{1}}{m_{1}}\mathbf{1}\right\|_{G_{11}}+\lambda_{2}\left\|\alpha^{\top}\mathbf{1}-\frac{\sigma_{2}}{m_{2}}\mathbf{1}\right\|_{G_{22}}.$$

for i ← 1 to N do
  if ∥∇f(αi)∥ ≠ 0 then
    si = 1/∥∇f(αi)∥∞
  else
    return αi
  end if
  αi+1 = αi ⊙ e^(−si ∇f(αi))
end for
return αi+1.
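For concreteness, here is a minimal numpy sketch (not the authors' implementation) of Algorithm 2's multiplicative update. The toy cost matrix, Gram matrices, masses, and function names are illustrative placeholders, and a small eps guards the gradient of the unsquared MMD norm at zero.

```python
import numpy as np

def mmd_norm_grad(v, G, eps=1e-12):
    """Gradient of ||v||_G = sqrt(v^T G v), i.e. G v / ||v||_G (with a small guard)."""
    norm = np.sqrt(max(v @ G @ v, 0.0)) + eps
    return G @ v / norm

def grad_f(alpha, C12, G11, G22, a, b, lam1, lam2):
    """Gradient of f(alpha) = <alpha, C12> + lam1*||alpha 1 - a||_G11 + lam2*||alpha^T 1 - b||_G22."""
    g1 = mmd_norm_grad(alpha.sum(axis=1) - a, G11)
    g2 = mmd_norm_grad(alpha.sum(axis=0) - b, G22)
    return C12 + lam1 * np.outer(g1, np.ones(alpha.shape[1])) \
               + lam2 * np.outer(np.ones(alpha.shape[0]), g2)

def mirror_descent(C12, G11, G22, a, b, lam1=1.0, lam2=1.0, iters=500):
    alpha = np.ones_like(C12) / C12.size          # initial alpha_1 >= 0
    for _ in range(iters):
        g = grad_f(alpha, C12, G11, G22, a, b, lam1, lam2)
        if np.linalg.norm(g) == 0:
            return alpha
        step = 1.0 / np.abs(g).max()              # s_i = 1 / ||grad||_inf
        alpha = alpha * np.exp(-step * g)         # multiplicative update keeps alpha >= 0
    return alpha

# Toy placeholders: random points, Gaussian Gram matrices, uniform masses (sigma_1 = sigma_2 = 1).
rng = np.random.default_rng(0)
m1, m2 = 4, 5
X, Y = rng.normal(size=(m1, 2)), rng.normal(size=(m2, 2))
C12 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
G11 = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
G22 = np.exp(-((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1))
a, b = np.full(m1, 1.0 / m1), np.full(m2, 1.0 / m2)
print(mirror_descent(C12, G11, G22, a, b).sum())
```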

B.11 Equivalence Between Problems (9) And (10)

We comment on the equivalence between Problems (9) and (10) based on the equivalence of their Ivanov forms: Ivanov form for Problem (9) is

$$\min_{\alpha\geq0\in\mathbb{R}^{m_{1}\times m_{2}}}\mathrm{Tr}\left(\alpha C_{12}^{\top}\right)\ \mathrm{s.t.}\ \left\|\alpha\mathbf{1}-\frac{\sigma_{1}}{m_{1}}\mathbf{1}\right\|_{G_{11}}\leq r_{1},\ \left\|\alpha^{\top}\mathbf{1}-\frac{\sigma_{2}}{m_{2}}\mathbf{1}\right\|_{G_{22}}\leq r_{2},$$

where r1, r2 > 0.

Similarly, the Ivanov form for Problem (10) is

$$\min_{\alpha\geq0\in\mathbb{R}^{m_{1}\times m_{2}}}\mathrm{Tr}\left(\alpha C_{12}^{\top}\right)\ \mathrm{s.t.}\ \left\|\alpha\mathbf{1}-\frac{\sigma_{1}}{m_{1}}\mathbf{1}\right\|_{G_{11}}^{2}\leq\bar{r}_{1},\ \left\|\alpha^{\top}\mathbf{1}-\frac{\sigma_{2}}{m_{2}}\mathbf{1}\right\|_{G_{22}}^{2}\leq\bar{r}_{2},$$

where $\bar{r}_1,\bar{r}_2>0$. As we can see, the Ivanov forms are the same with $\bar{r}_1=r_1^2,\ \bar{r}_2=r_2^2$; hence, the solutions obtained for Problems (9) and (10) are the same.

B.12 Proof Of Lemma 4.11

Proof. Let f(α) denote the objective of Problem (10), let G11, G22 be the Gram matrices over the source and target samples, respectively, and let m1, m2 be the number of source and target samples, respectively.

$$\nabla f(\alpha)=\mathcal{C}_{12}+2\left(\lambda_{1}G_{11}\left(\alpha\mathbf{1}_{m_{2}}-\frac{\sigma_{1}}{m_{1}}\mathbf{1}_{m_{1}}\right)\mathbf{1}_{m_{2}}^{\top}+\lambda_{2}\mathbf{1}_{m_{1}}\left(\mathbf{1}_{m_{1}}^{\top}\alpha-\mathbf{1}_{m_{2}}^{\top}\frac{\sigma_{2}}{m_{2}}\right)G_{22}\right).$$

We now derive the Lipschitz constant of this gradient.

$$\begin{aligned}
\nabla f(\alpha)-\nabla f(\beta)&=2\left(\lambda_{1}G_{11}\left(\alpha-\beta\right)\mathbf{1}_{m_{2}}\mathbf{1}_{m_{2}}^{\top}+\lambda_{2}\mathbf{1}_{m_{1}}\mathbf{1}_{m_{1}}^{\top}\left(\alpha-\beta\right)G_{22}\right)\\
\mathrm{vec}\left(\nabla f(\alpha)-\nabla f(\beta)\right)&=2\left(\lambda_{1}\mathrm{vec}\left(G_{11}\left(\alpha-\beta\right)\mathbf{1}_{m_{2}}\mathbf{1}_{m_{2}}^{\top}\right)+\lambda_{2}\mathrm{vec}\left(\mathbf{1}_{m_{1}}\mathbf{1}_{m_{1}}^{\top}\left(\alpha-\beta\right)G_{22}\right)\right)\\
&=2\left(\lambda_{1}\mathbf{1}_{m_{2}}\mathbf{1}_{m_{2}}^{\top}\otimes G_{11}+\lambda_{2}G_{22}\otimes\mathbf{1}_{m_{1}}\mathbf{1}_{m_{1}}^{\top}\right)\mathrm{vec}(\alpha-\beta),
\end{aligned}$$
where $\otimes$ denotes the Kronecker product and we use $\mathrm{vec}(AXB)=(B^{\top}\otimes A)\mathrm{vec}(X)$ together with the symmetry of $\mathbf{1}\mathbf{1}^{\top}$ and $G_{22}$. Hence,

$$\|\nabla f(\alpha)-\nabla f(\beta)\|_{F}=\left\|\mathrm{vec}\left(\nabla f(\alpha)-\nabla f(\beta)\right)\right\|_{F}\le2\left\|\lambda_{1}\mathbf{1}_{m_{2}}\mathbf{1}_{m_{2}}^{\top}\otimes G_{11}+\lambda_{2}G_{22}\otimes\mathbf{1}_{m_{1}}\mathbf{1}_{m_{1}}^{\top}\right\|_{F}\|\mathrm{vec}(\alpha-\beta)\|_{F}\quad\text{(Cauchy-Schwarz)}.$$

This implies the Lipschitz smoothness constant

$$\begin{aligned}
L&=2\left\|\lambda_{1}\mathbf{1}_{m_{2}}\mathbf{1}_{m_{2}}^{\top}\otimes G_{11}+\lambda_{2}G_{22}\otimes\mathbf{1}_{m_{1}}\mathbf{1}_{m_{1}}^{\top}\right\|_{F}\\
&=2\sqrt{(\lambda_{1}m_{2})^{2}\|G_{11}\|_{F}^{2}+(\lambda_{2}m_{1})^{2}\|G_{22}\|_{F}^{2}+2\lambda_{1}\lambda_{2}\left\langle\mathbf{1}_{m_{2}}\mathbf{1}_{m_{2}}^{\top}\otimes G_{11},\ G_{22}\otimes\mathbf{1}_{m_{1}}\mathbf{1}_{m_{1}}^{\top}\right\rangle_{F}}\\
&=2\sqrt{(\lambda_{1}m_{2})^{2}\|G_{11}\|_{F}^{2}+(\lambda_{2}m_{1})^{2}\|G_{22}\|_{F}^{2}+2\lambda_{1}\lambda_{2}\left(\mathbf{1}_{m_{1}}^{\top}G_{11}\mathbf{1}_{m_{1}}\right)\left(\mathbf{1}_{m_{2}}^{\top}G_{22}\mathbf{1}_{m_{2}}\right)}.
\end{aligned}$$

For the last equality, we use the following properties of Kronecker products: the mixed product property, $(A\otimes B)^{\top}=A^{\top}\otimes B^{\top}$ and $(A\otimes B)(C\otimes D)=(AC)\otimes(BD)$, and the spectrum property, $\mathrm{Tr}\left((AC)\otimes(BD)\right)=\mathrm{Tr}(AC)\,\mathrm{Tr}(BD)$. □

B.13 Solving Problem (10) Using Accelerated Projected Gradient Descent

In Algorithm 1, we present the accelerated projected gradient descent (APGD) algorithm that we use to solve Problem (10), as discussed in Section 4.2. The projection operation involved is Project≥0 (x) = max(x, 0).
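Since Algorithm 1 itself appears in the main paper, the following is only an illustrative numpy sketch of accelerated projected gradient descent for Problem (10). It reuses the gradient of the objective and the step-size 1/L from Lemma 4.11 above, with a and b standing for (σ1/m1)·1 and (σ2/m2)·1; function and variable names, the iteration count, and the FISTA-style momentum schedule are assumptions, not the authors' exact implementation.

```python
import numpy as np

def apgd_mmd_uot(C12, G11, G22, a, b, lam1=1.0, lam2=1.0, iters=1000):
    """Accelerated projected gradient descent for
    f(alpha) = <alpha, C12> + lam1*||alpha 1 - a||_G11^2 + lam2*||alpha^T 1 - b||_G22^2, alpha >= 0."""
    m1, m2 = C12.shape
    one1, one2 = np.ones(m1), np.ones(m2)

    def grad(alpha):
        v = alpha @ one2 - a            # alpha 1 - a          (size m1)
        w = alpha.T @ one1 - b          # alpha^T 1 - b        (size m2)
        return C12 + 2.0 * (lam1 * np.outer(G11 @ v, one2)
                            + lam2 * np.outer(one1, G22 @ w))

    # Lipschitz constant of the gradient (Lemma 4.11).
    L = 2.0 * np.sqrt((lam1 * m2) ** 2 * np.sum(G11 ** 2)
                      + (lam2 * m1) ** 2 * np.sum(G22 ** 2)
                      + 2.0 * lam1 * lam2 * (one1 @ G11 @ one1) * (one2 @ G22 @ one2))

    alpha = np.zeros((m1, m2))
    y, t = alpha.copy(), 1.0
    for _ in range(iters):
        alpha_next = np.maximum(y - grad(y) / L, 0.0)          # projected gradient step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0     # Nesterov momentum
        y = alpha_next + ((t - 1.0) / t_next) * (alpha_next - alpha)
        alpha, t = alpha_next, t_next
    return alpha
```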

B.14 More On The Barycenter Problem

B.14.1 Proof Of Lemma 4.12

Proof. Recall that we estimate the barycenter with the restriction that the transport plan $\pi^i$ corresponding to $\hat{\mathcal{U}}(\hat{s}_i,s)$ is supported on $D_i\times\cup_{i=1}^n D_i$. Let $\beta\ge0\in\mathbb{R}^m$ denote the probabilities parameterizing the barycenter, s. With $\hat{\mathcal{U}}_m$ as defined in Equation (9), the MMD-UOT barycenter formulation, $\hat{\mathcal{B}}_m(\hat{s}_1,\cdots,\hat{s}_n)=\min_{\beta\ge0}\sum_{i=1}^n\rho_i\hat{\mathcal{U}}_m(\hat{s}_i,s(\beta))$, becomes

\min_{\alpha_{1},\cdots,\alpha_{n},\beta\geq0}\sum_{i=1}^{n}\rho_{i}\Bigg{\{}\text{Tr}\left(\alpha_{i}\mathcal{C}_{i}^{\top}\right)+\lambda_{1}\|\alpha_{i}\mathbf{1}-\frac{\sigma_{i}}{m_{i}}\mathbf{1}\|_{G_{ii}}+\lambda_{2}\|\alpha_{i}^{\top}\mathbf{1}-\beta\|_{G}\Bigg{\}}.\tag{20}

Following our discussion in Sections 4.2 and B.11, we present an equivalent barycenter formulation with squared-MMD regularization. This not only makes the objective smooth, allowing us to exploit accelerated solvers, but also simplifies the problem, as we discuss next.

\mathcal{B}^{\prime}_{m}(\hat{s}_{1},\cdots,\hat{s}_{n})\equiv\min_{\alpha_{1},\cdots,\alpha_{n},\beta\geq0}\sum_{i=1}^{n}\rho_{i}\Bigg{\{}\mathrm{Tr}\left(\alpha_{i}\mathcal{C}_{i}^{\top}\right)+\lambda_{1}\|\alpha_{i}\mathbf{1}-\frac{\sigma_{i}}{m_{i}}\mathbf{1}\|_{\mathcal{G}_{ii}}^{2}+\lambda_{2}\|\alpha_{i}^{\top}\mathbf{1}-\beta\|_{\mathcal{G}}^{2}\Bigg{\}}.\tag{21}

The above problem is a least-squares problem in terms of β with a non-negativity constraint. Equating the gradient wrt β to 0, we get $G\left(\beta-\sum_{j=1}^n\rho_j\alpha_j^{\top}\mathbf{1}\right)=0$. As the Gram matrices of universal kernels are full-rank (Song, 2008, Corollary 32), this implies $\beta=\sum_{j=1}^n\rho_j\alpha_j^{\top}\mathbf{1}$, which also satisfies the non-negativity constraint.

Substituting $\beta=\sum_{j=1}^n\rho_j\alpha_j^{\top}\mathbf{1}$ in (21) gives us the MMD-UOT barycenter formulation:

$$\mathcal{B}^{\prime}_{m}(\hat{s}_{1},\cdots,\hat{s}_{n})\equiv\min_{\alpha_{1},\cdots,\alpha_{n}\geq0}\sum_{i=1}^{n}\rho_{i}\left\{\mathrm{Tr}\left(\alpha_{i}\mathcal{C}_{i}^{\top}\right)+\lambda_{1}\left\|\alpha_{i}\mathbf{1}-\frac{\sigma_{i}}{m_{i}}\mathbf{1}\right\|_{G_{ii}}^{2}+\lambda_{2}\left\|\alpha_{i}^{\top}\mathbf{1}-\sum_{j=1}^{n}\rho_{j}\alpha_{j}^{\top}\mathbf{1}\right\|_{G}^{2}\right\}.\tag{22}$$
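As a small numerical illustration (not from the paper) of the β-elimination step above, the sketch below checks that β = Σ_j ρ_j α_j^⊤ 1 minimizes the β-dependent part of objective (21) against random feasible β's; the Gram matrix, simplex weights, and plans are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 6                        # n input measures, barycenter supported on m points
rho = np.array([0.2, 0.3, 0.5])    # simplex weights rho_i
Z = rng.normal(size=(m, 2))
G = np.exp(-((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1))    # full-rank Gaussian Gram matrix
alphas = [np.abs(rng.normal(size=(4, m))) for _ in range(n)]   # non-negative plans (m_i = 4)

def beta_part(beta):
    """sum_i rho_i * || alpha_i^T 1 - beta ||_G^2, the beta-dependent part of (21)."""
    total = 0.0
    for r, a in zip(rho, alphas):
        d = a.T @ np.ones(a.shape[0]) - beta
        total += r * d @ G @ d
    return total

beta_star = sum(r * a.T @ np.ones(a.shape[0]) for r, a in zip(rho, alphas))
for _ in range(100):
    beta_rand = np.abs(rng.normal(size=m))
    assert beta_part(beta_star) <= beta_part(beta_rand) + 1e-9
print("beta* = sum_j rho_j alpha_j^T 1 minimizes the beta-dependent terms")
```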

B.14.2 Solving The Barycenter Formulation

The objective of (22), as a function of $\alpha_i$, has the following smoothness constant (derivation analogous to Lemma 4.11 in the main paper):

$$L_{i}=2\rho_{i}\sqrt{(\lambda_{1}m)^{2}\|G_{ii}\|_{F}^{2}+\left(\eta_{i}m_{i}\right)^{2}\|G\|_{F}^{2}+2\lambda_{1}\eta_{i}\left(\mathbf{1}_{m_{i}}^{\top}G_{ii}\mathbf{1}_{m_{i}}\right)\left(\mathbf{1}_{m}^{\top}G\mathbf{1}_{m}\right)},$$

where $\eta_i=\lambda_2(1-\rho_i)$. We jointly optimize for the $\alpha_i$'s using accelerated projected gradient descent with step-size $1/L_i$.

B.14.3 Consistency Of The Barycenter Estimator

Similar to Theorem 4.10, we show the consistency of the proposed sample-based barycenter estimator. Let $\hat{s}_i$ be the empirical measure supported over m samples from $s_i$. From the proof of Lemma 4.12 and (22), recall that

\mathcal{B}^{\prime}_{m}(s_{1},\cdots,s_{n})=\min_{\alpha_{1},\cdots,\alpha_{n}\geq0}\ \sum_{i=1}^{n}\rho_{i}\Big{(}\mathrm{Tr}\left(\alpha_{i}\mathcal{C}_{i}^{\top}\right)+\lambda_{1}\|\alpha_{i}\mathbf{1}-\hat{s}_{i}\|_{G_{ii}}^{2}+\lambda_{2}\|\alpha_{i}^{\top}\mathbf{1}-\sum_{j=1}^{n}\rho_{j}\alpha_{j}^{\top}\mathbf{1}\|_{G}^{2}\Big{)}.

Now let us denote the true barycenter with squared-MMD regularization by $\mathcal{B}(s_1,\cdots,s_n)\equiv\min_{s\in\mathcal{R}^+(\mathcal{X})}\sum_{i=1}^n\rho_i\,\mathcal{U}(s_i,s)$, where $\mathcal{U}(s_i,s)\equiv\min_{\pi^i\in\mathcal{R}^+(\mathcal{X}\times\mathcal{X})}\int c\,\mathrm{d}\pi^i+\lambda_1\mathrm{MMD}_k^2(\pi^i_1,s_i)+\lambda_2\mathrm{MMD}_k^2(\pi^i_2,s)$. Let $\pi^{1*},\dots,\pi^{n*},s^*$ be the optimal solutions corresponding to $\mathcal{B}(s_1,\cdots,s_n)$. It is easy to see that $s^*=\sum_{j=1}^n\rho_j\pi^{j*}_2$ (for e.g. refer (Cohen et al., 2020, Sec C)). After eliminating s, we have: $\mathcal{B}(s_1,\cdots,s_n)=\min_{\pi^1,\dots,\pi^n\in\mathcal{R}^+(\mathcal{X}\times\mathcal{X})}\sum_{i=1}^n\rho_i\left(\int c\,\mathrm{d}\pi^i+\lambda_1\mathrm{MMD}_k^2(\pi^i_1,s_i)+\lambda_2\mathrm{MMD}_k^2\left(\pi^i_2,\sum_{j=1}^n\rho_j\pi^j_2\right)\right)$.

Theorem B3. Let $\eta^i(x,z)\equiv\frac{\pi^{i*}(x,z)}{s_i(x)s'(z)}$, where $s'$ is the mixture density $s'\equiv\sum_{i=1}^n\frac{1}{n}s_i$. Under mild assumptions that the functions $\eta^i,c\in\mathcal{H}_k\otimes\mathcal{H}_k$, we have that w.h.p. the estimation error satisfies $|\mathcal{B}'_m(\hat{s}_1,\cdots,\hat{s}_n)-\mathcal{B}(s_1,\cdots,s_n)|\le O\left(\max_{i\in[1,n]}\|\eta^i\|_k\|c\|_k/\sqrt{m}\right)$.

Proof. From triangle inequality,

$$|\mathcal{B}'_m(\hat{s}_1,\cdots,\hat{s}_n)-\mathcal{B}(s_1,\cdots,s_n)|\le|\mathcal{B}'_m(\hat{s}_1,\cdots,\hat{s}_n)-\mathcal{B}'_m(s_1,\cdots,s_n)|+|\mathcal{B}'_m(s_1,\cdots,s_n)-\mathcal{B}(s_1,\cdots,s_n)|,\tag{23}$$

where $\mathcal{B}'_m(s_1,\cdots,s_n)$ is the same as $\mathcal{B}(s_1,\cdots,s_n)$ except that it employs restricted feasibility sets, $\mathcal{F}_i(\hat{s}_1,\cdots,\hat{s}_n)$, for the corresponding $\alpha_i$: the set of all joints supported at the samples in $\hat{s}_1,\cdots,\hat{s}_n$ alone.

Let $D_i=\{x_{i1},\cdots,x_{im}\}$ and let the union of all samples be $\cup_{i=1}^n D_i=\{z_1,\cdots,z_{mn}\}$.

$\mathcal{F}_i(\hat{s}_1,\cdots,\hat{s}_n)\equiv\left\{\sum_{l=1}^{m}\sum_{j=1}^{mn}\alpha_{lj}\delta_{(x_{il},z_j)}\mid\alpha_{lj}\ge0\ \forall\ l=1,\dots,m;\ j=1,\dots,mn\right\}$. Here, $\delta_r$ is the Dirac measure at r. We begin by bounding the first term.

We denote the (common) objective in $\mathcal{B}'_m(\cdot)$, $\mathcal{B}(\cdot)$, as a function of the transport plans $(\pi^1,\cdots,\pi^n)$, by $h(\pi^1,\cdots,\pi^n,\cdot)$.

$$\begin{aligned}
\mathcal{B}'_m(\hat{s}_1,\cdots,\hat{s}_n)-\mathcal{B}'_m(s_1,\cdots,s_n)&=\min_{\pi^i\in\mathcal{F}_i(\hat{s}_1,\cdots,\hat{s}_n)}h(\pi^1,\cdots,\pi^n,\hat{s}_1,\cdots,\hat{s}_n)-\min_{\pi^i\in\mathcal{F}_i(\hat{s}_1,\cdots,\hat{s}_n)}h(\pi^1,\cdots,\pi^n,s_1,\cdots,s_n)\\
&\le h(\bar{\pi}^{1*},\cdots,\bar{\pi}^{n*},\hat{s}_1,\cdots,\hat{s}_n)-h(\bar{\pi}^{1*},\cdots,\bar{\pi}^{n*},s_1,\cdots,s_n)\\
&\qquad\left(\text{where }\bar{\pi}^{i*}=\arg\min_{\pi^i\in\mathcal{F}_i(\hat{s}_1,\cdots,\hat{s}_n)}h(\pi^1,\cdots,\pi^n,s_1,\cdots,s_n)\text{ for }i\in[1,n]\right)\\
&=\sum_{i=1}^{n}\lambda_1\rho_i\left(\mathrm{MMD}_k^2(\bar{\pi}^{i*}_1,\hat{s}_i)-\mathrm{MMD}_k^2(\bar{\pi}^{i*}_1,s_i)\right)\\
&=\sum_{i=1}^{n}\rho_i\lambda_1\left(\mathrm{MMD}_k(\bar{\pi}^{i*}_1,\hat{s}_i)-\mathrm{MMD}_k(\bar{\pi}^{i*}_1,s_i)\right)\left(\mathrm{MMD}_k(\bar{\pi}^{i*}_1,\hat{s}_i)+\mathrm{MMD}_k(\bar{\pi}^{i*}_1,s_i)\right)\\
&\overset{(1)}{\le}2\lambda_1M\sum_{i=1}^{n}\rho_i\left(\mathrm{MMD}_k(\bar{\pi}^{i*}_1,\hat{s}_i)-\mathrm{MMD}_k(\bar{\pi}^{i*}_1,s_i)\right)\\
&\le2\lambda_1M\sum_{i=1}^{n}\rho_i\,\mathrm{MMD}_k(\hat{s}_i,s_i)\quad(\text{as MMD satisfies the triangle inequality})\\
&\le2\lambda_1M\max_{i\in[1,n]}\mathrm{MMD}_k(\hat{s}_i,s_i),
\end{aligned}$$
where for inequality (1) we use that $\max_{s,t\in\mathcal{R}^+_1(\mathcal{X})}\mathrm{MMD}_k(s,t)=M<\infty$, as the generating set of MMD is compact.

As, with probability at least $1-\delta$, $\mathrm{MMD}_k(\hat{s}_i,s_i)\le\sqrt{\frac{1}{m}}+\sqrt{\frac{2\log(1/\delta)}{m}}$ (Smola et al., 2007), with the union bound we get that the first term in inequality (23) is upper-bounded by $2\lambda_1M\left(\sqrt{\frac{1}{m}}+\sqrt{\frac{2\log(2n/\delta)}{m}}\right)$, with probability at least $1-\delta$.

We next look at the second term in inequality (23): $|\mathcal{B}'_m(s_1,\cdots,s_n)-\mathcal{B}(s_1,\cdots,s_n)|$. Let $(\bar{\pi}^1,\cdots,\bar{\pi}^n)$ be the solutions of $\mathcal{B}'_m(s_1,\cdots,s_n)$ and $(\pi^{1*},\cdots,\pi^{n*})$ be the solutions of $\mathcal{B}(s_1,\cdots,s_n)$. Recall that $s'$ denotes the mixture density $s'\equiv\sum_{i=1}^n\frac{1}{n}s_i$. Let us denote the empirical distribution of $s'$ by $\hat{s}'$ (i.e., uniform samples from $\cup_{i=1}^n D_i$). Consider the transport plans $\hat{\pi}^{im}\in\mathcal{F}_i(\hat{s}_1,\cdots,\hat{s}_n)$ such that $\hat{\pi}^{im}(l,j)=\frac{\eta^i(x_l,z_j)}{m^2n}$, where $\eta^i(x_l,z_j)=\frac{\pi^{i*}(x_l,z_j)}{s_i(x_l)s'(z_j)}$, for $l\in[1,m]$ and $j\in[1,mn]$.

$$\begin{aligned}
|\mathcal{B}'_m(s_1,\cdots,s_n)-\mathcal{B}(s_1,\cdots,s_n)|&=\mathcal{B}'_m(s_1,\cdots,s_n)-\mathcal{B}(s_1,\cdots,s_n)\\
&=h(\bar{\pi}^{1m},\cdots,\bar{\pi}^{nm},s_1,\cdots,s_n)-h(\pi^{1*},\cdots,\pi^{n*},s_1,\cdots,s_n)\\
&\le h(\hat{\pi}^{1m},\cdots,\hat{\pi}^{nm},s_1,\cdots,s_n)-h(\pi^{1*},\cdots,\pi^{n*},s_1,\cdots,s_n)\\
&\le\sum_{i=1}^{n}\rho_i\Bigg\{\langle\mu_k(\hat{\pi}^{im})-\mu_k(\pi^{i*}),c_i\rangle+2\lambda_1M\left(\|\mu_k(\hat{\pi}^{im}_1)-\mu_k(s_i)\|_k-\|\mu_k(\pi^{i*}_1)-\mu_k(s_i)\|_k\right)\\
&\qquad\qquad+2\lambda_2M\left(\Big\|\mu_k(\hat{\pi}^{im}_2)-\mu_k\Big(\sum_{j=1}^{n}\rho_j\hat{\pi}^{jm}_2\Big)\Big\|_k-\Big\|\mu_k(\pi^{i*}_2)-\mu_k\Big(\sum_{j=1}^{n}\rho_j\pi^{j*}_2\Big)\Big\|_k\right)\Bigg\}\\
&\qquad\text{(upper-bounding the sum of two MMD terms by }2M)\\
&\le\sum_{i=1}^{n}\rho_i\Bigg\{\langle\mu_k(\hat{\pi}^{im})-\mu_k(\pi^{i*}),c_i\rangle+2\lambda_1M\|\mu_k(\hat{\pi}^{im}_1)-\mu_k(\pi^{i*}_1)\|_k\\
&\qquad\qquad+2\lambda_2M\Big\|\mu_k(\hat{\pi}^{im}_2)-\mu_k\Big(\sum_{j=1}^{n}\rho_j\hat{\pi}^{jm}_2\Big)-\mu_k(\pi^{i*}_2)+\mu_k\Big(\sum_{j=1}^{n}\rho_j\pi^{j*}_2\Big)\Big\|_k\Bigg\}\quad\text{(using the triangle inequality)}\\
&\le\sum_{i=1}^{n}\rho_i\Bigg\{\langle\mu_k(\hat{\pi}^{im})-\mu_k(\pi^{i*}),c_i\rangle+2\lambda_1M\|\mu_k(\hat{\pi}^{im}_1)-\mu_k(\pi^{i*}_1)\|_k\\
&\qquad\qquad+2\lambda_2M\Big(\|\mu_k(\hat{\pi}^{im}_2)-\mu_k(\pi^{i*}_2)\|_k+\sum_{j=1}^{n}\rho_j\|\mu_k(\hat{\pi}^{jm}_2)-\mu_k(\pi^{j*}_2)\|_k\Big)\Bigg\}\\
&\qquad\text{(triangle inequality and linearity of the kernel mean embedding)}\\
&\le\sum_{i=1}^{n}\rho_i\Bigg\{\|\mu_k(\hat{\pi}^{im})-\mu_k(\pi^{i*})\|_k\|c_i\|_k+2\lambda_1M\|\mu_k(\hat{\pi}^{im}_1)-\mu_k(\pi^{i*}_1)\|_k\\
&\qquad\qquad+2\lambda_2M\Big(\|\mu_k(\hat{\pi}^{im}_2)-\mu_k(\pi^{i*}_2)\|_k+\sum_{j=1}^{n}\rho_j\|\mu_k(\hat{\pi}^{jm}_2)-\mu_k(\pi^{j*}_2)\|_k\Big)\Bigg\}\quad\text{(Cauchy-Schwarz)}\\
&\le\max_{i\in[1,n]}\Bigg\{\|\mu_k(\hat{\pi}^{im})-\mu_k(\pi^{i*})\|_k\|c_i\|_k+2\lambda_1M\|\mu_k(\hat{\pi}^{im}_1)-\mu_k(\pi^{i*}_1)\|_k\\
&\qquad\qquad+2\lambda_2M\Big(\|\mu_k(\hat{\pi}^{im}_2)-\mu_k(\pi^{i*}_2)\|_k+\max_{j\in[1,n]}\|\mu_k(\hat{\pi}^{jm}_2)-\mu_k(\pi^{j*}_2)\|_k\Big)\Bigg\}.
\end{aligned}$$

We now repeat steps similar to B.9 (for bounding the second term in the proof of Theorem 4.10) and get the following.

∥µk(ˆπ im) − µk(π i∗)∥k = max f∈Hk,∥f∥k≤1 Rf dπˆ im −Rf dπ i∗ = max f∈Hk,∥f∥k≤1 Rf dπˆ im −Rf dπ i∗ = max f∈Hk,∥f∥k≤1 Pm l=1 Pmn j=1 f(xl, zj )π i∗(xl,zj ) m2nsi(xl)s ′(zj ) −R R f(x, z) π i∗(x,z) si(x)s ′(z) si(x)s ′(z) dx dz = max f∈Hk,∥f∥k≤1 EX∼sˆi−si,Z∼sˆ ′−s ′ hf(X, Z) π i∗(X,Z) si(X)s ′(Z) i = max f∈Hk,∥f∥k≤1 EX∼sˆi−si,Z∼sˆ ′−s ′-f(X, Z)η i(X, Z) = max f∈Hk,∥f∥k≤1 EX∼sˆi−si,Z∼sˆ ′−s ′-f ⊗ η i, ϕ(X) ⊗ ϕ(Z) ⊗ ϕ(X) ⊗ ϕ(Z) = max f∈Hk,∥f∥k≤1 f ⊗ η i, EX∼sˆi−si,Z∼sˆ ′−s ′ [ϕ(X) ⊗ ϕ(Z) ⊗ ϕ(X) ⊗ ϕ(Z)] ≤ max f∈Hk,∥f∥k≤1 ∥f ⊗ η i∥k∥EX∼sˆi−si,Z∼sˆ ′−s ′ [ϕ(X) ⊗ ϕ(Z) ⊗ ϕ(X) ⊗ ϕ(Z)] ∥k (∵ Cauchy Schwarz) = max f∈Hk,∥f∥k≤1 ∥f∥k∥η i∥k∥EX∼sˆi−si,Z∼sˆ ′−s ′ [ϕ(X) ⊗ ϕ(X) ⊗ ϕ(Z) ⊗ ϕ(Z)] ∥k (∵ properties of norm of tensor product) = max f∈Hk,∥f∥k≤1 ∥f∥k∥η i∥k∥EX∼sˆi−si [ϕ(X) ⊗ ϕ(X)] ⊗ EZ∼sˆ ′−s ′ [ϕ(Z) ⊗ ϕ(Z)] ∥k ≤ ∥η i∥k∥EX∼sˆi−si [ϕ(X) ⊗ ϕ(X)] ∥k∥EZ∼sˆ ′−s ′ [ϕ(Z) ⊗ ϕ(Z)] ∥k = ∥η i∥k∥µk2 (ˆsi) − µk2 (si)∥k2 ∥µk2 (ˆs ′) − µk2 (s ′)∥k2 (∵ ϕ(·) ⊗ ϕ(·) is the feature map corresponding to k 2.) Similarly, we have the following for the marginals.

$$
\begin{aligned}
\|\mu_k(\hat{\pi}^{im}_1) - \mu_k(\pi^{i*}_1)\|_k &= \max_{f\in\mathcal{H}_k, \|f\|_k\leq 1} \int f \,\mathrm{d}\hat{\pi}^{im}_1 - \int f \,\mathrm{d}\pi^{i*}_1\\
&= \max_{f\in\mathcal{H}_k, \|f\|_k\leq 1} \sum_{l=1}^{m}\sum_{j=1}^{mn} f(x_l) \frac{\pi^{i*}(x_l, z_j)}{m^2 n\, s_i(x_l) s'(z_j)} - \int\!\!\int f(x) \frac{\pi^{i*}(x, z)}{s_i(x) s'(z)}\, s_i(x) s'(z)\, \mathrm{d}x\, \mathrm{d}z\\
&= \max_{f\in\mathcal{H}_k, \|f\|_k\leq 1} \mathbb{E}_{X\sim\hat{s}_i - s_i,\, Z\sim\hat{s}' - s'}\left[ f(X) \frac{\pi^{i*}(X, Z)}{s_i(X) s'(Z)} \right]\\
&= \max_{f\in\mathcal{H}_k, \|f\|_k\leq 1} \mathbb{E}_{X\sim\hat{s}_i - s_i,\, Z\sim\hat{s}' - s'}\left[ f(X)\, \eta^i(X, Z) \right]\\
&= \max_{f\in\mathcal{H}_k, \|f\|_k\leq 1} \mathbb{E}_{X\sim\hat{s}_i - s_i,\, Z\sim\hat{s}' - s'}\left[ \left\langle f \otimes \eta^i,\ \phi(X)\otimes\phi(X)\otimes\phi(Z) \right\rangle \right]\\
&= \max_{f\in\mathcal{H}_k, \|f\|_k\leq 1} \left\langle f \otimes \eta^i,\ \mathbb{E}_{X\sim\hat{s}_i - s_i,\, Z\sim\hat{s}' - s'}\left[ \phi(X)\otimes\phi(X)\otimes\phi(Z) \right] \right\rangle\\
&\leq \max_{f\in\mathcal{H}_k, \|f\|_k\leq 1} \|f \otimes \eta^i\|_k \left\| \mathbb{E}_{X\sim\hat{s}_i - s_i,\, Z\sim\hat{s}' - s'}\left[ \phi(X)\otimes\phi(X)\otimes\phi(Z) \right] \right\|_k \quad (\because \text{Cauchy-Schwarz})\\
&= \max_{f\in\mathcal{H}_k, \|f\|_k\leq 1} \|f\|_k \|\eta^i\|_k \left\| \mathbb{E}_{X\sim\hat{s}_i - s_i,\, Z\sim\hat{s}' - s'}\left[ \phi(X)\otimes\phi(X)\otimes\phi(Z) \right] \right\|_k \quad (\because \text{properties of the norm of a tensor product})\\
&= \max_{f\in\mathcal{H}_k, \|f\|_k\leq 1} \|f\|_k \|\eta^i\|_k \left\| \mathbb{E}_{X\sim\hat{s}_i - s_i}\left[ \phi(X)\otimes\phi(X) \right] \otimes \mathbb{E}_{Z\sim\hat{s}' - s'}\left[ \phi(Z) \right] \right\|_k\\
&\leq \|\eta^i\|_k \left\| \mathbb{E}_{X\sim\hat{s}_i - s_i}\left[ \phi(X)\otimes\phi(X) \right] \right\|_k \left\| \mathbb{E}_{Z\sim\hat{s}' - s'}\left[ \phi(Z) \right] \right\|_k\\
&= \|\eta^i\|_k\, \|\mu_{k^2}(\hat{s}_i) - \mu_{k^2}(s_i)\|_{k^2}\, \|\mu_{k}(\hat{s}') - \mu_{k}(s')\|_{k} \quad (\because \phi(\cdot)\otimes\phi(\cdot) \text{ is the feature map corresponding to } k^2).
\end{aligned}
$$

Thus, with probability at least $1-\delta$,
$$|B'_m(s_1, \cdots, s_n) - B(s_1, \cdots, s_n)| \leq \max_{i\in[1,n]} \left\{ \|\eta^i\|_k \|c^i\|_k + 2\lambda_1 M \|\eta^i\|_k + 2\lambda_2 M \Big( \|\eta^i\|_k + \max_{j\in[1,n]} \|\eta^j\|_k \Big) \right\} \left( \frac{1}{\sqrt{m}} + \sqrt{\frac{2\log\left((2n+2)/\delta\right)}{m}} \right)^2.$$
Applying the union bound again for inequality (23), we get that with probability at least $1-\delta$,
$$|B'_m(\hat{s}_1, \cdots, \hat{s}_n) - B(s_1, \cdots, s_n)| \leq \left( \frac{1}{\sqrt{m}} + \sqrt{\frac{2\log\left((2n+4)/\delta\right)}{m}} \right) \left( 2\lambda_1 M + \zeta \left( \frac{1}{\sqrt{m}} + \sqrt{\frac{2\log\left((2n+4)/\delta\right)}{m}} \right) \right),$$
where $\zeta = \max_{i\in[1,n]} \left\{ \|\eta^i\|_k \|c^i\|_k + 2\lambda_1 M \|\eta^i\|_k + 2\lambda_2 M \Big( \|\eta^i\|_k + \max_{j\in[1,n]} \|\eta^j\|_k \Big) \right\}$.

B.15 More On Formulation (10)

Analogous to Formulation (10) in the main paper, we consider the following formulation where an IPM raised to the $q$-th power, with integer $q > 1$, is used for regularization.

$$U_{\mathcal{Q},c,\lambda_{1},\lambda_{2},q}\left(s_{0},t_{0}\right)\equiv\min_{\pi\in\mathcal{R}^{+}(\mathcal{X}\times\mathcal{X})}\int c\;\mathrm{d}\pi+\lambda_{1}\gamma_{\mathcal{Q}}^{q}(\pi_{1},s_{0})+\lambda_{2}\gamma_{\mathcal{Q}}^{q}(\pi_{2},t_{0})\tag{24}$$

Formulation (10) in the main paper is a special case of Formulation (24), obtained when the IPM is MMD and $q = 2$. Following the proof in Lemma B1, one can easily show that

$$U_{\mathcal{G},c,\lambda_{1},\lambda_{2},q}\left(s_{0},t_{0}\right)\equiv\min_{s,t\in\mathcal{R}^{+}(\mathcal{X})}\ \left|s\right| W_{1}(s,t)+\lambda_{1}\gamma_{\mathcal{G}}^{q}(s,s_{0})+\lambda_{2}\gamma_{\mathcal{G}}^{q}(t,t_{0}).\tag{25}$$

To simplify notation, we denote $U_{\mathcal{G},c,\lambda,\lambda,q}$ by $U$ in the following. It is easy to see that $U$ satisfies the following properties by inheritance.

  1. $U \geq 0$, as each of the terms in the objective in Formulation (25) is non-negative.

  2. $U(s_0, t_0) = 0 \iff s_0 = t_0$, whenever the IPM used for regularization is a norm-induced metric. As $W_1$ and $\gamma_{\mathcal{G}}$ are non-negative, $U(s_0, t_0) = 0 \iff s = t,\ \gamma_{\mathcal{G}}(s, s_0) = 0,\ \gamma_{\mathcal{G}}(t, t_0) = 0$. If the IPM used for regularization is a norm-induced metric, this condition reduces to $s_0 = t_0$.

  3. U(s0, t0) = U(t0, s0) as each term in Formulation (25) is symmetric.

We now derive the sample complexity for Formulation (24).

Lemma B4. Let us denote $U_{\mathcal{G},c,\lambda_1,\lambda_2,q}$, defined via Formulation (24), by $U$, where $q > 1$ is an integer. Let $\hat{s}_m, \hat{t}_m$ denote the empirical estimates of $s_0, t_0 \in \mathcal{R}^{+}_{1}(\mathcal{X})$, respectively, with $m$ samples each. Then, $U(\hat{s}_m, \hat{t}_m) \to U(s_0, t_0)$ at the same rate as $\gamma_{\mathcal{G}}(\hat{s}_m, s_0) \to 0$.

Proof.

$$U\left(s_{0},t_{0}\right)\equiv\min_{\pi\in\mathcal{R}^{+}(\mathcal{X}\times\mathcal{X})}\;h(\pi,s_{0},t_{0})\equiv\int c\;\mathrm{d}\pi+\lambda\gamma_{\mathcal{G}}^{q}(\pi_{1},s_{0})+\lambda\gamma_{\mathcal{G}}^{q}(\pi_{2},t_{0})\;.$$

We have,

$$
\begin{aligned}
U(\hat{s}_m, \hat{t}_m) - U(s_0, t_0) &= \min_{\pi\in\mathcal{R}^{+}(\mathcal{X}\times\mathcal{X})} h(\pi, \hat{s}_m, \hat{t}_m) - \min_{\pi\in\mathcal{R}^{+}(\mathcal{X}\times\mathcal{X})} h(\pi, s_0, t_0)\\
&\leq h(\pi^*, \hat{s}_m, \hat{t}_m) - h(\pi^*, s_0, t_0) \quad \left(\text{where } \pi^* = \arg\min_{\pi\in\mathcal{R}^{+}(\mathcal{X}\times\mathcal{X})} h(\pi, s_0, t_0)\right)\\
&= \lambda\left( \gamma^q_{\mathcal{G}}(\pi^*_1, \hat{s}_m) - \gamma^q_{\mathcal{G}}(\pi^*_1, s_0) + \gamma^q_{\mathcal{G}}(\pi^*_2, \hat{t}_m) - \gamma^q_{\mathcal{G}}(\pi^*_2, t_0) \right)\\
&= \lambda\left( \gamma_{\mathcal{G}}(\pi^*_1, \hat{s}_m) - \gamma_{\mathcal{G}}(\pi^*_1, s_0) \right)\left( \sum_{i=0}^{q-1} \gamma^i_{\mathcal{G}}(\pi^*_1, \hat{s}_m)\, \gamma^{q-1-i}_{\mathcal{G}}(\pi^*_1, s_0) \right)\\
&\quad + \lambda\left( \gamma_{\mathcal{G}}(\pi^*_2, \hat{t}_m) - \gamma_{\mathcal{G}}(\pi^*_2, t_0) \right)\left( \sum_{i=0}^{q-1} \gamma^i_{\mathcal{G}}(\pi^*_2, \hat{t}_m)\, \gamma^{q-1-i}_{\mathcal{G}}(\pi^*_2, t_0) \right)\\
&\leq \lambda\, \gamma_{\mathcal{G}}(s_0, \hat{s}_m)\left( \sum_{i=0}^{q-1} \gamma^i_{\mathcal{G}}(\pi^*_1, \hat{s}_m)\, \gamma^{q-1-i}_{\mathcal{G}}(\pi^*_1, s_0) \right) + \lambda\, \gamma_{\mathcal{G}}(t_0, \hat{t}_m)\left( \sum_{i=0}^{q-1} \gamma^i_{\mathcal{G}}(\pi^*_2, \hat{t}_m)\, \gamma^{q-1-i}_{\mathcal{G}}(\pi^*_2, t_0) \right)\\
&\qquad (\because \gamma_{\mathcal{G}} \text{ satisfies the triangle inequality})\\
&\leq \lambda\, \gamma_{\mathcal{G}}(s_0, \hat{s}_m)\left( \sum_{i=0}^{q-1} \binom{q-1}{i} \gamma^i_{\mathcal{G}}(\pi^*_1, \hat{s}_m)\, \gamma^{q-1-i}_{\mathcal{G}}(\pi^*_1, s_0) \right) + \lambda\, \gamma_{\mathcal{G}}(t_0, \hat{t}_m)\left( \sum_{i=0}^{q-1} \binom{q-1}{i} \gamma^i_{\mathcal{G}}(\pi^*_2, \hat{t}_m)\, \gamma^{q-1-i}_{\mathcal{G}}(\pi^*_2, t_0) \right)\\
&= \lambda\, \gamma_{\mathcal{G}}(s_0, \hat{s}_m)\left( \gamma_{\mathcal{G}}(\pi^*_1, \hat{s}_m) + \gamma_{\mathcal{G}}(\pi^*_1, s_0) \right)^{q-1} + \lambda\, \gamma_{\mathcal{G}}(t_0, \hat{t}_m)\left( \gamma_{\mathcal{G}}(\pi^*_2, \hat{t}_m) + \gamma_{\mathcal{G}}(\pi^*_2, t_0) \right)^{q-1}\\
&\leq \lambda (2M)^{q-1}\left( \gamma_{\mathcal{G}}(s_0, \hat{s}_m) + \gamma_{\mathcal{G}}(t_0, \hat{t}_m) \right).
\end{aligned}
$$
For the last inequality, we use that $\max_{a, b \in \mathcal{R}^{+}_{1}(\mathcal{X})} \gamma_{\mathcal{G}}(a, b) = M < \infty$, as the domain is compact.

Similarly, one can show the inequality in the other direction, resulting in the following.

$$|U(s_{0},t_{0})-U(\hat{s}_{m},\hat{t}_{m})|\leq\lambda(2M)^{q-1}\left(\gamma_{\mathcal{G}}(s_{0},\hat{s}_{m})+\gamma_{\mathcal{G}}(t_{0},\hat{t}_{m})\right).\tag{26}$$
The rate at which $|U(\hat{s}_m, \hat{t}_m) - U(s_0, t_0)|$ goes to zero is hence the same as the rate at which either of the IPM terms goes to zero. For example, if the IPM used for regularization is MMD with a normalized kernel, then $\mathrm{MMD}_k(s_0, \hat{s}_m) \leq \frac{1}{\sqrt{m}} + \sqrt{\frac{2\log(1/\delta)}{m}}$ with probability at least $1-\delta$ (Smola et al., 2007).

From the union bound, with probability at least $1-\delta$, $|U(\hat{s}_m, \hat{t}_m) - U(s_0, t_0)| \leq 2\lambda(2M)^{q-1}\left( \frac{1}{\sqrt{m}} + \sqrt{\frac{2\log(2/\delta)}{m}} \right)$. Thus, $O\!\left(\frac{1}{\sqrt{m}}\right)$ is the common bound on the rate at which the LHS, as well as $\mathrm{MMD}_k(s_0, \hat{s}_m)$, decays to zero.

B.16 Robustness

We show the robustness property of IPM-regularized UOT (13) under the same assumptions on the noise model as used in (Fatras et al., 2021, Lemma 1) for KL-regularized UOT.

Lemma B5. (Robustness) Let $s_0, t_0 \in \mathcal{R}^{+}_{1}(\mathcal{X})$. Consider $s_c = \rho s_0 + (1-\rho)\delta_z$ ($\rho \in [0, 1]$), a distribution perturbed by a Dirac outlier located at some $z$ outside of the support of $t_0$. Let $m(z) = \int c(z, y)\,\mathrm{d}t_0(y)$.

We have that $U_{\mathcal{G},c,\lambda_1,\lambda_2}(s_c, t_0) \leq \rho\, U_{\mathcal{G},c,\lambda_1,\lambda_2}(s_0, t_0) + (1-\rho)\, m(z)$.

Proof. Let $\pi$ be the solution of $U_{\mathcal{G},c,\lambda_1,\lambda_2}(s_0, t_0)$. Consider $\tilde{\pi} = \rho\pi + (1-\rho)\delta_z \otimes t_0$. It is easy to see that $\tilde{\pi}_1 = \rho\pi_1 + (1-\rho)\delta_z$ and $\tilde{\pi}_2 = \rho\pi_2 + (1-\rho)t_0$.

$$
\begin{aligned}
U_{\mathcal{G},c,\lambda_1,\lambda_2}(s_c, t_0) &\leq \int c(x, y)\,\mathrm{d}\tilde{\pi}(x, y) + \lambda_1\gamma_{\mathcal{G}}(\tilde{\pi}_1, s_c) + \lambda_2\gamma_{\mathcal{G}}(\tilde{\pi}_2, t_0) \quad \text{(using the definition of min)}\\
&\leq \int c(x, y)\,\mathrm{d}\tilde{\pi}(x, y) + \lambda_1\left( \rho\gamma_{\mathcal{G}}(\pi_1, s_0) + (1-\rho)\gamma_{\mathcal{G}}(\delta_z, \delta_z) \right) + \lambda_2\left( \rho\gamma_{\mathcal{G}}(\pi_2, t_0) + (1-\rho)\gamma_{\mathcal{G}}(t_0, t_0) \right)\\
&\qquad (\because \text{IPMs are jointly convex})\\
&= \int c(x, y)\,\mathrm{d}\tilde{\pi}(x, y) + \rho\left( \lambda_1\gamma_{\mathcal{G}}(\pi_1, s_0) + \lambda_2\gamma_{\mathcal{G}}(\pi_2, t_0) \right)\\
&= \rho\int c(x, y)\,\mathrm{d}\pi(x, y) + (1-\rho)\int c(z, y)\,\mathrm{d}(\delta_z \otimes t_0)(z, y) + \rho\left( \lambda_1\gamma_{\mathcal{G}}(\pi_1, s_0) + \lambda_2\gamma_{\mathcal{G}}(\pi_2, t_0) \right)\\
&= \rho\int c(x, y)\,\mathrm{d}\pi(x, y) + (1-\rho)\int c(z, y)\,\mathrm{d}t_0(y) + \rho\left( \lambda_1\gamma_{\mathcal{G}}(\pi_1, s_0) + \lambda_2\gamma_{\mathcal{G}}(\pi_2, t_0) \right)\\
&= \rho\, U_{\mathcal{G},c,\lambda_1,\lambda_2}(s_0, t_0) + (1-\rho)\, m(z). \qquad\square
\end{aligned}
$$
We note that $m(z)$ is finite as $t_0 \in \mathcal{R}^{+}_{1}(\mathcal{X})$.

We now present robustness guarantees with a different noise model.

Corollary B6. We say a measure $q \in \mathcal{R}^{+}(\mathcal{X})$ is corrupted with a $\rho \in [0, 1]$ fraction of noise when $q = (1-\rho)q_c + \rho q_n$, where $q_c$ is the clean measure and $q_n$ is the noisy measure.

Let $s_0, t_0 \in \mathcal{R}^{+}(\mathcal{X})$ be corrupted with a $\rho$ fraction of noise such that $|s_c - s_n|_{TV} \leq \epsilon_1$ and $|t_c - t_n|_{TV} \leq \epsilon_2$. We have that $U_{\mathcal{G},c,\lambda,\lambda}(s_0, t_0) \leq U_{\mathcal{G},c,\lambda,\lambda}(s_c, t_c) + \rho\beta(\epsilon_1 + \epsilon_2)$, where $\beta = \max_{f\in\mathcal{G}(\lambda)\cap\mathcal{W}_c} \|f\|_{\infty}$.

Proof. We use our duality result for $U_{\mathcal{G},c,\lambda,\lambda}$ from Theorem 4.1. We first upper-bound $U_{\mathcal{G},c,\lambda,\lambda}(s_n, t_n)$, which is used later in the proof.

$$
\begin{aligned}
U_{\mathcal{G},c,\lambda,\lambda}(s_n, t_n) &= \max_{f\in\mathcal{G}(\lambda)\cap\mathcal{W}_c} \int f\,\mathrm{d}s_n - \int f\,\mathrm{d}t_n\\
&= \max_{f\in\mathcal{G}(\lambda)\cap\mathcal{W}_c} \int f\,\mathrm{d}(s_n - s_c) + \int f\,\mathrm{d}s_c - \int f\,\mathrm{d}(t_n - t_c) - \int f\,\mathrm{d}t_c\\
&\leq \max_{f\in\mathcal{G}(\lambda)\cap\mathcal{W}_c} \int f\,\mathrm{d}(s_n - s_c) + \max_{f\in\mathcal{G}(\lambda)\cap\mathcal{W}_c} \int f\,\mathrm{d}(t_c - t_n) + \max_{f\in\mathcal{G}(\lambda)\cap\mathcal{W}_c} \left( \int f\,\mathrm{d}s_c - \int f\,\mathrm{d}t_c \right)\\
&\leq \beta\left( |s_c - s_n|_{TV} + |t_c - t_n|_{TV} \right) + U_{\mathcal{G},c,\lambda,\lambda}(s_c, t_c)\\
&= \beta(\epsilon_1 + \epsilon_2) + U_{\mathcal{G},c,\lambda,\lambda}(s_c, t_c).
\end{aligned}\tag{27}
$$
We now show the robustness result as follows.

$$
\begin{aligned}
U_{\mathcal{G},c,\lambda,\lambda}(s_0, t_0) &= \max_{f\in\mathcal{G}(\lambda)\cap\mathcal{W}_c} \int f\,\mathrm{d}s_0 - \int f\,\mathrm{d}t_0\\
&= \max_{f\in\mathcal{G}(\lambda)\cap\mathcal{W}_c} (1-\rho)\int f\,\mathrm{d}s_c + \rho\int f\,\mathrm{d}s_n - (1-\rho)\int f\,\mathrm{d}t_c - \rho\int f\,\mathrm{d}t_n\\
&= \max_{f\in\mathcal{G}(\lambda)\cap\mathcal{W}_c} (1-\rho)\left( \int f\,\mathrm{d}s_c - \int f\,\mathrm{d}t_c \right) + \rho\left( \int f\,\mathrm{d}s_n - \int f\,\mathrm{d}t_n \right)\\
&\leq \max_{f\in\mathcal{G}(\lambda)\cap\mathcal{W}_c} (1-\rho)\left( \int f\,\mathrm{d}s_c - \int f\,\mathrm{d}t_c \right) + \max_{f\in\mathcal{G}(\lambda)\cap\mathcal{W}_c} \rho\left( \int f\,\mathrm{d}s_n - \int f\,\mathrm{d}t_n \right)\\
&= (1-\rho)\, U_{\mathcal{G},c,\lambda,\lambda}(s_c, t_c) + \rho\, U_{\mathcal{G},c,\lambda,\lambda}(s_n, t_n)\\
&\leq (1-\rho)\, U_{\mathcal{G},c,\lambda,\lambda}(s_c, t_c) + \rho\left( U_{\mathcal{G},c,\lambda,\lambda}(s_c, t_c) + \beta(\epsilon_1 + \epsilon_2) \right) \quad \text{(using (27))}\\
&= U_{\mathcal{G},c,\lambda,\lambda}(s_c, t_c) + \rho\beta(\epsilon_1 + \epsilon_2).
\end{aligned}
$$
We note that $\beta = \max_{f\in\mathcal{G}(\lambda)\cap\mathcal{W}_c} \|f\|_{\infty} \leq \max_{f\in\mathcal{W}_c} \|f\|_{\infty} < \infty$. Also, $\beta \leq \min\left( \max_{f\in\mathcal{G}(\lambda)} \|f\|_{\infty},\ \max_{f\in\mathcal{W}_c} \|f\|_{\infty} \right) \leq \min\left( \lambda,\ \max_{f\in\mathcal{W}_c} \|f\|_{\infty} \right)$ for a normalized kernel.

B.17 Connections With Spectral Normalized GAN

We comment on the applicability of MMD-UOT in generative modelling and draw connections with the Spectral Norm GAN (SN-GAN) (Miyato et al., 2018) formulation.

A popular approach in generative modelling is to define a parametric function $g_\theta : \mathcal{Z} \mapsto \mathcal{X}$ that maps samples from a noise distribution to samples from a distribution $P_\theta$. We then learn $\theta$ to make $P_\theta$ closer to the real distribution, $P_r$. On formulating this problem with the dual of MMD-UOT derived in Theorem 4.1, we get

$$\min_{\theta}\max_{f\in{\mathcal{W}}_{c}\cap{\mathcal{Q}}_{k}(\lambda)}\int f\,\mathrm{d}P_{\theta}-\int f\,\mathrm{d}P_{r}\tag{28}$$

We note that in the above optimization problem, the critic function (or discriminator) $f$ should satisfy $\|f\|_c \leq 1$ and $\|f\|_k \leq \lambda$, where $\|f\|_c$ denotes the Lipschitz norm under the cost function $c$. Let the critic function be $f_W$, parametrized using a deep convolutional neural network (CNN) with weights $W = \{W_1, \cdots, W_L\}$, where $L$ is the depth of the network. Let $\mathcal{F}$ be the space of all such CNN models; then Problem (28) can be approximated as follows.

$$\min_{\theta}\max_{f_{W}\in\mathcal{F};\ \|f_{W}\|_{c}\leq1,\ \|f_{W}\|_{k}\leq\lambda}\int f_{W}\,\mathrm{d}P_{\theta}-\int f_{W}\,\mathrm{d}P_{r}\tag{29}$$
The constraint $\|f\|_c \leq 1$ is popularly handled using a penalty on the gradient, $\|\nabla f_W\|$ (Gulrajani et al., 2017). The constraint on the RKHS norm, $\|f\|_k$, is more challenging to enforce for an arbitrary neural network.

Thus, we follow the approximations proposed in (Bietti et al., 2019), who use the result derived in (Bietti & Mairal, 2017) that constructs a kernel whose RKHS contains a CNN, $\bar{f}$, with the same architecture and parameters as $f$ but with activations that are smooth approximations of ReLU. With this approximation, (Bietti et al., 2019) show tractable bounds on the RKHS norm. We consider their upper bound based on spectral normalization of the weights in $f_W$. With this, Problem (29) can be approximated by the following.

$$\min_{\theta}\max_{f_{W}\in\mathcal{F}}\int f_{W}\,\mathrm{d}P_{\theta}-\int f_{W}\,\mathrm{d}P_{r}+\rho_{1}\|\nabla f_{W}\|+\rho_{2}\sum_{i=1}^{L}\frac{1}{\lambda}\|W_{i}\|_{\mathrm{sp}}^{2},\tag{30}$$

where $\|\cdot\|_{\mathrm{sp}}$ denotes the spectral norm and $\rho_1, \rho_2 > 0$. Formulations like (30) have been successfully applied as variants of the Spectral Normalized GAN (SN-GAN). This shows the utility of MMD-regularized UOT in generative modelling.
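For concreteness, below is a minimal PyTorch-style sketch of a critic loss in the spirit of Formulation (30). This is not the implementation behind any reported result; the function names, penalty weights, and architectural details are illustrative assumptions. Following standard practice, the critic maximizes the witness term, so the loss it minimizes is the negated witness plus the gradient penalty (for $\|f\|_c \leq 1$) and the spectral-norm penalty (approximating $\|f\|_k \leq \lambda$).

```python
# Hedged sketch of a critic objective in the spirit of (30); not the authors' code.
import torch

def critic_loss(critic, x_real, x_fake, lam=10.0, rho1=10.0, rho2=1.0):
    # Witness term E_{P_theta}[f] - E_{P_r}[f]; the critic maximizes it,
    # so we minimize its negative.
    witness = critic(x_fake).mean() - critic(x_real).mean()

    # Gradient penalty on random interpolates (Gulrajani et al., 2017),
    # encouraging the Lipschitz constraint ||f||_c <= 1.
    eps = torch.rand(x_real.size(0), *([1] * (x_real.dim() - 1)), device=x_real.device)
    x_hat = (eps * x_real + (1 - eps) * x_fake).requires_grad_(True)
    grads = torch.autograd.grad(critic(x_hat).sum(), x_hat, create_graph=True)[0]
    grad_pen = ((grads.flatten(1).norm(dim=1) - 1.0) ** 2).mean()

    # Spectral-norm penalty (1/lambda) * sum_i ||W_i||_sp^2 over weight matrices,
    # following the RKHS-norm upper bound of Bietti et al. (2019).
    sn_pen = sum(torch.linalg.matrix_norm(p.flatten(1), ord=2) ** 2
                 for p in critic.parameters() if p.dim() >= 2) / lam

    return -witness + rho1 * grad_pen + rho2 * sn_pen
```

The generator step would then minimize the witness term alone, as in standard WGAN-style training.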

B.18 Comparison With WAE

The OT problem in WAE (RHS in Theorem 1 in (Tolstikhin et al., 2018)) using our notation is:

$$\min_{\pi\in{\mathcal{R}}_{1}^{+}\left(\mathcal{X}\times{\mathcal{Z}}\right)}\int c\left(x,G(z)\right)\ \mathrm{d}\pi(x,z),\ \ \mathrm{s.t.}\ \ \pi_{1}=P_{X},\ \pi_{2}=P_{\mathcal{Z}},\tag{31}$$
where $\mathcal{X}, \mathcal{Z}$ are the input and latent spaces, $G$ is the decoder, and $P_X, P_{\mathcal{Z}}$ are the probability measures corresponding to the underlying distribution generating the given training set and the latent prior (e.g., Gaussian), respectively.

(Tolstikhin et al., 2018) employs a one-sided regularization. More specifically, (Tolstikhin et al., 2018, eqn. (4)) in our notation is:

$$\min_{\pi\in\mathcal{R}_{1}^{+}(\mathcal{X}\times\mathcal{Z})}\int c\left(x,G(z)\right)\ \mathrm{d}\pi(x,z)+\lambda_{2}\mathrm{MMD}_{k}(\pi_{2},P_{\mathcal{Z}}),\ \ \mathrm{s.t.}\ \ \pi_{1}=P_{X}.\tag{32}$$

However, in our work, the proposed MMD-UOT formulation corresponding to (31) reads as:

$$\min_{\pi\in\mathcal{R}_{1}^{+}(\mathcal{X}\times\mathcal{Z})}\int c\left(x,G(z)\right)\ \mathrm{d}\pi(x,z)+\lambda_{1}\mathrm{MMD}_{k}(\pi_{1},P_{X})+\lambda_{2}\mathrm{MMD}_{k}(\pi_{2},P_{\mathcal{Z}}).\tag{33}$$
It is easy to see that the WAE formulation (32) is a special case of our MMD-UOT formulation (33). Indeed, as $\lambda_1 \to \infty$, both formulations are the same.

The theoretical advantages of MMD-UOT over WAE are that MMD-UOT induces a new family of metrics and can be efficiently estimated from samples at a rate $O(1/\sqrt{m})$, whereas WAE is not expected to induce a metric as the symmetry is broken. Also, WAE is expected to be cursed by dimensionality in terms of estimation, as one marginal is matched exactly, similar to unregularized OT. We now present the details of estimating (33) in the context of VAEs. The transport plan $\pi$ is factorized as $\pi(x, z) \equiv \pi_1(x)\pi(z|x)$, where $\pi(z|x)$ is the encoder. For the sake of a fair comparison, we choose this encoder and the decoder, $G$, to be exactly the same as those in (Tolstikhin et al., 2018). Since $\pi_1(x)$ is not modelled by WAE, we fall back to the default parametrization in our paper of distributions supported over the training points. More specifically, if $\mathcal{D} = \{x_1, \ldots, x_m\}$ is the training set (sampled from $P_X$), then our formulation reads as:

$$\min_{\alpha,\ \tau(z|x_{i})\in\Delta_{c}}\sum_{i=1}^{m}\alpha_{i}\int c\left(x_{i},G(z)\right)\ \mathrm{d}\tau(z|x_{i})+\lambda_{1}\mathrm{MMD}_{k}^{2}\left(\alpha,\frac{1}{m}\mathbf{1}\right)+\lambda_{2}\mathrm{MMD}_{k}^{2}\left(\sum_{i=1}^{m}\alpha_{i}\tau(z|x_{i}),P_{\mathcal{Z}}\right),\tag{34}$$
where the first MMD term is computed using the Gram matrix of $k$ over the training set $\mathcal{D}$. We solve (34) using SGD, where the block over the $\alpha$ variables can employ accelerated gradient steps.
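As an illustration of the $\alpha$-block mentioned above, the following is a minimal NumPy sketch, not the authors' training code, of accelerated projected gradient steps on $\alpha$ with the encoder held fixed. The assumptions here (one latent sample per training point to approximate the inner integral, a finite set of prior samples representing $P_{\mathcal{Z}}$, and a simplex constraint on $\alpha$) are ours and only for illustration.

```python
# Hedged sketch of accelerated projected gradient steps for the alpha-block of (34).
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex (Duchi et al., 2008)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def alpha_block_apgd(cost, K_x, K_zz, K_zw, lam1, lam2, steps=200, lr=1e-2):
    # cost[i] ~ c(x_i, G(z_i)); K_x: Gram matrix over inputs;
    # K_zz, K_zw: Gram matrices over/between encoder and prior latent samples.
    m, p = K_zw.shape
    u = np.ones(m) / m                     # uniform weights, target of the first MMD term
    q = np.ones(p) / p                     # uniform weights on the prior samples
    alpha = u.copy()
    alpha_prev = alpha.copy()
    for t in range(1, steps + 1):
        y = alpha + (t - 1) / (t + 2) * (alpha - alpha_prev)   # Nesterov extrapolation
        grad = cost + 2 * lam1 * (K_x @ (y - u)) + 2 * lam2 * (K_zz @ y - K_zw @ q)
        alpha_prev = alpha
        alpha = project_simplex(y - lr * grad)                 # projected update
    return alpha
```

In a full training loop, such $\alpha$-updates would alternate with SGD steps on the encoder and decoder parameters.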

C Experimental Details And Additional Results

We present more experimental details and additional results in this section. We have followed standard practices to ensure reproducibility, and we will open-source the code to reproduce all our experiments upon acceptance of the paper.

C.1 Synthetic Experiments

We present more details for the experiments in Section 5.1, along with additional experimental results.

Transport Plan and Barycenter. We use the squared-Euclidean cost as the ground metric. We take the points $[1, 2, \cdots, 50]$ and consider Gaussian distributions over them with (mean, standard deviation) equal to $(15, 5)$ and $(35, 3)$, respectively. The hyperparameters for MMD-UOT are $\lambda = 100$ and $\sigma^2 = 1$ in the RBF kernel $k(x, y) = \exp\left(\frac{-\|x-y\|^2}{2\sigma^2}\right)$. The hyperparameters for ϵKL-UOT are $\lambda = 1$ and $\epsilon = 1$.

For the barycenter experiment, we take the points $[1, 2, \cdots, 100]$ and consider Gaussian distributions over them with (mean, standard deviation) equal to $(20, 5)$ and $(60, 8)$, respectively. The hyperparameters for MMD-UOT are $\lambda = 100$ and $\sigma^2 = 10$ in the RBF kernel. The hyperparameters for ϵKL-UOT are $\lambda = 100$ and $\epsilon = 10^{-3}$.

Visualizing the Level Sets. For all OT variants, the squared-Euclidean distance is used as the ground metric. For the level set with MMD, the RBF kernel is used with $\sigma^2 = 3$. For MMD-UOT, $\lambda = 1$ and the RBF kernel is used with $\sigma^2 = 1$. For plotting the level-set contours, 20 contour lines are used for all methods.

Computation Time. The source and target measures are uniform distributions from which we sample 5,000 points each; the data is 5-dimensional. The experiment uses the squared-Euclidean cost and squared-MMD regularization with the RBF kernel, with $\sigma = 1$ and $\lambda = 0.1$. ϵKL-UOT's entropic regularization coefficient is 0.01 and its $\lambda$ is 1; we chose the entropic regularization coefficient from the set $\{10^{-3}, 10^{-2}, 10^{-1}\}$ and $\lambda$ from the set $\{10^{-2}, 10^{-1}, 1\}$, and these values resulted in the fastest convergence. This experiment was run on an NVIDIA RTX 2080 GPU.
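To make the timed solver concrete, here is a minimal NumPy sketch, not the released implementation, of accelerated projected gradient descent for a discrete MMD-UOT problem of the form $\min_{P \geq 0}\ \langle C, P\rangle + \lambda_1\,\mathrm{MMD}_k^2(P\mathbf{1}, a) + \lambda_2\,\mathrm{MMD}_k^2(P^{\top}\mathbf{1}, b)$, where both measures are supported on the given samples; the function names, step size, and iteration budget are illustrative assumptions.

```python
# Hedged sketch of accelerated projected gradient descent for discrete MMD-UOT.
import numpy as np

def rbf_gram(x, y, sigma2=1.0):
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma2))

def mmd_uot_apgd(x, y, a, b, lam1=0.1, lam2=0.1, sigma2=1.0, steps=500, lr=1e-3):
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)        # squared-Euclidean cost
    G1, G2 = rbf_gram(x, x, sigma2), rbf_gram(y, y, sigma2)   # Gram matrices on each support
    P = np.outer(a, b)                                        # initial plan
    P_prev = P.copy()
    for t in range(1, steps + 1):
        Z = P + (t - 1) / (t + 2) * (P - P_prev)              # Nesterov extrapolation
        r, c = Z.sum(1) - a, Z.sum(0) - b                     # marginal residuals
        grad = C + 2 * lam1 * (G1 @ r)[:, None] + 2 * lam2 * (G2 @ c)[None, :]
        P_prev = P
        P = np.maximum(Z - lr * grad, 0.0)                    # projection onto P >= 0
    return P
```

For instance, `mmd_uot_apgd(x, y, np.ones(len(x))/len(x), np.ones(len(y))/len(y))` would return a plan between two uniform empirical measures.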

Figure 4: Computation time. Convergence plots with $m = 5000$ for the case of the same source and target measures, where the optimal objective is expected to be 0. Left: MMD-UOT Problem (10) solved with accelerated projected gradient descent. Right: ϵKL-UOT's convergence plot, shown separately. We observe that ϵKL-UOT's objective plateaus in 0.3 seconds. We note that our convergence to the optimal objective is faster than that of ϵKL-UOT.

Figure 5: Sample efficiency. Log-log plot of the optimal objective vs. the number of samples. The optimal objective values of the MMD-UOT and ϵKL-UOT formulations are shown as the number of samples increases. The data lies in 10 dimensions, and the source and target measures are both uniform. MMD-UOT can be seen to have a better rate of convergence.

Sample Complexity. In Theorem 4.10 in the main paper, we proved an attractive sample complexity of $O(m^{-\frac{1}{2}})$ for our sample-based estimators. In this section, we present a synthetic experiment to show that the convergence of MMD-UOT's metric towards the true value is faster than that of ϵKL-UOT. We sample 10-dimensional source and target samples from uniform source and target marginals, respectively. As the marginals are equal, the metrics over measures should converge to 0 as the number of samples increases. We repeat the experiment with an increasing number of samples. We use the squared-Euclidean cost. For ϵKL-UOT, $\lambda = 1$ and $\epsilon = 10^{-2}$. For MMD-UOT, $\lambda = 1$ and the RBF kernel with $\sigma = 1$ is used. In Figure 5, we plot MMD-UOT's objective and the square root of the ϵKL-UOT objective as the number of samples increases. It can be seen from the plot that MMD-UOT achieves a better rate of convergence compared to ϵKL-UOT.

Effect of Regularization. In Figures 7 and 6, we visualize the matching of the marginals of MMD-UOT's optimal transport plan. We show the results with both the RBF kernel $k(x, y) = \exp\left(\frac{-\|x-y\|^2}{2 \cdot 10^{-6}}\right)$ and the IMQ kernel $k(x, y) = \left(10^{-6} + \|x - y\|^2\right)^{-0.5}$. As we increase $\lambda$, the matching becomes better for unnormalized measures, and the marginals exactly match the given measures when the measures are normalized. We have also shown the unbalanced-case results with KL-UOT. As the POT library (Flamary et al., 2021) does not allow including a simplex constraint for KL-UOT, we do not show KL-UOT for the normalized case.


Figure 6: (With unnormalized measures) Visualizing the marginals of transport plans learnt by MMD-UOT and KL-UOT, on increasing λ.


Figure 7: (With normalized measures) Visualizing the marginals of MMD-UOT (solved with simplex constraints) plan on increasing λ. We do not show KL-UOT here as the Sinkhorn algorithm for solving KL-UOT in the POT library (Flamary et al., 2021) does not incorporate the Simplex constraints on the transport plan.

C.2 Two-Sample Test

Following (Liu et al., 2020), we repeat the experiment 10 times, and in each trial, we randomly sample a validation subset and a test subset of size N from the given real and fake MNIST datasets. We run the two-sample test experiment for type-II error on the test set of a given trial using the hyperparameters chosen for that trial. The hyperparameters were tuned with N = 100 for each trial.

Table 8: Test power (higher is better) for the task of CIFAR-10.1 vs CIFAR-10. The proposed MMD-UOT method achieves the best results.

| ME | SCF | C2ST-S | C2ST-L | MMD | ϵKL-UOT | MMD-UOT |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.588 | 0.171 | 0.452 | 0.529 | 0.316 | 0.132 | 0.643 |

The hyperparameters for a given trial were chosen based on the average empirical test power (higher is better) over that trial's validation dataset. We use the squared-Euclidean distance for the MMD-UOT and ϵKL-UOT formulations. The RBF kernel, $k(x, y) = \exp\left(\frac{-\|x-y\|^2}{2\sigma^2}\right)$, is used for MMD and for the MMD-UOT formulation. The hyperparameters are chosen from the following sets. For MMD-UOT and MMD, $\sigma$ was chosen from {median, 40, 60, 80, 100}, where median is the median heuristic (Gretton et al., 2012). For MMD-UOT and ϵKL-UOT, $\lambda$ is chosen from {0.1, 1, 10}. For ϵKL-UOT, $\epsilon$ was chosen from $\{1, 10^{-1}, 10^{-2}, 10^{-3}, 10^{-4}\}$. Based on validation, $\sigma$ as the median is chosen for MMD in all trials. For ϵKL-UOT, the best hyperparameters $(\lambda, \epsilon)$ are (10, 0.001) for trial number 3, (0.1, 0.1) for trial number 10, and (1, 0.1) for the remaining 8 trials. For MMD-UOT, the best hyperparameters $(\lambda, \sigma^2)$ are (0.1, 60) for trial number 9 and (1, median$^2$) for the remaining 9 trials.
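For reference, the test-power evaluation described above can be mimicked with a generic permutation two-sample test. The sketch below is only illustrative (it is not the exact protocol of Liu et al. (2020)), and `stat_fn` can be any of the compared statistics (MMD-UOT, MMD, or ϵKL-UOT) evaluated on two sample sets.

```python
# Hedged sketch of a generic permutation two-sample test.
import numpy as np

def permutation_test(X, Y, stat_fn, n_perm=100, alpha=0.05, rng=None):
    rng = np.random.default_rng(rng)
    observed = stat_fn(X, Y)
    pooled = np.concatenate([X, Y], axis=0)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        Xp, Yp = pooled[idx[:len(X)]], pooled[idx[len(X):]]
        count += stat_fn(Xp, Yp) >= observed
    p_value = (count + 1) / (n_perm + 1)
    return p_value < alpha   # True = reject H0 (the two distributions differ)
```

Test power is then the fraction of rejections over repeated draws of X and Y when the two underlying distributions are indeed different.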

Additional Results. Following (Liu et al., 2020), we consider the task of verifying that the datasets CIFAR-10 (Krizhevsky, 2009) and CIFAR-10.1 (Recht et al., 2018) are statistically different. We follow the same experimental setup as given in (Liu et al., 2020). The training is done on 1,000 images from each dataset, and the test is on 1,031 images. The experiment is repeated 10 times, and the average test power is compared with the results reported in (Liu et al., 2020) for the popular baselines: ME (Chwialkowski et al., 2015; Jitkrittum et al., 2016), SCF (Chwialkowski et al., 2015; Jitkrittum et al., 2016), C2ST-S (Lopez-Paz & Oquab, 2017), and C2ST-L (Cheng & Cloninger, 2019). We repeat the experiment following the same setup for the MMD and ϵKL-UOT baselines. The chosen hyperparameters $(\lambda, \epsilon)$ for the 10 different experimental runs of ϵKL-UOT are (0.1, 0.1), (1, 0.1), (1, 0.1), (1, 0.01), (1, 0.1), (1, 0.1), (1, 0.1), (0.1, 0.1), (1, 0.1), (1, 0.1) and (1, 0.1). The chosen $(\lambda, \sigma^2)$ for the 10 different experimental runs of MMD-UOT are (0.1, median), (1, 60), (10, 100), (0.1, 80), (0.1, 40), (0.1, 40), (0.1, 40), (1, median), (0.1, 80) and (1, 40). Table 8 shows that the proposed MMD-UOT obtains the highest test power.

C.3 Single-Cell RNA Sequencing

scRNA-seq helps us understand how the expression profile of cells changes over stages (Schiebinger et al., 2019). A population of cells is represented as a measure on the gene-expression space, and as cells grow/divide/die, the measure evolves over time. While scRNA-seq records such a measure at a time stamp, it does so by destroying the cells (Schiebinger et al., 2019). Thus, it is impossible to monitor how the cell population evolves continuously over time. In fact, only a few measurements at discrete timesteps are generally taken due to the cost involved.

We perform experiments on the Embryoid Body (EB) single-cell dataset (Moon et al., 2019). The Embryoid Body dataset comprises data at 5 timesteps with sample sizes of 2381, 4163, 3278, 3665 and 3332, respectively.

The MMD barycenter interpolating between the measures $s_0, t_0$ has the closed-form solution $\frac{1}{2}(s_0 + t_0)$. For evaluating the performance at timestep $t_i$, we select the hyperparameters based on the task of predicting for $\{t_1, t_2, t_3\} \setminus t_i$. We use the IMQ kernel $k(x, y) = \left(1 + \frac{\|x-y\|^2}{K^2}\right)^{-0.5}$. The $\lambda$ hyperparameter for the validation of MMD-UOT is chosen from $\{0.1, 1, 10\}$ and $K^2$ is chosen from $\{10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, \text{median}\}$, where median denotes the median of $\{0.5\|x - y\|^2 : x, y \in \mathcal{D},\ x \neq y\}$ over the training dataset ($\mathcal{D}$). The chosen $(\lambda, K^2)$ for timesteps $t_1, t_2, t_3$ are (1, 0.1), (1, median) and (1, median), respectively. The $\lambda$ hyperparameter for the validation of ϵKL-UOT is chosen from $\{0.1, 1, 10\}$ and $\epsilon$ is chosen from $\{10^{-5}, 10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}\}$. The chosen $(\lambda, \epsilon)$ for timesteps $t_1, t_2, t_3$ are (10, 0.01), (1, 0.1) and (1, 0.1), respectively. In Table 9, we compare against the additional OT-based baselines $\bar{W}_1$, $\bar{W}_2$, and ϵOT.
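The median heuristic used above can be computed as in the following small sketch (our illustration, not the released code; it assumes the pairwise distance matrix of $\mathcal{D}$ fits in memory, otherwise one would subsample):

```python
# Hedged sketch of the median heuristic for the IMQ bandwidth K^2:
# the median of {0.5 * ||x - y||^2 : x != y in D}.
import numpy as np

def median_heuristic(D):
    d2 = ((D[:, None, :] - D[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    off_diag = d2[~np.eye(len(D), dtype=bool)]            # drop the x == y pairs
    return np.median(0.5 * off_diag)
```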

Table 9: Additional OT-based baselines for the two-sample test: average test power (between 0 and 1; higher is better) on MNIST. MMD-UOT obtains the highest average test power at all sample sizes N, even with the additional baselines.

| N | $\bar{W}_1$ | $\bar{W}_2$ | ϵOT | MMD-UOT |
|:---:|:---:|:---:|:---:|:---:|
| 100 | 0.111 | 0.099 | 0.108 | 0.154 |
| 200 | 0.232 | 0.207 | 0.191 | 0.333 |
| 300 | 0.339 | 0.309 | 0.244 | 0.588 |
| 400 | 0.482 | 0.452 | 0.318 | 0.762 |
| 500 | 0.596 | 0.557 | 0.356 | 0.873 |
| 1000 | 0.805 | 0.773 | 0.508 | 0.909 |


Figure 8: (Best viewed in color) The t-SNE plots of the source and target embeddings learnt for the MNIST to USPS domain adaptation task. Different cluster colors imply different classes. The quality of the learnt representations can be judged based on the separation between clusters. The clusters obtained by MMD-UOT seem better separated (for example, the red and the cyan-colored clusters).

C.4 Domain Adaptation In JUMBOT Framework

The experiments are performed with the same seed as used by JUMBOT. For the experiment on the Digits dataset, the chosen hyperparameters for MMD-UOT are $K^2 = 10^{-2}$ in the IMQ kernel $k(x, y) = \left(1 + \frac{\|x-y\|^2}{K^2}\right)^{-0.5}$ and $\lambda = 100$. In Figure 8, we also compare the t-SNE plots of the embeddings learnt with the MMD-UOT- and ϵKL-UOT-based losses. The clusters formed with the proposed MMD-UOT seem better separated (for example, the red and the cyan-colored clusters). For the experiment on the Office-Home dataset, the chosen hyperparameters for MMD-UOT are $\lambda = 100$ and the IMQ kernel with $K^2 = 0.1$. For the VisDA-2017 dataset, the chosen hyperparameters for MMD-UOT are $\lambda = 1$ and the IMQ kernel with $K^2 = 10$.

For the validation phase on the Digits and Office-Home datasets, we choose $\lambda$ from the set $\{1, 10, 100\}$ and $K^2$ from the set $\{0.01, 0.1, 10, 100, \text{median}\}$. For the validation phase on VisDA, we choose $\lambda$ from the set $\{1, 10, 100\}$ and $K^2$ from the set $\{0.1, 10, 100\}$.


Figure 9: The attention maps corresponding to each of the four prompts for the baseline (PLOT) and the proposed MMD-UOT-based prompt learning. The prompts learnt using the proposed MMD-UOT capture diverse attributes for identifying the cat (Oxford-Pets dataset): lower body, upper body, image background and the area near the mouth.


Figure 10: The attention maps corresponding to each of the four prompts for the baseline (PLOT) and the proposed MMD-UOT-based prompt learning. The prompts learnt using the proposed MMD-UOT capture diverse attributes for identifying the dog (Oxford-Pets dataset): the forehead and the nose, the right portion of the face, the head along with the left portion of the face, and the ear.

C.5 Prompt Learning

Let $F = \{f_m\}_{m=1}^{M}$ denote the set of visual features for a given image and $G_r = \{g_n\}_{n=1}^{N}$ denote the set of textual prompt features for class $r$. Following the setup in the PLOT baseline, an OT distance is computed between empirical measures over 49 image features and 4 textual prompt features, with a cosine-similarity-based cost. Let $d_{OT}(x, r)$ denote the OT distance between the visual features of image $x$ and the prompt features of class $r$. The prediction probability is given by $p(y = r|x) = \frac{\exp\left((1 - d_{OT}(x, r))/\tau\right)}{\sum_{r'=1}^{T} \exp\left((1 - d_{OT}(x, r'))/\tau\right)}$, where $T$ denotes the total number of classes and $\tau$ is the temperature of the softmax. The textual prompt embeddings are then optimized with the cross-entropy loss. Additional results on the Oxford-Pets (Parkhi et al., 2012) and UCF101 (Soomro et al., 2012) datasets are shown in Table 11.
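The prediction rule above amounts to a softmax over the similarity scores $(1 - d_{OT}(x, r))/\tau$; the following small sketch (illustrative names, not the PLOT code) shows this step in isolation.

```python
# Hedged sketch of the class-probability computation from transport distances.
import numpy as np

def class_probabilities(d_ot, tau=0.01):
    """d_ot: array of shape (T,) with transport distances to each of the T classes."""
    logits = (1.0 - d_ot) / tau
    logits -= logits.max()          # for numerical stability
    p = np.exp(logits)
    return p / p.sum()
```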

| Dataset | 1 | 2 | 4 | 8 | 16 |
|:---|:---:|:---:|:---:|:---:|:---:|
| EuroSAT | (imq2, $10^{-3}$, 500) | (imq1, $10^{4}$, $10^{3}$) | (imq1, $10^{-2}$, 500) | (imq1, $10^{4}$, 500) | (rbf, 1, 500) |
| DTD | (imq1, $10^{-2}$, 10) | (rbf, 100, 100) | (imq2, $10^{-2}$, 10) | (rbf, $10^{-2}$, 10) | (rbf, 0.1, 1) |
| Oxford-Pets | (imq2, 0.01, 500) | (rbf, $10^{-3}$, 10) | (imq, 1, 10) | (imq1, 0.1, 10) | (imq1, 0.01, 1) |
| UCF101 | (rbf, 1, 100) | (imq2, 10, 100) | (rbf, 0.01, 1000) | (rbf, $10^{-4}$, 10) | (rbf, 100, $10^{3}$) |

Table 10: Hyperparameters (kernel type, kernel hyperparameter, λ) for the prompt learning experiment.

| Dataset | Method | 1 | 2 | 4 | 8 | 16 |
|:---|:---|:---:|:---:|:---:|:---:|:---:|
| EuroSAT | PLOT | 54.05 ± 5.95 | 64.21 ± 1.90 | 72.36 ± 2.29 | 78.15 ± 2.65 | 82.23 ± 0.91 |
| | Proposed | 58.47 ± 1.37 | 66.0 ± 0.93 | 71.97 ± 2.21 | 79.03 ± 1.91 | 83.23 ± 0.24 |
| DTD | PLOT | 46.55 ± 2.62 | 51.24 ± 1.95 | 56.03 ± 0.43 | 61.70 ± 0.35 | 65.60 ± 0.82 |
| | Proposed | 47.27 ± 1.46 | 51.0 ± 1.71 | 56.40 ± 0.73 | 63.17 ± 0.69 | 65.90 ± 0.29 |
| Oxford-Pets | PLOT | 87.49 ± 0.57 | 86.64 ± 0.63 | 88.63 ± 0.26 | 87.39 ± 0.74 | 87.21 ± 0.40 |
| | Proposed | 87.60 ± 0.65 | 87.47 ± 1.04 | 88.77 ± 0.46 | 87.23 ± 0.34 | 88.27 ± 0.29 |
| UCF101 | PLOT | 64.53 ± 0.70 | 66.83 ± 0.43 | 69.60 ± 0.67 | 74.45 ± 0.50 | 77.26 ± 0.64 |
| | Proposed | 64.2 ± 0.73 | 67.47 ± 0.82 | 70.87 ± 0.48 | 74.87 ± 0.33 | 77.27 ± 0.26 |
| Avg. acc. | PLOT | 63.16 | 67.23 | 71.66 | 75.42 | 78.08 |
| | Proposed | 64.38 | 67.98 | 72.00 | 76.08 | 78.67 |

Table 11: Additional prompt learning results. Average and standard deviation (over 3 runs) of accuracy (higher is better) on the k-shot classification task, shown for different values of shots (k) in the state-of-the-art PLOT framework. The proposed method replaces OT with MMD-UOT in PLOT, keeping all other hyperparameters the same. The results of PLOT are taken from their paper (Chen et al., 2023).

Following the PLOT baseline, we use the last-epoch model. The authors empirically found that learning 4 prompts with the PLOT method gave the best results. In our experiments, we keep the number of prompts and the other neural network hyperparameters fixed; we only choose $\lambda$ and the kernel hyperparameters for prompt learning using MMD-UOT. For this experiment, we also validate the kernel type. Besides RBF, we consider two kernels belonging to the IMQ family: $k(x, y) = \left(1 + \frac{\|x-y\|^2}{K^2}\right)^{-0.5}$ (referred to as imq1) and $k(x, y) = \left(K^2 + \|x - y\|^2\right)^{-0.5}$ (referred to as imq2). We choose $\lambda$ from $\{10, 100, 500, 1000\}$ and the kernel hyperparameter ($K^2$ or $\sigma^2$) from $\{10^{-3}, 10^{-2}, 10^{-1}, 1, 10, 10^{2}, 10^{3}\}$. The chosen hyperparameters are included in Table 10.
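For clarity, the three kernel families validated here can be written as in the following sketch (our reading of the definitions above; names are illustrative):

```python
# Hedged sketch of the rbf, imq1, and imq2 kernels used for validation.
import numpy as np

def sq_dists(x, y):
    return ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)

def rbf(x, y, sigma2):    # k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))
    return np.exp(-sq_dists(x, y) / (2 * sigma2))

def imq1(x, y, K2):       # k(x, y) = (1 + ||x - y||^2 / K^2)^(-0.5)
    return (1.0 + sq_dists(x, y) / K2) ** -0.5

def imq2(x, y, K2):       # k(x, y) = (K^2 + ||x - y||^2)^(-0.5)
    return (K2 + sq_dists(x, y)) ** -0.5
```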