
Partial Optimal Transport For Support Subset Selection

Bilal Riaz (bilalria@udel.edu), Department of Electrical & Computer Engineering, University of Delaware
Yüksel Karahan (ykarahan@udel.edu), Department of Electrical & Computer Engineering, University of Delaware
Austin J. Brockmeier (ajbrock@udel.edu), Department of Electrical & Computer Engineering and Department of Computer & Information Sciences, University of Delaware

Reviewed on OpenReview: https://openreview.net/forum?id=75CcopPxIr

Abstract

In probabilistic terms, optimal transport aims to find a joint distribution that couples two distributions and minimizes the cost of transforming one distribution to another. Any feasible coupling necessarily maintains the support of both distributions. However, maintaining the entire support is not ideal when only a subset of one of the distributions, namely the source, is assumed to align with the other target distribution. For these cases, which are common in machine learning applications, we study the semi-relaxed partial optimal transport problem that relaxes the constraints on the joint distribution allowing it to under-represent a subset of the source by over-representing other subsets of the source by a constant factor. In the discrete distribution case, such as in the case of two samples from continuous random variables, optimal transport with the relaxed constraints is a linear program. When sufficiently relaxed, the solution has a source marginal with only a subset of its original support.

We investigate the scaling path of solutions, specifically the relaxed marginal distribution for the source, across different relaxations and show that it is distinct from the solutions of penalty-based semi-relaxed unbalanced optimal transport problems and fully-relaxed partial optimal transport, which have previously been explored. We demonstrate the usefulness of this support subset selection in applications such as color transfer, partial point cloud alignment, and semi-supervised machine learning, where part of the data is curated to have reliable labels and another part is unlabeled or has unreliable labels. Our experiments show that optimal transport under the relaxed constraint can improve the performance of these applications by allowing for more flexible alignment between distributions.

1 Introduction

Measuring, and subsequently minimizing, the dissimilarity between two distributions or data samples are ubiquitous tasks in machine learning. Recently, the theory of optimal transport and the family of Wasserstein distances have seen applications across the spectrum of machine learning problems including computer vision (Solomon et al., 2014; Kolouri et al., 2017; Rabin et al., 2011; Garg et al., 2020), generative modeling (Arjovsky et al., 2017; Gulrajani et al., 2017; Salimans et al., 2018; Genevay et al., 2018; Tolstikhin et al., 2018; Kolouri et al., 2018; Deshpande et al., 2018; Rout et al., 2022; Korotin et al., 2023; Mokrov et al., 2021), natural language processing (Xu et al., 2018), and domain adaptation (Kirchmeyer et al., 2022).

Optimal transport is widely applicable since it combines the statistical and geometric aspects of data and provides a correspondence to couple two samples or distributions.

However, a limitation of standard optimal transport is the strict constraint on the complete transfer of mass between the two distributions being compared (Frogner et al., 2015). This can be problematic in cases where such a transfer is not necessary or desirable, such as when dealing with distributions with different support, over- or under-representation, or the presence of outliers in a portion of the data. In order to deal with these scenarios, partial optimal transport problems (Rubner et al., 1997; Figalli, 2010; Caffarelli & McCann, 2010; Bonneel & Coeurjolly, 2019; Chapel et al., 2020) have been proposed. Partial optimal transport relaxes the marginal constraints on the transport plan to inequalities, allowing transportation plans that cover only a fraction of the total mass. Similarly, unbalanced optimal transport corresponds to the case of unbalanced masses, where only a portion of the mass is transported. One or both of the marginal constraints can also be relaxed by divergence-based regularizations, such as the Kullback-Leibler divergence, the total-variation distance (equivalent to the ℓ1-norm of the difference in mass vectors in the discrete case), or the squared ℓ2-norm (Benamou, 2003; Chizat et al., 2018; Blondel et al., 2018; Peyré & Cuturi, 2019; Séjourné et al., 2019; Chapel et al., 2021). The solution of partial or unbalanced optimal transport often selects a subset (also known as an active region) of the support. This property is exploited in machine learning (Chapel et al., 2020; 2021; Phatak et al., 2023), including the partial Wasserstein covering problem, which has applications in active learning (Kawano et al., 2022). In practice, a key limiting factor is the scalability of solving the linear programs corresponding to partial, unbalanced, or standard optimal transport for large data sets. While efficient gradient-based methods cannot be applied to linear programs directly, a number of regularization-based remedies, including entropic (Cuturi, 2013; Cuturi & Peyré, 2018), quadratic, and group-LASSO regularization (Flamary et al., 2016; Blondel et al., 2018), have been shown to give approximate solutions. In particular, the Sinkhorn algorithm provides a solution to the entropically regularized standard optimal transport problem. The Sinkhorn algorithm has been widely applied due to its simple implementation consisting of alternating projections onto the feasible sets of the marginal constraints. In the work by Chizat et al. (2018), Dykstra's algorithm¹ is used to solve the entropically regularized partial optimal transport problem. Similarly, Sinkhorn-like iterations are used for entropically regularized unbalanced optimal transport in the work by Frogner et al. (2015), and the work by Séjourné et al. (2019) adapts the Sinkhorn algorithm via asymmetric proximal operators to handle a variety of divergence-based relaxations. Scalability is even more important in cases where solutions at different levels of relaxation are sought. Recent works have proposed algorithms for computing the entire scaling or regularization path of solutions for fully-relaxed partial optimal transport (Phatak et al., 2023) and unbalanced optimal transport problems with fully or semi-relaxed marginal constraints (Chapel et al., 2021).

In this paper, we study a discrete case of partial optimal transport (Figalli, 2010; Caffarelli & McCann, 2010), which is posed as a linear program, where the solution is a joint distribution with one fixed marginal and one marginal that is constrained to be pointwise less than or equal to a constant factor c ≥ 1 times the original, typically uniform, marginal. The generalized form of this constraint, where each point has its own capacity factor, was proposed in the work by Rabin et al. (2014). Initially, we propose an entropically regularized version to efficiently find an approximate solution, which is equivalent to a specific case within the framework covered in the work of Séjourné et al. (2019), and which we solve with an algorithm that combines Sinkhorn-like projections with an accelerated proximal gradient method.

As the regularized solution is not sparse, and sparsity of the transport map is essential for support subset selection, we adopt an inexact Bregman proximal point method (Xie et al., 2020) to yield solutions closer to the original, unregularized, linear program. The work by Xie et al. (2020) is based on the observation that solving the entropically regularized optimal transport problem is equivalent to a Bregman proximal point evaluation with the Kullback-Leibler divergence as the proximal function. As a proximal point method that uses exact proximal point evaluations is computationally expensive, an inexact proximal point evaluation, where only a few inner Sinkhorn iterations are performed, makes the algorithm efficient. In our case, the computational complexity is on the same order as the accelerated proximal gradient method, but it returns solutions much closer to the linear program for the semi-relaxed partial optimal transport problem. While other regularization-based methods may induce sparse support, the regularized solution will generally be distinct from the linear program's solutions.

¹Dykstra's algorithm can find solutions at the intersection of convex, not necessarily affine, sets.

The contributions of this paper are the following:

  • We motivate and study support subset selection (SS), a specific formulation of a semi-relaxed partial optimal transport problem for selecting a subset of a source distribution, parameterized in terms of a single scalar c ≥ 1 for a fixed target distribution.

  • We study the solutions obtained along the scaling path for various values of c, and compare the solution path to fully-relaxed partial and divergence-based semi-relaxed unbalanced optimal transport problems.

  • We develop an accelerated proximal gradient method-based algorithm to solve the entropically regularized version and adapt the inexact Bregman proximal method-based approach for optimal transport (POT) (Xie et al., 2020) to mitigate the effects of entropic regularization detailing an algorithm SS-Bregman that yields solutions close to the linear program formulation for subset selection and is as scalable as the Sinkhorn algorithm.

  • We apply SS-Bregman to applications including color adaptation, partial distribution alignment, partial point cloud registration problems, and positive-unlabeled learning (Bekker & Davis, 2020).

  • We incorporate the subset selection-based approach into a semi-supervised loss function for training a neural network-based classifier, which computes the optimal transport based on the learning representation.

2 Methodology

In Section 2.1, relevant preliminaries related to discrete optimal transport, along with the formulation of the subset selection problem, are discussed. In Section 2.3, the entropically regularized support subset selection problem and an algorithm to solve it are discussed. In Section 2.4, an inexact Bregman proximal point method to better approximate the solution of the unregularized support selection problem is detailed.

Notation: The set of the first n natural numbers, {1, 2, . . . , n}, is denoted by [n]. The set of integers is denoted by $\mathbb{Z}$. The ceiling function defined on real numbers $x \in \mathbb{R}$ is $\lceil x\rceil = \min\{n \in \mathbb{Z} : n \geq x\}$. The floor function defined on real numbers $x \in \mathbb{R}$ is $\lfloor x\rfloor = \max\{n \in \mathbb{Z} : n \leq x\}$. The n-dimensional real vector space is denoted by $\mathbb{R}^n$. Vectors are typeset in lowercase bold (x); matrices are in uppercase bold (X); and bold is dropped when elements are referenced by subscripts ($x_i$, $X_{ij}$). When needed for clarity, elements will be referenced by subscripts on square brackets ($[x_1]_i$, $[X_2]_{ij}$). The set of non-negative vectors in $\mathbb{R}^n$, known as the non-negative orthant, is denoted by $\mathbb{R}^n_+$. The n-dimensional vector with all elements equal to unity is denoted by $\mathbf{1}_n$ and the m-by-n matrix with all unity elements is denoted by $\mathbf{1}_{m\times n}$. For vectors and matrices, the symbol ≼ denotes element-wise less than or equal to, and ≽ denotes element-wise greater than or equal to. The set denoted by $\Delta_n = \{x \in \mathbb{R}^n_+ : \sum_{i=1}^n x_i = 1\}$ is the probability simplex. The element-wise product for vectors and matrices is denoted by the ⊙ symbol. The element-wise division for vectors and matrices is denoted by the ⊘ symbol. The diagonal operator is a matrix-valued map $D : \mathbb{R}^n \to \mathbb{R}^{n\times n}$ such that $[D(x)]_{ii} = x_i$ for all $i \in [n]$ and $[D(x)]_{ij} = 0$ for all $i \neq j \in [n]$. For $x \in \mathbb{R}^n$, the ℓ1, ℓ2, and ℓ∞ norms are given by $\|x\|_1 = \sum_i |x_i|$, $\|x\|_2 = (\sum_i |x_i|^2)^{\frac{1}{2}}$, and $\|x\|_\infty = \max_i |x_i|$, respectively. Both the Euclidean inner-product for $x, y \in \mathbb{R}^n$, given by $\sum_i x_i y_i$, and the Frobenius inner-product for $X, Y \in \mathbb{R}^{m\times n}$, given by $\sum_{i=1}^m\sum_{j=1}^n X_{ij}Y_{ij}$, are denoted by ⟨·, ·⟩. The element-wise exponent of a vector or a matrix is denoted by exp(·) and the element-wise logarithm of a vector or a matrix is denoted by log(·). For a matrix $X \in \mathbb{R}^{m\times n}$, its non-negative part is represented by $X^+$ or $[X]^+$, with entries $X^+_{ij} = \max\{X_{ij}, 0\}$ for all $i \in [m]$, $j \in [n]$. Similarly, the non-positive part of a matrix $X \in \mathbb{R}^{m\times n}$ is denoted by $X^-$ or $[X]^-$ and has entries $X^-_{ij} = \min\{X_{ij}, 0\}$ for all $i \in [m]$ and $j \in [n]$. For a vector $x \in \mathbb{R}^n$, $x^+$ and $x^-$ denote the non-negative and non-positive parts, respectively. We denote the indicator function of the singleton set {z} as $\delta_z(x) = \begin{cases}1, & x = z\\ 0, & x \neq z.\end{cases}$ The set of vectors $\{e_i\}_{i=1}^n$ forms the standard basis for $\mathbb{R}^n$, where $[e_i]_i = 1$ and $[e_i]_j = 0$ for $i \neq j$.

2.1 Problem Formulation

We consider the discrete optimal transport between two weighted samples of size m and n corresponding to random variables X ∼ µ defined on $\{x^{(i)}\}_{i=1}^m \subset \mathbb{R}^d$ and Y ∼ ν defined on $\{y^{(j)}\}_{j=1}^n \subset \mathbb{R}^d$, with probability measures $\mu = \sum_{i=1}^m \mu_i\,\delta_{x^{(i)}}$ and $\nu = \sum_{j=1}^n \nu_j\,\delta_{y^{(j)}}$ for probability masses $\mu \in \Delta_m$ (with $\mu_i = \mu(x^{(i)})$, $i \in [m]$) and $\nu \in \Delta_n$ (with $\nu_j = \nu(y^{(j)})$, $j \in [n]$), respectively. Let $d : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}_+$ denote a distance function. In practice, this is often the Euclidean distance metric between the points, $d(x, y) = \|x - y\|_2$. Given 1 ≤ p ≤ ∞, the p-Wasserstein distance (to the p-power) between µ and ν is expressed in terms of the cost matrix M, where $M_{ij} = d^p(x^{(i)}, y^{(j)})$ for all $i \in [m]$, $j \in [n]$ is the cost associated with transporting $x^{(i)}$ to $y^{(j)}$, as

$$\mathcal{W}_p^p(\mu,\nu) := \min_{P \succcurlyeq 0}\;\; \langle P, M\rangle \quad \mathrm{s.t.} \quad P\mathbf{1}_n = \mu,\; P^\top\mathbf{1}_m = \nu, \tag{1}$$

where P is the transport map and $\mathbf{1}_m^\top P\mathbf{1}_n = \mathbf{1}_m^\top\mu = \nu^\top\mathbf{1}_n = 1$. The constraints ensure that any solution P* is a joint distribution that couples the target marginal µ and the source marginal ν. (In the computational optimal transport literature, µ is referred to as the target and ν as the source.) Therefore, any Wasserstein distance requires the complete mass transfer between a fixed source and target.

In partial optimal transport (Figalli, 2010), the marginal equality constraints are replaced by the inequalities $P\mathbf{1}_n \preccurlyeq \mu$, $P^\top\mathbf{1}_m \preccurlyeq \nu$, and the equality $\mathbf{1}_m^\top P\mathbf{1}_n = s$; here $\mu \in \mathbb{R}^m_+$ and $\nu \in \mathbb{R}^n_+$ may not have equal mass, and the transport map need only transport a fraction of the total mass $s \in [0, \min\{\|\mu\|_1, \|\nu\|_1\}]$. Motivated by machine learning scenarios with a trusted target sample of data and an additional source of data which cannot be assumed to be of uniform quality, we focus on the semi-relaxed case, where the constraint on the target is fixed, $P\mathbf{1}_n = \mu$ with $\|\mu\|_1 = 1$, ensuring the total mass constraint, but we relax the constraint on the source, allowing mass to redistribute among the source points, $\nu^* = P^\top\mathbf{1}_m \preccurlyeq c\nu$, where c ≥ 1 is a scaling factor and $\|\nu\|_1 = 1$. The resulting partial optimal transport problem,² which we refer to as subset selection (SS), is

$$\min_{P \succcurlyeq 0}\;\; \langle P, M\rangle \quad \mathrm{s.t.} \quad P\mathbf{1}_n = \mu,\ P^\top\mathbf{1}_m \preccurlyeq c\nu. \tag{2}$$

Let $P^*_c$ denote an optimal solution; then the source's new mass is $\nu^*_c = P^{*\top}_c\mathbf{1}_m$. Since $\mathbf{1}_m^\top\mu = \mathbf{1}_m^\top P^*_c\mathbf{1}_n = 1$, $\|\nu^*_c\|_1 = 1$. Intuitively, this problem allows the new mass of some source points that have relatively lower cost to increase by a factor of c of the original mass, which enables higher cost source points to have less or even zero mass. In other words, due to the total unit mass constraint, the mass increment at one source point results in its decrement at other source points. The subset of the source points selected is $\mathrm{supp}(\nu^*_c)$, where supp(·) indicates the support of a vector, i.e., the indices of the points with non-zero mass. To explore the relaxed constraint set, we consider the case of a uniformly distributed mass $\nu = \frac{1}{n}\mathbf{1}_n$ and express the constraint as $P^\top\mathbf{1}_m \preccurlyeq \frac{1}{L}\mathbf{1}_n$, where $0 < L \leq n$ and $c = \frac{n}{L}$. For a fixed value of L, the set of feasible source marginal distributions forms a polyhedral set $\Xi^{(L)}_n \subseteq \Delta_n$ bounded by linear inequalities parameterized by L. The set of feasible source marginals is defined as $\Xi^{(L)}_n = \{x \in \Delta_n : x \preccurlyeq \frac{1}{L}\mathbf{1}_n\}$.
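For small instances, the SS problem 2 can be solved directly as a linear program. The following is a minimal sketch using SciPy's `linprog` (HiGHS backend); the function name `subset_selection_lp` and the dense constraint-matrix construction are our own illustrative choices, not part of the proposed algorithms.

```python
import numpy as np
from scipy.optimize import linprog

def subset_selection_lp(mu, zeta, M):
    """Solve problem (2): min <P, M> s.t. P 1_n = mu, P^T 1_m <= zeta (e.g., zeta = c*nu), P >= 0."""
    m, n = M.shape
    # Row sums of the row-major vectorization of P must equal mu.
    A_eq = np.kron(np.eye(m), np.ones((1, n)))
    # Column sums of P must be at most zeta.
    A_ub = np.kron(np.ones((1, m)), np.eye(n))
    res = linprog(M.ravel(), A_ub=A_ub, b_ub=zeta, A_eq=A_eq, b_eq=mu,
                  bounds=(0, None), method="highs")
    return res.x.reshape(m, n), res.fun
```

The m·n variables make this direct approach impractical for large samples, which motivates the scalable algorithms in Sections 2.3 and 2.4.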

The sets $\Xi^{(L)}_3$ for different values of L are shown in Figure 1. Using combinatorial reasoning, we deduce the number of vertices of $\Xi^{(L)}_n$ defining the feasible set for the source marginals.

Remark 1. For the support selection partial optimal transport problem 2 with n > 2 and $\nu = \frac{1}{n}\mathbf{1}_n$, by defining $L = \frac{n}{c}$, the inequality constraint can be written as $P^\top\mathbf{1}_m \preccurlyeq \frac{1}{L}\mathbf{1}_n$. Extreme points of $\Xi^{(L)}_n$ can be characterized as follows:

  • For 0 < L ≤ 1, the entire probability simplex $\Delta_n$ is feasible, since in this case the vertices of the probability simplex are the extreme points of the feasible set.

  • For 1 < L < 2 and n > 2, the feasible set $\Xi^{(L)}_n$ has n(n − 1) vertices, which can be written as the convex combinations $\frac{1}{L}e_i + (1 - \frac{1}{L})e_j$, where i, j ∈ [n] and i ≠ j.

  • For L = 2, the feasible set $\Xi^{(2)}_n$ has $\frac{n(n-1)}{2}$ vertices given as $\frac{1}{2}(e_i + e_j)$ for i ≠ j.
  • More generally, the number of extreme points of the feasible set $\Xi^{(L)}_n$ is

$$\frac{n!}{\lfloor L\rfloor!\,\lceil 1-\frac{\lfloor L\rfloor}{L}\rceil!\,(n-\lfloor L\rfloor-\lceil 1-\frac{\lfloor L\rfloor}{L}\rceil)!},$$

with the vertices given as the set of possible multi-set permutations of the vector:

$$\Big[\overbrace{\tfrac{1}{L}\;\;\tfrac{1}{L}\;\;\cdots\;\;\tfrac{1}{L}}^{\lfloor L\rfloor\text{ terms}}\quad 1-\tfrac{\lfloor L\rfloor}{L}\quad \overbrace{0\;\;0\;\;\cdots\;\;0}^{n-1-\lfloor L\rfloor\text{ terms}}\Big]^\top = \Big[\overbrace{\tfrac{c}{n}\;\;\tfrac{c}{n}\;\;\cdots\;\;\tfrac{c}{n}}^{\lfloor\frac{n}{c}\rfloor\text{ terms}}\quad 1-\tfrac{c}{n}\lfloor\tfrac{n}{c}\rfloor\quad \overbrace{0\;\;0\;\;\cdots\;\;0}^{n-1-\lfloor\frac{n}{c}\rfloor\text{ terms}}\Big]^\top. \tag{3}$$

Thus, when L ≥ 2 is an integer, which corresponds to $c \leq \frac{n}{2}$ and $\frac{1}{c}\bmod\frac{1}{n} = 0$, the vertices correspond to a uniform distribution of mass across a support of cardinality $L = \frac{n}{c}$. However, due to the equality constraints on the other marginal, the optimal solution may be a convex combination of these vertices.

²An equivalent set of constraints is $P\mathbf{1}_n \preccurlyeq \mu$, $P^\top\mathbf{1}_m \preccurlyeq c\nu$, $\mathbf{1}_m^\top P\mathbf{1}_n = 1$.
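As a quick sanity check of Remark 1, the following hypothetical helpers (our own, for illustration) build the vertex template of equation 3 and evaluate the multiset-permutation count; for example, n = 3 and L = 1.3 give six vertices, matching Figure 1(a).

```python
import math
import numpy as np

def vertex_template(n, L):
    """One vertex of Xi^(L)_n from equation (3): floor(L) entries of 1/L,
    one remainder entry 1 - floor(L)/L, and zeros elsewhere."""
    fL = math.floor(L)
    return np.array([1.0 / L] * fL + [1.0 - fL / L] + [0.0] * (n - 1 - fL))

def num_vertices(n, L):
    """Number of extreme points of Xi^(L)_n (multiset permutations of the template)."""
    fL = math.floor(L)
    r = math.ceil(1 - fL / L)   # 1 if L is fractional, 0 if L is an integer
    return math.factorial(n) // (math.factorial(fL) * math.factorial(r)
                                 * math.factorial(n - fL - r))

assert num_vertices(3, 1.3) == 6 and num_vertices(3, 2) == 3
```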

Since the amount of mass to be transported is constant, when the scaling parameter c is increased (or equivalently, when the value of L is decreased) starting from c = 1, the behavior of the coupling $P^*_c$ and the marginal $\nu^*_c = (P^*_c)^\top\mathbf{1}_m$ in the transportation problem is affected. This behavior depends on the structure of the cost matrix M and leads to a redistribution of mass, assigning more mass to certain points and less to others. At c = 1, corresponding to standard optimal transport, all source constraints are active, meaning that all constraints in the problem are considered. As the value of c is increased, constraints become inactive; which constraints become inactive depends on the structure of the cost matrix M and the distributions. Once a point's mass goes to zero, it never re-enters the support for larger values of c. These observations follow from the linear nature of the problem. Analogous solution paths have been studied for fully-relaxed optimal transport (Phatak et al., 2023) and fully or semi-relaxed regularized unbalanced optimal transport (Chapel et al., 2021). Once c reaches c*, all inequality constraints are inactive and can be discarded. After this breakpoint c*, any further increments in c do not affect the resulting transport plans. In other words, for c ≥ c*, the transport plan $P^*_c$ remains the same as $P^*_{c^*}$, which is a solution that can be found by a greedy algorithm. The exact value of c* can be determined analytically, based on the possible greedy solutions. The analytical expression for c* depends on the properties of the cost matrix M and the constraints involved,

$$c^* = \max_{j\in[n]} \frac{\sum_i Q_{ij}}{\nu_j}, \tag{4}$$

where the matrix Q is found by nearest neighbor search,

$$Q_{ij} = \begin{cases}\mu_i, & j \in \arg\min_{k\in[n]} M_{ik}\\ 0, & \text{otherwise},\end{cases}\qquad i\in[m],\ j\in[n]. \tag{5}$$

$P^*_{c^*}$ is a greedy solution with non-zero entries taken from Q by taking one non-zero element in each row, breaking ties arbitrarily. Thus, as c varies from 1 to c*, SS varies from standard optimal transport to a nearest-neighbor transport.
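A short NumPy sketch of equations 4 and 5 (the function name `greedy_breakpoint` is ours); it returns c* together with a greedy plan obtained by assigning each target point's mass to its nearest source point.

```python
import numpy as np

def greedy_breakpoint(M, mu, nu):
    """Compute c* of equation (4) and the nearest-neighbor matrix Q of equation (5)."""
    m, n = M.shape
    Q = np.zeros((m, n))
    # Assign each target point's mass mu_i to (one of) its nearest source point(s).
    Q[np.arange(m), M.argmin(axis=1)] = mu
    c_star = np.max(Q.sum(axis=0) / nu)
    return c_star, Q
```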

For more flexibility, a designer can upper-bound the mass assignments to the source points by $\zeta \in \mathbb{R}^n_+$ with $\|\zeta\|_1 \geq 1$ (equality corresponds to standard optimal transport), which adds the flexibility of designing partial optimal transport problems that allow variable ranges of masses for the source points. The upper bound can be expressed in terms of a capacity vector $\kappa \in \mathbb{R}^n_+$ such that $\zeta = \kappa\odot\nu$, as introduced in the work by Rabin et al. (2014). For target distribution µ and source upper-bounding measure $\zeta = \sum_{j=1}^n \zeta_j\,\delta_{y^{(j)}}$, which is not necessarily a probability measure, the support subset selection problem can be stated as

$$\mathcal{S}_p(\mu,\zeta) := \min_{P\succcurlyeq 0}\quad \langle P, M\rangle\quad\text{s.t.}\quad P\mathbf{1}_n = \mu,\ P^\top\mathbf{1}_m \preccurlyeq \zeta, \tag{6}$$

and is related to the p-Wasserstein distance (to the p-power) by $\mathcal{S}_p(\mu,\zeta)\big|_{\zeta=c\nu} = \mathcal{S}_p(\mu, c\nu) \leq \mathcal{W}_p^p(\mu,\nu)$ for c ≥ 1.

2.2 Relation To Prior Work

While we consider a purely linear program consisting of a semi-relaxed partial optimal transport, prior work includes fully-relaxed partial optimal transport and a variety of non-linear approaches for fully or semi-relaxed partial and unbalanced optimal transport.


![Figure 1](5_image_0.png)

Figure 1: The feasible set $\Xi^{(L)}_3$ of the source's new marginal distribution given a uniform distribution ν for different values of L. For 0 < L ≤ 1 the whole probability simplex is feasible. (a) L = 1.3; six vertices exist for 1 < L < 2. (b) L = 2 yields 3 vertices. (c) L = 2.2; 2 < L < 3 also gives 3 vertices. (d) L = 3 yields a singleton set corresponding to the original uniform distribution.

2.2.1 Divergence-Based Semi-Relaxed Partial And Unbalanced Optimal Transport

Many relaxed approaches for unbalanced and partial optimal transport penalize the divergence of the marginal from the source and/or target distribution (Rabin et al., 2014; Frogner et al., 2015; Chizat et al., 2018; Séjourné et al., 2019). Many of these also use entropic regularization for scalable algorithms, since entropic regularization of the joint distribution defining the transport plan can be cast as the Kullback-Leibler divergence between the joint and the product of the marginals. The works by Chizat et al. (2018) and Séjourné et al. (2019) provide an extensive framework for entropic regularization and divergence-based relaxations. The latter work (Séjourné et al., 2019) mentions the adjustments necessary to handle an asymmetric marginal penalty for the semi-relaxed case. The Kullback-Leibler divergence is a member of the wider family of f-divergences that have been adopted in relaxed and semi-relaxed optimal transport (Chizat et al., 2018; Séjourné et al., 2019). In the discrete distribution case $p, q \in \Delta_n$, these divergences can be expressed as $\mathcal{D}_\varphi(p\|q) := \sum_{i=1}^n q_i\varphi(\frac{p_i}{q_i})$, where φ is the generating function of the divergence. Notably, $\varphi(r) = \mathrm{KL}(r) := r\log r - r + 1$ is the generating function for the KL divergence, and $\varphi(r) = \mathrm{TV}(r) := \frac{1}{2}|r - 1|$ is the generating function for total variation, which in the discrete case is equivalent to an ℓ1-norm based distance: $\mathcal{D}_{\mathrm{TV}}(p\|q) = \frac{1}{2}\sum_{i=1}^n q_i|\frac{p_i}{q_i} - 1| = \frac{1}{2}\|p - q\|_1$.
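As a small illustration of the generator-function view (the helper names are our own; distributions are assumed strictly positive so the ratios are well defined):

```python
import numpy as np

def f_divergence(p, q, phi):
    """D_phi(p || q) = sum_i q_i * phi(p_i / q_i) for discrete p, q with q > 0."""
    return float(np.sum(q * phi(p / q)))

kl_gen = lambda r: r * np.log(r) - r + 1     # generator of the KL divergence
tv_gen = lambda r: 0.5 * np.abs(r - 1)       # generator of total variation

p = np.array([0.2, 0.3, 0.5])
q = np.array([0.4, 0.4, 0.2])
# The TV generator recovers half the l1 distance between the mass vectors.
assert np.isclose(f_divergence(p, q, tv_gen), 0.5 * np.abs(p - q).sum())
```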

The family of semi-relaxed optimal transport problems with a divergence penalty weighted by $\frac{1}{\rho}$, ρ > 0, on the marginal is

$$\mathcal{SRW}_\varphi^{(\rho)}(\mu,\nu) := \min_{P\succcurlyeq 0}\quad \langle P, M\rangle + \frac{1}{\rho}\mathcal{D}_\varphi(P^\top\mathbf{1}_m\,\|\,\nu)\quad\text{s.t.}\quad P\mathbf{1}_n = \mu. \tag{7}$$

The choice $\varphi(r) = \imath_{[0,c]}(r) := \begin{cases}0, & r\in[0,c]\\ \infty, & \text{otherwise}\end{cases}$ yields a range constraint to the interval [0, c] for the ratio of the source marginal (Chizat et al., 2018; Séjourné et al., 2019), which is equivalent to $\mathcal{S}_p(\mu, c\nu)$ for any value of ρ and c ≥ 1: $\mathcal{SRW}_{\imath_{[0,c]}}^{(\rho)}(\mu,\nu) = \mathcal{S}_p(\mu, c\nu)$.

More generally, one can consider penalties which are not f-divergences, such as those based on the ℓ∞-norm, $\mathcal{D}_{\ell_\infty}(P^\top\mathbf{1}_m\|\nu) := \|(P^\top\mathbf{1}_m)\oslash\nu - \mathbf{1}_n\|_\infty$, and the squared ℓ2-norm, $\mathcal{D}_{\ell_2}(P^\top\mathbf{1}_m\|\nu) := \|P^\top\mathbf{1}_m - \nu\|_2^2$ (Benamou, 2003; Blondel et al., 2018; Chapel et al., 2021). The work by Chapel et al. (2021) details a regularization path algorithm for finding the breakpoints in terms of ρ of the piecewise linear related solutions along the path, and for efficiently calculating the solutions via rank-1 updates of the matrix inverse required to solve the sequence of non-negative, penalized linear regression problems.

Solutions to all of the penalty forms can also be found via equivalent constraint-based optimizations with the constraint $\mathcal{D}_\varphi(P^\top\mathbf{1}_m\,\|\,\nu) \leq a$, where a ≥ 0. In particular, for c ≥ 2, $\mathcal{S}_p(\mu, c\nu)$, the SS problem 2, is equivalent to the constraint-based optimization with the ℓ∞-norm based penalty by letting a = c − 1, since

$$\|(P^\top\mathbf{1}_m)\oslash\nu - \mathbf{1}_n\|_\infty \leq a \implies \forall j\in[n],\quad -a\nu_j \leq [P^\top\mathbf{1}_m]_j - \nu_j \leq a\nu_j$$
$$\implies \forall j\in[n],\quad (1-a)\nu_j \leq [P^\top\mathbf{1}_m]_j \leq (1+a)\nu_j \implies (2-c)\nu \preccurlyeq P^\top\mathbf{1}_m \preccurlyeq c\nu,$$

where the lower bound is non-positive for c ≥ 2. However, for 1 < c < 2 or for other choices of $\mathcal{D}_\varphi$, the feasible set of the marginal differs from that of the linear program.

The work by Rabin et al. (2014) presents a semi-relaxed optimal transport that combines an additional ℓ1-norm penalty on the capacity's deviation with an inequality constraint on the marginal, which ensures the problem in terms of the transport plan P stays linear,

$$\min_{\substack{P\succcurlyeq 0\\ \kappa\in\mathbb{R}^n_+}}\ \langle P, M\rangle + \frac{1}{\rho}\|\kappa - \mathbf{1}_n\|_1 \quad\text{s.t.}\quad P\mathbf{1}_n = \mu,\ P^\top\mathbf{1}_m \preccurlyeq \kappa\odot\nu,\ \langle\kappa,\nu\rangle\geq 1$$
$$\overset{\zeta=\kappa\odot\nu}{=}\quad \min_{\substack{P\succcurlyeq 0\\ \zeta\in\mathbb{R}^n_+}}\ \langle P, M\rangle + \frac{1}{\rho}\|\zeta\oslash\nu - \mathbf{1}_n\|_1 \quad\text{s.t.}\quad P\mathbf{1}_n = \mu,\ P^\top\mathbf{1}_m \preccurlyeq \zeta,\ \zeta^\top\mathbf{1}_n \geq 1. \tag{8}$$

For a given ρ, there is a value of a ≥ 0 such that the constraint-based optimization problem

$$\min_{\substack{P\succcurlyeq 0\\ \zeta\in\mathbb{R}^n_+}}\quad \langle P, M\rangle\quad\text{s.t.}\quad P\mathbf{1}_n = \mu,\ P^\top\mathbf{1}_m \preccurlyeq \zeta,\ \zeta^\top\mathbf{1}_n \geq 1,\ \|\zeta\oslash\nu - \mathbf{1}_n\|_1 \leq a,$$

has an equivalent solution. The ℓ1-norm based divergence constraint will induce sparsity in the deviations between ζ ⊘ ν and $\mathbf{1}_n$, such that many of the points maintain the corresponding value of ν. In the uniform distribution case $\nu = \frac{1}{n}\mathbf{1}_n$, the solution for ζ has a maximum value of $\frac{a}{n}$, such that the corresponding entry of ζ ⊘ ν equals a, and the vertices of the feasible set for the constraint-based formulations include permutations of the vector

$$\Big[\tfrac{a}{n}\quad \tfrac{1}{n}\quad \tfrac{1}{n}\quad\cdots\quad\tfrac{1}{n}\quad \tfrac{\lceil a\rceil - a}{n}\quad 0\quad\cdots\quad 0\Big]^\top.$$

This can be compared to the vertices of the feasible set for the marginals of the SS problem 2 explored in Remark 1, which have a more uniform distribution of mass. A more uniform mass distribution is motivated by the maximum entropy principle and is desirable in machine learning tasks such as semi-supervised learning, where the goal is to augment the learning based on additional diversity.

By replacing the ℓ1-norm with the ℓ∞-norm, the capacity-based optimization problem 8 can be related to $\mathcal{S}_p(\mu, c\nu)$ for all values of c ≥ 1 by setting a = c − 1, since $\zeta_j$ is only involved as an upper bound (no longer considering the lower bound of $(2-c)\nu_j$) for $[P^\top\mathbf{1}_m]_j = \sum_{i=1}^m P_{ij} \leq \zeta_j \leq c\nu_j$ for all j ∈ [n], yielding the SS problem 2.

2.2.2 Strictly Uniform, Semi-Relaxed Partial Optimal Transport

The work by Chapel et al. (2020), introduced in the context of positive-unlabeled (PU) learning, further constrains the semi-relaxed partial optimal transport problem such that the non-zero source masses must be $\frac{1}{n}$; the PU Wasserstein optimal transport problem is

$$\mathcal{PUW}_p^p(\mu,\nu;s) := \min_{T\succcurlyeq 0}\quad \langle T, M\rangle\quad\text{s.t.}\quad \mathbf{1}_m^\top T\mathbf{1}_n = s,\ T\mathbf{1}_n \preccurlyeq \frac{s}{m}\mathbf{1}_m,\ T^\top\mathbf{1}_m \in \{0,\tfrac{1}{n}\}^n. \tag{9}$$

Due to the constraint that the marginal source masses are in the set $\{0, \frac{1}{n}\}$, this problem is not a linear program and only has a non-empty feasible set when $s \bmod \frac{1}{n} = 0$, since s must be an integer multiple of $\frac{1}{n}$, which is an analogous condition to having a uniform distribution among the support as discussed in Remark 1. If this constraint is relaxed to $T^\top\mathbf{1}_m \preccurlyeq \frac{1}{n}\mathbf{1}_n$, then the problem is equivalent to the semi-relaxed SS problem 2, which by the linearity of the problem may result in a solution satisfying the original constraint $T^\top\mathbf{1}_m \in \{0, \frac{1}{n}\}^n$. To solve this combinatorial problem, the work by Chapel et al. (2020) obtains the solution to a convex minimization problem involving group LASSO regularization and additional dummy points, as in problems for unbalanced optimal transport (Guittet, 2002), to account for dropped mass. The solution to this problem will create a strictly uniform distribution (after renormalization) amongst the selected subset of the source marginal. That is, the group LASSO regularization induces a solution whose marginal is a vertex of the feasible set discussed in Remark 1.

2.2.3 Fully-Relaxed Partial Optimal Transport

While the formulation we adopt for subset selection is only for the source marginal (a semi-relaxed formulation of optimal transport), partial optimal transport formulations can achieve support subset selection on both marginals using a fully-relaxed optimization (Figalli, 2010; Phatak et al., 2023). Adapting the notation in the work by Chapel et al. (2020), the partial optimal transport problem is

$$\mathcal{PW}_p^p(\mu,\nu;s) := \min_{T\succcurlyeq 0}\ \langle T, M\rangle\quad\text{s.t.}\quad T\mathbf{1}_n \preccurlyeq \mu,\ T^\top\mathbf{1}_m \preccurlyeq \nu,\ \mathbf{1}_m^\top T\mathbf{1}_n = s, \tag{10}$$

where s ∈ (0, 1] is the fraction of the mass transported. When the source distribution is uniform, $\nu = \frac{1}{n}\mathbf{1}_n$, after renormalization $P = \frac{1}{s}T$, the feasible set for the source marginal $P^\top\mathbf{1}_m$ consists of permutations of the vector

$$\Big[\overbrace{\tfrac{1}{ns}\quad\tfrac{1}{ns}\quad\cdots\quad\tfrac{1}{ns}}^{ns\text{ terms}}\quad\overbrace{0\quad 0\quad\cdots\quad 0}^{n(1-s)\text{ terms}}\Big]^\top,$$

which are also the vertices of the feasible set of the marginals for the semi-relaxed SS problem 2 explored in Remark 1 with L = ns. We note that solutions for the fully-relaxed problem are more likely to have marginals with this form due to the lack of the equality constraints for the target marginal, as compared to the semi-relaxed SS problem. However, the main difference of the fully-relaxed approaches compared to our semi-relaxed SS approach (or other semi-relaxed approaches) is that points in the target may lose mass or be completely dropped, which is not ideal when the goal is to filter the source distribution for any points similar to the target distribution.

The fully-relaxed optimal transport can be directly related to the divergence-based unbalanced optimal transport problem,

$$\mathcal{FRW}_\varphi^{(\rho)}(\mu,\nu) := \min_{P\succcurlyeq 0}\quad \langle P, M\rangle + \frac{1}{\rho}\Big(\mathcal{D}_\varphi(P\mathbf{1}_n\,\|\,\mu) + \mathcal{D}_\varphi(P^\top\mathbf{1}_m\,\|\,\nu)\Big). \tag{11}$$

There exists a value of ρ ≥ 0 such that the fully-relaxed optimal transport with modified cost matrix $M' = M - \frac{1}{\rho}\mathbf{1}_m\mathbf{1}_n^\top$ and total variation divergence penalties on both marginals will yield the same solution as $\mathcal{PW}_p^p(\mu,\nu;s)$ (Caffarelli & McCann, 2010; Chizat et al., 2018; Séjourné et al., 2019). However, as discussed above, a total variation or ℓ1-based penalty on only one marginal with an equality constraint on the other induces a different solution. For $c = \frac{1}{s} \geq 1$, the fully-relaxed partial optimal transport problem is

$$\begin{array}{llll}
\displaystyle\min_{P\succcurlyeq 0} & \langle P, M\rangle & = & \displaystyle\min_{T\succcurlyeq 0}\;\; c\langle T, M\rangle\\[4pt]
\text{s.t.} & P\mathbf{1}_n \preccurlyeq c\mu,\; P^\top\mathbf{1}_m \preccurlyeq c\nu,\; \mathbf{1}_m^\top P\mathbf{1}_n = 1, & & \text{s.t.}\;\; T\mathbf{1}_n \preccurlyeq \mu,\; T^\top\mathbf{1}_m \preccurlyeq \nu,\; \mathbf{1}_m^\top T\mathbf{1}_n = \tfrac{1}{c},
\end{array} \tag{12}$$

where $T = \frac{1}{c}P$. Based on equation 12, $c\cdot\mathcal{PW}_p^p(\mu,\nu;\frac{1}{c})$ is the cost for an optimal transport plan with fully-relaxed constraints $P\mathbf{1}_n \preccurlyeq c\mu$ and $P^\top\mathbf{1}_m \preccurlyeq c\nu$.

The recent work by Phatak et al. (2023) studies the value of $\omega(s) := \mathcal{PW}_p^p(\mu,\nu;s)$ across all values of s ∈ (0, 1], which is known as the OT-profile as introduced in the work of Figalli (2010). In the work by Phatak et al. (2023), ω(s) is shown to be a piece-wise linear convex function of s, and the entire profile can be computed exactly and approximated efficiently. Furthermore, the derivative of ω with respect to s can be used to find a mass fraction where the partial transport separates inliers from outliers (Phatak et al., 2023). To relate the fully-relaxed to the semi-relaxed partial optimal transport, we consider the unbalanced partial OT-profile function to denote the case where the target marginal is also scaled down by s such that it is wholly transported,

$$\mathcal{PW}_p^p(s\mu,\nu;s) := \min_{T\succcurlyeq 0}\quad \langle T, M\rangle\quad\text{s.t.}\quad T\mathbf{1}_n \preccurlyeq s\mu,\ T^\top\mathbf{1}_m \preccurlyeq \nu,\ \mathbf{1}_m^\top T\mathbf{1}_n = s.$$

Let $c = \frac{1}{s}$; then our proposed subset support cost is

$$\mathcal{S}_p(\mu,\tfrac{1}{s}\nu) = \min_{P\succcurlyeq 0}\ \langle P, M\rangle\quad\text{s.t.}\quad \mathbf{1}_m^\top P\mathbf{1}_n = 1,\ P\mathbf{1}_n \preccurlyeq \mu,\ P^\top\mathbf{1}_m \preccurlyeq \tfrac{1}{s}\nu$$
$$\phantom{\mathcal{S}_p(\mu,\tfrac{1}{s}\nu)} = \frac{1}{s}\min_{T\succcurlyeq 0}\ \langle T, M\rangle\quad\text{s.t.}\quad \mathbf{1}_m^\top T\mathbf{1}_n = s,\ T\mathbf{1}_n \preccurlyeq s\mu,\ T^\top\mathbf{1}_m \preccurlyeq \nu.$$

Thus, $\mathcal{S}_p(\mu,\frac{1}{s}\nu) = \frac{1}{s}\mathcal{PW}_p^p(s\mu,\nu;s)$ and $\frac{1}{c}\mathcal{S}_p(\mu,c\nu) = \mathcal{PW}_p^p(\frac{1}{c}\mu,\nu;\frac{1}{c})$, with optimal solutions related by $P^* = \frac{1}{s}T^*$. Based on this relation, we explore the adoption of the knee-finding algorithms applied to the selection of s (Phatak et al., 2023) to optimize the selection of c in cases where the source differs from the target by the presence of outliers.
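A brute-force numerical check of this relation under the stated assumptions, reusing the hypothetical `subset_selection_lp` sketch from Section 2.1 and an analogous dense linear program for the partial problem 10:

```python
import numpy as np
from scipy.optimize import linprog

def partial_ot_lp(mu, nu, M, s):
    """PW_p^p(mu, nu; s): problem (10) as a dense linear program (for small m, n)."""
    m, n = M.shape
    A_ub = np.vstack([np.kron(np.eye(m), np.ones((1, n))),    # T 1_n <= mu
                      np.kron(np.ones((1, m)), np.eye(n))])   # T^T 1_m <= nu
    A_eq = np.ones((1, m * n))                                 # total mass equals s
    res = linprog(M.ravel(), A_ub=A_ub, b_ub=np.concatenate([mu, nu]),
                  A_eq=A_eq, b_eq=[s], bounds=(0, None), method="highs")
    return res.fun

rng = np.random.default_rng(0)
m, n, s = 5, 7, 0.5
M = rng.random((m, n))
mu, nu = np.full(m, 1 / m), np.full(n, 1 / n)
_, lhs = subset_selection_lp(mu, nu / s, M)        # S_p(mu, (1/s) nu)
rhs = partial_ot_lp(s * mu, nu, M, s) / s          # (1/s) PW_p^p(s mu, nu; s)
assert np.isclose(lhs, rhs)
```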

2.3 Support Subset Selection With Entropic Regularization

The support subset selection problem 6 is a linear program, which can be solved exactly by the simplex method or interior point methods, both of which do not scale well with the dimension of the transport map (Cuturi, 2013). In order to apply efficient gradient-based optimization to linear programs, entropic regularization has been added to linear objective functions (Li & Fang, 1997). In the work by Cuturi (2013), entropic regularization is added to the optimal transport problem to efficiently approximate the Wasserstein distance using Sinkhorn's matrix scaling algorithm (Cuturi, 2013; Sinkhorn, 1964). For a fixed target distribution µ and upper-bounding source measure ζ with mass $\zeta \in \mathbb{R}^n_+$, the proposed entropically regularized support subset selection problem is

$$\mathcal{S}_p^{(\gamma)}(\mu,\zeta) := \min_{P\succcurlyeq 0}\quad \langle P, M\rangle + \gamma\langle P, \log(P) - \mathbf{1}_{m\times n}\rangle\quad\text{s.t.}\quad P\mathbf{1}_n = \mu,\ P^\top\mathbf{1}_m \preccurlyeq \zeta, \tag{13}$$

where γ is the regularization parameter. In the case of a uniform distribution $\nu = \frac{1}{n}\mathbf{1}_n$, the entropically regularized problem is equivalent to using a Kullback-Leibler divergence between the joint and the product of the given marginals with additional identity and range constraints on the solution's marginals (Séjourné et al., 2019), as shown in Appendix D. It is important to mention that the regularization term $\langle P, \log(P) - \mathbf{1}_{m\times n}\rangle$ is the negative entropy, which is 1-strongly convex with respect to the ℓ1 and ℓ2 norms on the feasible set $\{P : P \succcurlyeq 0,\ P\mathbf{1}_n = \mu,\ P^\top\mathbf{1}_m \preccurlyeq \zeta\}$ (Beck, 2017). The Lagrangian of problem 13 is

$$\mathcal{L}(P,\alpha,\beta) = \langle P,\ M + \gamma(\log P - \mathbf{1}_m\mathbf{1}_n^\top) + \alpha\mathbf{1}_n^\top + \mathbf{1}_m\beta^\top\rangle - \langle\alpha,\mu\rangle - \langle\beta,\zeta\rangle, \tag{14}$$

where α, β are the Lagrange multipliers. Note that we have adopted the approach of Cuturi (2013) and do not explicitly enforce the simplex constraint on P , which would lead to the log-sum-exp formulation as in the works by Cuturi & Peyré (2018); Lin et al. (2022); Guminov et al. (2021). Taking the element-wise derivative of L with respect to P and setting it to zero yields

$$\tilde{P}(\alpha,\beta) = D\big(\exp(-\alpha/\gamma)\big)\,\exp(-M/\gamma)\,D\big(\exp(-\beta/\gamma)\big). \tag{15}$$

Substituting P̃ back into the Lagrangian results in the dual problem

$$\min_{\alpha,\beta}\quad \Big\{f(\alpha,\beta) := \gamma\mathbf{1}_m^\top\tilde{P}(\alpha,\beta)\mathbf{1}_n + \langle\alpha,\mu\rangle + \langle\beta,\zeta\rangle\Big\}\quad\text{s.t.}\quad \beta\succcurlyeq 0. \tag{16}$$

The constraint set β ≽ 0 is closed, and its indicator function is defined as

$$\iota_+(\beta) := \begin{cases}0, & \text{for } \beta\succcurlyeq 0\\ \infty, & \text{otherwise.}\end{cases}$$

Therefore, we can convert problem 16 into an unconstrained composite optimization problem,

$$\min_{\alpha,\beta}\quad f(\alpha,\beta) + \iota_+(\beta). \tag{17}$$

Since f(α, β) is convex and $\iota_+(\beta)$ is proper, closed, and convex, we can apply the accelerated proximal gradient algorithm to solve the composite optimization problem. Defining the Gibbs kernel $K = \exp(-\frac{M}{\gamma})$, the partial gradient $\nabla_\beta f(\alpha,\beta)$ is

$$\nabla_\beta f(\alpha,\beta) = \zeta - \exp(-\beta/\gamma)\odot\big(K^\top\exp(-\alpha/\gamma)\big). \tag{18}$$

The proximal projection for the non-negative orthant's indicator function ı+ is computed by setting any negative entries to zero.

Algorithm 1 (SS-Entropic) outlines our accelerated proximal gradient algorithm to solve the dual form of subset selection problem with entropic regularization. Similar to the standard entropically regularized optimal transport problem, the dual variable α is updated with a Sinkhorn-like update at iteration k as

$$\alpha^{(k+1)} = \gamma\log\Big(\big(K\exp(-\beta^{(k)}/\gamma)\big)\oslash\mu\Big). \tag{19}$$

In contrast, β is updated at iteration k using an accelerated proximal gradient update rule (Beck, 2017; Beck & Teboulle, 2009) with the extrapolated point ξ and step size $1/\eta_s^{(k)}$:

$$\beta^{(k+1)} = \Big[\xi^{(k)} - \frac{1}{\eta_s^{(k)}}\nabla_\xi f(\alpha^{(k+1)},\xi^{(k)})\Big]_+ = \Big[\xi^{(k)} - \frac{1}{\eta_s^{(k)}}\Big(\zeta - \exp(-\xi^{(k)}/\gamma)\odot K^\top\exp(-\alpha^{(k+1)}/\gamma)\Big)\Big]_+, \tag{20}$$
$$\xi^{(k+1)} = \beta^{(k+1)} + \frac{t_k - 1}{t_{k+1}}\big(\beta^{(k+1)} - \beta^{(k)}\big),\quad\text{with } t_{k+1} = \frac{1+\sqrt{1+4t_k^2}}{2}, \tag{21}$$

which uses equation 18 to compute the gradient with respect to the variable $\xi^{(k)}$ before applying the proximal operator $[\cdot]_+$. In SS-Entropic, we use a constant step size $\frac{1}{\eta_s^{(k)}} = \gamma$ (since the primal problem is γ-strongly convex and its semi-dual is $\frac{1}{\gamma}$-Lipschitz smooth (Cuturi & Peyré, 2016); see Appendix A for details), but another option is a backtracking line search (Beck, 2017).

By incorporating the update of $\alpha^{(k+1)}$ as in equation 19 directly into the gradient $\nabla_\beta f(\alpha^{(k+1)},\beta^{(k)})$, the algorithm can be written entirely in terms of $\beta^{(k)}$. This shows that SS-Entropic consists of standard accelerated proximal gradient updates and has an O(1/k²) convergence rate. As shown in the work by Beck (2017), the required number of iterations $k_\varepsilon$ to achieve an ε-suboptimal solution of the optimization problem 17 using SS-Entropic is upper-bounded as

$$k_\varepsilon + 1 \leq \sqrt{\frac{2}{\gamma\varepsilon}}\cdot\|\beta^{(i)} - \beta^*\|, \tag{22}$$

where $\beta^{(i)}$ is the initialization and $\beta^*$ is the optimal solution.

If SS-Entropic is allowed to run until convergence, it returns the optimal coupling P*, but in practice, if SS-Entropic does not reach convergence, $\hat{P}^* \in \mathbb{R}^{m\times n}_+$ may violate the primal constraints on its marginals, as these are not ensured by an approximate dual solution. For some applications, a projection of $\hat{P}^*$ to satisfy one or both of the marginal constraints may be required. While not explored in this paper due to the additional computational cost, projection to the feasible set can be done by the fast dual proximal gradient (FDPG) algorithm from the works by Beck & Teboulle (2014) and Beck (2017) in conjunction with Algorithm 2 in the work of Altschuler et al. (2017).

2.4 Support Subset Selection With The Inexact-Bregman Proximal-Point Method

Although the entropic regularization of the coupling distribution enables an efficient approximation of the support subset selection problem 6, the entropic regularization yields denser coupling distributions as compared to the unregularized problem. The denser coupling distributions result in a new marginal mass ν* that is also not sparse, yielding complete support rather than a subset of the source points. Different approaches have been proposed to maintain the computational benefits of entropic regularization while yielding solutions closer to the unregularized problem (Schmitzer, 2019; Xie et al., 2020).

Algorithm 1: (SS-Entropic) Fast proximal gradient algorithm to solve the dual problem 17 of the entropically regularized support subset selection problem 13.
Inputs: Target distribution µ, mass assignment bounding vector ζ, cost matrix M, entropic regularization parameter γ, initial dual variable $\beta^{(i)} \in \mathbb{R}^n_+$, and iteration limit max-iter.
Outputs: $\hat{P}^*$, which approaches the optimal coupling P*.

1. Function EntropicSS(µ, ζ, $\beta^{(i)}$, γ, M, max-iter):
2.   Initialization: $t_0 \leftarrow 1$, $\beta^{(0)} \leftarrow \beta^{(i)}$, $\xi^{(0)} \leftarrow \beta^{(i)}$, $K \leftarrow \exp(-\frac{1}{\gamma}M)$
3.   for k ← 0 to max-iter − 1 do
4.     $\alpha^{(k+1)} \leftarrow \gamma\log\big((K\exp(-\frac{1}{\gamma}\beta^{(k)}))\oslash\mu\big)$
5.     $\beta^{(k+1)} \leftarrow \big[\xi^{(k)} - \gamma\nabla_\xi f(\alpha^{(k+1)},\xi^{(k)})\big]_+$
6.     $t_{k+1} \leftarrow \frac{1+\sqrt{1+4t_k^2}}{2}$
7.     $\xi^{(k+1)} \leftarrow \beta^{(k+1)} + \big(\frac{t_k-1}{t_{k+1}}\big)(\beta^{(k+1)} - \beta^{(k)})$
8.   end
9.   $\alpha^* \leftarrow \alpha^{(k+1)}$; $\beta^* \leftarrow \beta^{(k+1)}$
10.  $\hat{P}^* \leftarrow \hat{P}(\alpha^*,\beta^*) = D\big(\exp(-\frac{1}{\gamma}\alpha^*)\big)\,K\,D\big(\exp(-\frac{1}{\gamma}\beta^*)\big)$
11.  return $\hat{P}^*$, $\alpha^*$, $\beta^*$
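For concreteness, the following is a minimal NumPy sketch of SS-Entropic under the assumption of small, dense inputs; it mirrors the steps above but is not the reference implementation.

```python
import numpy as np

def ss_entropic(mu, zeta, M, gamma, max_iter=1000, beta_init=None):
    """Accelerated proximal gradient on the dual (16): a sketch of Algorithm 1."""
    m, n = M.shape
    K = np.exp(-M / gamma)                                     # Gibbs kernel
    beta = np.zeros(n) if beta_init is None else beta_init.copy()
    xi, t = beta.copy(), 1.0
    for _ in range(max_iter):
        alpha = gamma * np.log((K @ np.exp(-beta / gamma)) / mu)             # update (19)
        grad = zeta - np.exp(-xi / gamma) * (K.T @ np.exp(-alpha / gamma))   # gradient (18) at xi
        beta_next = np.maximum(xi - gamma * grad, 0.0)         # proximal step, step size gamma
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        xi = beta_next + ((t - 1.0) / t_next) * (beta_next - beta)
        beta, t = beta_next, t_next
    P = np.exp(-alpha / gamma)[:, None] * K * np.exp(-beta / gamma)[None, :]
    return P, alpha, beta
```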

In this paper, we follow the work by Xie et al. (2020) and adapt an inexact Bregman proximal gradient for the negative entropy function (Teboulle, 1992) to the partial optimal transport case. The Bregman proximal gradient approach uses a proximal operator where the usual Euclidean distance (Parikh & Boyd, 2014) is replaced with the Bregman divergence associated with a continuously differentiable and strictly convex function (Beck, 2017). In the case of negative entropy, the Bregman divergence is the Kullback–Leibler divergence. Let $\phi(P) = \langle P, \log(P) - \mathbf{1}_{m\times n}\rangle$ denote the negative entropy of a non-negative matrix P; then given a non-negative matrix $P' \in \mathbb{R}^{m\times n}_+$, the Bregman divergence is

$$\mathcal{B}_\phi(P\|P') := \langle P, \log(P\oslash P')\rangle - \langle P, \mathbf{1}_{m\times n}\rangle + \langle P', \mathbf{1}_{m\times n}\rangle. \tag{23}$$

For the subset selection problem 6, the Bregman proximal point evaluated at $P^{(t)}$ is

$$\mathrm{Breg\text{-}prox}_\phi(P^{(t)}) = \operatorname*{arg\,min}_{P\succcurlyeq 0}\quad \langle P, M\rangle + \lambda\mathcal{B}_\phi(P\|P^{(t)})\quad\text{s.t.}\quad P\mathbf{1}_n = \mu,\ P^\top\mathbf{1}_m \preccurlyeq \zeta, \tag{24}$$

where λ is a positive scaling factor. By substituting $\mathcal{B}_\phi(P\|P^{(t)})$ from equation 23 into equation 24 and ignoring the constant term $\langle P^{(t)}, \mathbf{1}_{m\times n}\rangle$, we obtain

$$\mathrm{Breg\text{-}prox}_\phi(P^{(t)}) = \operatorname*{arg\,min}_{P\succcurlyeq 0}\quad \langle P, M - \lambda\log(P^{(t)})\rangle + \lambda\langle P, \log(P) - \mathbf{1}_{m\times n}\rangle\quad\text{s.t.}\quad P\mathbf{1}_n = \mu,\ P^\top\mathbf{1}_m \preccurlyeq \zeta, \tag{25}$$

which corresponds to the entropically regularized subset selection problem 13 with the parameters γ and M in problem 13 replaced by λ and $M - \lambda\log(P^{(t)})$, respectively. Thus, each evaluation of the proximal step in equation 25 amounts to solving an entropically regularized support subset selection problem. It has been shown in the work by Xie et al. (2020) that as t → ∞, the iterations $P^{(t+1)} = \mathrm{Breg\text{-}prox}_\phi(P^{(t)})$ converge to an optimal solution of the original unregularized problem. Therefore, to solve problem 6 we can iteratively invoke SS-Entropic to obtain $P^{(t+1)} = \mathrm{Breg\text{-}prox}_\phi(P^{(t)})$, replacing γ and M in problem 13 by λ and $M - \lambda\log(P^{(t)})$ as in problem 25, respectively.

Algorithm 2 (SS-Bregman) outlines the steps to solve the support subset selection problem using the Bregman proximal-point method, where the inner loop is solved by SS-Entropic. The nested loops of the exact proximal point algorithm can result in high computational costs, but this can be circumvented by choosing a lower number of iterations for the inner loop, stopping it before convergence. This is justified by the observation that the majority of the progress towards optimal solutions by gradient-based methods is achieved during the first few iterations. Recently, an inertial variant of the inexact Bregman proximal point method for optimal transport has been proposed (Yang & Toh, 2022), which may further accelerate the Bregman proximal point method, but to the best of our knowledge there are no guarantees of accelerated convergence.

Algorithm 2: (SS-Bregman) Inexact Bregman proximal point algorithm to approximately solve 6 via 25.
Inputs: Target distribution µ, mass assignment upper-bounding vector ζ, cost matrix M, Bregman scaling parameter λ, initial dual variable $\beta^{(i)} \in \mathbb{R}^n_+$, inner-iteration limit max-inner-iter, and outer-iteration limit max-outer-iter.
Outputs: $\hat{P}^*$.
Initialization: $\beta^{(0)} \leftarrow \beta^{(i)}$, $P^{(0)} \leftarrow \frac{1}{mn}\mathbf{1}_{m\times n}$

1. for t ← 0 to max-outer-iter − 1 do  // repeatedly invoke EntropicSS
2.   $P^{(t+1)}, \alpha^{(t+1)}, \beta^{(t+1)} \leftarrow$ EntropicSS(µ, ζ, $\beta^{(t)}$, λ, $M - \lambda\log(P^{(t)})$, max-inner-iter)
3. end
4. $\hat{P}^* = P^{(t+1)}$
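A minimal sketch of the outer loop, reusing the `ss_entropic` sketch above; the warm-started dual variable and the modified cost follow problem 25 (with the λ-scaled log term as derived above).

```python
import numpy as np

def ss_bregman(mu, zeta, M, lam, max_outer=50, max_inner=100):
    """Inexact Bregman proximal point iterations: a sketch of Algorithm 2 (SS-Bregman)."""
    m, n = M.shape
    P = np.full((m, n), 1.0 / (m * n))     # strictly positive initialization
    beta = np.zeros(n)
    for _ in range(max_outer):
        # Each outer step approximately solves problem (25) with SS-Entropic,
        # warm-starting the dual variable beta from the previous outer iteration.
        P, _, beta = ss_entropic(mu, zeta, M - lam * np.log(P), lam,
                                 max_iter=max_inner, beta_init=beta)
    return P
```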

Due to early stopping, SS-Bregman can yield infeasible solutions that do not satisfy the marginal constraints. In practice, the number of iterations depends on the problem at hand. For applications related to point cloud registration, color transfer, and PU learning, where the number of data points in a data batch is small, the algorithm is allowed to run for a large number of iterations, yielding a highly accurate and feasible solution. In contrast, for applications related to neural network training, where training efficiency is more important than solution accuracy, the algorithm is run for a smaller number of iterations.

2.5 Point Cloud Registration With Subset Selection

Point cloud registration is a well studied problem that tries to find a correspondence of points in one sample (cloud) to another sample (Zhang et al., 2021; Zang et al., 2019). Practical settings include points sampled from the boundaries of 3D images such as captured by LIDAR or the points on the edges of objects in 2D images (Xu et al., 2023). More generally, points correspond to data points in two or more samples. In both of these cases it is useful to consider the case that the two samples exist in different coordinate frames such that there is an affine transformation needed to align the samples before finding the correspondence.

Partial optimal transport for point cloud registration is motivated by cases of occlusion in 2D or 3D imagery.

In the case of data, it could be that one sample has dropped modes either by the nature of the data gathering or generating process. Our proposed subset selection algorithms are applicable to cases where the source is assumed to have a complete or overcomplete representation of the target, i.e., only a subset of the target is available and all target points should be maintained.

We propose to use support subset selection as a loss function for optimizing affine transformations in partial point cloud registration. This can be posed as a bi-level optimization problem

$$\min_\Theta\ \min_{P\succcurlyeq 0}\ \langle P, \hat{M}(\Theta)\rangle\quad\text{s.t.}\quad P\mathbf{1}_n = \mu,\ P^\top\mathbf{1}_m \preccurlyeq c\nu, \tag{26}$$

where Θ = [A, b] are the parameters of the affine transform, and the entries of the cost matrix $\hat{M}(\Theta)$ are $[\hat{M}(\Theta)]_{ij} = \|x_i - \hat{y}^\Theta_j\|_2^2$, i ∈ [m], j ∈ [n], for the fixed target $\{x_i\}_{i=1}^m$ and transformed source $\{\hat{y}^\Theta_j = Ay_j + b\}_{j=1}^n$.

The standard approach to solve bi-level optimization problems in point cloud registration, discussed in the works by Arun et al. (1987) and Myronenko & Song (2010), is an iterative alternating algorithm with two steps, where the sub-problem for the affine transform is solved exactly via ordinary least squares. If the coupling matrix during an iteration is given by P*, the next subproblem is to find the affine transformation parameters Θ = [A, b] that minimize the weighted squared errors $\sum_{i,j}[P^*]_{ij}\|x_i - (Ay_j + b)\|_2^2$. The solution can be found analytically in terms of the source mass vector $\nu^* = P^{*\top}\mathbf{1}_m$, the weighted means of the target point cloud $X = [x_1, \ldots, x_m]^\top$ and the source point cloud $Y = [y_1, \ldots, y_n]^\top$ given by $\bar{x} = X^\top\mu$ and $\bar{y} = Y^\top\nu^*$, and the centered point clouds $\tilde{X} = X - \mathbf{1}_m\bar{x}^\top$ and $\tilde{Y} = Y - \mathbf{1}_n\bar{y}^\top$, as

$$A = \big(\tilde{X}^\top P^*\tilde{Y}\big)\big(\tilde{Y}^\top D(\nu^*)\tilde{Y}\big)^\dagger,\qquad b = \bar{x} - A\bar{y}, \tag{27}$$

where $(\cdot)^\dagger$ indicates the Moore–Penrose pseudo-inverse.

To find a solution to equation 26, we also use an iterative alternating algorithm with two steps. In contrast to Myronenko & Song (2010), instead of using the complete source and target point clouds to obtain affine transformations, during every iteration we draw batches from both the source and target point clouds to obtain the coupling matrix P* and update the affine transformation parameters Θ via a gradient update. The advantage of this mini-batch based approach is an implicit regularization and faster updates for the affine transformation parameters. In the first step, given the affine transformation, we obtain an approximate solution $\hat{P}^*$ to the subset selection problem 6 via SS-Bregman. In the second step, we use automatic differentiation of the cost $\langle\hat{P}^*,\hat{M}(\Theta)\rangle$ and perform a gradient-based update of the parameters Θ = [A, b]. It is important to mention that we follow the approach adopted by Xie et al. (2020) for gradient evaluation. Therefore, during an iteration, once the subset selection map $\hat{P}^*$ is obtained, it is deemed constant for the iteration in consideration, so the gradient is $\nabla_\Theta\langle\hat{P}^*,\hat{M}(\Theta)\rangle = \sum_{i,j}[\hat{P}^*]_{ij}\nabla_\Theta[\hat{M}(\Theta)]_{ij}$. More specifically, we use PyTorch-based automatic differentiation for gradient evaluation (Paszke et al., 2017) and the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 0.5 for gradient-based updates of the parameters Θ. To initialize the affine mapping parameters we simply set A and b to the identity matrix and zero vector, respectively. However, since the bi-level optimization problem is not convex, even though the subset selection problem at each iteration is convex, in practice the algorithm could be run with multiple initializations to obtain the best fit.
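A compact PyTorch sketch of this alternating scheme, assuming X and Y are float tensors and reusing the hypothetical `ss_bregman` sketch; the batch size, iteration counts, and λ are illustrative rather than the paper's exact settings.

```python
import numpy as np
import torch

def register_affine(X, Y, c, n_iters=200, batch=256, lam=0.1, lr=0.5):
    """Mini-batch alternating scheme for the bi-level problem (26): a sketch."""
    d = X.shape[1]
    A = torch.eye(d, requires_grad=True)
    b = torch.zeros(d, requires_grad=True)
    opt = torch.optim.Adam([A, b], lr=lr)
    for _ in range(n_iters):
        ix = torch.randint(len(X), (batch,))
        jx = torch.randint(len(Y), (batch,))
        Xb, Yb = X[ix], Y[jx] @ A.T + b                  # transformed source batch
        M = torch.cdist(Xb, Yb) ** 2                     # squared Euclidean costs
        mu = np.full(batch, 1.0 / batch)
        zeta = np.full(batch, c / batch)
        # Step 1: coupling from SS-Bregman, treated as a constant for this iteration.
        P = torch.as_tensor(ss_bregman(mu, zeta, M.detach().numpy(), lam),
                            dtype=M.dtype)
        # Step 2: gradient update of Theta = [A, b] through the cost matrix.
        loss = (P * M).sum()
        opt.zero_grad(); loss.backward(); opt.step()
    return A.detach(), b.detach()
```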

3 Experimental Results And Discussion

In this section we discuss applications of subset selection. Subsections 3.1 and 3.2 discuss the application of subset selection to toy data sets (point clouds in 2D and 3D, with and without affine transformations) and to color transfer, respectively. Subsection 3.3 discusses subset selection for positive-unlabeled learning tasks.

Subsection 3.4 discusses the application of subset selection for semi-supervised training of neural networks.

All the experiments done in this paper use p = 2 and the Euclidean distance to define the cost matrix. Unless stated otherwise, experiments use $\mu = \frac{1}{m}\mathbf{1}_m$ and $\zeta = \frac{c}{n}\mathbf{1}_n$, where c ≥ 1 is the scaling factor.

3.1 Subset Selection On Point Clouds

Circle and Square: In order to demonstrate the proposed algorithms and highlight the difference between regular optimal transport and subset selection, we consider a target sample of points from a circle centered at the origin and a source sample of points from a 2D uniform distribution also centered at the origin. We allow the scaling parameter c to vary between 1 and 100, obtain the optimal transport plans P* using both SS-Entropic and SS-Bregman, and evaluate the cost values ⟨P*, M⟩. Results for this toy case are shown in Figure 2. It can be observed that as c is increased the transport cost decreases until it saturates to the cost of the greedy solution, $\sum_{i\in[m]}\frac{1}{m}\min_{j\in[n]}M_{ij}$, which corresponds to c = c*, where the transport map can be found by greedily choosing the nearest source point for each target point as in equation 4. Figure 2 also illustrates the transport couplings for c ∈ {1, 1.25, 1.5, 1.75, 2, 4, 8, 16}. A key observation is that the transport maps obtained with SS-Bregman are sparser than the denser maps obtained using SS-Entropic.

Additionally, they achieve smaller values of the transport cost. Therefore, in the subsequent sections we focus on results from SS-Bregman in the main body; results for SS-Entropic are in Appendix B.

![Figure 2](13_image_0.png)

Figure 2: (a) The toy data generated by uniformly sampling m = 100 points from a circle centered at the origin with unit diameter as the target. The source contains n = 80 points generated by sampling uniformly from $[-\frac{1}{2},\frac{1}{2}]\times[-\frac{1}{2},\frac{1}{2}]$. (b) The optimal costs ⟨P*, M⟩ obtained using SS-Entropic and SS-Bregman versus c for c ∈ [1, 100]. SS-Entropic is run for 10,000 iterations with γ = 0.1. SS-Bregman is run with max-outer-iter = 100, max-inner-iter = 100, and λ = 0.1. (c) and (d) Support subset selection results obtained for c ∈ {1, 1.25, 1.5, 1.75, 2, 4, 8, 16} using SS-Entropic and SS-Bregman, respectively.

Fragmented Hypercube with Mode Dropping: We demonstrate the utility of the support subset selection algorithm for partial point cloud registration on a toy case with one dropped mode and an affine transformation between the source and the target. Specifically, we consider data sampled from a uniform distribution over a hypercube (a square in 2D or a cube in 3D), which is then fragmented, where the target has one less fragment than the source. To generate the source we sample n points $\{v_i\}_{i=1}^n$ from the uniform distribution over a unit hypercube centered at the origin, $[-\frac{1}{2},\frac{1}{2}]^d$, d ∈ {2, 3}. These points are then fragmented into $2^d$ fragments according to their quadrant, $\tilde{y}_i = v_i + (d-1)\,\mathrm{sign}(v_i)$, and then offset to obtain the source points $y_i = \tilde{y}_i + 5(d-1)$ for i ∈ [n]. The target data is generated similarly: a sample of $\hat{m} > m$ points $\{z_i\}_{i=1}^{\hat{m}}$ is obtained from $[-\frac{1}{2},\frac{1}{2}]^d$, then points with all negative coordinates are discarded, leaving m points, which are fragmented into $2^d - 1$ fragments to obtain the target set $\{x_i\}_{i=1}^m$ via $x_i = z_i + (d-1)\,\mathrm{sign}(z_i)$ for i ∈ [m]. Examples of the data for 2D and 3D are shown in Figure 3(a) and Figure 4(a), respectively.
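A small NumPy sketch of this data-generation procedure (the function name and seed are our own choices):

```python
import numpy as np

def fragmented_hypercube(n, m_hat, d=2, seed=0):
    """Generate the translated fragmented-hypercube source and the mode-dropped target."""
    rng = np.random.default_rng(seed)
    V = rng.uniform(-0.5, 0.5, size=(n, d))
    Y = V + (d - 1) * np.sign(V) + 5 * (d - 1)      # source: 2^d fragments, then translated
    Z = rng.uniform(-0.5, 0.5, size=(m_hat, d))
    Z = Z[~np.all(Z < 0, axis=1)]                   # drop the all-negative orthant (one mode)
    X = Z + (d - 1) * np.sign(Z)                    # target: 2^d - 1 fragments
    return X, Y
```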

![](14_image_0.png)

Figure 3: Results for affine transformation optimization with subset selection for partial optimal transport. Target points X are sampled from a 2D fragmented hypercube centered at the origin with negative coordinates removed, whereas source points Y are sampled from a translated fragmented hypercube. (a) Source and target sample points. (b) Loss function plotted against Θ = [A, b] updates for scaling parameter c ∈ {1, 1.25, 1.5, 1.75, 2, 4, 8, 16}. (c) Target and transformed source points after application of the optimized affine transformation. Subset selection problems are solved using SS-Bregman with λ = 0.1, max-outer-iter = 100 and max-inner-iter = 500.

Due to the translation by 5(d − 1) of the source point coordinates, direct application of the transport map will not yield a meaningful registration. Instead, we use the bi-level optimization algorithm described in Section 2.5. The target and the transformed source after applying the affine transformation obtained using SS-Bregman for c ∈ {1, 1.25, 1.5, 1.75, 2, 4, 8, 16} are displayed in Figure 3(c) and Figure 4(c), respectively.

Clearly, the c = 1 case corresponding to complete optimal transport fails to identify a meaningful affine transformation, instead skewing and rotating the source fragments to minimize the Wasserstein distance to the target. The figures also display the cost ⟨P∗, M̂⟩ across iterations. As in the previous toy examples, increasing the scaling factor c initially decreases the optimal loss ⟨P∗, M̂⟩, which then saturates and remains constant beyond a certain value of c. A rough sketch of this alternating procedure is given at the end of this subsection.

Partial Point Cloud for 3D Shapes: We further apply this form of subset selection based point cloud registration to point clouds for 3D objects when the target points are only taken from a portion of the entire 3D point cloud. Results for the Stanford bunny and armadillo (Turk & Levoy, 1994; Krishnamurthy & Levoy, 1996) are shown in Figure 5. It can be observed that for the case c = 1, which corresponds to complete optimal transport, the entire set of source points is coupled to the target point cloud, which results in a distorted affine transform. For c ∈ {2, 5, 10, 20}, subset selection allows an appropriate subset of the source points to be well-fit by an affine transform to the target point cloud.

![](15_image_0.png)

Figure 4: Results for affine transformation optimization with subset selection for partial optimal transport. Target points X are sampled from a 3D fragmented hypercube centered at the origin with negative coordinates removed, whereas source points Y are sampled from a translated fragmented hypercube. (a) Source and target sample points. (b) Loss function plotted against Θ = [A, b] updates for scaling parameter c ∈ {1, 1.25, 1.5, 1.75, 2, 4, 8, 16}. (c) Target and transformed source points after application of the optimized affine transformation. Subset selection problems are solved using SS-Bregman with λ = 0.1, max-outer-iter = 200 and max-inner-iter = 500.
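The bi-level optimization of Section 2.5 and the update in equation 27 are not reproduced here; the following PyTorch sketch is a rough stand-in under our own assumptions, alternating between a placeholder solver `ss_bregman` (returning the coupling for a fixed transformation) and generic gradient steps on the transport cost with respect to the affine parameters. All names are ours, and the gradient-based update is a substitute for, not a reproduction of, equation 27.

```python
import torch

def fit_affine(X, Y, c, ss_bregman, steps=100, lr=1e-2):
    """Alternately solve the subset-selection coupling and update an affine map.

    X : (m, d) target, Y : (n, d) source (float tensors); ss_bregman is a
    placeholder solver returning a coupling P* for a cost matrix and factor c.
    """
    d = Y.shape[1]
    A = torch.eye(d, requires_grad=True)
    b = torch.zeros(d, requires_grad=True)
    opt = torch.optim.Adam([A, b], lr=lr)
    for _ in range(steps):
        Yt = Y @ A.T + b                      # affinely transformed source
        M = torch.cdist(X, Yt) ** 2           # squared-Euclidean cost matrix
        P = ss_bregman(M.detach(), c)         # coupling for the fixed (A, b)
        loss = (P * M).sum()                  # transport cost <P*, M(A, b)>
        opt.zero_grad()
        loss.backward()
        opt.step()
    return A.detach(), b.detach()
```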

3.2 Color Transfer

Color transfer is the problem of finding a correspondence in the colors of pixels (represented as points in a 3D color space) between two images and then using this map to assign the colors of the source image to the target image (Reinhard et al., 2001). Color transfer is essentially an optimal transport problem in the color space, but with the added context that the pixels have their image coordinates, which are not used by the algorithm. For practical application to high resolution images, the pixel colors are first quantized using k-means clustering, as using partial optimal transport on the full set of pixel colors is computationally demanding. While in standard optimal transport the relative mass of each color cluster has to be preserved, here we exploit our formulation of partial optimal transport as support subset selection to allow a subset of colors to be used at a higher proportion than in the original source and allow a subset of colors to be completely discarded. For example, if a color cluster represents 1% of the original source's pixels, then it could represent up to c% of the target's pixels.
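A minimal sketch of the quantization step (scikit-learn's KMeans is our choice of implementation; the paper only specifies k-means clustering):

```python
import numpy as np
from sklearn.cluster import KMeans

def quantize_image(pixels, k, seed=0):
    """Quantize an (N, 3) array of RGB pixels into k color centroids.

    Returns the centroids, the per-pixel cluster assignments, and the vector
    of cluster proportions used as a marginal in the transport problem.
    """
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(pixels)
    centroids = km.cluster_centers_                        # (k, 3) colors
    labels = km.labels_                                    # (N,) cluster per pixel
    proportions = np.bincount(labels, minlength=k) / len(labels)
    return centroids, labels, proportions
```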

![](16_image_0.png)

Figure 5: Affine transformation optimization for partial alignment of point clouds where a subset of the source point cloud Y can be perfectly aligned (after rotation and scaling) with the target point cloud X. We use the optimization algorithm described in Section 2.5, where SS-Bregman is employed to obtain the coupling P∗ given the affine transformation parameters A and b, which are updated using equation 27. (a) Stanford bunny point cloud. (b) Stanford armadillo point cloud.

We apply k-means clustering to the sets of vectors in RGB color space representing the target's M pixels and the source's N pixels separately to obtain m < M color centroids $\{\mathbf{x}_i\}_{i=1}^m \subset \mathbb{R}^3$ for the target image and n < N color centroids $\{\mathbf{y}_j\}_{j=1}^n \subset \mathbb{R}^3$ for the source image, with $\boldsymbol{\mu} \in \Delta^m$ and $\boldsymbol{\nu} \in \Delta^n$ being the vectors of proportions of colors in the target and source image color clusters, respectively. We then define the cost matrix between the color centroids as $M_{ij} = \|\mathbf{x}_i - \mathbf{y}_j\|_2^2$, ∀i ∈ [m], j ∈ [n], and obtain the support subset selection map $\mathbf{P}^* \in \mathbb{R}_+^{m\times n}$ using SS-Bregman, such that $\mathbf{P}^*\mathbf{1}_n = \boldsymbol{\mu}$ and $\mathbf{P}^{*\top}\mathbf{1}_m \preccurlyeq c\boldsymbol{\nu}$. The support subset selection is then used to obtain the barycenter projections by solving (Blondel et al., 2018)

$$\hat{\mathbf{x}}_{i}=\operatorname*{arg\,min}_{\mathbf{x}\in\mathbb{R}^{3}}\sum_{j=1}^{n}P_{ij}^{*}\|\mathbf{x}-\mathbf{y}_{j}\|_{2}^{2},\quad\forall i\in[m].\tag{28}$$

The analytic solution of the barycenter projections can be compactly written as

$${\hat{\mathbf{X}}}=(\mathbf{P}^{*}\oslash(\boldsymbol{\mu}\mathbf{1}_{n}^{\top}))\mathbf{Y}\in\mathbb{R}^{m\times3},\tag{29}$$

where $\hat{\mathbf{X}} = [\hat{\mathbf{x}}_1, \hat{\mathbf{x}}_2, \ldots, \hat{\mathbf{x}}_m]^\top \in \mathbb{R}^{m\times 3}$ and $\mathbf{Y} = [\mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_n]^\top \in \mathbb{R}^{n\times 3}$ are matrices of the color centroids.

Each pixel in the target image is assigned the corresponding barycenter projection $\hat{\mathbf{x}}_{\pi(i)}$, where π(i) ∈ [m] is the cluster assignment for the ith pixel of the target image, i ∈ [M].
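A minimal sketch of the recoloring step, implementing equations 28–29 and the per-pixel assignment just described (names are ours):

```python
import numpy as np

def recolor(P_star, mu, Y_centroids, pixel_labels):
    """Barycentric projection (equation 29) followed by per-pixel assignment.

    P_star : (m, n) transport plan, mu : (m,) target marginal (= P_star @ 1_n),
    Y_centroids : (n, 3) source color centroids, pixel_labels : (M,) cluster
    index pi(i) of each target pixel. Returns the (M, 3) recolored pixels.
    """
    # X_hat = (P* ./ (mu 1_n^T)) Y: row-normalize the plan and average colors.
    X_hat = (P_star / mu[:, None]) @ Y_centroids
    return X_hat[pixel_labels]
```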

We apply this color transfer scheme to images freely available through a Creative Commons licence, the "Louisiana Nature Scene Barataria Preserve" by Neil O as target and "Autumn in Toronto" by Bahman A-Mahmoodi as source. The color transfer results with m = n = 128 and SS-Bregman with λ = 0.1 are shown in Figure 6. It can be observed that the results for larger values of c are smoother within objects or areas of similar color (e.g., the dark backdrop behind the peppers) and sharper in the color transitions between different objects (the colors in the orange versus red pepper on the right side of the photograph) as compared to the optimal transport case c = 1. This is due to the fact that larger values of c allow certain colors to be reused more than their prevalence in the source image and allow some colors to be discarded, which enables smoother transitions in colors for areas of the target images with smooth color gradients. Similar observations can be made in Figure 7, which uses the same settings and the MATLAB test images "peppers" as target and "corn" as source.

3.3 Subset Selection For Positive-Unlabeled Learning

In this section, we discuss the application of subset selection to the one-class semi-supervised classification scheme known as positive-unlabeled (PU) learning (Bekker & Davis, 2020).

![](17_image_0.png)

Figure 6: Color transfer results for c ∈ {1, 1.25, 1.5, 1.75, 2, 4, 8, 16} for "Louisiana Nature Scene Barataria Preserve" by Neil O as target and "Autumn in Toronto" by Bahman A-Mahmoodi as source. The value of c for each image is indicated at the top of the image.

![](17_image_1.png)

Figure 7: Color transfer results for c ∈ {1, 1.25, 1.5, 1.75, 2, 4, 8, 16} for the MATLAB image "peppers" as target and "corn" as source. The value of c for each image is indicated at the top of the image.

In PU learning, the training sample consists of purely positively labeled instances, and the unlabeled test sample consists of both positive and negative instances. Previous work often assumes that a prior on the probability of positive instances in unlabeled data is known (Kato et al., 2019; Hsieh et al., 2019; Chapel et al., 2020). Partial optimal transport is then used to find a subset with cardinality proportional to the prior of the test sample (source/unlabeled) that corresponds to all or a subset of the training sample (target/positive). We argue that all of the target mass should be preserved in cases of a relatively small and curated positive training sample. This motivates the application of our proposed subset selection approach to find the subset of the source that covers the positive target, compared to fully-relaxed approaches. To illustrate the difference between fully and semi-relaxed partial optimal transport for PU learning, we consider two-dimensional random variables for the positive and unlabeled points, where the support of the positive random variable is a subset of the support of the unlabeled random variable and both have long-tailed distributions, as shown in Figure 8. The lower accuracy for the fully-relaxed solution (79.5% versus subset selection's 84.1%) can be accounted for by the increased distances between source and target points away from the origin, which causes the target points to be dropped, preventing true positives in the source from being selected. This result holds for the more general settings discussed in Appendix E.

![](18_image_0.png)

Figure 8: Results of PU learning on toy data using subset selection (accuracy 84.1%) and fully-relaxed partial optimal transport (accuracy 79.5%). Data is generated as [r cos(ψ), r sin(ψ)], where the radius r is drawn from a truncated exponential distribution with density $2\exp(-r)\,\mathbb{1}_{[\log 2,\infty)}(r)$ and ψ is uniform over a subset of the angles [0, π]. The points belong to four classes corresponding to the angle falling in one of the intervals $[0,\frac{\pi}{4}]$, $(\frac{\pi}{4},\frac{\pi}{2}]$, $(\frac{\pi}{2},\frac{3\pi}{4}]$, or $(\frac{3\pi}{4},\pi]$. The m = 100 target/positive points are all from the third class. The solutions are obtained using the known number of positives $n_+$ out of the n = 400 source points, setting $c = \frac{n}{n_+}$ for our semi-relaxed approach and $s = \frac{n_+}{n}$ for the fully-relaxed partial optimal transport.

We applied subset selection to PU learning using the experimental settings adapted from the work of Chapel et al. (2020), who explored using the PU-Wasserstein (PUW) problem 9 and the partial Gromov-Wasserstein distance (PGW) on various UCI, MNIST, colored-MNIST, and Caltech-office data sets. For the UCI, MNIST, and colored-MNIST data sets, we randomly draw m = 400 positive points and n = 800 unlabeled points. For the Caltech-office data sets, we randomly sampled m = 100 positive points from the first domain and n = 100 unlabeled data points from the second domain. Following the experiments from the works by Chapel et al.

(2020) and Kato et al. (2019), for multi-class data sets, we chose the data points from the class labeled 1 as positive and a random mixture of all classes as unlabeled, and the prior probability of the positive class in the unlabeled set is set to be exactly the proportion of positives in the unlabeled sample, $\pi_+ = \frac{n_+}{n}$, where $n_+$ is the number of true positives. This informs the PU-Wasserstein, partial Gromov-Wasserstein, and fully-relaxed partial Wasserstein optimal transport problems of the amount of mass to be transported, $s = \pi_+$, whereas for subset selection we set the scaling parameter to be $c = \frac{n}{n_+}$. Classification accuracy is evaluated by assigning positive predictions to the $n_+$ largest source mass assignments and negative predictions to the remaining source points. We also compute the ROC curve by using the source mass assignment ν∗ to rank the unlabeled source points. We ran the experiment 10 times and report the average classification accuracy and the area under the ROC curve (ROC-AUC) in Table 1. It can be observed that the proposed subset selection performs better than PU Wasserstein in terms of accuracy on 8 out of the 10 data sets with the same domain (UCI, MNIST, and Caltech-office with same domains). Subset selection does best overall on 6 out of these 10, with fully-relaxed partial optimal transport performing better on 4 data sets. For the Caltech-office data sets with domain transfer, the partial Gromov-Wasserstein optimal transport does best.

In terms of ROC-AUC, subset selection does better than PU Wasserstein on 9 out of the 10 intra-domain data sets, which is not surprising since the relative ranking is more meaningful than when the mass assignments are restricted to be binary valued in $\{0, \frac{1}{n_+}\}$ as in the solutions from PU Wasserstein.

We attribute the higher ROC-AUC of subset selection compared to fully-relaxed Wasserstein to the fact that the subset selection problem has equality constraints on the target mass, which ensures coverage (fewer false negatives). The differences between our solutions (SS) and those for the PU Wasserstein (PUW) are subtle, as both are semi-relaxed and maintain equality constraints on the target distribution. The small, but consistent, differences in accuracy between our method and PUW may be due to the additional uniform mass assignment constraints in PUW, which are achieved through the group-LASSO penalty. The uniform mass assignment constraints in PUW may result in larger transport costs compared to solutions obtained using SS with the same cardinality. The largest mass assignments in the solution to SS might be more reliable than the constrained solutions to PU Wasserstein.
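A minimal sketch of the evaluation protocol described above (scikit-learn's `roc_auc_score` is our choice of implementation; `nu_star` is the source marginal of the returned plan):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate_pu(nu_star, y_true, n_pos):
    """Accuracy and ROC-AUC from the source mass assignments nu_star.

    The n_pos largest masses are labeled positive; the ROC-AUC uses nu_star
    as the ranking score. y_true holds the ground-truth 0/1 labels.
    """
    y_true = np.asarray(y_true)
    pred = np.zeros_like(y_true)
    pred[np.argsort(-nu_star)[:n_pos]] = 1
    accuracy = (pred == y_true).mean()
    auc = roc_auc_score(y_true, nu_star)
    return accuracy, auc
```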

| Dataset | π+ | PUW (Acc.) | PGW (Acc.) | FR-PW (Acc.) | SS (Acc.) | PUW (AUC) | PGW (AUC) | FR-PW (AUC) | SS (AUC) |
|---|---|---|---|---|---|---|---|---|---|
| mushrooms | 0.518 | 95.15 | 94.85 | 99.45 | 96.63 | 0.9657 | 0.3336 | 0.9948 | 0.9883 |
| shuttle | 0.786 | 95.13 | 93.63 | 96.35 | 96.20 | 0.9321 | 0.6215 | 0.9467 | 0.9718 |
| pageblocks | 0.898 | 91.90 | 90.35 | 91.88 | 92.40 | 0.8036 | 0.7197 | 0.7817 | 0.8513 |
| usps | 0.167 | 98.28 | 95.55 | 97.08 | 98.48 | 0.9815 | 0.5096 | 0.9476 | 0.9927 |
| connect-4 | 0.658 | 60.95 | 58.05 | 60.95 | 60.73 | 0.5692 | 0.5126 | 0.5666 | 0.5871 |
| spambase | 0.394 | 78.80 | 68.40 | 69.08 | 79.28 | 0.7952 | 0.5834 | 0.6770 | 0.8369 |
| mnist | 0.1 | 99.08 | 98.23 | 99.15 | 99.18 | 0.9874 | 0.7638 | 0.9768 | 0.9971 |
| mnist-colored | 0.1 | 91.58 | 96.78 | 97.66 | 91.88 | 0.8189 | 0.6619 | 0.9360 | 0.9521 |
| surf C → surf C | 0.1 | 90.00 | 87.20 | 82.00 | 90.40 | 0.8576 | 0.4622 | 0.7469 | 0.7333 |
| surf C → surf A | 0.1 | 81.60 | 86.80 | 81.40 | 81.60 | 0.4546 | 0.4764 | 0.5337 | 0.4889 |
| surf C → surf W | 0.1 | 82.20 | 86.40 | 81.20 | 82.20 | 0.4707 | 0.4807 | 0.4451 | 0.5056 |
| surf C → surf D | 0.1 | 80.00 | 87.00 | 80.00 | 80.00 | 0.3756 | 0.4328 | 0.4056 | 0.4444 |
| decaf C → decaf C | 0.1 | 94.00 | 86.20 | 82.00 | 94.40 | 0.9498 | 0.5713 | 0.7667 | 0.9682 |
| decaf C → decaf A | 0.1 | 80.20 | 88.20 | 81.80 | 80.40 | 0.3986 | 0.5031 | 0.5349 | 0.4564 |
| decaf C → decaf W | 0.1 | 80.20 | 88.60 | 82.00 | 80.00 | 0.4299 | 0.5827 | 0.5611 | 0.5965 |
| decaf C → decaf D | 0.1 | 80.80 | 92.20 | 80.40 | 80.40 | 0.4546 | 0.5042 | 0.4617 | 0.4530 |

Table 1: PU learning on data sets as in the work by Chapel et al. (2020). Accuracy columns (Acc.) and ROC-AUC columns (AUC) are reported for each method. For subset selection (SS) and fully-relaxed partial Wasserstein (FR-PW), accuracy is evaluated by assigning label 1 to the $n_+$ largest mass assignments and label 0 to the remaining mass assignments. For PU Wasserstein (PUW), the mass assignments are constrained to be binary valued in the set {0, p}; the data points with mass assignment 0 are labeled 0 and the data points with mass $p = \frac{1}{n\pi_+} = \frac{1}{n_+}$ are labeled 1.

3.3.1 PU Learning On MNIST/EMNIST

To further illustrate how SS-Bregman operates in PU learning, we apply it to the case where the positive training sample (target) consists of MNIST digit images and the unlabeled test sample contains 50% positive points (MNIST digits) and 50% negative points (alphabetic letters from EMNIST). When c = 1, which is equivalent to standard optimal transport, all the images in the unlabeled source sample are initially assigned uniform masses. As c is increased, we hypothesize that the true positive MNIST digits will be assigned larger mass and remain in the selected support, whereas the EMNIST letters will receive relatively lower or zero mass. Our hypothesis is confirmed by the results displayed in Figure 9(a), which displays the ROC curve across different choices of c, and in Figure 9(c), which displays the area under the ROC curve (AUC). As c is increased, the source points with the largest mass assignments are mostly MNIST digits. Likewise, Figure 9(e) shows the images with the highest mass for different values of c, which are mainly MNIST digits or EMNIST letters with a close resemblance to a digit. Figure 9(b) visualizes the distribution of source point masses by graphing the sorted masses for different values of c. From these curves the cardinality of the subset is easily seen for different values of c. Notably, for values of c ≤ 4 there exists a subset of the selected source points with uniform mass, but for larger values of c, the mass is non-uniform across all instances. These changes correspond to the changes in slope of the entropy of the distribution, displayed for different values of c in Figure 9(d). We further compared our approach for PU learning with semi-relaxed optimal transport approaches using the squared ℓ2-norm penalty (Chapel et al., 2021) and the total-variation (TV) divergence (Séjourné et al., 2019).

The formulation of the semi-relaxed problems is discussed in Appendix C. We used the POT toolbox (Flamary et al., 2021) to solve the semi-relaxed optimal transport problems. For both the squared ℓ2-norm and total-variation penalties, we varied the regularization parameter ρ over 32 uniform steps on a logarithmic scale between $10^{-3}$ and $10^{3}$. For subset selection, the parameter c is varied uniformly on a logarithmic scale with 32 steps between 1 and 32. In order to evaluate the performance of each method, we assigned label 1 to all the selected points and label 0 to all the remaining points. We also computed the cardinalities and entropies of the mass assignments ν∗, as in the sketch below. In Figure 10 we compare the effect of the cardinality of the support of the mass assignment vector ν∗ on the accuracy and the entropy H(ν∗). We observe that for mass assignments with the same cardinality, the mass assignments obtained through subset selection have higher entropy compared to both the TV and ℓ2 penalized semi-relaxed optimal transport. Mass-assignment paths across the scaling parameter c for SS-Bregman, and for each regularization discussed above, are shown in Appendix C.
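The two summary statistics can be computed as in the following minimal sketch (the support tolerance is our choice):

```python
import numpy as np

def support_stats(nu_star, tol=1e-10):
    """Cardinality of the selected support and entropy H(nu*) of the masses."""
    support = nu_star > tol
    cardinality = int(support.sum())
    p = nu_star[support]
    entropy = float(-(p * np.log(p)).sum())   # -sum_i nu_i log nu_i over the support
    return cardinality, entropy
```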

We also adapt the approach proposed in the work by Phatak et al. (2023), in the context of fully-relaxed partial optimal transport, for selecting the proportion of mass that separates inliers and outliers, in order to automatically find a choice of c for subset selection in PU learning. The approach finds the knee of a smoothed version of the first derivative of $\frac{1}{c}S_p(\boldsymbol{\mu}, c\boldsymbol{\nu})$ as a function of $\frac{1}{c} \in (0, 1]$, using the kneedle method (Satopaa et al., 2011). The results in Figure 10(c) show that the automatically selected value of c is at the highest accuracy.
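As a simplified stand-in for the kneedle detection (not the authors' implementation), the following locates the point of maximum deviation from the chord of the normalized curve; here `x` would hold the grid of $\frac{1}{c}$ values and `y` the smoothed first derivative described above.

```python
import numpy as np

def knee_point(x, y):
    """Return the x at the knee: the point of the normalized curve that deviates
    most from the straight line joining its endpoints, in the spirit of the
    kneedle heuristic (Satopaa et al., 2011); a simplified stand-in only."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xn = (x - x.min()) / max(np.ptp(x), 1e-12)
    yn = (y - y.min()) / max(np.ptp(y), 1e-12)
    chord = yn[0] + (yn[-1] - yn[0]) * xn     # line through the two endpoints
    return x[np.argmax(np.abs(yn - chord))]
```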

![](20_image_0.png)

Figure 9: Subset selection results obtained using SS-Bregman with parameters λ = 1, max-outer-iter = 250 and max-inner-iter = 20. The source sample of n = 512 images consists of 50% digit images from MNIST and 50% letter images from EMNIST. The target sample contains m = 512 digit images drawn from MNIST. (a) ROC curves for different values of c ∈ {1, 1.25, 1.50, 1.75, 2, 4, 8, 16, 20, 24, 28, 32}. (b) Mass assignments to source images in descending order ν∗↓. (c) AUC of the ROC versus c. (d) Entropy H(ν∗) of the mass assignments ν∗ versus c.

![](21_image_0.png)

Figure 10: PU learning with semi-relaxed optimal transport approaches on MNIST/EMNIST. Solutions for different cardinalities are obtained by varying the regularization/constraint parameters across a grid, uniform on a logarithmic scale. (a) Accuracy versus cardinality of the selected subset using SS-Bregman and semi-relaxed optimal transport with total-variation (TV) and squared-ℓ2 penalties for PU learning on MNIST/EMNIST. (b) Entropy of the mass assignments versus cardinality of the selected subset using SS-Bregman and semi-relaxed optimal transport with total-variation (TV) and squared-ℓ2 penalties. (c) Accuracy of PU learning on MNIST/EMNIST using SS-Bregman versus the scaling parameter c. Vertical lines indicate the location of the knee obtained using the method of Phatak et al. (2023), the value of c with the true peak accuracy, and the break point c∗.

3.3.2 PU Learning For CIFAR-10 Neural-Network Representations

We now consider the proposed subset selection algorithm for PU learning on the CIFAR-10 data set, where a single class from the training set is treated as the positive target and a mixture of all classes from the test set is the unlabeled source. Fundamentally, the performance of optimal transport methods on PU learning depends on the distance metric defining the cost matrix. Thus, the method performs poorly if a Euclidean distance metric is applied to complex data such as natural images. Instead, a learning representation extracted from a pretrained neural network can be used. Here each image is represented as the vector of activations of the penultimate layer of a pre-trained ResNet-20 classifier (trained on CIFAR-10), and the Euclidean distance between the activation vectors defines the cost matrix for the transport problem. The results are given in Figure 11 and are similar to those on the previous MNIST/EMNIST data set. Mass is uniformly distributed across a subset of images for values of 1 < c < 8. When the subset is greater than the proportion of positive instances in the unlabeled source, the relative ranking of mass is not reliable: the top instances for c ∈ {1.5, 2, 4} are images from the target class, but c ∈ {1.25, 1.75} have images resembling it from other classes. As c is increased above c = 8, the mass assignment is non-uniform but remains constant for further increases in c. Values of c greater than 2 have an AUC above 90%. Similar to PU learning on MNIST/EMNIST, we also compared our approach for PU learning on neural network representations of CIFAR-10 with the TV and ℓ2 penalized semi-relaxed optimal transport. For the semi-relaxed formulations (ℓ2 and TV penalties) we varied ρ on a logarithmic scale between $10^{-4}$ and $10^{3}$ with 32 steps. For subset selection we varied the scaling parameter c logarithmically between 1 and 60 in 32 steps. We adopted the same strategy as in the previous case (PU learning on MNIST/EMNIST) to evaluate the accuracy, cardinality, and entropy of the mass assignments ν∗ at different values of the regularization and scaling parameters.

Figure 12 shows accuracy versus cardinality and entropy versus cardinality, along with the variation of PU learning accuracy across c and the knee-based scaling parameter selection. Our observations in this case are similar to those for MNIST/EMNIST: subset selection has the highest entropy at a given cardinality, its accuracy is at or above the other solutions, and the knee method selects a value of c close to the optimal one.

![](22_image_0.png)

Figure 11: Subset selection results obtained using the SS-Bregman algorithm with parameters λ = 1, max-outer-iter = 250 and max-inner-iter = 20. The target and source consist of ResNet-20 embeddings of m = 512 CIFAR-10 dog images and n = 512 randomly sampled CIFAR-10 images, respectively. (a) ROC curves for c ∈ {1, 1.25, 1.50, 1.75, 2, 4, 8, 16, 20, 24, 28, 32}. (b) Mass assignments to source images in descending order ν∗↓. (c) AUC of the ROC versus c. (d) Entropy H(ν∗) of the mass assignments ν∗ versus c. (e) Source images with the 10 largest mass assignments.

3.4 Subset Selection For Semi-Supervised Learning

We consider the semi-supervised training of a classifier where the training set is divided into a reliably labeled (curated) target set and an unlabeled or noisily labeled source set. We apply our proposed subset selection algorithm to perform partial optimal transport of the unlabeled source to the labeled target. The transport plan is computed without knowledge of any labels but defines how the source points will be labeled, and subset selection removes points that cannot easily be aligned to labeled training points. Additionally, the new mass assignments may be relatively higher for unlabeled points close to labeled training points and lower for unlabeled points far from them. Used in this way, optimal transport with subset selection automatically tunes how far to propagate labels in a manner that takes into consideration the geometry and distribution of the curated target data set rather than only the local distances. However, using the distances defined directly in the input space may not be suitable, and a pre-trained representation may not exist for various tasks. Instead, we propose to use the internal learning representation from the neural network classifier while it is being optimized with the semi-supervised loss function.

Let $S = \{(\mathbf{x}_i, \mathbf{L}_i)\}_{i=1}^{M}$ denote the labeled portion of the training set, with input $\mathbf{x}_i \in \mathcal{X}$ and label encoded as a one-hot vector $\mathbf{L}_i \in \{0,1\}^k \subset \Delta^k$, $\|\mathbf{L}_i\|_1 = 1$ for $i \in [M]$, and let $T = \{\mathbf{y}_j\}_{j=1}^{N}$ denote the unlabeled portion, $\mathbf{y}_j \in \mathcal{X}$ for $j \in [N]$. We consider a neural-network classifier with soft-max activation $f(\cdot\,; \boldsymbol{\theta}) : \mathcal{X} \to \Delta^k$ with parameters $\boldsymbol{\theta}$ trained on data with k classes. The neural network's internal representation is a function $g(\cdot\,; \boldsymbol{\theta}) : \mathcal{X} \to \mathbb{R}^d$. The Euclidean distance between the internal representations of data points provides the distance function, $d_{\boldsymbol{\theta}}(\mathbf{x}_i, \mathbf{y}_j) = \|g(\mathbf{x}_i; \boldsymbol{\theta}) - g(\mathbf{y}_j; \boldsymbol{\theta})\|_2$, which is parameterized by the network's parameters.

We train the neural network using mini-batches and a semi-supervised cross-entropy loss. Equal-sized batches are drawn uniformly from the pooled training data set of size M + N. Let τ and σ denote the length-m and length-n vectors of indices of the labeled and unlabeled points in a given batch, respectively, where m + n is the constant batch size. The m-by-n ground cost matrix $\mathbf{M}(\boldsymbol{\theta})$ is defined using the squared distances among the batch's latent representations, $M_{ij}(\boldsymbol{\theta}) = d_{\boldsymbol{\theta}}^2(\mathbf{x}_{\tau_i}, \mathbf{y}_{\sigma_j}) = \|g(\mathbf{x}_{\tau_i}; \boldsymbol{\theta}) - g(\mathbf{y}_{\sigma_j}; \boldsymbol{\theta})\|_2^2$.
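A minimal PyTorch sketch of the batch cost matrix computed from the internal representation (names are ours; `g` is any feature extractor returning the penultimate-layer activations):

```python
import torch

def batch_cost_matrix(g, x_labeled, y_unlabeled):
    """Squared distances between internal representations of a mini-batch:
    M_ij(theta) = ||g(x_i) - g(y_j)||_2^2, differentiable in theta."""
    feats_x = g(x_labeled)        # (m, d) representations of labeled points
    feats_y = g(y_unlabeled)      # (n, d) representations of unlabeled points
    return torch.cdist(feats_x, feats_y, p=2) ** 2
```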

![](23_image_0.png)

Figure 12: PU learning with semi-relaxed optimal transport approaches on CIFAR-10. Solutions for different cardinalities are obtained by varying the regularization/constraint parameters across a grid, uniform on a logarithmic scale. (a) Accuracy versus cardinality of the selected subset using SS-Bregman and semi-relaxed optimal transport with total-variation (TV) and squared-ℓ2 penalties for PU learning on CIFAR-10. (b) Entropy of the mass assignments versus cardinality. (c) Accuracy of PU learning on CIFAR-10 using SS-Bregman versus the scaling parameter c. Vertical lines indicate the knee obtained using the method of Phatak et al. (2023), the value of c with the true peak accuracy, and the break point c∗.

Given the cost matrix and hyper-parameters (including c ≤ n), the subset selection transport plan $\mathbf{P}^* \in [0,1]^{m\times n}$ is obtained using SS-Bregman. Given the matrix of one-hot encoded labels $\mathbf{L} = [\mathbf{L}_{\tau_1}, \ldots, \mathbf{L}_{\tau_m}]^\top \in \{0,1\}^{m\times k}$, the matrix of pseudo-labels assigned by the algorithm to the unlabeled mini-batch points is computed as $\tilde{\mathbf{L}} = n\mathbf{P}^{*\top}\mathbf{L} \in [0,c]^{n\times k}$, where $[\mathbf{P}^{*\top}\mathbf{L}]_{jl} = \frac{1}{n}\tilde{L}_{jl} \in [0,1]$ is the estimate of the joint probability that mini-batch unlabeled instance $j \in [n]$ belongs to class $l \in [k]$.³

Given the pseudo-labels, the semi-supervised cross-entropy loss function for a batch is

$$\text{loss}(\boldsymbol{\theta})=-\bigg[\sum_{i=1}^{m}\sum_{l=1}^{k}\frac{1}{m}\mathbf{L}_{il}\log(f_{l}(\mathbf{x}_{\tau_{i}};\boldsymbol{\theta}))+\sum_{j=1}^{n}\sum_{l=1}^{k}\frac{1}{n}\tilde{\mathbf{L}}_{jl}\log(f_{l}(\mathbf{y}_{\sigma_{j}};\boldsymbol{\theta}))\bigg].\tag{30}$$
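A minimal PyTorch sketch of the pseudo-label construction and the batch loss in equation 30 (`P_star` is the plan returned by the SS-Bregman solver, which is not reproduced here; `f` is the classifier with soft-max output; names are ours):

```python
import torch

def semi_supervised_loss(f, x_lab, L, y_unlab, P_star, eps=1e-12):
    """Semi-supervised cross-entropy of equation 30.

    L : (m, k) one-hot labels of the labeled batch points,
    P_star : (m, n) subset-selection transport plan for the batch.
    """
    m, n = P_star.shape
    L_tilde = n * (P_star.T @ L)               # (n, k) pseudo-labels
    log_p_lab = torch.log(f(x_lab) + eps)      # (m, k) log soft-max outputs
    log_p_unlab = torch.log(f(y_unlab) + eps)  # (n, k) log soft-max outputs
    return -((L * log_p_lab).sum() / m + (L_tilde * log_p_unlab).sum() / n)
```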

Our approach is similar to other recent work (Damodaran et al., 2020) that also employs optimal transport using a learning representation. While we address the semi-supervised case, Damodaran et al. (2020) address supervised learning in the presence of label noise and perform self optimal transport within batches to correct for label noise. As baseline comparisons, we compare our semi-supervised approach to supervised training with either only the labeled portion or with noisy labels on the unlabeled portion. Due to the curated labeled set, the latter is not the typical label noise scenario; however, the division of a training set into a curated portion and a portion with label noise is relevant to practical scenarios. While our semi-supervised approach does not use the noisy labels, future extensions could consider how to leverage the noisy labels too. In order to evaluate our approach we used MNIST, Fashion-MNIST (FMNIST), and CIFAR-10. We split the training data sets into 80/20 proportions for training and validation. We further split the training part into reliably labeled and unreliably labeled parts. Labels for the unreliably labeled part are generated by uniformly corrupting the true labels to other classes depending on the noise level. For each of our experiments, the subset selection transport underlying the loss is found via SS-Bregman with λ = 0.1 and max-outer-iter = max-inner-iter = 20, with a batch size of 512. We used the PyTorch framework for our experiments. A ResNet-18 model architecture is used on the CIFAR-10 data set. We trained the ResNet-18 for 180 epochs using the Adam optimizer with an initial learning rate of 0.001, which is scheduled to be halved

³It can be seen that the total sum of this joint is 1: $\underbrace{\mathbf{1}_{n}^{\top}\mathbf{P}^{*\top}}_{\boldsymbol{\mu}^{\top}}\mathbf{L}\mathbf{1}_{k}=1.$

after every 60 epochs. The model architectures containing two convolutional layers for MNIST and FMNIST are given in Appendix F. The neural network classification models for MNIST are trained using stochastic gradient descent with a learning rate of 0.001, whereas the models for FMNIST are trained using Adam with a learning rate of 0.001 and weight decay 1e-4. Model training for MNIST and Fashion-MNIST is done on a desktop system containing an Intel Core-i7 9700 CPU, with 32 GB memory and an NVIDIA GeForce RTX 2070 GPU. ResNet-18 based models for CIFAR-10 are trained using Lambda Labs cloud resources with 30 vCPUs, 200 GB memory, and NVIDIA A10 GPUs.

| Dataset | Architecture | stand. | c = 1 | c = 2 | c = 3 | c = 4 | c = 5 | c = 6 | c = 7 | c = 8 | c = 20 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MNIST | 2-layer conv-net | 96.57 | 97.15 | 97.39 | 97.33 | 97.40 | 97.43 | 97.32 | 97.38 | 97.44 | 97.35 |
| FMNIST | 2-layer conv-net | 88.68 | 90.14 | 90.30 | 90.27 | 90.51 | 90.23 | 90.38 | 90.21 | 90.37 | 90.16 |
| CIFAR-10 | ResNet-18 | 79.10 | 87.18 | 89.45 | 89.27 | 89.21 | 89.53 | 89.54 | 89.72 | 89.47 | 89.57 |

Table 2: Validation accuracies for different values of c for neural network classification models trained with 50% reliably labeled points and 50% points with noisy labels (noise level 0.8). Standard training (stand.) treats both sets of labels equally, whereas subset selection (columns c = 1 to c = 20) treats the noisy points as unlabeled and assigns pseudo-labels. Subset selection is done using SS-Bregman with λ = 0.1 and max-outer-iter = max-inner-iter = 20.

In the first set of experiments, we split the training set for each data set into 50/50 proportions for reliably and unreliably labeled parts. Unreliably labeled data is generated by uniformly corrupting the labels with an 80% chance (noise level 0.8). Validation accuracies for each data set are displayed in Table 2. Notably, the performance for c = 1 is higher than training with noisy labels, which shows that the semi-supervised training performs better than training with data with a high noise level. (Because the algorithm is not run to convergence, the mass assignments for unlabeled points may not be exactly uniform in the c = 1 case.) The performance of subset selection is consistently higher for values of c > 1 compared to c = 1, and the validation accuracies do not exhibit much change between c = 2 and c = 20. Therefore, we further evaluated our approach by varying both noise levels and clean/noisy proportions only for c = 2 and c = 20.

The progress of validation accuracies on CIFAR-10 is displayed in Figure 13 for clean/noisy proportions in {20/80, 40/60, 60/40, 80/20} with noise levels {0.2, 0.4, 0.6, 0.8}. It can be observed that the standard neural network training process with label noise can be divided into three phases: a first phase in which the validation accuracy increases until a peak; a second phase in which the validation accuracy decreases, where the magnitude of the decrease depends on the noise level (it decreases less for low noise levels and more for higher noise levels); and a third phase in which the validation accuracy increases again and then oscillates around a constant value. This kind of phenomenon is more pronounced for larger noise levels (Zheng et al., 2020). In contrast, for the proposed subset selection based semi-supervised learning the validation accuracy does not go down after hitting its peak during the training process. This indicates that the transport map tends to assign correct pseudo-labels to the data points nearest the labeled data points and does not introduce label noise. The test set accuracies are displayed in Table 3 for supervised training on only the labeled data versus the semi-supervised training. The semi-supervised training with subset selection outperforms training on only the labeled data on 2 of 3 data sets under a 20/80 split of labeled and unlabeled data, but does not outperform supervised learning for the 40/60 split. Thus, the semi-supervised loss function in equation 30 is most beneficial when there is a higher ratio of unlabeled to labeled points.

4 Discussion And Further Work

In this paper, we have focused on selecting a subset of one distribution's support as a special case of partial optimal transport (Figalli, 2010; Chapel et al., 2020). This is useful for finding a meaningful alignment when the support of the target distribution is assumed to be a subset of the source distribution. Results on partial point cloud alignment, color transfer, PU learning, and semi-supervised learning all demonstrate the utility of this approach.

![](25_image_0.png)

Figure 13: Progress of validation accuracies while training ResNet-18 for CIFAR-10 classification. (a) Uniform noise levels are varied over {0.2, 0.4, 0.6, 0.8} for standard training. (b) and (c) Training with the subset selection based semi-supervised loss at different values of c does not use the unreliable labels and outperforms standard training with noisy labels when either the proportion of reliably labeled data is 40% or the noise level is 0.4 or greater. Subset selection is done using SS-Bregman with c = 20, λ = 0.1, max-outer-iter = max-inner-iter = 20.

| Dataset | Architecture | Labeled/Unlabeled % | Labeled only | Semi-supervised with subset selection |
|---|---|---|---|---|
| MNIST | 2-layer conv-net | 20/80 | 98.19 | 96.09 |
| MNIST | 2-layer conv-net | 40/60 | 98.30 | 97.14 |
| F-MNIST | 2-layer conv-net | 20/80 | 88.35 | 88.42 |
| F-MNIST | 2-layer conv-net | 40/60 | 89.93 | 89.48 |
| CIFAR-10 | ResNet-18 | 20/80 | 81.62 | 82.37 |
| CIFAR-10 | ResNet-18 | 40/60 | 87.77 | 86.74 |

Table 3: Semi-supervised learning test accuracies on MNIST, Fashion-MNIST, and CIFAR-10.

Subset selection is done using SS-Bregman with c = 20, λ = 0.1, max-outer-iter = max-inner-iter = 20.

In particular, the results from PU learning show that the proposed subset selection is useful when there is a known target distribution (an existing training or validation set) and an additional source distribution, which has additional diversity, but also outliers, compared to the target. One application of PU learning is to filter a source of new data for relevant examples for further modeling. Future work could explore the subset selection approach for source distributions created from synthetic generation mechanisms. While not explored in a machine learning context, it is possible that partial optimal transport with an affine (or nonlinear) transformation can be applied to account for global covariate shift between the synthetic and real data. In this case, a user would want to balance the diversity (entropy) of the filtered source with its purity.

Interestingly, the solutions from subset selection are similar to, but consistently outperform, those from PU Wasserstein (Chapel et al., 2020), which by design of the constraints have a maximum-entropy, uniform distribution over the selected subset, achieved through the group-LASSO penalty. The results from subset selection show that it maintains close to maximum entropy amongst the selected support and is as accurate as other semi-relaxed penalty based approaches. Additionally, the manual choice of c controlling the constraint is more intuitive than selecting a penalty parameter. Finally, the automatic selection of c using the straightforward knee-based selection adapted from fully-relaxed partial optimal transport (Phatak et al., 2023) shows promising results for separating inliers and outliers.

In our experiments related to semi-supervised learning, we employed optimal transport between a labeled target and an unlabeled source to assign pseudo-labels, during training, to source points that cover the labeled data distribution while ignoring ambiguous cases. In future extensions, we can consider how to use class information in the optimal transport planning, perhaps by using class-conditional optimal transport, as currently the transport plan is not informed of the known target labels nor the classifier's boundaries.

Another line of exploration is how to use the support subset selection to correct a noisily labeled source.

Another key contribution of this work is the proposed support subset selection algorithm using the inexact Bregman proximal point algorithm (SS-Bregman), which as shown in Appendix B yields a solution with a sparse source marginal similar to solutions to the original linear program 6—unlike the entropically regularized solution from SS-Entropic. We also demonstrate that the mass assignments of the linear program solution are piece-wise linear as a function of c. While not fully investigated here, this behavior could be exploited to find the sequence of breakpoints where points leave the support and where points leave the active set of constraints (indicated by being on the upper diagonal).

Recently, Gromov-Wasserstein optimal transport has seen applications in graph matching and generative modeling (Brogat-Motte et al., 2022; Li et al., 2023; Nekrashevich et al., 2023; Bunne et al., 2019; Mémoli, 2009). Due to its inherent ability to match structural correspondences across spaces, partial Gromov-Wasserstein optimal transport can be used to solve robust graph-alignment problems. Recently, efficient locally convergent solutions for a relaxed Gromov-Wasserstein distance have been proposed (Peyré et al., 2016; Li et al., 2023). Future work can explore the subset selection case of partial Gromov-Wasserstein optimal transport, where one domain is expected to have a complete or overcomplete source distribution compared to the target. This may be useful in robust domain adaptation, semi-supervised domain adaptation, and metric alignment.

Author Contributions

Bilal Riaz and Austin Brockmeier worked on the formulation and design of experiments for subset-selection.

Bilal Riaz implemented the algorithms and produced the results. The manuscript was written and edited by all authors.

Acknowledgments

Bilal Riaz is supported by Higher Education Commission of Pakistan and University of Delaware. We would like to express gratitude to Matthew S. Emigh for insightful discussions and the Office of Naval Research for funding this research. Research at the University of Delaware was sponsored by the Department of the Navy, Office of Naval Research under ONR award number N00014-21-1-2300.

Code Availability

Code for the paper is available at https://github.com/Bilal092/support-subset-selection.

References

Akshay Agrawal, Robin Verschueren, Steven Diamond, and Stephen Boyd. A Rewriting System for Convex Optimization Problems. Journal of Control and Decision, 5(1):42–60, 2018.

Jason Altschuler, Jonathan Niles-Weed, and Philippe Rigollet. Near-Linear Time Approximation Algorithms for Optimal Transport via Sinkhorn Iteration. Advances in Neural Information Processing Systems, 30, 2017.

Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein Generative Adversarial Networks. In International Conference on Machine Learning, pp. 214–223. PMLR, 2017.

K Somani Arun, Thomas S Huang, and Steven D Blostein. Least-squares Fitting of Two 3-D Point Sets. IEEE Transactions on Pattern Analysis and Machine Intelligence, 9(5):698–700, 1987.

Amir Beck. First-Order Methods in Optimization. Society for Industrial and Applied Mathematics, Philadelphia, PA, 2017.

Amir Beck and Marc Teboulle. A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.

Amir Beck and Marc Teboulle. A Fast Dual Proximal Gradient Algorithm for Convex Minimization and Applications. Operations Research Letters, 42(1):1–6, 2014.

Jessa Bekker and Jesse Davis. Learning from Positive and Unlabeled Data: A Survey. Machine Learning, 109:719–760, 2020.

Jean-David Benamou. Numerical resolution of an "unbalanced" mass transport problem. ESAIM: Mathematical Modelling and Numerical Analysis, 37(5):851–868, 2003.

Mathieu Blondel, Vivien Seguy, and Antoine Rolet. Smooth and Sparse Optimal Transport. In International Conference on Artificial Intelligence and Statistics, pp. 880–889. PMLR, 2018.

Nicolas Bonneel and David Coeurjolly. SPOT: Sliced Partial Optimal Transport. ACM Transactions on Graphics (TOG), 38(4):1–13, 2019.

Luc Brogat-Motte, Rémi Flamary, Céline Brouard, Juho Rousu, and Florence d'Alché Buc. Learning to Predict Graphs with Fused Gromov-Wasserstein Barycenters. In International Conference on Machine Learning, pp. 2321–2335. PMLR, 2022.

Charlotte Bunne, David Alvarez-Melis, Andreas Krause, and Stefanie Jegelka. Learning Generative Models across Incomparable Spaces. In International Conference on Machine Learning, pp. 851–861. PMLR, 2019.

Luis A Caffarelli and Robert J McCann. Free Boundaries in Optimal Transport and Monge-Ampere Obstacle Problems. Annals of Mathematics, pp. 673–730, 2010.

Laetitia Chapel, Mokhtar Z Alaya, and Gilles Gasso. Partial Optimal Transport with Applications on Positive-Unlabeled Learning. Advances in Neural Information Processing Systems, 33:2903–2913, 2020.

Laetitia Chapel, Rémi Flamary, Haoran Wu, Cédric Févotte, and Gilles Gasso. Unbalanced optimal transport through non-negative penalized linear regression. Advances in Neural Information Processing Systems, 34: 23270–23282, 2021.

Lenaic Chizat, Gabriel Peyré, Bernhard Schmitzer, and François-Xavier Vialard. Scaling Algorithms for Unbalanced Optimal Transport Problems. Mathematics of Computation, 87(314):2563–2609, 2018.

Marco Cuturi. Sinkhorn Distances: Lightspeed Computation of Optimal Transport. Advances in Neural Information Processing Systems, 26, 2013.

Marco Cuturi and Gabriel Peyré. A Smoothed Dual Approach for Variational Wasserstein Problems. SIAM Journal on Imaging Sciences, 9(1):320–343, 2016.

Marco Cuturi and Gabriel Peyré. Semidual Regularized Optimal Transport. SIAM Review, 60(4):941–965, 2018.

Bharath Bhushan Damodaran, Rémi Flamary, Vivien Seguy, and Nicolas Courty. An Entropic Optimal Transport Loss for Learning Deep Neural Networks under Label Noise in Remote Sensing Images. Computer Vision and Image Understanding, 191:102863, 2020.

Ishan Deshpande, Ziyu Zhang, and Alexander G Schwing. Generative Modeling Using the Sliced Wasserstein Distance. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 3483– 3491, 2018.

Steven Diamond and Stephen Boyd. CVXPY: A Python-embedded Modeling Language for Convex Optimization. Journal of Machine Learning Research, 17(83):1–5, 2016.

Alessio Figalli. The Optimal Partial Transport Problem. Archive for Rational Mechanics and Analysis, 195 (2):533–560, 2010.

Rémi Flamary, Nicholas Courty, Davis Tuia, and Alain Rakotomamonjy. Optimal Transport for Domain Adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1, 2016.

Rémi Flamary, Nicolas Courty, Alexandre Gramfort, Mokhtar Z. Alaya, Aurélie Boisbunon, Stanislas Chambon, Laetitia Chapel, Adrien Corenflos, Kilian Fatras, Nemo Fournier, Léo Gautheron, Nathalie T.H. Gayraud, Hicham Janati, Alain Rakotomamonjy, Ievgen Redko, Antoine Rolet, Antony Schutz, Vivien Seguy, Danica J. Sutherland, Romain Tavenard, Alexander Tong, and Titouan Vayer. POT: Python Optimal Transport. Journal of Machine Learning Research, 22(78):1–8, 2021.

Charlie Frogner, Chiyuan Zhang, Hossein Mobahi, Mauricio Araya, and Tomaso A Poggio. Learning with a Wasserstein loss. Advances in Neural Information Processing Systems, 28, 2015.

Divyansh Garg, Yan Wang, Bharath Hariharan, Mark Campbell, Kilian Q Weinberger, and Wei-Lun Chao. Wasserstein Distances for Stereo Disparity Estimation. Advances in Neural Information Processing Systems, 33:22517–22529, 2020.

Aude Genevay, Gabriel Peyré, and Marco Cuturi. Learning Generative Models with Sinkhorn Divergences. In International Conference on Artificial Intelligence and Statistics, pp. 1608–1617. PMLR, 2018.

Kevin Guittet. Extended Kantorovich norms: a tool for optimization. PhD thesis, INRIA, 2002.

Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved Training of Wasserstein GANs. Advances in Neural Information Processing Systems, 30, 2017.

Sergey Guminov, Pavel Dvurechensky, Nazarii Tupitsa, and Alexander Gasnikov. On a Combination of Alternating Minimization and Nesterov's Momentum. In International Conference on Machine Learning, pp. 3886–3898. PMLR, 2021.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.

Roger A Horn and Charles R Johnson. Matrix Analysis. Cambridge University Press, 2012.

Yu-Guan Hsieh, Gang Niu, and Masashi Sugiyama. Classification from Positive, Unlabeled and Biased Negative Data. In Proceedings of the 36th International Conference on Machine Learning, Proceedings of Machine Learning Research. PMLR, 2019.

Masahiro Kato, Takeshi Teshima, and Junya Honda. Learning from Positive and Unlabeled Data with a Selection Bias. In International Conference on Learning Representations, 2019.

Keisuke Kawano, Satoshi Koide, and Keisuke Otaki. Partial Wasserstein Covering. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pp. 7115–7123, 2022.

Diederik P Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980, 2014.

Matthieu Kirchmeyer, Alain Rakotomamonjy, Emmanuel de Bezenac, and Patrick Gallinari. Mapping Conditional Distributions for Domain Adaptation under Generalized Target Shift. In International Conference on Learning Representations, 2022.

Soheil Kolouri, Se Rim Park, Matthew Thorpe, Dejan Slepcev, and Gustavo K Rohde. Optimal Mass Transport: Signal Processing and Machine-Learning Applications. IEEE Signal Processing Magazine, 34 (4):43–59, 2017.

Soheil Kolouri, Gustavo K Rohde, and Heiko Hoffmann. Sliced Wasserstein Distance for Learning Gaussian Mixture Models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3427–3436, 2018.

Alexander Korotin, Daniil Selikhanovych, and Evgeny Burnaev. Neural Optimal Transport. In The Eleventh International Conference on Learning Representations, 2023.

Venkat Krishnamurthy and Marc Levoy. Fitting Smooth Surfaces to Dense Polygon Meshes. In Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, pp. 313–324, 1996.

Jiajin Li, Jianheng Tang, Lemin Kong, Huikang Liu, Jia Li, Anthony Man-Cho So, and Jose Blanchet. A Convergent single-Loop Algorithm for Relaxation of Gromov-Wasserstein in Graph Data. In The Eleventh International Conference on Learning Representations, 2023.

Xing-Si Li and Shu-Cherng Fang. On The Entropic Regularization Method for Solving Min-Max Problems with Applications. Mathematical methods of operations research, 46(1):119–130, 1997.

Tianyi Lin, Nhat Ho, and Michael Jordan. On Efficient Optimal Transport: An Analysis of Greedy and Accelerated Mirror Descent Algorithms. In Proceedings of the 36th International Conference on Machine Learning, Proceedings of Machine Learning Research. PMLR, 2019.

Tianyi Lin, Nhat Ho, and Michael I Jordan. On the Efficiency of Entropic Regularized Algorithms for Optimal Transport. Journal of Machine Learning Research, 23(137):1–42, 2022.

Petr Mokrov, Alexander Korotin, Lingxiao Li, Aude Genevay, Justin Solomon, and Evgeny Burnaev. Large-Scale Wasserstein Gradient Flows. In Advances in Neural Information Processing Systems, volume 34, pp. 15243–15256, 2021.

Andriy Myronenko and Xubo Song. Point Set Registration: Coherent Point Drift. IEEE transactions on Pattern Analysis and Machine Intelligence, 32(12):2262–2275, 2010.

Facundo Mémoli. Spectral Gromov-Wasserstein distances for shape matching. In 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops, pp. 256–263, 2009.

Maksim Nekrashevich, Alexander Korotin, and Evgeny Burnaev. Neural Gromov-Wasserstein optimal transport. arXiv preprint arXiv:2303.05978, 2023.

Neal Parikh and Stephen Boyd. Proximal Algorithms. Foundations and Trends® in Optimization, 1(3): 127–239, 2014.

Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic Differentiation in Pytorch. In NIPS 2017 Autodiff Workshop, 2017.

Gabriel Peyré and Marco Cuturi. Computational Optimal Transport: With Applications to Data Science. Foundations and Trends® in Machine Learning, 11(5-6):355–607, 2019.

Gabriel Peyré, Marco Cuturi, and Justin Solomon. Gromov-Wasserstein Averaging of Kernel and Distance Matrices. In International Conference on Machine Learning, pp. 2664–2672. PMLR, 2016.

Abhijeet Phatak, Sharath Raghvendra, Chittaranjan Tripathy, and Kaiyi Zhang. Computing all optimal partial transports. In The Eleventh International Conference on Learning Representations, 2023.

Julien Rabin, Gabriel Peyré, Julie Delon, and Marc Bernot. Wasserstein Barycenter and its Application to Texture Mixing. In International Conference on Scale Space and Variational Methods in Computer Vision, pp. 435–446. Springer, 2011.

Julien Rabin, Sira Ferradans, and Nicolas Papadakis. Adaptive color transfer with relaxed optimal transport. In 2014 IEEE International Conference on Image Processing (ICIP), pp. 4852–4856, 2014.

Erik Reinhard, Michael Adhikhmin, Bruce Gooch, and Peter Shirley. Color Transfer Between Images. IEEE Computer Graphics and Applications, 21(5):34–41, 2001.

Litu Rout, Alexander Korotin, and Evgeny Burnaev. Generative Modeling with Optimal Transport Maps. In International Conference on Learning Representations, 2022.

Yossi Rubner, Leonidas J Guibas, and Carlo Tomasi. The earth mover's distance, multi-dimensional scaling, and color-based image retrieval. In Proceedings of the ARPA Image Understanding Workshop, volume 661, pp. 668, 1997.

Tim Salimans, Han Zhang, Alec Radford, and Dimitris Metaxas. Improving GANs Using Optimal Transport. In International Conference on Learning Representations, 2018.

Ville Satopaa, Jeannie Albrecht, David Irwin, and Barath Raghavan. Finding a "kneedle" in a haystack: Detecting knee points in system behavior. In 2011 31st International Conference on Distributed Computing Systems Workshops, pp. 166–171. IEEE, 2011.

Bernhard Schmitzer. Stabilized Sparse scaling algorithms for entropy regularized transport problems. SIAM Journal on Scientific Computing, 41(3):A1443–A1481, 2019.

Thibault Séjourné, Jean Feydy, François-Xavier Vialard, Alain Trouvé, and Gabriel Peyré. Sinkhorn divergences for unbalanced optimal transport. arXiv preprint arXiv:1910.12958, 2019.

Richard Sinkhorn. A Relationship between Arbitrary Positive Matrices and Doubly Stochastic Matrices. The Annals of Mathematical Statistics, 35(2):876–879, 1964.

Justin Solomon, Raif Rustamov, Leonidas Guibas, and Adrian Butscher. Wasserstein Propagation for Semi-Supervised Learning. In International Conference on Machine Learning, pp. 306–314. PMLR, 2014.

Marc Teboulle. Entropic Proximal Mappings with Applications to Nonlinear Programming. Mathematics of Operations Research, 17(3):670–690, 1992.

Ilya Tolstikhin, Olivier Bousquet, Sylvian Gelly, and Bernhard Schölkopf. Wasserstein Auto-Encoders. In 6th International Conference on Learning Representations (ICLR 2018), 2018.

Greg Turk and Marc Levoy. Zippered Polygon Meshes from Range Images. In Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques, pp. 311–318, 1994.

Yujia Xie, Xiangfeng Wang, Ruijia Wang, and Hongyuan Zha. A Fast Proximal Point Method for Computing Exact Wasserstein Distance. In Uncertainty in Artificial Itelligence, pp. 433–453. PMLR, 2020.

Hongteng Xu, Wenlin Wang, Wei Liu, and Lawrence Carin. Distilled Wasserstein Learning for Word Embedding and Topic Modeling. Advances in Neural Information Processing Systems, 31, 2018.

Ningli Xu, Rongjun Qin, and Shuang Song. Point Cloud Registration for Lidar and Photogrammetric Data: A Critical Synthesis and Performance Analysis on Classic and Deep Learning Algorithms. ISPRS Open Journal of Photogrammetry and Remote Sensing, pp. 100032, 2023.

Lei Yang and Kim-Chuan Toh. Bregman Proximal Point Algorithm Revisited: A New Inexact Version and Its Inertial Variant. SIAM Journal on Optimization, 32(3):1523–1554, 2022.

Yufu Zang, Roderik Lindenbergh, Bisheng Yang, and Haiyan Guan. Density-Adaptive and Geometry-Aware Registration of TLS Point Clouds Based on Coherent Point Drift. IEEE Geoscience and Remote Sensing Letters, 17(9):1628–1632, 2019.

Juyong Zhang, Yuxin Yao, and Bailin Deng. Fast and Robust Iterative Closest Point. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(7):3450–3466, 2021.

Songzhu Zheng, Pengxiang Wu, Aman Goswami, Mayank Goswami, Dimitris Metaxas, and Chao Chen. Error-Bounded Correction of Noisy Labels. In International Conference on Machine Learning, 2020.

A Appendix: Lipschitz Smoothness Of Dual

The dual form 16 considered in this paper does not explicitly enforce the primal problem's marginal simplex constraints on the transport plan. Consequently, the dual form is not necessarily Lipschitz smooth (Lin et al., 2019; Cuturi & Peyré, 2018; Lin et al., 2022). However, in the proposed algorithm, we first update the dual variable α using the Sinkhorn-like update, which implicitly enforces the simplex constraint, making the semi-dual problem $\frac{1}{\gamma}$-Lipschitz smooth with respect to the ℓ1, ℓ2 and ℓ∞ norms. This justifies the use of $\eta_s^{(k)} = \frac{1}{\gamma}$ in the accelerated proximal-gradient based approach to solve 16.

Recall that the Lagrangian of 16 given in equation 14 is

$${\mathcal{L}}(\mathbf{P},\boldsymbol{\alpha},\boldsymbol{\beta})=\langle \mathbf{P},\ \mathbf{M}+\gamma(\log(\mathbf{P})-{\mathbf{1}_{m}}{\mathbf{1}_{n}^{\top}})+\boldsymbol{\alpha}{\mathbf{1}_{n}^{\top}}+{\mathbf{1}_{m}}\boldsymbol{\beta}^{\top}\rangle-\langle\boldsymbol{\alpha},\boldsymbol{\mu}\rangle-\langle\boldsymbol{\beta},\boldsymbol{\zeta}\rangle.\tag{31}$$

By Slater's conditions, strong duality holds for problem 16, and therefore

$$\min_{\mathbf{P}\succcurlyeq 0}\,\max_{\boldsymbol{\alpha},\boldsymbol{\beta}}\,{\mathcal{L}}(\mathbf{P},\boldsymbol{\alpha},\boldsymbol{\beta})=\max_{\boldsymbol{\alpha},\boldsymbol{\beta}}\,\min_{\mathbf{P}\succcurlyeq 0}\,{\mathcal{L}}(\mathbf{P},\boldsymbol{\alpha},\boldsymbol{\beta}).\tag{32}$$
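For clarity, the element-wise derivative of the Lagrangian with respect to $P_{ij}$ is (an intermediate step added here for completeness):

$$\frac{\partial\mathcal{L}}{\partial P_{ij}}=M_{ij}+\gamma\log P_{ij}+\alpha_{i}+\beta_{j}.$$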

To find the minimum of the Lagrangian with respect to $\mathbf{P}$, one sets this derivative to zero and solves for $P_{ij}$, which yields the element-wise minimizer $\tilde{P}_{ij}=\exp(-\tfrac{\alpha_{i}}{\gamma})\exp(-\tfrac{M_{ij}}{\gamma})\exp(-\tfrac{\beta_{j}}{\gamma})$, i.e., $\tilde{\mathbf{P}}(\boldsymbol{\alpha},\boldsymbol{\beta})=\mathbf{D}(\exp(-\tfrac{1}{\gamma}\boldsymbol{\alpha}))\exp(-\tfrac{1}{\gamma}\mathbf{M})\mathbf{D}(\exp(-\tfrac{1}{\gamma}\boldsymbol{\beta}))$, which can then be substituted back into the Lagrangian 31 to obtain the problem

$$\operatorname*{max}_{\boldsymbol{\alpha},\boldsymbol{\beta}}\quad\left\{g(\boldsymbol{\alpha},\boldsymbol{\beta})=-\gamma\mathbf{1}_{m}^{\top}{\tilde{\mathbf{P}}}(\boldsymbol{\alpha},\boldsymbol{\beta})\mathbf{1}_{n}-\langle\boldsymbol{\alpha},\boldsymbol{\mu}\rangle-\langle\boldsymbol{\beta},\boldsymbol{\zeta}\rangle\right\},\ \mathrm{s.t.}\ \boldsymbol{\beta}\succcurlyeq0,\tag{33}$$

which can be converted to the convex minimization problem 16 by defining $f(\boldsymbol{\alpha},\boldsymbol{\beta})=-g(\boldsymbol{\alpha},\boldsymbol{\beta})$, as in

$$\min_{\boldsymbol{\alpha},\boldsymbol{\beta}}\ f(\boldsymbol{\alpha},\boldsymbol{\beta})=\gamma\mathbf{1}_{m}^{\top}\tilde{\mathbf{P}}(\boldsymbol{\alpha},\boldsymbol{\beta})\mathbf{1}_{n}+\langle\boldsymbol{\alpha},\boldsymbol{\mu}\rangle+\langle\boldsymbol{\beta},\boldsymbol{\zeta}\rangle,\ \mathrm{s.t.}\ \boldsymbol{\beta}\succcurlyeq0.\tag{34}$$

The partial gradients of $f(\boldsymbol{\alpha},\boldsymbol{\beta})$ with respect to $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$ are

$$\nabla_{\boldsymbol{\alpha}}f(\boldsymbol{\alpha},\boldsymbol{\beta})=\boldsymbol{\mu}-\exp\big(-\tfrac{\boldsymbol{\alpha}}{\gamma}\big)\odot \mathbf{K}\exp\big(-\tfrac{\boldsymbol{\beta}}{\gamma}\big),\tag{35a}$$

$$\nabla_{\boldsymbol{\beta}}f(\boldsymbol{\alpha},\boldsymbol{\beta})=\boldsymbol{\zeta}-\exp\big(-\tfrac{\boldsymbol{\beta}}{\gamma}\big)\odot \mathbf{K}^{\top}\exp\big(-\tfrac{\boldsymbol{\alpha}}{\gamma}\big).\tag{35b}$$

For twice continuously differentiable functions, the Lipschitz smoothness parameter is determined by the Hessian. The Hessian of $f(\boldsymbol{\alpha},\boldsymbol{\beta})$ is

$$\mathbf{H}_{f}(\boldsymbol{\alpha},\boldsymbol{\beta})=\begin{bmatrix}\nabla_{\boldsymbol{\alpha}}^{\top}\nabla_{\boldsymbol{\alpha}}f(\boldsymbol{\alpha},\boldsymbol{\beta})&\nabla_{\boldsymbol{\beta}}^{\top}\nabla_{\boldsymbol{\alpha}}f(\boldsymbol{\alpha},\boldsymbol{\beta})\\ \nabla_{\boldsymbol{\alpha}}^{\top}\nabla_{\boldsymbol{\beta}}f(\boldsymbol{\alpha},\boldsymbol{\beta})&\nabla_{\boldsymbol{\beta}}^{\top}\nabla_{\boldsymbol{\beta}}f(\boldsymbol{\alpha},\boldsymbol{\beta})\end{bmatrix},\tag{36}$$

where

$$\nabla_{\alpha}^{\top}\nabla_{\alpha}f(\alpha,\beta)=\tfrac{1}{\gamma}\mathbf{D}\Big(\exp\big(-\tfrac{\alpha}{\gamma}\big)\odot K\exp\big(-\tfrac{\beta}{\gamma}\big)\Big)=\tfrac{1}{\gamma}\mathbf{D}\big(\tilde{P}(\alpha,\beta)\mathbf{1}_{n}\big),\tag{37a}$$

$$\nabla_{\beta}^{\top}\nabla_{\beta}f(\alpha,\beta)=\tfrac{1}{\gamma}\mathbf{D}\Big(\exp\big(-\tfrac{\beta}{\gamma}\big)\odot K^{\top}\exp\big(-\tfrac{\alpha}{\gamma}\big)\Big)=\tfrac{1}{\gamma}\mathbf{D}\big(\tilde{P}(\alpha,\beta)^{\top}\mathbf{1}_{m}\big),\tag{37b}$$

$$\nabla_{\beta}^{\top}\nabla_{\alpha}f(\alpha,\beta)=\tfrac{1}{\gamma}K\odot\exp\big(-\tfrac{\alpha}{\gamma}\big)\exp\big(-\tfrac{\beta}{\gamma}\big)^{\top}=\tfrac{1}{\gamma}\tilde{P}(\alpha,\beta),\tag{37c}$$

$$\nabla_{\alpha}^{\top}\nabla_{\beta}f(\alpha,\beta)=\tfrac{1}{\gamma}K^{\top}\odot\exp\big(-\tfrac{\beta}{\gamma}\big)\exp\big(-\tfrac{\alpha}{\gamma}\big)^{\top}=\tfrac{1}{\gamma}\tilde{P}(\alpha,\beta)^{\top}.\tag{37d}$$

The Sinkhorn update for α in equation 19 ensures that, after each update of α, the transport plan lies on the probability simplex and matches the target marginal, $\mu=\tilde{P}(\alpha^{(k+1)},\beta^{(k)})\mathbf{1}_{n}$. Defining $\tilde{\nu}=\tilde{P}(\alpha^{(k+1)},\beta^{(k)})^{\top}\mathbf{1}_{m}$, the Hessian in equation 36 at $(\alpha^{(k+1)},\beta^{(k)})$ is compactly written as

$$\mathbf{H}_{f}(\alpha^{(k+1)},\beta^{(k)})=\frac{1}{\gamma}\begin{bmatrix}\mathbf{D}(\mu)&\tilde{P}(\alpha^{(k+1)},\beta^{(k)})\\ \tilde{P}(\alpha^{(k+1)},\beta^{(k)})^{\top}&\mathbf{D}(\tilde{\nu})\end{bmatrix}.$$

We use the induced norms of the Hessian $\mathbf{H}_{f}(\alpha^{(k+1)},\beta^{(k)})$ to characterize the smoothness at $(\alpha^{(k+1)},\beta^{(k)})$.
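As a sanity check on the expressions above, the following minimal NumPy sketch builds $\tilde{P}(\alpha,\beta)$, the partial gradients 35a–35b, and the block Hessian 36–37. The problem data, the choice ζ = cν, and all variable names are illustrative assumptions for this sketch, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, gamma, c = 5, 4, 0.1, 2.0

# Illustrative problem data: cost matrix M, target marginal mu, source weights nu,
# and the relaxed source bound zeta = c * nu (an assumption for this sketch).
M = rng.random((m, n))
mu = np.full(m, 1.0 / m)
nu = np.full(n, 1.0 / n)
zeta = c * nu
K = np.exp(-M / gamma)  # Gibbs kernel, assuming K = exp(-M / gamma)

def P_tilde(alpha, beta):
    # D(exp(-alpha/gamma)) K D(exp(-beta/gamma))
    return np.exp(-alpha / gamma)[:, None] * K * np.exp(-beta / gamma)[None, :]

def grad_f(alpha, beta):
    P = P_tilde(alpha, beta)
    return mu - P.sum(axis=1), zeta - P.sum(axis=0)  # equations 35a and 35b

def hessian_f(alpha, beta):
    P = P_tilde(alpha, beta)
    # Diagonal blocks D(P 1_n) and D(P^T 1_m); off-diagonal blocks P and P^T; all scaled by 1/gamma.
    return np.block([[np.diag(P.sum(axis=1)), P],
                     [P.T, np.diag(P.sum(axis=0))]]) / gamma

alpha0, beta0 = np.zeros(m), np.zeros(n)
g_alpha, g_beta = grad_f(alpha0, beta0)
H = hessian_f(alpha0, beta0)
```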

For a matrix $A\in\mathbb{R}^{m\times n}$, the induced norm $\|\cdot\|_{p,q}$ is defined as

$$\|\mathbf{A}\|_{p,q}:=\max_{\mathbf{x}:\|\mathbf{x}\|_{p}\leq1}\|\mathbf{A}\mathbf{x}\|_{q}.\tag{38}$$

The twice continuously differentiable function $f(\alpha^{(k+1)},\beta^{(k)})$ is $L$-Lipschitz smooth with respect to the $\ell_p$ norm if $\|\mathbf{H}_{f}(\alpha^{(k+1)},\beta^{(k)})\|_{p,q}\leq L$, where $\ell_q$ is the dual norm of the $\ell_p$ norm. Since the Hessian $\mathbf{H}_{f}(\alpha^{(k+1)},\beta^{(k)})$ is a non-negative matrix, all of its entries are at most $\frac{1}{\gamma}\max\{\mu_{\max},\tilde{\nu}_{\max}\}$, where $\mu_{\max}$ and $\tilde{\nu}_{\max}$ are the maximum entries of $\mu$ and $\tilde{\nu}$, respectively. Therefore, for $p=1$, if $k$ is the column index of an entry with value $\frac{1}{\gamma}\max\{\mu_{\max},\tilde{\nu}_{\max}\}$, then $\mathbf{x}=\mathbf{e}_{k}$ is the vertex of the $\ell_1$ norm-ball where $\|\mathbf{H}_{f}(\alpha^{(k+1)},\beta^{(k)})\mathbf{x}\|_{\infty}=\frac{1}{\gamma}\max\{\mu_{\max},\tilde{\nu}_{\max}\}$. Thus,

$$\|\mathbf{H}_{f}(\alpha^{(k+1)},\beta^{(k)})\|_{1,\infty}=\frac{1}{\gamma}\max\left\{\mu_{\max},\tilde{\nu}_{\max}\right\}\leq\frac{1}{\gamma},\tag{39}$$

which proves that the function $f(\alpha^{(k+1)},\beta^{(k)})$ is $\frac{1}{\gamma}$-Lipschitz smooth with respect to the $\ell_1$ norm. Since all entries of the Hessian $\mathbf{H}_{f}(\alpha^{(k+1)},\beta^{(k)})$ are at most $\frac{1}{\gamma}$, its spectral radius is at most $\frac{1}{\gamma}$ (Horn & Johnson, 2012, Theorem 8.1.18) and

$$\|\mathbf{H}_{f}(\alpha^{(k+1)},\beta^{(k)})\|_{2,2}=\lambda_{\max}(\mathbf{H}_{f})\leq\frac{1}{\gamma}.\tag{40}$$

Therefore, the function $f(\alpha^{(k+1)},\beta^{(k)})$ is $\frac{1}{\gamma}$-Lipschitz smooth with respect to the $\ell_2$ norm. For $p=\infty$, one can maximize the norm $\|\mathbf{H}_{f}(\alpha^{(k+1)},\beta^{(k)})\mathbf{x}\|_{1}$ at the vertex of the $\ell_\infty$ ball where all entries have unit magnitude, in particular $\mathbf{x}=\mathbf{1}_{m+n}$, which results in

$$\mathbf{H}_{f}(\alpha^{(k+1)},\beta^{(k)})\mathbf{1}_{m+n}=\frac{1}{\gamma}\begin{bmatrix}\mu+\tilde{P}(\alpha^{(k+1)},\beta^{(k)})\mathbf{1}_{n}\\ \tilde{P}(\alpha^{(k+1)},\beta^{(k)})^{\top}\mathbf{1}_{m}+\tilde{\nu}\end{bmatrix}=\frac{2}{\gamma}\begin{bmatrix}\mu\\ \tilde{\nu}\end{bmatrix}.\tag{41}$$

Therefore,

$$\|\mathbf{H}_{f}(\alpha^{(k+1)},\beta^{(k)})\|_{\infty,1}=\frac{2}{\gamma}\left\|\begin{bmatrix}\mu\\ \tilde{\nu}\end{bmatrix}\right\|_{1}=\frac{4}{\gamma},\tag{42}$$

and the function $f(\alpha^{(k+1)},\beta^{(k)})$ is $\frac{4}{\gamma}$-Lipschitz smooth with respect to the $\ell_\infty$ norm. Additionally, considering the dual variables α and β separately, one can see that

$$\|\nabla_{\alpha}^{\top}\nabla_{\alpha}f(\alpha^{(k+1)},\beta^{(k)})\|_{1,\infty}=\frac{\mu_{\max}}{\gamma}\leq\frac{1}{\gamma},\quad\|\nabla_{\alpha}^{\top}\nabla_{\alpha}f(\alpha^{(k+1)},\beta^{(k)})\|_{2,2}\leq\frac{1}{\gamma},\quad\|\nabla_{\alpha}^{\top}\nabla_{\alpha}f(\alpha^{(k+1)},\beta^{(k)})\|_{\infty,1}=\frac{1}{\gamma},$$

and

$$\|\nabla_{\beta}^{\top}\nabla_{\beta}f(\alpha^{(k+1)},\beta^{(k)})\|_{1,\infty}=\frac{\tilde{\nu}_{\max}}{\gamma}\leq\frac{1}{\gamma},\quad\|\nabla_{\beta}^{\top}\nabla_{\beta}f(\alpha^{(k+1)},\beta^{(k)})\|_{2,2}\leq\frac{1}{\gamma},\quad\|\nabla_{\beta}^{\top}\nabla_{\beta}f(\alpha^{(k+1)},\beta^{(k)})\|_{\infty,1}=\frac{1}{\gamma}.$$

Therefore, $f(\alpha^{(k+1)},\beta^{(k)})$ is separately $\frac{1}{\gamma}$-Lipschitz smooth in both α and β with respect to the $\ell_1$, $\ell_2$, and $\ell_\infty$ norms.
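The norm bounds above can also be inspected numerically. The following NumPy sketch (again with illustrative data and our own variable names, not the paper's SS-Bregman or SS-Entropic code) performs a Sinkhorn-like α-update that enforces $\tilde{P}(\alpha,\beta)\mathbf{1}_n=\mu$ and evaluates the induced norms of the resulting Hessian; for a non-negative matrix, $\|\cdot\|_{1,\infty}$ equals its largest entry and $\|\cdot\|_{\infty,1}$ equals the sum of all its entries.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, gamma = 5, 4, 0.1
M = rng.random((m, n))                      # illustrative cost matrix
mu = np.full(m, 1.0 / m)
K = np.exp(-M / gamma)                      # Gibbs kernel, assuming K = exp(-M / gamma)

beta = rng.random(n)                        # an arbitrary feasible beta >= 0
# Sinkhorn-like alpha-update: choose alpha so that the row marginal of P_tilde equals mu.
alpha = gamma * np.log(K @ np.exp(-beta / gamma) / mu)
P = np.exp(-alpha / gamma)[:, None] * K * np.exp(-beta / gamma)[None, :]
assert np.allclose(P.sum(axis=1), mu)       # transport plan matches the target marginal

nu_tilde = P.sum(axis=0)
H = np.block([[np.diag(mu), P], [P.T, np.diag(nu_tilde)]]) / gamma

# Induced norms of the non-negative, symmetric Hessian:
print(H.max(), max(mu.max(), nu_tilde.max()) / gamma)   # ||H||_{1,inf}: largest entry (equation 39)
print(np.linalg.norm(H, 2), 1 / gamma)                  # ||H||_{2,2}: spectral norm (compare with equation 40)
print(H.sum(), 4 / gamma)                               # ||H||_{inf,1}: sum of entries (equation 42)

# Separate blocks D(mu)/gamma and D(nu_tilde)/gamma: largest entries mu_max/gamma and nu_tilde_max/gamma.
print(mu.max() / gamma, nu_tilde.max() / gamma, 1 / gamma)
```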

B Appendix: Entropic Regularization Results

In this appendix, results for point cloud registration and color transfer using SS-Entropic are displayed, which can be compared to the results for SS-Bregman in the main body.

Fragmented Hypercubes: Figure 14 shows the results for affine transformation optimization in 2D, which can be compared with the results in Figure 3(a). Visually, it is clear that the alignment is much worse for values of c ∈ {1.25, 1.5}, but quantitatively it is worse for all values of c > 1, as SS-Bregman achieves a cost below $10^{-2}$ after 500 iterations of Θ updates. Figure 15 shows the results in 3D using SS-Entropic, which can be compared with the results in Figure 4(a). In this case, the results are quantitatively worse for all values of c > 1, as SS-Bregman achieves a cost below $10^{-1}$ after 500 iterations of Θ updates.

33_image_0.png

Figure 14: Results for affine transformation optimization with subset selection for partial optimal transport. Target points X are sampled from a 2D fragmented hypercube centered at the origin with negative coordinates removed, whereas source points Y are sampled from a translated fragmented hypercube. (a) Target and transformed source points after application of the optimized affine transformation. Subset selection problems are solved using SS-Entropic with γ = 0.01 and max-iter = 4000. (b) Loss function curves for scaling parameter c ∈ {1, 1.25, 1.5, 1.75, 2, 4, 8, 16}.

33_image_1.png

Figure 15: Results for affine transformation optimization with subset selection for partial optimal transport. Target points X are sampled from a 3D fragmented hypercube centered at the origin with negative coordinates removed, whereas source points Y are sampled from a translated fragmented hypercube. (a) Target and transformed source points after application of the optimized affine transformation. Subset selection problems are solved using SS-Entropic with γ = 0.01 and max-iter = 4000. (b) Loss function curves for scaling parameter c ∈ {1, 1.25, 1.5, 1.75, 2, 4, 8, 16}.

Partial point cloud registration: Figure 16 shows the results for partial point cloud registration with entropically regularized subset selection (SS-Entropic) for the Stanford bunny and armadillo point clouds. It is clear that the entropically regularized form alone fails to find a meaningful correspondence, transforming the source such that it is completely covered by the partial point cloud.

Color transfer: The results for color transfer with the entropically regularized subset selection SS-Entropic are shown in Figure 17, which can be compared to the results from SS-Bregman shown in Figure 6 and Figure 7. Namely, for the first image, "Louisiana Nature Scene Barataria Preserve", the entropically regularized results appear more monochromatic with less distinct colors. In the second set of images, there is no visual difference between the outputs of SS-Entropic and SS-Bregman.

34_image_0.png

Figure 16: Bunny and Armadillo partial point cloud registration using entropically regularized subset selection SS-Entropic with γ = 0.05, which fails to find an accurate alignment of the source with the partially occluded target.

34_image_1.png

Figure 17: Color transfer results for c ∈ {1, 1.25, 1.5, 1.75, 2, 4, 8, 16} using SS-Entropic.

C Appendix: Comparison With Semi-Relaxed Formulations

We analyze the variation of the subset-selection mass assignments $\nu^{*}_{j}=\sum_{i=1}^{m}P^{*}_{ij}$, for $j\in[n]$, as a function of c for the source and target points (m = 100, n = 80) given in Figure 2. We use SS-Entropic and SS-Bregman and compare to solutions of the subset selection linear program obtained using CVXPY (Diamond & Boyd, 2016; Agrawal et al., 2018). We also use CVXPY to solve the following semi-relaxed unbalanced optimal transport problems of the form

$$\begin{array}{rl}\min_{P\succcurlyeq 0}&\langle P,M\rangle+\dfrac{1}{\rho}\mathcal{D}_{\varphi}\!\left(P^{\top}\mathbf{1}_{m}\,\|\,\nu\right)\\ \mathrm{s.t.}&P\mathbf{1}_{n}=\mu,\end{array}$$

where $\mathcal{D}_{\varphi}\in\{\mathrm{TV},\ell_2,\ell_\infty\}$ can be the TV distance (TV), the squared Euclidean distance (ℓ2), or the ℓ∞-norm based distance, and ρ > 0. It can be observed from Figure 18 that the sparsity patterns of the mass assignments obtained using SS-Bregman closely match the linear program solutions across a range of c, whereas the mass assignments obtained using SS-Entropic are denser, with fewer mass assignments equal to 0. The sparsity patterns for the penalty-based relaxations differ from the linear program solutions. By comparing the mass assignments of the different solutions in terms of the number of non-zeros, i.e., the cardinality of the support of the solution's marginal ν∗, denoted as (card), we note that the assignments obtained using subset selection have the largest entropy, which implies that subset selection tends to assign uniform masses to the selected subset of points. In comparison, the TV-based penalty, corresponding to the ℓ1-norm, tends to have sparse deviations from the uniform mass $\frac{1}{n}$, with many points at exactly this value. Entropies of ν∗ are plotted in Figure 19.
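For concreteness, the following is a minimal CVXPY sketch of the TV-penalized semi-relaxed problem above; the cost matrix and marginals are illustrative placeholders, and the ℓ2 and ℓ∞ variants only swap the penalty atom.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
m, n, rho = 100, 80, 0.1                    # rho is an illustrative penalty weight
M = rng.random((m, n))                      # illustrative cost matrix
mu = np.full(m, 1.0 / m)                    # target marginal (kept as a hard constraint)
nu = np.full(n, 1.0 / n)                    # source marginal (relaxed through the penalty)

P = cp.Variable((m, n), nonneg=True)
# TV / l1 penalty on the deviation of the relaxed source marginal from nu;
# use cp.sum_squares(...) for the l2 variant or cp.norm(..., "inf") for the l_inf variant.
penalty = cp.norm1(P.T @ np.ones(m) - nu)
objective = cp.Minimize(cp.sum(cp.multiply(P, M)) + (1.0 / rho) * penalty)
constraints = [P @ np.ones(n) == mu]
cp.Problem(objective, constraints).solve()

nu_star = P.value.sum(axis=0)               # relaxed source marginal nu*
```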

To further elaborate on our observations, we plot the mass assignments ν to the unlabeled data points across different values of the scaling parameter c, for MNIST/EMNIST PU learning in Figure 20 and for CIFAR-10 PU learning in Figure 21, as discussed in Section 3.3, and compare them with the mass assignments obtained using the semi-relaxed formulations (ℓ2 and TV) with different values of the regularization parameter ρ. We observe that the paths taken by our formulation differ from both ℓ2 and TV. However, the solutions correspond at the extreme limits: as ρ → 0 and c → 1, all solutions approach the optimal transport solution, and similarly, as ρ → ∞ and for all c ≥ c∗, the mass assignments correspond to the nearest-neighbor solutions discussed in Section 2.1. For the PU learning experiments on MNIST/EMNIST and CIFAR-10, we used the POT toolbox (Flamary et al., 2021) function optim.semirelaxed_cg to obtain the solutions of the semi-relaxed optimal transport problem with TV and squared-ℓ2 penalties via the Frank-Wolfe conditional gradient algorithm.

D Appendix: Relation Between Entropically Regularized Subset Selection And Unbalanced Optimal Transport

Using the notation from our paper, we have adapted the formulations for the unbalanced optimal transport with asymmetric penalties from the work by Séjourné et al. (2019). For φ1, φ2, the generating functions for the divergence penalty functions on the marginals, and entropic regularization parameter γ > 0, the unbalanced optimal transport for discrete probability measures µ, ν, and cost M is

$$\begin{aligned}\mathcal{UOT}^{(\varphi_1,\varphi_2)}_{\gamma}(\mu,\nu)&=\min_{P\in\mathbb{R}^{m\times n}_{\geq0}}\;\langle P,M\rangle+\mathcal{D}_{\varphi_1}\!\left(P\mathbf{1}\,\|\,\mu\right)+\mathcal{D}_{\varphi_2}\!\left(P^{\top}\mathbf{1}\,\|\,\nu\right)+\gamma\,\mathrm{KL}(P\,\|\,\mu\nu^{\top})\\ &=\min_{P\in\mathbb{R}^{m\times n}_{\geq0}}\;\langle P,M\rangle+\sum_{i=1}^{m}\mu_{i}\varphi_{1}\!\Big(\tfrac{\sum_{j}P_{ij}}{\mu_{i}}\Big)+\sum_{j=1}^{n}\nu_{j}\varphi_{2}\!\Big(\tfrac{\sum_{i}P_{ij}}{\nu_{j}}\Big)+\gamma\Big(\sum_{ij}P_{ij}\big(\log(P_{ij})-\log(\mu_{i}\nu_{j})\big)-\sum_{ij}P_{ij}+\sum_{ij}\mu_{i}\nu_{j}\Big)\\ &=\min_{P\in\mathbb{R}^{m\times n}_{\geq0}}\;\langle P,M\rangle+\sum_{i=1}^{m}\mu_{i}\varphi_{1}\!\Big(\tfrac{\sum_{j}P_{ij}}{\mu_{i}}\Big)+\sum_{j=1}^{n}\nu_{j}\varphi_{2}\!\Big(\tfrac{\sum_{i}P_{ij}}{\nu_{j}}\Big)+\gamma\Big(\langle P,\log(P)-\mathbf{1}_{m\times n}\rangle-\sum_{i}\mu_{i}\log(\mu_{i})+\sum_{j}\sum_{i}P_{ij}\log(\nu_{j})+1\Big).\end{aligned}$$

36_image_0.png

Figure 18: (a) The variation of the mass assignment vector ν∗ with c for the toy problem in Figure 2, using SS-Entropic, SS-Bregman, and the subset selection linear program 2 solved using CVXPY (Diamond & Boyd, 2016; Agrawal et al., 2018). All divergence-regularized semi-relaxed problems are also solved using CVXPY. (b) Entropy versus cardinality of the selected set of points. It can be observed that the SS-Bregman and ℓ∞ solutions match each other up to a support cardinality of around 40. This is due to the fact that the subset selection solution matches a unique ℓ∞-penalized solution for all values of c > 2.

36_image_1.png

Figure 19: (a) Entropy of the mass assignment vector ν∗ with c for the toy problem in Figure 2. (b) Entropy as a function of the effective scaling parameter c(ρ) for the penalized relaxed-OT problems, obtained using the formula $c(\rho)=\max_{i}\frac{\nu^{*}_{i}(\rho)}{\nu_{i}}$, which implies that for sufficiently large values of ρ, c(ρ) = c∗.

37_image_0.png

Figure 20: Mass assignments ν∗ to unlabeled data points in PU learning on MNIST/EMNIST, discussed in Section 3.3.1, across c for SS-Bregman and across ρ for the TV and ℓ2 penalties.

37_image_6.png

Figure 21: Mass assignments ν∗ to unlabeled data points in PU learning on CIFAR-10 neural network representations, discussed in Section 3.3.2, across c for SS-Bregman and across ρ for the TV and ℓ2 penalties.

When $\varphi_{1}(x)=\iota_{\{1\}}(x)=\begin{cases}0,&x=1\\ \infty,&\text{otherwise}\end{cases}$ is the indicator function for the ratio of the target marginal and $\varphi_{2}(x)=\iota_{[0,c]}(x)=\begin{cases}0,&x\in[0,c]\\ \infty,&\text{otherwise}\end{cases}$ is a range constraint to the interval [0, c] for the ratio of the source marginal, then

$$\mathcal{UOT}^{(\varphi_1,\varphi_2)}_{\gamma}(\mu,\nu)=\min_{P\in\mathbb{R}^{m\times n}_{\geq0}}\;\langle P,M\rangle+\gamma\left(\langle P,\log(P)-\mathbf{1}_{m\times n}\rangle-\langle\mu,\log(\mu)\rangle+\langle P^{\top}\mathbf{1}_{m},\log(\nu)\rangle+1\right)\quad\text{s.t.}\;\;P\mathbf{1}_{n}=\mu,\;P^{\top}\mathbf{1}_{m}\preccurlyeq c\nu.$$

With these choices of φ1, φ2 and a uniformly weighted source, $\nu_{i}=\frac{1}{n}\;\forall i\in[n]$, we can relate $S^{(\gamma)}_{p}(\mu,c\nu)$ to $\mathcal{UOT}^{(\varphi_1,\varphi_2)}_{\gamma}(\mu,\nu)$: since $\langle P^{\top}\mathbf{1}_{m},\log(\nu)\rangle=\sum_{j}\sum_{i}P_{ij}\log(\nu_{j})=-\log(n)$, we have $\mathcal{UOT}^{(\varphi_1,\varphi_2)}_{\gamma}(\mu,\nu)=S^{(\gamma)}_{p}(\mu,c\nu)+\gamma(H(\mu)-\log(n)+1)$, where $H(\mu)=-\langle\mu,\log(\mu)\rangle=-\sum_{i}\mu_{i}\log(\mu_{i})$. The uniformly weighted source is necessary to eliminate the bias towards ν for the marginal that the Kullback-Leibler divergence penalty induces, compared to our entropy penalty, which only considers ν through the constraint.

E Appendix: PU Learning Toy Comparison Test

In order to show evidence that subset selection, as a semi-relaxed partial optimal transport, can consistently outperform fully-relaxed partial optimal transport on PU learning, we plot the mean accuracy for a range of toy examples in Figure 22.

38_image_0.png

Figure 22: PU learning accuracy on toy data using subset selection and fully-relaxed partial optimal transport. Subset selection consistently outperforms the fully-relaxed approach on this toy data. Data is generated as $[r\cos(\psi),\,r\sin(\psi)]$, where the radius r is drawn from a truncated exponential distribution with density $2\exp(-r)\,\imath_{[\log 2,\infty)}(r)$ and ψ is uniform over the set of angles $[-\frac{\pi}{2},\frac{\pi}{2}]$ for the source and over different subsets for the target, as indicated in the legend. For each run, m = 100 positive points are drawn, and the size of the unlabeled sample is varied from n = 100 to n = 800. The solutions are obtained using the known number of positives $n_{+}$ out of the n source points, setting $c=\frac{n}{n_{+}}$ for our semi-relaxed approach and $s=\frac{n_{+}}{n}$ for the fully-relaxed partial optimal transport. Confidence intervals show ±1 standard deviation of the accuracy across 100 runs.
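For reference, the following is a minimal NumPy sketch of the toy data generation described in the caption of Figure 22; the angular subset used for the unlabeled sample below is an arbitrary illustration, since the legend's specific subsets are not reproduced here, and the subsequent PU-learning steps are omitted.

```python
import numpy as np

def sample_points(num, angle_low, angle_high, rng):
    # Radius from the truncated exponential with density 2*exp(-r) on [log 2, inf):
    # equivalently, r = log(2) + Exponential(1). The angle psi is uniform on [angle_low, angle_high].
    r = np.log(2.0) + rng.exponential(scale=1.0, size=num)
    psi = rng.uniform(angle_low, angle_high, size=num)
    return np.column_stack([r * np.cos(psi), r * np.sin(psi)])

rng = np.random.default_rng(0)
positives = sample_points(100, -np.pi / 2, np.pi / 2, rng)  # m = 100 labeled positives (source angles)
unlabeled = sample_points(400, -np.pi, np.pi, rng)          # unlabeled sample; angular range chosen for illustration
```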

F Appendix: Neural Network Model Architectures

For semi-supervised learning, we used neural networks to both perform the classification and provide a learned representation space in which to perform the optimal transport to assign pseudo-labels to the unlabeled points. For the CIFAR-10 data set, we used the ResNet-18 (He et al., 2016) architecture. For MNIST and Fashion-MNIST, we used custom, but simple, model architectures. Both architectures contain two convolutional layers followed by three fully-connected layers. The model used to train the Fashion-MNIST (FMNIST) classifier contains an additional batch-normalization layer between the convolutional layers. The optimization algorithms, along with the related hyper-parameters, are given in Section 3.4. The code below details the exact architectures along with the types and shapes of all transformations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MNIST_classifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Two convolutional layers followed by three fully-connected layers.
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=5, kernel_size=(5, 5))
        self.conv2 = nn.Conv2d(in_channels=5, out_channels=1, kernel_size=(5, 5))
        self.fc1 = nn.Linear(400, 128)   # 1 channel * 20 * 20 spatial after the two 5x5 convolutions
        self.fc2 = nn.Linear(128, 64)
        self.fc3 = nn.Linear(64, 10)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = torch.flatten(x, 1)
        x = F.relu(self.fc1(x))
        x_rep = F.relu(self.fc2(x))      # representation used for the optimal transport
        x = self.fc3(x_rep)
        return x, x_rep

class FMNIST_classifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Same structure with wider convolutions and a batch-normalization layer between them.
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=32, kernel_size=(5, 5))
        self.batchN1 = nn.BatchNorm2d(num_features=32)
        self.conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=(5, 5))
        self.fc1 = nn.Linear(in_features=64 * 4 * 4, out_features=128)  # 64 channels * 4 * 4 spatial
        self.fc2 = nn.Linear(in_features=128, out_features=64)
        self.fc3 = nn.Linear(in_features=64, out_features=10)

    def forward(self, x):
        x = self.conv1(x)
        x = F.relu(F.max_pool2d(input=x, kernel_size=2, stride=2))
        x = self.batchN1(x)
        x = self.conv2(x)
        x = F.relu(F.max_pool2d(input=x, kernel_size=2, stride=2))
        x = torch.flatten(x, 1)
        x = F.relu(self.fc1(x))
        x_rep = self.fc2(x)              # representation used for the optimal transport
        x = self.fc3(x_rep)
        return x, x_rep
```

Listing 1: Models used for training classifiers for MNIST and Fashion-MNIST.
