RedTachyon committed

Commit 50a4b37
Parent: f69ed83

Upload folder using huggingface_hub

AT9G5s1pOj/10_image_0.png ADDED

Git LFS Details

  • SHA256: c08bfb58f3624ff457327737269bb3aaa70c14cda6f2edf282f394a1db623ba8
  • Pointer size: 130 Bytes
  • Size of remote file: 60.5 kB
AT9G5s1pOj/11_image_0.png ADDED

Git LFS Details

  • SHA256: f3df5e96a137da85eaa600045e450fe11890ace7a22c0e9d19a837c4a7011675
  • Pointer size: 130 Bytes
  • Size of remote file: 27 kB
AT9G5s1pOj/18_image_0.png ADDED

Git LFS Details

  • SHA256: 24300181187a5187d280ad5334342c8de04c27b9861e4ef4c04afb5c93086978
  • Pointer size: 131 Bytes
  • Size of remote file: 122 kB
AT9G5s1pOj/19_image_0.png ADDED

Git LFS Details

  • SHA256: c55d0c472331d80dcf38a08e0ad85f9f268f277f6a19da8ba0fe55897a225dad
  • Pointer size: 131 Bytes
  • Size of remote file: 114 kB
AT9G5s1pOj/20_image_0.png ADDED

Git LFS Details

  • SHA256: 170831f3b1ca4574d6f7a033c4002d49ebb0f41f5345a13691710b25d1143f9c
  • Pointer size: 131 Bytes
  • Size of remote file: 118 kB
AT9G5s1pOj/21_image_0.png ADDED

Git LFS Details

  • SHA256: 6447fd3a06c2d517ddbaa3ca524b25123d22d8166fdf44f26901c37b4fbbc8ec
  • Pointer size: 131 Bytes
  • Size of remote file: 114 kB
AT9G5s1pOj/22_image_0.png ADDED

Git LFS Details

  • SHA256: 03e94f61f6613920d96571b45e5f2833c17192d9ea1f554db89edf88403e7915
  • Pointer size: 130 Bytes
  • Size of remote file: 22.6 kB
AT9G5s1pOj/22_image_1.png ADDED

Git LFS Details

  • SHA256: c2e90273f2e64eb25e7a60d03f575b506f080dd3fcd92a720a010e99e2dd2b3b
  • Pointer size: 130 Bytes
  • Size of remote file: 21.7 kB
AT9G5s1pOj/5_image_0.png ADDED

Git LFS Details

  • SHA256: bb35323610d94564cc00e146c5e6c619beda2a2a12cce42bf2e3a7042ab30a97
  • Pointer size: 130 Bytes
  • Size of remote file: 11.3 kB
AT9G5s1pOj/7_image_0.png ADDED

Git LFS Details

  • SHA256: 7068df31281ad4bcf7350dc7653f92f0b754987a9e995b859497c1b53ec20cef
  • Pointer size: 130 Bytes
  • Size of remote file: 58.1 kB
AT9G5s1pOj/AT9G5s1pOj.md ADDED
@@ -0,0 +1,643 @@
1
+ # Population Priors For Matrix Factorization
2
+
3
+ Anonymous authors Paper under double-blind review
4
+
5
+ ## Abstract
6
+
7
+ We develop an empirical Bayes prior for probabilistic matrix factorization. Matrix factorization models each cell of a matrix with two latent variables, one associated with the cell's row and one associated with the cell's column. How to set the priors of these two latent variables? Drawing from empirical Bayes principles, we consider estimating the priors from data, to find those that best match the populations of row and column latent vectors. Thus we develop the twin population prior. We develop a variational inference algorithm to simultaneously learn the empirical priors and approximate the corresponding posterior. We evaluate this approach with both synthetic and real-world data on diverse applications: movie ratings, book ratings, single-cell gene expression data, and musical preferences. Without needing to tune Bayesian hyperparameters, we find that the twin population prior leads to high-quality predictions, outperforming manually tuned priors.
8
+
9
+ ## 1 Introduction
10
+
11
+ This paper is about empirical Bayes methods for setting the priors in Bayesian matrix factorization (Mnih & Salakhutdinov, 2007; Gopalan et al., 2015). Matrix factorization models each cell of a matrix with two latent variables, one associated with its row and one associated with its column. Matrix factorization has found broad applications across many fields, including studying consumer behavior, understanding legislative patterns, assessing pharmaceutical impacts, and exploring social networks (Gopalan et al., 2015; Koren et al., 2009; Gerrish & Blei, 2011; Jamali & Ester, 2010).
12
+
13
Suppose $X_{i,j}$ is the observed entry in row $i$ and column $j$, such as user $i$'s rating of movie $j$. As a hierarchical model, matrix factorization generates the data from the following process:
14
+
15
$$U_{i} \sim P_{\theta^{r}}(U), \tag{1}$$
$$V_{j} \sim P_{\theta^{c}}(V), \tag{2}$$
$$X_{i,j} \sim P(X_{i,j} \mid U_{i}, V_{j}). \tag{3}$$
21
Here $U_i$ and $V_j$ are the row- and column-specific latent vectors, and $P_{\theta^r}$ and $P_{\theta^c}$ are the priors, with hyperparameters $\theta^r$ and $\theta^c$.
22
+
23
This formulation encompasses many factorization models. In Gaussian matrix factorization (Mnih & Salakhutdinov, 2007), the priors are Gaussians and $X_{i,j}$ is drawn from a Gaussian with mean $U_i^{\top} V_j$ and variance $\sigma^2$. In Poisson matrix factorization (Canny, 2004; Dunson & Herring, 2005; Gopalan et al., 2015), the priors are over the positive reals and $X_{i,j}$ is drawn from a Poisson with rate $U_i^{\top} V_j$.
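To make the generative process concrete, here is a minimal simulation sketch of the Poisson variant. The Gamma(1, 1) priors, the dimensions, and every other number below are illustrative placeholders, not values used in the paper:

```python
import torch

N, D, L = 100, 80, 15  # rows, columns, latent dimension (illustrative)

# Priors on the factors; Gamma(1, 1) on each coordinate is one common choice for PMF.
U = torch.distributions.Gamma(1.0, 1.0).sample((N, L))  # row latent vectors U_i
V = torch.distributions.Gamma(1.0, 1.0).sample((D, L))  # column latent vectors V_j

# Likelihood: X_ij ~ Poisson(U_i^T V_j).
rate = U @ V.T                                  # (N, D) matrix of Poisson rates
X = torch.distributions.Poisson(rate).sample()  # observed count matrix
```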
28
+
29
+ An observed matrix of data X then defines a posterior distribution P(U,V | X) over the row variables and the column variables. The posterior can provide interpretations of the data and an avenue to form predictions about missing entries, for example, for a recommendation system.
30
+
31
+ The prior distributions on the row variables and column variables significantly impact the quality of the model posterior. How should we set them? Practitioners typically assume a simple parametric family for the priors, such as a Gaussian or a Gamma, and then find the prior hyperparameters best suited to the data, e.g., with cross-validation (Salakhutdinov & Mnih, 2008; Schmidt et al., 2009). This approach can be effective, but it is expensive and only allows for priors from a simple parametric family.
32
+
33
+ In this paper, we develop an empirical Bayes (EB) methodology for setting the priors (Robbins, 1992; Efron, 2012), learning them from data. The EB idea is to set the priors using the data, for example by finding the one that maximizes the marginal log-likelihood of the data. It is applicable for priors over variables that have repeated draws from them, such as the row and column variables in matrix factorization.
34
+
35
EB has found applications in diverse applied fields such as astronomy (Bovy et al., 2011), actuarial science (Bühlmann & Gisler, 2005), genomics (Smyth, 2005; Love et al., 2014), economics (Frost & Savarino, 1986; Angrist et al., 2017), and survey sampling (Rao & Molina, 2015). EB priors have been successfully employed in simple hierarchical models, such as in variational autoencoders (Kingma & Welling, 2013; Tomczak & Welling, 2018; Kim & Mnih, 2018). Matrix factorization, however, provides a different type of application of EB: there are two priors to set, one for the row variables and one for the column variables, and the data that informs us about them is overlapping. Thus, we find empirical Bayes priors for both row and column variables. The result is the *twin empirical Bayes prior* (TwinEB), a practical EB method for matrix factorization. Other methods for EB on matrix factorization include Wang & Stephens (2021) and da Silva et al. (2023); we discuss these below.

Specifically, we model the two priors with mixture distributions, one for each prior. We use mixtures since they are a flexible family that can approximate a wide range of distributions (Titterington et al., 1985; Nguyen et al., 2020). We use variational inference (Blei et al., 2017; Wainwright & Jordan, 2008) to simultaneously estimate the priors and approximate the corresponding posterior. We verify the efficiency and robustness of this approach with real-world data from recommendation systems and computational biology, for both Poisson and Gaussian matrix factorization. We summarize the contributions as follows:
40
1. We develop the *twin population prior*, an EB prior for Bayesian matrix factorization.
2. We derive a variational inference algorithm that approximates the matrix factors' posteriors and learns the twin population prior simultaneously.
3. We study the twin population prior on both synthetic and real data and with two types of factorization. The automatically learned EB prior performs as well as the best prior chosen retrospectively.
43
+
44
+ ## 2 Related Work
45
+
46
One approach to setting the hyperparameters of priors involves using hierarchical Bayesian models (Gelman, 2006). In this line of work, the prior's hyperparameters are treated as unknown and assigned a prior with hyperpriors to be determined. Gopalan et al. (2015) employs this method to introduce hierarchical Poisson factorization, and Levitin et al. (2019) applies it to gene signature discovery from scRNA-seq data. Our work uses more expressive priors (mixtures), which yields a more flexible class of models while remaining simpler, since no extra latent variables for the hyperparameters are introduced.
47
+
48
+ A fruitful direction of research for setting priors for matrix factorization has also been the use of empirical Bayes, a methodology by which one tries to find the prior that matches, in some sense, the population distribution of the data. In Wang & Stephens (2021), the authors formulate prior elicitation for matrix factorization (MF) as steps in a variational expectation maximization algorithm, and in that context, find that it is equivalent to solving the EB normal means (EBNM) problem (Jiang & Zhang, 2009). Our proposed approach is closely related to Wang & Stephens (2021), but we propose to update both priors simultaneously which bypasses potentially costly numerical integration required in the EBNM step. A related line of research makes the connection to EBNM via random-matrix theory, focusing specifically on denoising principal components (Zhong et al., 2022). Closely related to TwinEB, da Silva et al. (2023) optimizes hyperparameters to match the prior predictive distribution to statistics computed from the data, and derives closed-form solutions for PMF and its hierarchical extensions. In Section 4, we compare our method to Wang & Stephens (2021) and da Silva et al. (2023).
49
+
50
Empirical Bayes (EB) priors have been explored in the context of variational autoencoders (VAEs) (Kingma & Welling, 2013) under the names aggregated posterior, average encoding distribution (Hoffman & Johnson, 2016), and VampPrior (Tomczak & Welling, 2018). The VampPrior learns an amortized posterior and a prior over the latent variables using a shared neural network. It models a prior on the rows only and was used to address posterior collapse (Tomczak & Welling, 2018) or to learn disentangled representations (Kim & Mnih, 2018). Our method focuses on matrix factorization, and we derive EB priors for both row and column latent variables.
53
+
54
+ ## 3 Empirical Bayes Priors For Probabilistic Matrix Factorization
55
+
56
+ Our goal is to develop empirical Bayes (EB) priors for Bayesian matrix factorization models. We will focus here on Poisson matrix factorization (PMF). In the supplement, we derive EB priors for Gaussian matrix factorization (GMF). With matrix factorization, the presence of repeated and identically distributed latent variables for each row and each column provides the opportunity to learn their prior distribution from data. This is a form of empirical Bayes (Robbins, 1992; Efron, 2012) that prescribes a *population prior* (see Section 3.1).
57
+
58
This population prior aims to align the model's marginal distribution of observations with the observed population distribution. In the special case of matrix factorization, there are two distinct populations: the population of row vectors and the population of column vectors. With TwinEB, we learn one prior for each population. This is a form of hierarchical modeling without introducing an extra layer of latent variables.
59
+
60
Notation. Define $[N] := \{1, \ldots, N\}$. Let $\mathbf{X} = [X_{i,j}] \in \mathbb{N}^{N \times D}$ represent the measurement of feature $j$ in individual $i$, where $i \in [N]$ and $j \in [D]$. The dimension of the latent variables (the number of factors) is denoted $L$.
63
+ ## 3.1 Background: Population Priors For Simple Hierarchical Models
64
+
65
+ A crucial step in Bayesian statistics is the choice of the prior distribution; if done arbitrarily, it can lead to suboptimal posterior inference (Wang et al., 2021). We choose to follow an empirical Bayes principle that prescribes a *population prior* (Hoffman & Johnson, 2016; Tomczak & Welling, 2018). This prior, by design, aligns the model's marginal distribution of observations with the population distribution.
66
+
67
+ We first focus on a family of latent variable models called simple hierarchical models (Agrawal & Domke, 2021). The joint distribution factorizes as follows:
68
+
69
$$P(\mathbf{Z},\mathbf{X}) = \prod_{i=1}^{N} P_{\pi}(\mathbf{z}_{i})\, P_{\theta}(\mathbf{x}_{i}\mid\mathbf{z}_{i}), \tag{4}$$
73
where $\theta$ and $\mathbf{Z} = [\mathbf{z}_i]$ are global and local latent variables respectively, and the observations $\mathbf{X} = [\mathbf{x}_i]$ are exchangeable conditioned on the latent variables. The variable $\pi$ indexes the prior distribution of $\mathbf{z}_i$. We assume that the global latent variables are fixed (e.g., they are learned through MAP estimation). We use a subscript notation to indicate fixed variables. To further simplify notation, we focus on the marginal likelihood of a single observation, $\mathbf{x}$, and its corresponding local latent variable $\mathbf{z}$.
74
+
75
Let $P^{\star}(\mathbf{x})$ be the true (unobserved) distribution of observations. An empirical Bayes criterion is that the marginal distribution of observations under the model, $P_{\pi,\theta}(\mathbf{x})$, should align with the true population distribution (Ignatiadis & Wager, 2022), that is:

$$P^{\star}(\mathbf{x}) = P_{\pi,\theta}(\mathbf{x}) = \int P_{\pi}(\mathbf{z})\, P_{\theta}(\mathbf{x}\mid\mathbf{z})\, d\mathbf{z}. \tag{5}$$
81
Our goal is to set $\pi$ such that, for a fixed $\theta$, Equation (5) holds. For a fixed likelihood function $P_{\theta}(\mathbf{x}\mid\mathbf{z})$ (i.e., $\theta$ is fixed), the prior $P_{\pi^{\star}}$ that satisfies this condition is:

$$P_{\pi^{\star}}(\mathbf{z}) \approx \int P_{\pi,\theta}(\mathbf{z}\mid\mathbf{x})\, P^{\star}(\mathbf{x})\, d\mathbf{x} = \mathbb{E}_{P^{\star}(\mathbf{x})}\big[P_{\pi,\theta}(\mathbf{z}\mid\mathbf{x})\big], \tag{6}$$

where $P_{\pi,\theta}(\mathbf{z}\mid\mathbf{x})$ is the (local) posterior distribution of the latent variable $\mathbf{z}$ given the observation $\mathbf{x}$ under the model. This definition presents two issues: the unknown true marginal distribution $P^{\star}(\mathbf{x})$, and the fact that the target prior $P_{\pi^{\star}}(\mathbf{z})$ appears on both sides of Equation (6), explicitly on the left and implicitly via the posterior on the right. The research literature has approximated Equation (6) with Monte Carlo estimates of $P^{\star}(\mathbf{x})$ and variational inference of $P(\mathbf{z}\mid\mathbf{x})$ (Hoffman & Johnson, 2016; Tomczak & Welling, 2018).
90
+
91
+ ## 3.2 Population Priors For Probabilistic Matrix Factorization
92
+
93
Our goal is to develop population priors for Bayesian matrix factorization models. The challenge is that, unlike in simple hierarchical models, there is no distinction between local and global random variables; instead, the latent variables are row- and column-specific.
94
+
95
+ ## 3.2.1 Twin Population Priors
96
+
97
We establish population priors for two latent variables, one for the rows, $P_{\mathbf{V}^{\star}}(\mathbf{U}_i)$, and one for the columns, $P_{\mathbf{U}^{\star}}(\mathbf{V}_j)$. These priors will match two different populations, the population of row vectors and the population of column vectors:

$$P^{\star}_{\mathrm{row}}(\mathbf{X}_{i,:}) := \text{population distribution of row vectors},$$
$$P^{\star}_{\mathrm{column}}(\mathbf{X}_{:,j}) := \text{population distribution of column vectors}.$$
100
We begin with population priors for the row latent variables. As in Section 3.1, we specify the prior based on an empirical Bayes principle such that the true marginal distribution of the rows $P^{\star}_{\mathrm{row}}(\mathbf{X}_{i,:})$ is aligned with the distribution of the rows under the model $P_{\mathbf{V}}(\mathbf{X}_{i,:})$, that is:

$$P_{\mathrm{row}}^{\star}(\mathbf{X}_{i,:}) = P_{\mathbf{V}}(\mathbf{X}_{i,:}) = \int P(\mathbf{U}_{i}) \prod_{j=1}^{D} P(X_{i,j} \mid \mathbf{U}_{i}, \mathbf{V}_{j})\, d\mathbf{U}_{i}, \tag{7}$$

for a fixed set of column variables $\mathbf{V}$. For Equation (7) to hold, a population prior should be used:

$$P_{\mathbf{V}^{\star}}(\mathbf{U}_{i}) = \mathbb{E}_{P_{\mathrm{row}}^{\star}(\mathbf{X}_{i,:})}\big[P_{\mathbf{V}}(\mathbf{U}_{i} \mid \mathbf{X}_{i,:})\big]. \tag{8}$$
113
Similarly for columns, for a fixed set of row latent variables $\mathbf{U}$, the empirical Bayes criterion is:

$$P_{\mathrm{column}}^{\star}(\mathbf{X}_{:,j}) = P_{\mathbf{U}}(\mathbf{X}_{:,j}) = \int P(\mathbf{V}_{j}) \prod_{i=1}^{N} P(X_{i,j} \mid \mathbf{U}_{i}, \mathbf{V}_{j})\, d\mathbf{V}_{j}. \tag{9}$$

The prior that satisfies this criterion is the column population prior:

$$P_{\mathbf{U}^{\star}}(\mathbf{V}_{j}) = \mathbb{E}_{P_{\mathrm{column}}^{\star}(\mathbf{X}_{:,j})}\big[P_{\mathbf{U}}(\mathbf{V}_{j} \mid \mathbf{X}_{:,j})\big]. \tag{10}$$
126
Since there are two populations in need of prior specification, we call Equations (8) and (10) the *twin* population priors. We have established the form of population priors in probabilistic matrix factorization. Next we focus on how to estimate the twin population priors and how to approximate posterior inference under them. In the remainder of the paper, we focus on Poisson matrix factorization (PMF). We derive the priors for Gaussian matrix factorization (GMF) in the supplement.
127
+
128
## 3.3 Twin EB Prior For Poisson Matrix Factorization
129
+
130
In Poisson matrix factorization (PMF), the row and column latent variables $U_i$ and $V_j$ are non-negative $L$-vectors and the likelihood in Equation (3) is Poisson:
131
+
132
$$X_{i,j} \mid \boldsymbol{U}_{i}, \boldsymbol{V}_{j} \sim \mathrm{Poisson}\left(\boldsymbol{U}_{i}^{T}\boldsymbol{V}_{j}\right), \quad \text{for } i \in [N],\; j \in [D]. \tag{11}$$

The log-likelihood of the data is:

$$\log P(\mathbf{X}\mid\mathbf{U},\mathbf{V}) = \log \prod_{i=1}^{N}\prod_{j=1}^{D} P(X_{i,j}\mid\mathbf{U}_{i},\mathbf{V}_{j}) = \sum_{i,j}^{N,D} \log \mathrm{Poisson}\left(X_{i,j};\, \mathbf{U}_{i}^{T}\mathbf{V}_{j}\right). \tag{12}$$
141
+
142
+ Some methods place Gamma priors on Vj and Ui (Gopalan et al., 2015). Note that this is a Bayesian formulation of non-negative matrix factorization (Cemgil, 2009).
143
+
144
To compute the population prior for the rows, $P_{\mathbf{V}}(\mathbf{U}_i) = \mathbb{E}_{P^{\star}_{\mathrm{row}}(\mathbf{X}_{i,:})}[P_{\mathbf{V}}(\mathbf{U}_i \mid \mathbf{X}_{i,:})]$, we face two problems: we do not know the true population distribution of the rows $P^{\star}_{\mathrm{row}}(\mathbf{X}_{i,:})$, and the population prior $P_{\mathbf{V}}(\mathbf{U}_i)$ appears on both sides of the equality.
152
+
153
To find the population prior, we first notice that a Monte Carlo estimate of Equation (8) can be written as:
154
+
155
+ $$P_{\mathbf{V}}(\mathbf{U}_{i})\approx\frac{1}{N}\sum_{i^{\prime}=1}^{N}P_{\mathbf{V}}(\mathbf{U}_{i}\mid\mathbf{X}_{i^{\prime},:}).\tag{13}$$
156
+
157
+ When the prior satisfies Equation (13), this property is called self-consistency (Laird, 1978).
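As a toy illustration of the self-consistency property in Equation (13), consider a conjugate one-dimensional Normal–Normal model, where each per-observation posterior is available in closed form and the Monte Carlo estimate of the population prior is simply their equal-weight mixture. This is only a sketch of the idea; all numbers are made up:

```python
import numpy as np

# Toy model: z ~ N(mu0, tau0^2), x | z ~ N(z, sigma^2), with observations x_1, ..., x_N.
mu0, tau0, sigma = 0.0, 2.0, 1.0
x = np.random.default_rng(0).normal(loc=1.5, scale=2.0, size=500)  # stand-in "population" of observations

# Closed-form per-observation posteriors p(z | x_i) = N(m_i, s2).
s2 = 1.0 / (1.0 / tau0**2 + 1.0 / sigma**2)
m = s2 * (mu0 / tau0**2 + x / sigma**2)

def population_prior_pdf(z):
    # Monte Carlo estimate of Equation (13): average the N posterior densities at z.
    dens = np.exp(-0.5 * (z - m) ** 2 / s2) / np.sqrt(2 * np.pi * s2)
    return dens.mean()

print(population_prior_pdf(0.0), population_prior_pdf(1.5))
```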
158
+
159
The structure of Equation (13) suggests using families of mixtures of parametric distributions to approximate the row and column population priors (Tomczak & Welling, 2018). Mixtures can approximate complex distributions as their number of components increases, while having the convenience of remaining parametric (Titterington et al., 1985; Nguyen et al., 2020). We choose to model the priors as,
160
+
161
$$P_{\mathbf{V}}(\mathbf{U}_{i}) := P_{\theta^{r}}(\mathbf{U}_{i}) = \sum_{k=1}^{K_{r}} \pi_{k}\, P_{\boldsymbol{\mu}_{k},\boldsymbol{\sigma}_{k}}(\mathbf{U}_{i}), \tag{14}$$
$$P_{\mathbf{U}}(\mathbf{V}_{j}) := P_{\theta^{c}}(\mathbf{V}_{j}) = \sum_{k=1}^{K_{c}} \rho_{k}\, P_{\boldsymbol{\nu}_{k},\boldsymbol{\eta}_{k}}(\mathbf{V}_{j}), \tag{15}$$
165
+
166
where $K_r$ and $K_c$ are the numbers of components in the mixtures, $\theta^r = \{\boldsymbol{\mu}, \boldsymbol{\sigma}, \boldsymbol{\pi}\}$ and $\theta^c = \{\boldsymbol{\nu}, \boldsymbol{\eta}, \boldsymbol{\rho}\}$. The locations $\boldsymbol{\mu} \in \mathbb{R}^{K_r \times L}$ and $\boldsymbol{\nu} \in \mathbb{R}^{K_c \times L}$, the scales $\boldsymbol{\sigma} \in \mathbb{R}^{K_r \times L}$ and $\boldsymbol{\eta} \in \mathbb{R}^{K_c \times L}$, and the mixture weights $\boldsymbol{\pi} \in \Delta^{K_r}$ and $\boldsymbol{\rho} \in \Delta^{K_c}$ are the parameters of the mixture priors. As $K_r$ and $K_c$ increase, the priors become more expressive (see Figure 4). Figure 1 shows a graphical model representation of matrix factorization with EB priors.
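Such a mixture prior can be written down directly with torch.distributions. The sketch below builds a $K_r$-component Gamma mixture over $L$-dimensional row vectors and evaluates its log-density; the concentration/rate parameterization (rather than the mean/scale used above) and all sizes are assumptions made for illustration:

```python
import torch
from torch import distributions as dist

L, Kr = 15, 70  # latent dimension and number of row mixture components

# Mixture parameters theta^r = {pi, mu, sigma}; here mapped to positive Gamma parameters.
logits = torch.zeros(Kr)                                  # mixture weights pi (uniform to start)
conc = torch.nn.functional.softplus(torch.randn(Kr, L))   # per-component Gamma concentrations
rate = torch.nn.functional.softplus(torch.randn(Kr, L))   # per-component Gamma rates

prior_U = dist.MixtureSameFamily(
    dist.Categorical(logits=logits),
    dist.Independent(dist.Gamma(conc, rate), 1),  # each component factorizes over the L dimensions
)

U_i = torch.rand(3, L) + 0.1   # a small batch of candidate row vectors
print(prior_U.log_prob(U_i))   # log P_{theta^r}(U_i), one value per row vector
```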
171
+
172
+ In a classical empirical Bayes setup, the idea is to set the priors that maximize the marginal likelihood of the data:
173
+
174
$$\hat{\theta}^{r}, \hat{\theta}^{c} = \operatorname*{arg\,max}_{\theta^{r},\theta^{c}} \log P(\mathbf{X};\theta^{r},\theta^{c}), \tag{16}$$

where

$$\log P(\mathbf{X};\theta^{r},\theta^{c}) = \log \int P_{\theta^{r}}(\mathbf{U})\, P_{\theta^{c}}(\mathbf{V})\, P\left(\mathbf{X}\mid\mathbf{U},\mathbf{V}\right) d\mathbf{U}\, d\mathbf{V}. \tag{17}$$

In Section 3.4, we find $\theta^r, \theta^c$ at the same time as we approximate the model posterior $P(\mathbf{U},\mathbf{V} \mid \mathbf{X}; \theta^r, \theta^c)$.
183
+
184
+ ![5_image_0.png](5_image_0.png)
185
+
186
+ Figure 1: Twin population priors for Poisson matrix factorization model. Shaded nodes are observed while other nodes represent latent random variables. The empty squares indicate that we will fit these priors to the data.
187
+
188
## 3.4 Posterior Inference In PMF With Twin EB Priors
189
+
190
Given data $\mathbf{X}$, our goal is to calculate the posterior $P(\mathbf{U},\mathbf{V} \mid \mathbf{X}; \theta^r, \theta^c)$, which also depends on our choice of priors $\theta^r, \theta^c$. The challenges are that this posterior is intractable (for any prior) and we simultaneously want to fit the priors to satisfy the EB criterion in Equation (16).
191
+
192
+ Our strategy will be as follows. We will use variational inference (VI) (Blei et al., 2017) to approximate the posterior, taking gradient steps in the variational objective with respect to the posterior approximation (the variational family). At the same time, however, the variational objective of VI is an approximation (lower bound) of the log-marginal from Equation (16). So we also take gradient steps with respect to the EB priors to maximize it. The result is an algorithm that simultaneously approximates the posterior and learns the EB prior.
193
+
194
+ The variational posterior. Consider a parameterized mean-field variational family,
195
+
196
$$q_{\boldsymbol{\Lambda}}(\mathbf{U},\mathbf{V}\mid \mathbf{X}) = \prod_{i,l} q_{\lambda_{i,l}^{r}}(U_{i,l}) \prod_{j,l} q_{\lambda_{j,l}^{c}}(V_{j,l}). \tag{18}$$

This family has parameters for each row's latent vector and each column's latent vector, $\lambda^{r}_{i}$ and $\lambda^{c}_{j}$ respectively. We further define $\Lambda^r := [\lambda^{r}_{i,l}]$ and $\Lambda^c := [\lambda^{c}_{j,l}]$. The full set of variational parameters is $\Lambda = \{\Lambda^r, \Lambda^c\}$.
210
+
211
+ From the perspective of posterior inference, our goal is to set qΛ to minimize the KL divergence to the exact posterior:
212
+
213
$$\hat{\boldsymbol{\Lambda}} = \operatorname*{arg\,min}_{\boldsymbol{\Lambda}} \operatorname{KL}\big(q_{\boldsymbol{\Lambda}}\,;\, P(\mathbf{U},\mathbf{V}\mid\mathbf{X};\theta^{r},\theta^{c})\big). \tag{19}$$

In detail, the variational family is a bank of Log-Normals:
219
+
220
$$\lambda^{r}_{i,l} := (a^{\prime}_{i,l}, b^{\prime}_{i,l}), \qquad \lambda^{c}_{j,l} := (a_{j,l}, b_{j,l}), \tag{20}$$
$$q_{\lambda^{r}_{i,l}}(U_{i,l}) := \mathcal{LN}(a^{\prime}_{i,l}, b^{\prime}_{i,l}), \tag{21}$$
$$q_{\lambda^{c}_{j,l}}(V_{j,l}) := \mathcal{LN}(a_{j,l}, b_{j,l}). \tag{22}$$

Each Log-Normal is parameterized by its natural parameters $a$ and $b$:

$$\mathcal{LN}(x; a, b) \propto \exp\left(-\frac{a}{2b}\log(x) - \frac{(\log x)^{2}}{2b}\right). \tag{23}$$
227
+
228
To minimize the KL divergence in Equation (19), VI optimizes the variational parameters $\Lambda$ to, equivalently, maximize the evidence lower bound (ELBO) (Blei et al., 2017):

$$\mathcal{L}(\mathbf{X}; \Lambda, \theta^r, \theta^c) = \mathbb{E}_{q_{\Lambda}(\mathbf{U},\mathbf{V}\mid\mathbf{X})}[\log P(\mathbf{X}\mid\mathbf{U},\mathbf{V})] + \mathbb{E}_{q_{\Lambda}}[\log P(\mathbf{U}; \theta^r, \theta^c) + \log P(\mathbf{V}; \theta^r, \theta^c)] - \mathbb{E}_{q_{\Lambda}}[\log q_{\Lambda}(\mathbf{U},\mathbf{V}\mid\mathbf{X})]. \tag{24}$$

Here we use gradient ascent to maximize $\mathcal{L}(\mathbf{X}; \theta, \Lambda)$ with respect to $\Lambda$ (Ranganath et al., 2014). We further use stochastic reparameterization gradients to take such steps (Kingma & Welling, 2013; Rezende et al., 2014).

Maximum marginal likelihood. At the same time, we would like to set the prior parameters to maximize the marginal likelihood of the data (Equation (16)). The variational objective in Equation (24) conveniently also provides a lower bound on the marginal likelihood (Blei et al., 2017):

$$\log P(\mathbf{X}; \theta^r, \theta^c) \geq \mathcal{L}(\mathbf{X}; \theta, \Lambda). \tag{25}$$

So we also follow stochastic gradients of the ELBO with respect to the prior parameters $\theta^r, \theta^c$ to maximize $\mathcal{L}(\mathbf{X}; \theta, \Lambda)$. This strategy has been used in the context of linear regression (Mukherjee et al., 2023).

Twin EB. Putting these two pieces together, our algorithm is a stochastic gradient ascent of the ELBO with respect to two sets of parameters. In optimizing with respect to $\Lambda$, we minimize the KL divergence between $q_{\Lambda}$ and the posterior; in optimizing with respect to $\theta^r, \theta^c$, we maximize the (approximate) marginal likelihood of the data. We use the Adam algorithm for stochastic optimization (Kingma & Ba, 2014) with a batch size of 128 and ten particles to obtain unbiased noisy estimates of the gradient of the ELBO via the reparameterization trick (the particles are samples from $q_{\Lambda}$ used to estimate the expectations $\mathbb{E}_{q_{\Lambda}}$ in the ELBO with Monte Carlo).
237
+
238
The details of the algorithm are in Algorithm 1. Our implementation is available at git@github.com:xxx/xxx.git.
240
+
241
Algorithm 1: Variational inference for Poisson matrix factorization with twin EB priors

Input: data $\mathbf{X}$, number of particles $S$, learning rate $\zeta$, number of iterations $T$, numbers of mixture components $K_r, K_c$, number of latent dimensions $L$.
Output: variational posterior parameters $\Lambda^{*}$, prior parameters $\theta^{r*}, \theta^{c*}$.
Initialize: $\Lambda^{(0)}, \theta^{r(0)}, \theta^{c(0)}$.

for $t = 1$ to $T$ do
  for $i = 1$ to $N$, $l = 1$ to $L$ do
    for $s = 1$ to $S$ do
      Sample $\epsilon^{(s)}_{i,l} \sim \mathcal{N}(0, 1)$.
      Compute $U^{(s)}_{i,l} = \exp\big(a'^{(t-1)}_{i,l} + b'^{(t-1)}_{i,l}\, \epsilon^{(s)}_{i,l}\big)$.
    end for
  end for
  for $j = 1$ to $D$, $l = 1$ to $L$ do
    for $s = 1$ to $S$ do
      Sample $\phi^{(s)}_{j,l} \sim \mathcal{N}(0, 1)$.
      Compute $V^{(s)}_{j,l} = \exp\big(a^{(t-1)}_{j,l} + b^{(t-1)}_{j,l}\, \phi^{(s)}_{j,l}\big)$.
    end for
  end for
  Estimate $\mathcal{L}(\mathbf{X}; \theta^{r(t-1)}, \theta^{c(t-1)}, \Lambda^{(t-1)})$ by Monte Carlo in Equation (24), using the samples $U^{(s)}, V^{(s)}$ in place of the expectations $\mathbb{E}_{q_{\Lambda}}$.
  $\Lambda^{(t)}, \theta^{r(t)}, \theta^{c(t)} \leftarrow \mathrm{Adam}\big(\nabla_{(\theta^{r}, \theta^{c}, \Lambda)}\, \mathcal{L},\; \zeta\big)$.
end for
return $\Lambda^{(T)}, \theta^{r(T)}, \theta^{c(T)}$.
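A condensed PyTorch sketch of the joint update in Algorithm 1 is given below. It performs full-batch gradient steps on a negative-ELBO surrogate that, for brevity, keeps only the likelihood term; the mixture-prior and entropy terms of Equation (24) would be added where indicated. Storing $b$ as a log standard deviation is a small deviation from the algorithm's direct parameterization, and all sizes are illustrative:

```python
import torch

N, D, L, S = 100, 80, 15, 10              # rows, columns, factors, particles (illustrative)
X = torch.randint(0, 5, (N, D)).float()   # stand-in data matrix

# Variational parameters Lambda: log-space means a and log-stds b of the Log-Normal factors.
a_r = torch.zeros(N, L, requires_grad=True)
b_r = torch.full((N, L), -1.0, requires_grad=True)
a_c = torch.zeros(D, L, requires_grad=True)
b_c = torch.full((D, L), -1.0, requires_grad=True)
# The prior parameters theta^r, theta^c would be appended to this parameter list as well.
opt = torch.optim.Adam([a_r, b_r, a_c, b_c], lr=1e-2)

def elbo():
    eps_u = torch.randn(S, N, L)               # reparameterization noise epsilon
    eps_v = torch.randn(S, D, L)               # reparameterization noise phi
    U = torch.exp(a_r + b_r.exp() * eps_u)     # S Log-Normal particles for the row factors
    V = torch.exp(a_c + b_c.exp() * eps_v)     # and for the column factors
    rate = torch.einsum('snl,sdl->snd', U, V)  # U_i^T V_j for every particle
    loglik = torch.distributions.Poisson(rate).log_prob(X).sum(dim=(1, 2))
    # Add E_q[log P(U) + log P(V)] - E_q[log q(U, V)] here for the full ELBO of Equation (24).
    return loglik.mean()

for _ in range(100):
    opt.zero_grad()
    loss = -elbo()
    loss.backward()
    opt.step()
```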
282
+
283
+ ![7_image_0.png](7_image_0.png)
284
+
285
Figure 2: Twin population priors induce robustness to prior selection. The held-out likelihood is sensitive to the choice of the prior hyper-parameters. GMF endowed with population priors on both row and column latent variables, TwinEB (GMF), achieves comparable or better results than the other methods. The left sub-panel displays held-out log-likelihood when adjusting the column prior variance with a fixed row prior, while the right sub-panel does the opposite, varying the row prior variance with a constant column prior. We show four datasets, from top to bottom: MovieLens 1M, Ru1322b-scRNAseq, UserArtists, and GoodBooks. In all datasets, we set L = 15. Similar results hold for other values of L (see supplement).
288
+
289
+ ## 4 Experiments
290
+
291
+ We studied Algorithm 1 in several real-world matrix factorization settings: book ratings, movie ratings, artist preferences, and single-cell RNA sequence gene counts. For all datasets, we studied both Gaussian and Poisson matrix factorization. In all datasets, we set L = 15. We found that TwinEB performs as well or better than manually searching for a parameterized prior, and performs as well or better than setting a simple parameterized prior by empirical Bayes.
292
+
293
+ ## 4.1 Datasets
294
+
295
+ In this section we first review four real-world datasets, ranging from user-preferences to genomics, and then explore the impact of twin population priors on the performance of Gaussian and Poisson matrix factorization.
296
+
297
In our experiments, we fix the numbers of row and column mixture components to $K_r = 70$ and $K_c = 100$, respectively.
298
+
299
+ MovieLens 1M. This dataset comprises 1 million ratings from 6,000 users (rows) on 4,000 movies
300
+ (columns) (Harper & Konstan, 2015). The ratings are on a scale of 1 to 5. Sparsity of this dataset, defined as the number of nonzero elements divided by the total number of entries, is 0.04. We can use matrix factorization to capture different aspects of user preferences and movie characteristics. Specifically, Ui,l may signify user i's affinity for aspect l (e.g., genre), while Vj,l may represent the degree to which movie j exhibits aspect l.
301
+
302
Ru1322b. We analyze single-cell gene expression data from a patient with small cell lung cancer (Chan et al., 2021). This dataset comprises 4,000 highly variable genes (columns) across 5,308 cells (rows). For the GMF family, we applied a two-step transformation: first, we performed a log transformation on the counts after adding a pseudo-count of one; then, we standardized the non-zero elements. Sparsity of this dataset is 0.16. Each entry of the matrix denotes the number of transcripts of gene j in cell i. We explain the gene expression matrix via L gene modules, with Ui,l as the activity of module l in cell i, and Vj,l as gene j's contribution to module l. Matrix factorization can be used for exploratory data analysis (finding gene modules associated with malignancy in cancer) or as a component in a more complex analysis, such as causal inference (Wang & Blei, 2019). See the supplement for details on preprocessing of the sequencing data.

UserArtists. We use the data introduced in Cantador et al. (2011), comprising 92,834 user-listened-artist relations across 1,892 users and 17,632 artists, with a maximum value of 352,698. Similar to MovieLens 1M, we use matrix factorization to explain different aspects of user preferences and artist groupings. Sparsity of this dataset is 0.003. We note that this dataset was also analyzed in da Silva et al. (2023).
303
+
304
GoodBooks. Contains 6 million ratings across 51,288 users and 10,000 books.¹ Ratings range from 1 to 5. Sparsity of this dataset is 0.01.
307
+
308
+ ## 4.2 Evaluation Metric And Baselines
309
+
310
We evaluate each model based on the likelihood of held-out data. For a given row-wise data split into held-in and held-out rows, let $X^{\mathrm{out}} = \{X^{\mathrm{out}}_i\}$ denote the set of $N^{\mathrm{out}}$ masked entries of the held-out rows. We estimate the held-out log-likelihood (HOLL) as follows:

$$\frac{1}{N^{\mathrm{out}}}\sum_{i=1}^{N^{\mathrm{out}}} \log P(X_{i}^{\mathrm{out}} \mid X^{\mathrm{in}}) \tag{26}$$
$$\approx \frac{1}{N^{\mathrm{out}}}\sum_{i=1}^{N^{\mathrm{out}}} \log \frac{1}{M}\sum_{m=1}^{M} P(X_{i}^{\mathrm{out}} \mid \mathbf{U}^{(m)}, \mathbf{V}^{(m)}), \tag{27}$$

where $M$ is the number of Monte Carlo samples from $q_{\Lambda}(\mathbf{U},\mathbf{V})$.
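The estimator in Equation (27) is a log-mean-exp over posterior samples. A sketch for the Poisson case follows; q_samples and the factor shapes are stand-ins for the fitted variational posterior, not part of the paper's code:

```python
import torch

def heldout_loglik(x_out, q_samples, M=500):
    # Estimate (1/N_out) * sum_i log (1/M) sum_m P(x_i^out | U^(m), V^(m)).
    # x_out: (N_out,) held-out entries; q_samples(M) is assumed to return factor draws
    # U, V of shape (M, N_out, L) aligned with the held-out cells.
    U, V = q_samples(M)                                         # M posterior draws
    rates = (U * V).sum(-1)                                     # (M, N_out) Poisson rates
    log_p = torch.distributions.Poisson(rates).log_prob(x_out)  # (M, N_out)
    per_entry = torch.logsumexp(log_p, dim=0) - torch.log(torch.tensor(float(M)))
    return per_entry.mean()
```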
317
+
318
+ We randomly assign 20% of the rows as the test set, and the rest as training data. We then mask 20% of the entries at random, and train the model on this train set using ten random restarts. We use the 20% masked entries as a *validation set*. At test time, we put aside 30% of the entries of the test rows at random - these entries constitute the *test set* - then we train the model on 40% of the rest of the entries. This procedure measures *strong generalization* (Steck, 2019).
319
+
320
+ To compute the held-out likelihood, after training the model on the held-in data, we fix its column parameters and then learn row parameters of the held-out rows. We then report the likelihood of the masked entries in the test set. For each model, we report the test HOLL of the random restart that achieved the best validation HOLL. In our experiments, we set M = 500. See the supplement for the derivation of Equation (27) and more details on the experiments. Baseline. We evaluate the performance of the PMF (GMF) model with TwinEB against multiple baselines, namely (i) TwinEB-Single, (ii) PMF (GMF). The former is a simple form of TwinEB where all latent dimensions have an identical prior, which we learn. The latter is a prior of the same family as the TwinEB, but with fixed hyperparameters. We compare TwinEB against a large choice of fixed parameters, akin to hyperparameter selection.
321
+
322
¹ Accessed at https://github.com/zygmuntz/goodbooks-10k
323
+
324
+ ## 4.3 Results: Gaussian Matrix Factorization
325
+
326
+ We preprocess the data as follows. We standardize each column by subtracting the mean from non-zero entries and dividing the result by their standard deviation. We study two scenarios: (i) maintaining a fixed prior on row-wise variables while varying the prior on column-wise variables, and (ii) holding the prior on column-wise variables constant and adjusting the prior on row-wise variables. We set the variational family as well as the mixture components for the population priors to be Gaussian. We set the fixed prior to N (1.0, 1.0) and vary the variance of the non-fixed one over {0.001, 0.01, 0.1, 1.0, 10}.
327
+
328
+ We compared the GMF model to TwinEB (GMF) and TwinEB-Single (GMF). We treat zero entries as missing values.
329
+
330
+ Figure 2 displays the outcomes for four real-world datasets. In GoodBooks and MovieLens-1M datasets, TwinEB achieves the highest test HOLL. For the UserArtists dataset, TwinEB does better than the fixed prior. Similar results were obtained by varying L to other values and are reported in Figures 7, 8 in Appendix D.
331
+
332
We also compare to the method of Wang & Stephens (2021), using the corresponding package flashr, which currently supports GMF. We designed an imputation experiment where we held out 10% of the entries of a standardized matrix and compared reconstruction accuracy on the non-zero entries. We applied flashr to the above four real-world datasets. flashr crashes on all but the least sparse dataset, Ru1322b, on which it achieves a mean absolute error of 0.64 versus 0.51 for TwinEB. We note that it is not straightforward to compare our method to that of Jiang & Zhang (2009); the software implementation does not immediately support missing data and imputation.
333
+
334
+ ## 4.4 Results: Poisson Matrix Factorization
335
+
336
For the MovieLens 1M and GoodBooks datasets, we binarize the matrix, setting entries to one if a user has rated a movie and to zero otherwise. For the Ru1322b dataset, we normalize the rows such that all row sums are equal; we then round each value to the nearest integer. This is to account for the effect of library size (Heumos et al., 2023). We treat zeros as missing data. In all models, we set the variational family to be Log-Normal, and the prior (or the mixture components for the population priors) to be Gamma. Similar to GMF, we vary the row or column prior parameters while keeping the other fixed. We parameterize the Gamma prior by its mean and variance, $\mathrm{Gamma}(\mu, \sigma^2)$, where $\mu = \alpha/\beta$ and $\sigma^2 = \alpha/\beta^2$. In each scenario, we set the fixed prior to $\mathrm{Gamma}(1, 10)$. For the varying prior, we set its mean to $\mu = 1$ and change its variance over $\{0.01, 0.1, 0.25, 1.0, 10.0, 100.0\}$. Figure 3 shows the results.
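For reference, converting this (mean, variance) parameterization back to the usual shape/rate parameters of the Gamma is a one-line calculation; the helper below is hypothetical and only illustrates the algebra stated above:

```python
def gamma_mean_var_to_shape_rate(mu, var):
    # mu = alpha / beta and var = alpha / beta^2  =>  beta = mu / var and alpha = mu * beta.
    beta = mu / var
    alpha = mu * beta
    return alpha, beta

# The fixed prior Gamma(mean=1, variance=10) corresponds to shape 0.1 and rate 0.1.
print(gamma_mean_var_to_shape_rate(1.0, 10.0))
```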
337
+
338
In all datasets except GoodBooks, TwinEB outperforms TwinEB-Single. In the UserArtists dataset, TwinEB outperforms the other methods. TwinEB, on average, takes about 1.5X the runtime of PMF (ranging from 1.1X to 2.1X). We point out that finding a good prior using grid search incurs the summed cost of evaluating the individual points in the grid, over 6.5X the running time of PMF with the TwinEB prior. We note that using a grid to find fixed priors (or, alternatively, learning the scalar prior) does not guarantee reaching the best test HOLL; indeed, in the UserArtists dataset, TwinEB yields the best test HOLL. We also compared TwinEB to the method of da Silva et al. (2023); given the data, it estimates values for the shape and rate parameters of the Gamma prior for both the row and column r.v.s. Table 1 displays the estimated hyperparameters and the resulting test HOLL.

PMF equipped with fixed prior values calculated from this method yields test HOLL that is on par with or slightly worse than the TwinEB method. This method yielded negative hyperparameter estimates for the MovieLens-1M and GoodBooks datasets. These parameters are effectively not usable, since the parameters of a Gamma must be positive. Hence, we emphasize that unlike ours, this method cannot be used on all datasets.
343
+
344
+ ![10_image_0.png](10_image_0.png)
345
+
346
+ Figure 3: Twin population priors induce robustness to prior selection. As in Figure 2, but for PMF.
347
Table 1: Test held-out log-likelihood (HOLL) from running PMF with prior hyperparameters set using the method of da Silva et al. (2023). For the MovieLens-1M and GoodBooks datasets, da Silva et al. (2023) yielded inadmissible hyperparameters.
350
+
351
| Dataset | da Silva et al. (2023) | TwinEB |
|--------------|------------------------|--------|
| Ru1322b | -9.53 | -9.46 |
| UserArtists | -1,583 | -1,481 |
| MovieLens-1M | NA | -3.04 |
| GoodBooks | NA | -3.95 |
358
+
359
+ ## 4.5 Simulation: Complexity Of The Prior
360
+
361
+ Here we examine the performance of the population priors as the number of mixture components are varied. To this end, we simulate a 1,000 by 1,500 dataset, with L = 64 dimensional row and column-wise r.v.s. We sample the row- and column-wise r.v.s from a mixture of 15 and 20 Gamma distributions respectively, the
362
+
363
+ Simulated Data - Study of K
364
+
365
+ ![11_image_0.png](11_image_0.png)
366
+
367
+ Figure 4: Increasing the number of mixtures improves the model performance. Each line shows the average test-HOLL over 10 seeds for a fixed value for Kc, the number of column mixtures, and varying values of Kr, the number of row mixtures. Here we set L = 64 to its true simulated value. The results are similar for L = 32 (see supplement).
368
+
369
+ rate and shape parameters of which are sampled from a Gamma(1, 1). The mixture weights are sampled from a Dirichlet(e0*, . . . , e*0) where the concentration parameter is e0 = 10.
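A sketch of this simulation setup (mixture-of-Gamma factors, Dirichlet mixture weights, Poisson observations) is shown below; any sampling detail not stated above, such as the random seed, is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, L = 1000, 1500, 64
Kr, Kc, e0 = 15, 20, 10.0

def sample_factors(n, K):
    # Component shape/rate parameters drawn from Gamma(1, 1); weights from Dirichlet(e0, ..., e0).
    shape = rng.gamma(1.0, 1.0, size=(K, L))
    rate = rng.gamma(1.0, 1.0, size=(K, L))
    weights = rng.dirichlet(np.full(K, e0))
    comp = rng.choice(K, size=n, p=weights)           # mixture assignment per latent vector
    return rng.gamma(shape[comp], 1.0 / rate[comp])   # NumPy's gamma takes (shape, scale = 1/rate)

U = sample_factors(N, Kr)   # row-wise latent vectors
V = sample_factors(D, Kc)   # column-wise latent vectors
X = rng.poisson(U @ V.T)    # simulated 1,000 x 1,500 count matrix
```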
370
+
371
Performance Increase with Number of Mixture Components. The more mixture components, the more flexible the EB prior is. If learning an EB prior is beneficial, then we expect performance to increase with the number of mixture components. We run PMF with population priors with varying numbers of row and column mixtures, setting the dimension of the latent variable to its true value and, to avoid model misspecification, setting the mixture prior family to Gamma. We then report the average test HOLL across ten different seeds. As Figure 4 shows, the log-likelihood of the held-out test data increases with the number of mixture components in the row prior.
372
+
373
+ ## 5 Discussion
374
+
375
+ We introduced the twin population priors for probabilistic matrix factorization. We derived a method to estimate the corresponding posterior using Monte Carlo and variational inference. On real-world data, this method finds a prior as good as the best parametric prior chosen retrospectively. One area of further work is to extend this algorithm to tensor factorization (Kolda & Bader, 2009; Schein et al., 2015). While in matrix factorization, each entry of the observed matrix is explained via two latent variables, tensor factorization models will involve more. One detail to address is how to formally define the population distribution associated with each.
376
+
377
+ ## Broader Impact Statement
378
+
379
+ The authors do not foresee any negative impact of this work.
380
+
381
+ ## References
382
+
383
+ Abhinav Agrawal and Justin Domke. Amortized Variational Inference for Simple Hierarchical Models.
384
+
385
+ NeurIPS, 34:21388–21399, 2021.
386
+
387
+ Joshua D Angrist, Peter D Hull, Parag A Pathak, and Christopher R Walters. Leveraging lotteries for school value-added: Testing and estimation. QJE, 132(2):871–919, 2017.
388
+
389
+ David M Blei, Alp Kucukelbir, and Jon D McAuliffe. Variational inference: A review for statisticians. *JASA*,
390
+ 112(518):859–877, 2017.
391
+
392
+ Jo Bovy, David W. Hogg, and Sam T. Roweis. Extreme deconvolution: Inferring complete distribution functions from noisy, heterogeneous and incomplete observations. *AOAS*, 5(2B):1657 - 1677, 2011.
393
+
394
+ Hans Bühlmann and Alois Gisler. *A course in credibility theory and its applications*, volume 317. Springer, 2005.
395
+
396
John Canny. GaP: a factor model for discrete data. In *SIGIR*, pp. 122–129, 2004.

Iván Cantador, Peter Brusilovsky, and Tsvi Kuflik. Second workshop on information heterogeneity and fusion in recommender systems (hetrec2011). In *ACM RecSys*, pp. 387–388, 2011.
397
+
398
+ Ali Taylan Cemgil. Bayesian inference for nonnegative matrix factorisation models. COMPUT INTEL
399
+ NEUROSC, 2009.
400
+
401
+ Joseph M Chan, Álvaro Quintanal-Villalonga, Vianne Ran Gao, Yubin Xie, Viola Allaj, Ojasvi Chaudhary, Ignas Masilionis, Jacklynn Egger, Andrew Chow, Thomas Walle, et al. Signatures of plasticity, metastasis, and immunosuppression in an atlas of human small cell lung cancer. *Cancer Cell*, 39(11):1479–1496, 2021.
402
+
403
Eliezer de Souza da Silva, Tomasz Kuśmierczyk, Marcelo Hartmann, and Arto Klami. Prior specification for Bayesian matrix factorization via prior predictive matching. *JMLR*, 24(67):1–51, 2023.
404
+
405
+ David B Dunson and Amy H Herring. Bayesian latent variable models for mixed discrete outcomes. *Biostatistics*, 6(1):11–25, 2005.
406
+
407
+ Bradley Efron. *Large-Scale Inference: Empirical Bayes Methods for Estimation, Testing, and Prediction*,
408
+ volume 1. Cambridge University Press, 2012.
409
+
410
+ Peter A Frost and James E Savarino. An empirical Bayes approach to efficient portfolio selection. *JFQA*,
411
+ 21(3):293–305, 1986.
412
+
413
+ Andrew Gelman. Prior distributions for variance parameters in hierarchical models (comment on article by Browne and Draper). 2006.
414
+
415
Sean M Gerrish and David M Blei. Predicting legislative roll calls from text. In *ICML*, 2011.

Prem Gopalan, Jake M Hofman, and David M Blei. Scalable recommendation with hierarchical Poisson factorization. In *UAI*, pp. 326–335, 2015.
416
+
417
+ F Maxwell Harper and Joseph A Konstan. The movielens datasets: History and context. *TiiS*, 5(4):1–19, 2015.
418
+
419
+ Lukas Heumos, Anna C. Schaar, et al. Best practices for single-cell analysis across modalities. Nat. Rev.
420
+
421
+ Genet., 24(8):550–572, 2023. ISSN 1471-0064.
422
+
423
+ Matthew D Hoffman and Matthew J Johnson. ELBO surgery: yet another way to carve up the variational evidence lower bound. 1(2), 2016.
424
+
425
+ Nikolaos Ignatiadis and Stefan Wager. Confidence intervals for nonparametric empirical Bayes analysis.
426
+
427
+ JASA, 117(539):1149–1166, 2022.
428
+
429
+ Mohsen Jamali and Martin Ester. A matrix factorization technique with trust propagation for recommendation in social networks. In *RecSys*, pp. 135–142, 2010.
430
+
431
+ Wenhua Jiang and Cun-Hui Zhang. General maximum likelihood empirical Bayes estimation of normal means. *Ann. Stat.*, 2009.
432
+
433
+ Hyunjik Kim and Andriy Mnih. Disentangling by factorising. In *ICML*, pp. 2649–2658. PMLR, 2018.
434
+
435
+ Diederik P Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. *arXiv:1412.6980*, 2014.
436
+
437
Diederik P Kingma and Max Welling. Auto-Encoding variational Bayes. *arXiv:1312.6114*, 2013.

Tamara G Kolda and Brett W Bader. Tensor decompositions and applications. *SIAM Review*, 51(3):455–500, 2009.
438
+
439
+ Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems.
440
+
441
+ Computer, 42(8):30–37, 2009.
442
+
443
+ Nan Laird. Nonparametric maximum likelihood estimation of a mixing distribution. *JASA*, 73(364):805–811, 1978.
444
+
445
+ Hanna Mendes Levitin, Jinzhou Yuan, Yim Ling Cheng, Francisco JR Ruiz, Erin C Bush, Jeffrey N Bruce, Peter Canoll, Antonio Iavarone, Anna Lasorella, David M Blei, et al. De novo gene signature identification from single-cell RNA-seq with hierarchical Poisson factorization. *Molecular systems biology*, 15(2):e8557, 2019.
446
+
447
+ Michael I Love, Wolfgang Huber, and Simon Anders. Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. *Genome biology*, 15(12):1–21, 2014.
448
+
449
+ Andriy Mnih and Russ R Salakhutdinov. Probabilistic matrix factorization. *NeurIPS*, 20, 2007.
450
+
451
+ Sumit Mukherjee, Bodhisattva Sen, and Subhabrata Sen. A mean field approach to empirical Bayes estimation in high-dimensional linear regression. *arXiv:2309.16843*, 2023.
452
+
453
+ T Tin Nguyen, Hien D Nguyen, Faicel Chamroukhi, and Geoffrey J McLachlan. Approximation by finite mixtures of continuous density functions that vanish at infinity. *Cogent Mathematics & Statistics*, 7(1): 1750861, 2020.
454
+
455
+ Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. *NeurIPS*, 32, 2019.
456
+
457
+ Rajesh Ranganath, Sean Gerrish, and David Blei. Black box variational inference. In *AISTATS*, pp. 814–822.
458
+
459
+ PMLR, 2014.
460
+
461
John NK Rao and Isabel Molina. *Small area estimation*. John Wiley & Sons, 2015.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In *ICML*, pp. 1278–1286. PMLR, 2014.
462
+
463
Herbert E Robbins. *An empirical Bayes approach to statistics*. Springer, 1992.

Ruslan Salakhutdinov and Andriy Mnih. Bayesian probabilistic matrix factorization using Markov chain Monte Carlo. In *ICML*, pp. 880–887, 2008.
464
+
465
+ Aaron Schein, John Paisley, David M Blei, and Hanna Wallach. Bayesian poisson tensor factorization for inferring multilateral relations from sparse dyadic event counts. In *SIGKDD*, pp. 1045–1054, 2015.
466
+
467
+ Mikkel N Schmidt, Ole Winther, and Lars Kai Hansen. Bayesian non-negative matrix factorization. In ICA
468
+ 2009, pp. 540–547. Springer, 2009.
469
+
470
+ Gordon K Smyth. Limma: linear models for microarray data. In *Bioinformatics and computational biology* solutions using R and Bioconductor, pp. 397–420. Springer, 2005.
471
+
472
Harald Steck. Embarrassingly shallow autoencoders for sparse data. In *WWW*, pp. 3251–3257, 2019.

David Michael Titterington, Adrian FM Smith, and Udi E Makov. *Statistical analysis of finite mixture distributions*. 1985.
473
+
474
Jakub Tomczak and Max Welling. VAE with a VampPrior. In *AISTATS*, pp. 1214–1223. PMLR, 2018.

Martin J Wainwright and Michael I Jordan. *Graphical models, exponential families, and variational inference*. Now Publishers, Inc., 2008.
477
+
478
+ Wei Wang and Matthew Stephens. Empirical Bayes matrix factorization. *JMLR*, 22(1):5332–5371, 2021.
479
+
480
+ Yixin Wang and David M. Blei. The Blessings of multiple causes. *JASA*, 114(528):1574–1596, 2019. Publisher:
481
+ Taylor & Francis.
482
+
483
+ Yixin Wang, David Blei, and John P Cunningham. Posterior collapse and latent variable non-identifiability.
484
+
485
+ NeurIPS, 34:5443–5455, 2021.
486
+
487
+ F Alexander Wolf, Philipp Angerer, and Fabian J Theis. SCANPY: large-scale single-cell gene expression data analysis. *Genome biology*, 19:1–5, 2018.
488
+
489
+ Xinyi Zhong, Chang Su, and Zhou Fan. Empirical Bayes PCA in high dimensions. *J. R. Stat. Soc.*, 84(3):
490
+ 853–878, 2022.
491
+
492
+ ## A Introduction
493
+
494
+ This is the supplement for the manuscript titled "Population priors for matrix factorization". The figures in this document supplement Figures 2, 3, and 4 in the main text. We provide derivations for twin population priors for Gaussian matrix factorization, as well as computing the likelihood of the held-out data in section B.
495
+
496
+ We then give more details about our experimental setup, including the parameters used in training (e.g., batch-size) in section C. Finally, we provide results for additional experiments in section D: for additional values of the latent dimension L for the datasets studied in the main text.
497
+
498
+ Note that this document is accompanied by an archive file code.zip, that contains the source code, instructions to install and run the code, and scripts to recreate the experiments and plot the figures in the manuscript. All scripts have been de-identified.
499
+
500
+ ## B Derivations
501
+
502
+ In this section, we give more details and derivations for the quantities defined in the main text. Specifically, in section B.1 we derive the twin population priors for the Gaussian matrix factorization (GMF). In section B.2, we derive the variational approximation to the twin population priors in GMF. Finally, in section B.3, we derive the expression for held-out likelihood in matrix factorization.
503
+
504
+ ## B.1 Twin Population Priors For Gaussian Matrix Factorization
505
+
506
+ In this section we derive population priors for Gaussian matrix factorization (GMF). In the classical GMF, the likelihood and priors on the latent variables are Gaussian (Mnih & Salakhutdinov, 2007). The generative model is as follows:
507
+
508
$$W_{j,l} \sim \mathcal{N}(0, \sigma_{W}^{2}), \quad j = 1 \ldots D,\; l = 1 \ldots L,$$
$$Z_{i,l} \sim \mathcal{N}(0, \sigma_{Z}^{2}), \quad i = 1 \ldots N,\; l = 1 \ldots L, \tag{28}$$
$$X_{i,j} \mid \mathbf{U}_{i}, \mathbf{V}_{j} \sim \mathcal{N}\Big(\sum_{l=1}^{L} Z_{i,l} W_{j,l},\; \sigma^{2}\Big), \quad i = 1 \ldots N,\; j = 1 \ldots D,$$

where $\mathbf{U} = [Z_{i,l}] \in \mathbb{R}^{N \times L}$ and $\mathbf{V} = [W_{j,l}] \in \mathbb{R}^{D \times L}$, with $\mathbf{U}_i$ and $\mathbf{V}_j$ the $L$-vectors representing the row- and column-wise latent variables, and $\sigma$, $\sigma_W$, and $\sigma_Z$ constant.
514
+
515
+ For TwinEB, our goal is to learn the prior for the row and column latent variables, so, as in the main text, we construct mixture priors (Equations 14 and 15), where Pµk,σk and Pνk,ηk are Gaussian parameterized by their mean and scale parameters. That is,
516
+
517
$$P_{\theta^{r}}(\mathbf{U}_{i}) = \sum_{k=1}^{K_{r}} \pi_{k}\, \mathcal{N}(\mathbf{U}_{i}; \boldsymbol{\mu}_{k}, \boldsymbol{\sigma}_{k}), \tag{29}$$
$$P_{\theta^{c}}(\mathbf{V}_{j}) = \sum_{k=1}^{K_{c}} \rho_{k}\, \mathcal{N}(\mathbf{V}_{j}; \boldsymbol{\nu}_{k}, \boldsymbol{\eta}_{k}), \tag{30}$$

where $K_r$ and $K_c$ are the numbers of components in the mixtures, $\theta^r = \{\boldsymbol{\mu}, \boldsymbol{\sigma}, \boldsymbol{\pi}\}$ and $\theta^c = \{\boldsymbol{\nu}, \boldsymbol{\eta}, \boldsymbol{\rho}\}$. The locations $\boldsymbol{\mu} \in \mathbb{R}^{K_r \times L}$ and $\boldsymbol{\nu} \in \mathbb{R}^{K_c \times L}$, the scales $\boldsymbol{\sigma} \in \mathbb{R}^{K_r \times L}$ and $\boldsymbol{\eta} \in \mathbb{R}^{K_c \times L}$, and the mixture weights $\boldsymbol{\pi} \in \Delta^{K_r}$ and $\boldsymbol{\rho} \in \Delta^{K_c}$ are the parameters of the mixture priors.
527
+
528
## B.2 Posterior Inference In GMF With Twin Population Priors
529
+
530
Given data $\mathbf{X}$, our goal is to calculate the posterior $P(\mathbf{U},\mathbf{V} \mid \mathbf{X}; \theta^r, \theta^c)$, which also depends on our choice of priors $\theta^r, \theta^c$. Our strategy is the same as in the main text, namely, to simultaneously optimize the parameters of the variational distribution and the TwinEB prior. We substitute the Log-Normal variational family of the main text with a Gaussian one, as follows:

$$\lambda_{i,l}^{r} := (a_{i,l}^{\prime}, b_{i,l}^{\prime}), \qquad \lambda_{j,l}^{c} := (a_{j,l}, b_{j,l}),$$
$$q_{\lambda_{i,l}^{r}}(U_{i,l}) := \mathcal{N}(a_{i,l}^{\prime}, b_{i,l}^{\prime}), \qquad q_{\lambda_{j,l}^{c}}(V_{j,l}) := \mathcal{N}(a_{j,l}, b_{j,l}).$$

Each Gaussian is parameterized by its natural parameters $a$ and $b$:

$$\mathcal{N}(x; a, b) \propto \exp\left(a x + b x^{2}\right).$$

The log-likelihood of the data is:
536
+
537
$$\log P(\mathbf{X}\mid\mathbf{Z},\mathbf{W}) = \log \prod_{i=1}^{N}\prod_{j=1}^{D} P_{\sigma}(X_{i,j}\mid\mathbf{Z}_{i},\mathbf{W}_{j}) \tag{36}$$
$$= \sum_{i=1}^{N}\sum_{j=1}^{D} \log \mathcal{N}\left(X_{i,j};\, \sum_{l=1}^{L} Z_{i,l} W_{j,l},\, \sigma^{2}\right) \tag{37}$$
$$= -\sum_{i=1}^{N}\sum_{j=1}^{D} \left[\log \sigma\sqrt{2\pi} + \frac{1}{2}\left(\frac{X_{i,j} - \sum_{l=1}^{L} Z_{i,l} W_{j,l}}{\sigma}\right)^{2}\right]. \tag{38}$$
541
+
542
+ ## B.3 Held-Out Likelihood For Matrix Factorization
543
+
544
+ We use the log likelihood of held-out data as the score for each model. Let Xout = {Xi,j} denote Nout entries of the held-out rows that were masked. We compute the likelihood of the masked entries via the posterior predictive distribution:
545
+
546
$$\log P(X^{\mathrm{out}}\mid X^{\mathrm{in}}) = \log \prod_{i=1}^{N^{\mathrm{out}}} P(X_{i}^{\mathrm{out}}\mid X^{\mathrm{in}}) \tag{39}$$
$$= \sum_{i=1}^{N^{\mathrm{out}}} \log P(X_{i}^{\mathrm{out}}\mid X^{\mathrm{in}}). \tag{40}$$
548
For Bayesian matrix factorization we expand the summand in Equation (40) as follows:

$$\log P(X_{i}^{\mathrm{out}}\mid X^{\mathrm{in}}) \approx \log \iint P(X_{i}^{\mathrm{out}}\mid\mathbf{U},\mathbf{V})\, P(\mathbf{U},\mathbf{V}\mid X^{\mathrm{in}})\, d\mathbf{U}\, d\mathbf{V}$$
$$\approx \log \iint P(X_{i}^{\mathrm{out}}\mid\mathbf{U},\mathbf{V})\, q_{\Lambda}(\mathbf{U},\mathbf{V}\mid X^{\mathrm{in}})\, d\mathbf{U}\, d\mathbf{V}$$
$$= \log \mathbb{E}_{\mathbf{U},\mathbf{V} \sim q_{\Lambda}(\cdot)}\big[P(X_{i}^{\mathrm{out}}\mid\mathbf{U},\mathbf{V})\big]$$
$$\approx \log \frac{1}{M}\sum_{m=1}^{M} P(X_{i}^{\mathrm{out}}\mid\mathbf{U}^{(m)},\mathbf{V}^{(m)}), \tag{41}$$

where $M$ is the number of Monte Carlo samples from $q_{\Lambda}(\mathbf{U},\mathbf{V}\mid X^{\mathrm{in}})$. In the second line of Equation (41) we approximate the true posterior $P(\mathbf{U},\mathbf{V}\mid X^{\mathrm{in}})$ with its variational counterpart $q_{\Lambda}$. The log-likelihood score for the entire held-out data is then:

$$\sum_{i}^{N^{\mathrm{out}}} \log P(X_{i}^{\mathrm{out}}\mid X^{\mathrm{in}}) \approx \sum_{i}^{N^{\mathrm{out}}} \log \frac{1}{M}\sum_{m=1}^{M} P(X_{i}^{\mathrm{out}}\mid\mathbf{U}^{(m)},\mathbf{V}^{(m)}). \tag{42}$$
557
+
558
559
+
560
+ ## C Experimental Details
561
+
562
+ In this section we give more details on our experimental studies in the main manuscript. In section C.1 we describe preprocessing for the gene expression in the Ru1322-scRNAseq dataset. In section C.2, we specify the parameters used during training. In section C.3 we give a brief description of the artifacts that accompany this supplementary material.
563
+
564
## C.1 Preprocessing Of The Ru1322-scRNAseq Gene Expression Dataset
565
+
566
We use CellRanger version 6.0.1 to process the FASTQ files and generate the unique molecular identifier (UMI) count matrices. We use scanpy to preprocess the data, and the seuratv3 algorithm to select highly variable genes (Wolf et al., 2018).
568
+
569
+ ## C.2 Training Details
570
+
571
We set a batch size of 128 in all our experiments. We ran the Poisson and Gaussian matrix factorization experiments for a maximum of 20,000 iterations; by this point, all runs had converged.
572
+
573
We initialize the learning rates for the row and column variables, rlr and clr, separately; in all experiments we fix both initial learning rates to 0.01. In the experiments in the main text, we use 10 Monte Carlo samples to approximate the ELBO, while in the supplemental experiments, we use a single particle.
574
+
575
+ For the PMF experiments in the supplement, we subsample zeros as is standard (Gopalan et al., 2015).
576
+
577
+ We uniformly randomly subsample the same number of zeros as non-zero values to estimate the likelihood.
578
+
579
+ Concretely, let L(Xout) denote the log-likelihood of the masked entries of the held-out rows, Xout = {Xi,j}. Then L(Xout) can be decomposed as the sum of non-zero and zero Xi,j :
580
+
581
$$\mathcal{L}(X^{\mathrm{out}}) = \mathcal{L}(X_{x_{i,j}\neq 0}^{\mathrm{out}}) + \mathcal{L}(X_{x_{i,j}=0}^{\mathrm{out}}), \tag{44}$$

where $X^{\mathrm{out}}_{x_{i,j}\neq 0}$ and $X^{\mathrm{out}}_{x_{i,j}=0}$ have cardinalities $N^{\mathrm{out}}_{\mathrm{non\text{-}zero}}$ and $N^{\mathrm{out}}_{\mathrm{zero}}$ respectively.
585
+
586
We approximate Equation (44) by subsampling $N_{\mathrm{sub}} = \min(N^{\mathrm{out}}_{\mathrm{zero}}, N^{\mathrm{out}}_{\mathrm{non\text{-}zero}})$ of the zeros. Let $X^{\mathrm{zero}}_{N_{\mathrm{sub}}}$ denote a multiset of zeros of cardinality $N_{\mathrm{sub}}$; then:

$$\hat{\mathcal{L}}(X^{\mathrm{out}}) \approx \mathcal{L}(X^{\mathrm{out}}_{x_{i,j}\neq 0}) + \frac{N^{\mathrm{out}}_{\mathrm{zero}}}{N_{\mathrm{sub}}}\, \mathcal{L}(X^{\mathrm{zero}}_{N_{\mathrm{sub}}}). \tag{45}$$
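A sketch of this correction: evaluate all non-zero masked entries, subsample an equal number of zeros, and reweight their contribution as in Equation (45). The per-entry log-likelihood function loglik_fn is a stand-in for the model's predictive density:

```python
import numpy as np

def subsampled_heldout_ll(values, loglik_fn, rng=np.random.default_rng(0)):
    # values: 1-D array of masked held-out entries;
    # loglik_fn maps an array of entries to their per-entry log-likelihoods.
    nonzero = values[values != 0]
    zeros = values[values == 0]
    n_sub = min(len(zeros), len(nonzero))
    zero_sample = rng.choice(zeros, size=n_sub, replace=False)
    # L_hat(X_out) = L(non-zero part) + (N_zero / N_sub) * L(subsampled zeros).
    return loglik_fn(nonzero).sum() + (len(zeros) / n_sub) * loglik_fn(zero_sample).sum()
```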
591
+
592
+ We ran our experiment on a machine equipped with an NVIDIA A100 GPU with 80GB memory. We implemented all methods in pytorch (Paszke et al., 2019).
593
+
594
+ ## C.3 Implementation
595
+
596
+ Please refer to the code directory for the source code and instructions on how to run the model. The file README.md contains instructions for installing the software, and running it. Under the data directory, we included preprocessed data for the MovieLens-100K dataset, a smaller version of the MovieLens-1M studied in the main text. In the notebooks directory, we put notebooks used for preprocessing the data, and plotting the figures in the manuscript. Finally, in the pipelines directory, we put nextflow scripts that recreate experiments that we have run for the manuscript.
597
+
598
+ ## D Additional Experiments
599
+
600
+ In this section we present additional experimental results. We show results for additional values of the latent dimension L. Concretely, section D.1 studies the effects of the twin population priors on Poisson and Gaussian matrix factorization, on three real world datasets, while section D.2 examines the sensitivity of the twin population priors to the choice of its hyper-parameters. We find that these results corroborate those that were presented in the main manuscript, that is, matrix factorization with traditional priors is sensitive to the choice of the hyper-parameters of the prior, and twin population priors is a robust way to set the prior in this family of models.
601
+
602
+ ## D.1 Additional Values Of Latent Dimensions
603
+
604
In the main text, we study the effects of twin population priors on Poisson and Gaussian matrix factorization over four real datasets, with the dimension of the latent variables set to L = 15. Here we present results for additional values of L; Figures 5, 6, 7, 8, and 9 show the results. As before, the twin population priors find a comparable or better prior, in the sense of higher held-out likelihood.
605
+
606
+ ![18_image_0.png](18_image_0.png)
607
+
608
+ ## D.2 Performance Increase With Number Of Mixture Components
609
+
610
We show the performance of TwinEB on simulated data as we vary the number of row and column mixture components. Here we add results for additional values of L, and also study the MovieLens 100K dataset. The results in Figure 9 suggest that as the number of mixture components increases, the performance of TwinEB improves. The gain is most apparent when the number of row components, Kr, increases. This is expected, as our evaluation procedure measures generalizability for held-out rows, but not held-out columns.
611
+
612
+ ![19_image_0.png](19_image_0.png)
613
+
614
+ Figure 6: As in Figure 5, but with additional values of latent dimension L.
615
+
616
+ ![20_image_0.png](20_image_0.png)
617
+
618
+ Figure 7: As in Figure 3, but with additional values of latent dimension L.
619
+
620
+ ![21_image_0.png](21_image_0.png)
621
+
622
Figure 8: As in Figure 7, but with additional values of latent dimension L.

![22_image_1.png](22_image_1.png)

![22_image_0.png](22_image_0.png)

Figure 9: As in Figure 4, but with additional numbers of column components (1, 2, 4, 8, 16, 20, 32) and for latent dimensions L = 32 (first panel) and L = 64 (second panel).
AT9G5s1pOj/AT9G5s1pOj_meta.json ADDED
@@ -0,0 +1,25 @@
1
+ {
2
+ "languages": null,
3
+ "filetype": "pdf",
4
+ "toc": [],
5
+ "pages": 23,
6
+ "ocr_stats": {
7
+ "ocr_pages": 7,
8
+ "ocr_failed": 0,
9
+ "ocr_success": 7,
10
+ "ocr_engine": "surya"
11
+ },
12
+ "block_stats": {
13
+ "header_footer": 19,
14
+ "code": 1,
15
+ "table": 1,
16
+ "equations": {
17
+ "successful_ocr": 64,
18
+ "unsuccessful_ocr": 5,
19
+ "equations": 69
20
+ }
21
+ },
22
+ "postprocess_stats": {
23
+ "edit": {}
24
+ }
25
+ }