|
# Population Priors For Matrix Factorization |
|
|
|
Anonymous authors Paper under double-blind review |
|
|
|
## Abstract |
|
|
|
We develop an empirical Bayes prior for probabilistic matrix factorization. Matrix factorization models each cell of a matrix with two latent variables, one associated with the cell's row and one associated with the cell's column. How should we set the priors of these two latent variables? Drawing from empirical Bayes principles, we consider estimating the priors from data, to find those that best match the populations of row and column latent vectors. Thus we develop the twin population prior. We develop a variational inference algorithm to simultaneously learn the empirical priors and approximate the corresponding posterior. We evaluate this approach with both synthetic and real-world data on diverse applications: movie ratings, book ratings, single-cell gene expression data, and musical preferences. Without needing to tune Bayesian hyperparameters, we find that the twin population prior leads to high-quality predictions, outperforming manually tuned priors.
|
|
|
## 1 Introduction |
|
|
|
This paper is about empirical Bayes methods for setting the priors in Bayesian matrix factorization (Mnih & Salakhutdinov, 2007; Gopalan et al., 2015). Matrix factorization models each cell of a matrix with two latent variables, one associated with its row and one associated with its column. Matrix factorization has found broad applications across many fields, including studying consumer behavior, understanding legislative patterns, assessing pharmaceutical impacts, and exploring social networks (Gopalan et al., 2015; Koren et al., 2009; Gerrish & Blei, 2011; Jamali & Ester, 2010). |
|
|
|
Suppose $X_{i,j}$ is the observed entry in row $i$ and column $j$, such as user $i$'s rating of movie $j$. As a hierarchical model, a matrix factorization model generates the data from the following process:
|
|
|
$$U_{i}\sim P_{\theta^{r}}(U), \qquad (1)$$

$$V_{j}\sim P_{\theta^{c}}(V), \qquad (2)$$

$$X_{i,j}\sim P(X_{i,j}\mid U_{i},V_{j}). \qquad (3)$$
|
Here $U_i$ and $V_j$ are row- and column-specific latent vectors, and $P_{\theta^r}$ and $P_{\theta^c}$ are the priors, with hyperparameters $\theta^r$ and $\theta^c$ respectively.
|
|
|
This formulation encompasses many factorization models. In Gaussian matrix factorization (Mnih & Salakhutdinov, 2007), the priors are Gaussians and $X_{i,j}$ is drawn from a Gaussian with mean $U_i^\top V_j$ and variance $\sigma^2$. In Poisson matrix factorization (Canny, 2004; Dunson & Herring, 2005; Gopalan et al., 2015), the priors are over the positive reals and $X_{i,j}$ is drawn from a Poisson with rate $U_i^\top V_j$.
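To make the generative process concrete, the following is a minimal sketch that samples a small matrix from this model with a Poisson likelihood and Gamma priors standing in for $P_{\theta^r}$ and $P_{\theta^c}$; the sizes and the Gamma(1, 1) hyperparameters are illustrative assumptions, not values used later in the paper.

```python
import torch
from torch.distributions import Gamma, Poisson

torch.manual_seed(0)
N, D, L = 6, 5, 3   # rows, columns, latent dimension (illustrative sizes)

# Row and column latent vectors drawn from their priors (Equations 1-2).
U = Gamma(1.0, 1.0).sample((N, L))   # non-negative row factors
V = Gamma(1.0, 1.0).sample((D, L))   # non-negative column factors

# Each cell is drawn from the likelihood (Equation 3), here Poisson with rate U_i^T V_j.
X = Poisson(U @ V.T).sample()
print(X)
```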
|
|
|
An observed matrix of data X then defines a posterior distribution P(U,V | X) over the row variables and the column variables. The posterior can provide interpretations of the data and an avenue to form predictions about missing entries, for example, for a recommendation system. |
|
|
|
The prior distributions on the row variables and column variables significantly impact the quality of the model posterior. How should we set them? Practitioners typically assume a simple parametric family for the priors, such as a Gaussian or a Gamma, and then find the prior hyperparameters best suited to the data, e.g., with cross-validation (Salakhutdinov & Mnih, 2008; Schmidt et al., 2009). This approach can be effective, but it is expensive and only allows for priors from a simple parametric family. |
|
|
|
In this paper, we develop an empirical Bayes (EB) methodology for setting the priors (Robbins, 1992; Efron, 2012), learning them from data. The EB idea is to set the priors using the data, for example by finding the one that maximizes the marginal log-likelihood of the data. It is applicable for priors over variables that have repeated draws from them, such as the row and column variables in matrix factorization. |
|
|
|
EB has found applications in applied sciences in diverse fields such as astronomy (Bovy et al., 2011), actuarial sciences (Bühlmann & Gisler, 2005), genomics (Smyth, 2005; Love et al., 2014), economics (Frost & Savarino, 1986; Angrist et al., 2017), and survey sampling (Rao & Molina, 2015). EB priors have been successfully employed in simple hierarchical models, such as in variational autoencoders (Kingma & Welling, 2013; Tomczak & Welling, 2018; Kim & Mnih, 2018). Matrix factorization, however, provides a different type of application of EB. In matrix factorization, there are two priors to set, one for the row variables and one for the column variables, and the data that informs us about them is overlapping. Thus, we will find empirical Bayes priors for both row variables and column variables. The result is the *twin empirical Bayes prior* (TwinEB), a practical EB method for matrix factorization. Other methods for EB on matrix factorization include Wang & Stephens (2021) and da Silva et al. (2023); we discuss these below.

Specifically, we model the two priors with mixture distributions, one for each prior. We use mixtures since they are a flexible family of distributions that can approximate a wide range of distributions (Titterington et al., 1985; Nguyen et al., 2020). We use variational inference (Blei et al., 2017; Wainwright & Jordan, 2008) to simultaneously estimate the priors and approximate the corresponding posterior. We verify the efficiency and the robustness of this approach with real-world data about recommendation systems and computational biology, and for both Poisson matrix factorization and Gaussian matrix factorization. We summarize the contributions as follows:
|
1. We develop the *twin population prior*, an EB prior for Bayesian matrix factorization.

2. We derive a variational inference algorithm to approximate the matrix factors' posteriors and learn the twin population prior simultaneously.

3. We study the twin population prior on both synthetic and real data and with two types of factorization. The automatically learned EB prior performs as well as the best prior chosen retrospectively.
|
|
|
## 2 Related Work |
|
|
|
One approach to setting the hyperparameters of priors involves using hierarchical Bayesian models (Gelman, 2006). In this line of work, the prior's hyperparameters are treated as unknown and assigned a prior with hyperpriors to be determined. Gopalan et al. (2015) employs this method for introducing hierarchical Poisson factorization, and Levitin et al. (2019) applies it to gene signature discovery from scRNAseq data. Our work uses more expressive priors (mixtures), which yields a more flexible class of models while keeping the model simpler, since no extra latent variables for the hyperparameters are introduced.
|
|
|
A fruitful direction of research for setting priors for matrix factorization has also been the use of empirical Bayes, a methodology by which one tries to find the prior that matches, in some sense, the population distribution of the data. In Wang & Stephens (2021), the authors formulate prior elicitation for matrix factorization (MF) as steps in a variational expectation maximization algorithm, and in that context, find that it is equivalent to solving the EB normal means (EBNM) problem (Jiang & Zhang, 2009). Our proposed approach is closely related to Wang & Stephens (2021), but we propose to update both priors simultaneously which bypasses potentially costly numerical integration required in the EBNM step. A related line of research makes the connection to EBNM via random-matrix theory, focusing specifically on denoising principal components (Zhong et al., 2022). Closely related to TwinEB, da Silva et al. (2023) optimizes hyperparameters to match the prior predictive distribution to statistics computed from the data, and derives closed-form solutions for PMF and its hierarchical extensions. In Section 4, we compare our method to Wang & Stephens (2021) and da Silva et al. (2023). |
|
|
|
Empirical Bayes (EB) priors have been explored in the context of variational autoencoders (VAEs) (Kingma & Welling, 2013) under the name aggregated posterior, average encoding distribution (Hoffman & Johnson, 2016), or VampPrior (Tomczak & Welling, 2018). The VampPrior learns an amortized posterior and a prior over the latent variables using a shared neural network. It models a prior on the rows only and was used to address posterior collapse (Tomczak & Welling, 2018) or to learn disentangled representations (Kim & Mnih, 2018). Our method focuses on matrix factorization, and we derive EB priors for both row and column latent variables.
|
|
|
## 3 Empirical Bayes Priors For Probabilistic Matrix Factorization |
|
|
|
Our goal is to develop empirical Bayes (EB) priors for Bayesian matrix factorization models. We will focus here on Poisson matrix factorization (PMF). In the supplement, we derive EB priors for Gaussian matrix factorization (GMF). With matrix factorization, the presence of repeated and identically distributed latent variables for each row and each column provides the opportunity to learn their prior distribution from data. This is a form of empirical Bayes (Robbins, 1992; Efron, 2012) that prescribes a *population prior* (see Section 3.1). |
|
|
|
This population prior aims to align the model's marginal distribution of observations with the observed population distribution. In the special case of matrix factorization, there are two distinct populations: the population of row vectors and the population of column vectors. With TwinEB, we learn one prior for each population. This is a form of hierarchical modeling without introducing an extra layer of latent variables.
|
|
|
Notations. Define $[N] := \{1, \ldots, N\}$. Let $\mathbf{X} = [X_{i,j}] \in \mathbb{N}^{N \times D}$, where $X_{i,j}$ represents the measured feature $j$ in individual $i$, with $i \in [N]$ and $j \in [D]$. The dimension of the latent variables (the number of factors) is denoted as $L$.
|
|
|
## 3.1 Background: Population Priors For Simple Hierarchical Models |
|
|
|
A crucial step in Bayesian statistics is the choice of the prior distribution; if done arbitrarily, it can lead to suboptimal posterior inference (Wang et al., 2021). We choose to follow an empirical Bayes principle that prescribes a *population prior* (Hoffman & Johnson, 2016; Tomczak & Welling, 2018). This prior, by design, aligns the model's marginal distribution of observations with the population distribution. |
|
|
|
We first focus on a family of latent variable models called simple hierarchical models (Agrawal & Domke, 2021). The joint distribution factorizes as follows: |
|
|
|
$$P(\mathbf{Z},\mathbf{X})=\prod_{i=1}^{N}P_{\pi}(\mathbf{z}_{i})P_{\theta}(\mathbf{x}_{i}\mid\mathbf{z}_{i}), \qquad (4)$$
|
where $\theta$ and $\mathbf{Z} = [\mathbf{z}_i]$ are global and local latent variables respectively, and observations $\mathbf{X} = [\mathbf{x}_i]$ are exchangeable conditioned on the latent variables. The variable $\pi$ indexes the prior distribution of $\mathbf{z}_i$. We assume that the global latent variables are fixed (e.g., they are learned through MAP estimation); we use a subscript notation to indicate fixed variables. To simplify notation further, we focus on the marginal likelihood of a single observation, $\mathbf{x}$, and its corresponding local latent variable $\mathbf{z}$.
|
|
|
Let $P^\star(\mathbf{x})$ be the true (unobserved) distribution of observations. An empirical Bayes criterion is that the marginal distribution of observations under the model, $P_{\pi,\theta}(\mathbf{x})$, should align with their true population distribution (Ignatiadis & Wager, 2022), that is:

$$P^{\star}(\mathbf{x})=P_{\pi,\theta}(\mathbf{x})=\int P_{\pi}(\mathbf{z})P_{\theta}(\mathbf{x}\mid\mathbf{z})\,d\mathbf{z}. \qquad (5)$$
|
Our goal is to set $\pi$ such that, for a fixed $\theta$, Equation (5) holds. For a fixed likelihood function $P_\theta(\mathbf{x} \mid \mathbf{z})$ (i.e., $\theta$ is fixed), the expression for the prior $P_{\pi^\star}$ that satisfies this condition is:
|
|
|
$$P_{\pi^{\star}}(\mathbf{z})=\int P_{\pi^{\star},\theta}(\mathbf{z}\mid\mathbf{x})\,P^{\star}(\mathbf{x})\,d\mathbf{x}=\mathbb{E}_{P^{\star}(\mathbf{x})}\left[P_{\pi^{\star},\theta}(\mathbf{z}\mid\mathbf{x})\right], \qquad (6)$$

where $P_{\pi^{\star},\theta}(\mathbf{z}\mid\mathbf{x})$ is the (local) posterior distribution of the latent variable $\mathbf{z}$ given the observation $\mathbf{x}$ under the model. This definition presents two issues: the unknown true marginal distribution $P^\star(\mathbf{x})$, and the fact that the target prior $P_{\pi^\star}(\mathbf{z})$ is on both sides of Equation (6), explicitly on the left and implicitly via the posterior on the right. The research literature has approximated Equation (6) with Monte Carlo estimates of $P^\star(\mathbf{x})$ and variational inference of $P(\mathbf{z} \mid \mathbf{x})$ (Hoffman & Johnson, 2016; Tomczak & Welling, 2018).
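To illustrate Equation (6), the sketch below forms the Monte Carlo approximation of the population prior as an equal-weight mixture of per-datapoint posteriors, in the spirit of the aggregated posterior of Hoffman & Johnson (2016); the Gaussian "posteriors" here are random placeholders standing in for whatever inference procedure produced them.

```python
import torch
from torch.distributions import Normal, Categorical, MixtureSameFamily, Independent

torch.manual_seed(0)
N, L = 100, 2   # number of observations and latent dimension (illustrative)

# Stand-ins for the per-datapoint posteriors q(z | x_n), n = 1..N.
post_means = torch.randn(N, L)
post_scales = 0.5 * torch.ones(N, L)

# Population prior: equal-weight mixture over the N per-datapoint posteriors,
# i.e., a Monte Carlo estimate of E_{P*(x)}[P(z | x)] from Equation (6).
components = Independent(Normal(post_means, post_scales), 1)
population_prior = MixtureSameFamily(Categorical(logits=torch.zeros(N)), components)

print(population_prior.log_prob(torch.zeros(L)))  # log-density of the estimated prior at z = 0
```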
|
|
|
## 3.2 Population Priors For Probabilistic Matrix Factorization |
|
|
|
Our goal is to develop population priors for Bayesian matrix factorization models. The challenge is that, unlike in simple hierarchical models, there is no distinction between local and global random variables; instead, the latent variables are row- and column-specific random variables.
|
|
|
## 3.2.1 Twin Population Priors |
|
|
|
We establish population priors for two latent variables, one for the rows, $P_{\mathbf{V}^{\star}}(\mathbf{U}_{i})$, and one for the columns, $P_{\mathbf{U}^{\star}}(\mathbf{V}_{j})$. These priors will match two different populations, one of the row vectors and one of the column vectors:
|
|
|
$$P^{\star}_{\mathrm{row}}(\mathbf{X}_{i,:}) := \text{Population distribution of row vectors},$$
$$P^{\star}_{\mathrm{column}}(\mathbf{X}_{:,j}) := \text{Population distribution of column vectors}.$$
|
We begin with population priors for the row latent variables. As in Section 3.1, we specify the prior based on an empirical Bayes principle such that the true marginal distribution of the rows $P^\star_{\mathrm{row}}(\mathbf{X}_{i,:})$ is aligned with the distribution of the rows under the model $P_{\mathbf{V}}(\mathbf{X}_{i,:})$, that is:

$$P_{\mathrm{row}}^{\star}(\mathbf{X}_{i,:})=P_{\mathbf{V}}(\mathbf{X}_{i,:})=\int P(\mathbf{U}_{i})\prod_{j=1}^{D}P(X_{i,j}\mid\mathbf{U}_{i},\mathbf{V}_{j})\,d\mathbf{U}_{i}, \qquad (7)$$

for a fixed set of column variables $\mathbf{V}$. For Equation (7) to hold, a population prior should be used:
|
|
|
$$P_{\mathbf{V}^{\star}}(\mathbf{U}_{i})=\mathbb{E}_{P_{\mathrm{row}}^{\star}(\mathbf{X}_{i,:})}\left[P_{\mathbf{V}}(\mathbf{U}_{i}\mid \mathbf{X}_{i,:})\right]. \qquad (8)$$

Similarly for the columns, for a fixed set of row latent variables $\mathbf{U}$, the empirical Bayes criterion is:

$$P_{\mathrm{column}}^{\star}(\mathbf{X}_{:,j})=P_{\mathbf{U}}(\mathbf{X}_{:,j})=\int P(\mathbf{V}_{j})\prod_{i=1}^{N}P(X_{i,j}\mid\mathbf{U}_{i},\mathbf{V}_{j})\,d\mathbf{V}_{j}. \qquad (9)$$
|
|
|
The prior that satisfies this criterion is the column population prior: |
|
|
|
$$P_{\mathbf{U}^{\star}}(\mathbf{V}_{j})=\mathbb{E}_{P_{\mathrm{column}}^{\star}(\mathbf{X}_{:,j})}\left[P_{\mathbf{U}}(\mathbf{V}_{j}\mid \mathbf{X}_{:,j})\right]. \qquad (10)$$
|
Since there are two populations in need of prior specification, we call Equations (8) and (10) the *twin* population priors. We have established the form of population priors in probabilistic matrix factorization. Next we focus on how to estimate the twin population priors and how to approximate posterior inference under them. In the remainder of the paper, we focus on Poisson matrix factorization (PMF). We derive the priors for Gaussian matrix factorization (GMF) in the supplement.
|
|
|
## 3.3 Twin Eb Prior For Poisson Matrix Factorization |
|
|
|
In Poisson matrix factorization (PMF), the row and column latent variables $\mathbf{U}_i$ and $\mathbf{V}_j$ are non-negative $L$-vectors and the likelihood in Equation (3) is Poisson:
|
|
|
$$X_{i,j}\mid\mathbf{U}_{i},\mathbf{V}_{j}\sim\mathrm{Poisson}\left(\mathbf{U}_{i}^{\top}\mathbf{V}_{j}\right), \quad \text{for } i\in[N],\ j\in[D]. \qquad (11)$$
|
|
|
The log-likelihood of the data is: |
|
|
|
$$\log P(\mathbf{X}\mid\mathbf{U},\mathbf{V})=\log\prod_{i=1}^{N}\prod_{j=1}^{D}P(X_{i,j}\mid\mathbf{U}_{i},\mathbf{V}_{j})=\sum_{i=1}^{N}\sum_{j=1}^{D}\log\mathrm{Poisson}\left(X_{i,j};\,\mathbf{U}_{i}^{\top}\mathbf{V}_{j}\right). \qquad (12)$$
|
|
|
Some methods place Gamma priors on Vj and Ui (Gopalan et al., 2015). Note that this is a Bayesian formulation of non-negative matrix factorization (Cemgil, 2009). |
|
|
|
To compute the population prior for the rows, $P_{\mathbf{V}}(\mathbf{U}_i) = \mathbb{E}_{P^\star_{\mathrm{row}}(\mathbf{X}_{i,:})}[P_{\mathbf{V}}(\mathbf{U}_i \mid \mathbf{X}_{i,:})]$, we face two problems: we do not know the true population distribution of the rows $P^\star_{\mathrm{row}}(\mathbf{X}_{i,:})$, and the population prior $P_{\mathbf{V}}(\mathbf{U}_i)$ appears on both sides of the equality.
|
|
|
To find the population prior, we first notice that a Monte Carlo estimate of Equation (8) can be written as:
|
|
|
$$P_{\mathbf{V}}(\mathbf{U}_{i})\approx\frac{1}{N}\sum_{i^{\prime}=1}^{N}P_{\mathbf{V}}(\mathbf{U}_{i}\mid\mathbf{X}_{i^{\prime},:}).\tag{13}$$ |
|
|
|
When the prior satisfies Equation (13), this property is called self-consistency (Laird, 1978). |
|
|
|
The structure of Equation (13) suggests using families of mixtures of parametric distributions to approximate the row and column population priors (Tomczak & Welling, 2018). Mixtures can approximate complex distributions as their number of components increases, while having the convenience of remaining parametric (Titterington et al., 1985; Nguyen et al., 2020). We choose to model the priors as:
|
|
|
$$P_{\mathbf{V}}(\mathbf{U}_{i}):=P_{\theta^{r}}(\mathbf{U}_{i})=\sum_{k=1}^{K_{r}}\pi_{k}P_{\boldsymbol{\mu}_{k},\boldsymbol{\sigma}_{k}}(\mathbf{U}_{i}), \qquad (14)$$

$$P_{\mathbf{U}}(\mathbf{V}_{j}):=P_{\theta^{c}}(\mathbf{V}_{j})=\sum_{k=1}^{K_{c}}\rho_{k}P_{\boldsymbol{\nu}_{k},\boldsymbol{\eta}_{k}}(\mathbf{V}_{j}), \qquad (15)$$
|
|
|
where $K_r$ and $K_c$ are the number of components in the mixtures, $\theta^{r} = \{\boldsymbol{\mu},\boldsymbol{\sigma},\boldsymbol{\pi}\}$ and $\theta^{c} = \{\boldsymbol{\nu},\boldsymbol{\eta},\boldsymbol{\rho}\}$. The locations $\boldsymbol{\mu}\in\mathbb{R}^{K_r\times L}$ and $\boldsymbol{\nu}\in\mathbb{R}^{K_c\times L}$, the scales $\boldsymbol{\sigma}\in\mathbb{R}^{K_r\times L}$ and $\boldsymbol{\eta}\in\mathbb{R}^{K_c\times L}$, and the mixture weights $\boldsymbol{\pi}\in\Delta^{K_r}$ and $\boldsymbol{\rho}\in\Delta^{K_c}$ are the parameters of the mixture priors. As $K_r$ and $K_c$ increase, the priors become more and more expressive (see Figure 4). Figure 1 shows a graphical model representation of matrix factorization with EB priors.
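As an illustration, the sketch below parameterizes the row prior of Equation (14) as a learnable mixture of Gammas in PyTorch (one Gamma per component and latent dimension, with independent dimensions). The unconstrained-to-positive softplus transform and the initialization are our own illustrative choices, not prescribed by the model.

```python
import torch
import torch.nn as nn
from torch.distributions import Gamma, Categorical, MixtureSameFamily, Independent

class MixtureGammaPrior(nn.Module):
    """Learnable prior P_theta(u) = sum_k pi_k prod_l Gamma(u_l; alpha_kl, beta_kl)."""

    def __init__(self, n_components: int, latent_dim: int):
        super().__init__()
        # Unconstrained parameters; softmax / softplus map them to valid values.
        self.logits = nn.Parameter(torch.zeros(n_components))
        self.raw_concentration = nn.Parameter(torch.zeros(n_components, latent_dim))
        self.raw_rate = nn.Parameter(torch.zeros(n_components, latent_dim))

    def distribution(self) -> MixtureSameFamily:
        concentration = nn.functional.softplus(self.raw_concentration) + 1e-4
        rate = nn.functional.softplus(self.raw_rate) + 1e-4
        components = Independent(Gamma(concentration, rate), 1)
        return MixtureSameFamily(Categorical(logits=self.logits), components)

    def log_prob(self, u: torch.Tensor) -> torch.Tensor:
        # u has shape (..., latent_dim); returns one log-density per leading index.
        return self.distribution().log_prob(u)

prior = MixtureGammaPrior(n_components=70, latent_dim=15)
u = torch.rand(8, 15) + 0.1         # a batch of positive row latent vectors
print(prior.log_prob(u).shape)      # torch.Size([8])
```

The column prior of Equation (15) can be defined in the same way with its own number of components.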
|
|
|
In a classical empirical Bayes setup, the idea is to set the priors that maximize the marginal likelihood of the data: |
|
|
|
$$\hat{\theta}^{r},\hat{\theta}^{c}=\operatorname*{arg\,max}_{\theta^{r},\theta^{c}}\log P(\mathbf{X};\theta^{r},\theta^{c}), \qquad (16)$$

where

$$\log P(\mathbf{X};\theta^{r},\theta^{c})=\log\int P_{\theta^{r}}(\mathbf{U})P_{\theta^{c}}(\mathbf{V})P\left(\mathbf{X}\mid\mathbf{U},\mathbf{V}\right)d\mathbf{U}\,d\mathbf{V}. \qquad (17)$$

In Section 3.4, we find $\theta^{r}, \theta^{c}$ at the same time as we approximate the model posterior $P(\mathbf{U},\mathbf{V}\mid\mathbf{X};\theta^{r},\theta^{c})$.
|
|
|
![5_image_0.png](5_image_0.png) |
|
|
|
Figure 1: Twin population priors for Poisson matrix factorization model. Shaded nodes are observed while other nodes represent latent random variables. The empty squares indicate that we will fit these priors to the data. |
|
|
|
## 3.4 Posterior Inference In Pmf With Twin Eb Priors |
|
|
|
Given data $\mathbf{X}$, our goal is to calculate the posterior $P(\mathbf{U},\mathbf{V}\mid\mathbf{X};\theta^{r},\theta^{c})$, which also depends on our choice of priors $\theta^{r}, \theta^{c}$. The challenges are that this posterior is intractable (for any prior) and that we simultaneously want to fit the priors to satisfy the EB criterion in Equation (16).
|
|
|
Our strategy will be as follows. We will use variational inference (VI) (Blei et al., 2017) to approximate the posterior, taking gradient steps in the variational objective with respect to the posterior approximation (the variational family). At the same time, however, the variational objective of VI is an approximation (lower bound) of the log-marginal from Equation (16). So we also take gradient steps with respect to the EB priors to maximize it. The result is an algorithm that simultaneously approximates the posterior and learns the EB prior. |
|
|
|
The variational posterior. Consider a parameterized mean-field variational family, |
|
|
|
$$q_{\Lambda}(\mathbf{U},\mathbf{V}\mid \mathbf{X})=\prod_{i,l}q_{\lambda_{i,l}^{r}}(U_{i,l})\prod_{j,l}q_{\lambda_{j,l}^{c}}(V_{j,l}), \qquad (18)$$
|
|
|
This family has parameters for each row's latent vector and each column's latent vector, $\lambda^{r}_{i}$ and $\lambda^{c}_{j}$ respectively. We further define $\Lambda^{r} := [\lambda^{r}_{i,l}]$ and $\Lambda^{c} := [\lambda^{c}_{j,l}]$. The full set of variational parameters is $\Lambda = \{\Lambda^{r}, \Lambda^{c}\}$.
|
|
|
From the perspective of posterior inference, our goal is to set qΛ to minimize the KL divergence to the exact posterior: |
|
|
|
$$\hat{\Lambda}=\operatorname*{arg\,min}_{\Lambda}\,\mathrm{KL}\big(q_{\Lambda}\,;\,P(\mathbf{U},\mathbf{V}\mid\mathbf{X};\theta^{r},\theta^{c})\big). \qquad (19)$$

In detail, the variational family is a bank of Log-Normals:
|
|
|
$$\lambda^{r}_{i,l}:=(a'_{i,l},\,b'_{i,l}), \qquad \lambda^{c}_{j,l}:=(a_{j,l},\,b_{j,l}), \qquad q_{\lambda^{r}_{i,l}}(U_{i,l}):=\mathcal{LN}(a'_{i,l},b'_{i,l}), \qquad q_{\lambda^{c}_{j,l}}(V_{j,l}):=\mathcal{LN}(a_{j,l},b_{j,l}).$$
|
|
|
Each Log-Normal is parameterized by its natural parameters a and b: |
|
|
|
$$\mathcal{LN}(x;a,b)\propto\exp\left(-\frac{a}{2b}\log(x)-\frac{(\log x)^{2}}{2b}\right). \qquad (23)$$
|
|
|
To minimize the KL divergence in Equation (19), VI optimizes the variational parameters $\Lambda$ to, equivalently, maximize the evidence lower bound (ELBO) (Blei et al., 2017):

$$\mathcal{L}(\mathbf{X}; \Lambda, \theta^{r}, \theta^{c}) = \mathbb{E}_{q_{\Lambda}(\mathbf{U},\mathbf{V}\mid\mathbf{X})}[\log P(\mathbf{X}\mid\mathbf{U},\mathbf{V})] + \mathbb{E}_{q_{\Lambda}}[\log P(\mathbf{U};\theta^{r}) + \log P(\mathbf{V};\theta^{c})] - \mathbb{E}_{q_{\Lambda}}[\log q_{\Lambda}(\mathbf{U},\mathbf{V}\mid\mathbf{X})]. \qquad (24)$$
|
Here we use gradient ascent to maximize $\mathcal{L}(\mathbf{X}; \Lambda, \theta^{r}, \theta^{c})$ with respect to $\Lambda$ (Ranganath et al., 2014). We further use stochastic reparameterization gradients to take such steps (Kingma & Welling, 2013; Rezende et al., 2014).

Maximum marginal likelihood. At the same time, we would like to set the prior parameters to maximize the marginal likelihood of the data (Equation (16)). The variational objective in Equation (24) conveniently also provides a lower bound on the marginal likelihood (Blei et al., 2017):

$$\log P(\mathbf{X};\theta^{r},\theta^{c}) \geq \mathcal{L}(\mathbf{X}; \Lambda, \theta^{r}, \theta^{c}). \qquad (25)$$

So, we also follow stochastic gradients of the ELBO with respect to the prior parameters $\theta^{r}, \theta^{c}$ to maximize $\mathcal{L}(\mathbf{X}; \Lambda, \theta^{r}, \theta^{c})$ with respect to $\theta^{r}, \theta^{c}$. This strategy has been used in the context of linear regression (Mukherjee et al., 2023).

Twin EB. Putting these two pieces together, our algorithm is a stochastic gradient ascent of the ELBO with respect to two sets of parameters. In optimizing with respect to $\Lambda$, we minimize the KL divergence between $q_{\Lambda}$ and the posterior; in optimizing with respect to $\theta^{r}, \theta^{c}$, we maximize the (approximate) marginal likelihood of the data. We use the Adam algorithm for stochastic optimization (Kingma & Ba, 2014) with a batch size of 128, and use ten particles (samples from $q_{\Lambda}$) to obtain unbiased noisy Monte Carlo estimates of the gradient of the ELBO via the reparameterization trick.
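To make the joint update concrete, the sketch below is a minimal, self-contained toy version of this procedure in PyTorch (the library our implementation uses): Log-Normal mean-field factors, mixture-of-Gamma priors on rows and columns, and a single Adam optimizer over both the variational parameters $\Lambda$ and the prior parameters $\theta^{r}, \theta^{c}$. The sizes, softplus transforms, full-batch updates, and initialization are illustrative assumptions, not our exact implementation.

```python
import torch
import torch.nn as nn
from torch.distributions import (Gamma, LogNormal, Poisson, Categorical,
                                 MixtureSameFamily, Independent)

torch.manual_seed(0)
N, D, L, Kr, Kc, S = 50, 40, 5, 10, 10, 10    # toy sizes; S = number of particles
X = Poisson(2.0 * torch.ones(N, D)).sample()  # stand-in count data

# Variational parameters Lambda: Log-Normal location / log-scale per entry.
var_params = nn.ParameterDict({
    "row_loc": nn.Parameter(torch.zeros(N, L)),
    "row_logscale": nn.Parameter(-torch.ones(N, L)),
    "col_loc": nn.Parameter(torch.zeros(D, L)),
    "col_logscale": nn.Parameter(-torch.ones(D, L)),
})

# Prior parameters theta^r, theta^c: mixture-of-Gammas, unconstrained.
prior_params = nn.ParameterDict({
    "row_logits": nn.Parameter(torch.zeros(Kr)),
    "row_conc": nn.Parameter(torch.zeros(Kr, L)),
    "row_rate": nn.Parameter(torch.zeros(Kr, L)),
    "col_logits": nn.Parameter(torch.zeros(Kc)),
    "col_conc": nn.Parameter(torch.zeros(Kc, L)),
    "col_rate": nn.Parameter(torch.zeros(Kc, L)),
})

def mixture(logits, conc, rate):
    # Mixture of Gammas with independent latent dimensions.
    comps = Independent(Gamma(nn.functional.softplus(conc) + 1e-4,
                              nn.functional.softplus(rate) + 1e-4), 1)
    return MixtureSameFamily(Categorical(logits=logits), comps)

def elbo():
    qU = LogNormal(var_params["row_loc"], var_params["row_logscale"].exp())
    qV = LogNormal(var_params["col_loc"], var_params["col_logscale"].exp())
    U = qU.rsample((S,))                          # (S, N, L) reparameterized particles
    V = qV.rsample((S,))                          # (S, D, L)
    rate = torch.einsum("snl,sdl->snd", U, V)     # Poisson rates U_i^T V_j
    loglik = Poisson(rate).log_prob(X).sum(dim=(1, 2))
    log_prior = (mixture(prior_params["row_logits"], prior_params["row_conc"],
                         prior_params["row_rate"]).log_prob(U).sum(dim=1)
                 + mixture(prior_params["col_logits"], prior_params["col_conc"],
                           prior_params["col_rate"]).log_prob(V).sum(dim=1))
    entropy = -(qU.log_prob(U).sum(dim=(1, 2)) + qV.log_prob(V).sum(dim=(1, 2)))
    return (loglik + log_prior + entropy).mean()  # Monte Carlo average over S particles

# One Adam optimizer ascends the ELBO in Lambda and (theta^r, theta^c) jointly.
opt = torch.optim.Adam(list(var_params.parameters()) + list(prior_params.parameters()), lr=1e-2)
for step in range(200):
    opt.zero_grad()
    (-elbo()).backward()
    opt.step()
```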
|
|
|
The details of the algorithm are in Algorithm 1. Our implementation is available at git@github.com:xxx/xxx.git.
|
|
|
Algorithm 1: Variational inference for Poisson matrix factorization with twin EB priors

Input: Data $\mathbf{X}$, number of particles $S$, learning rate $\zeta$, number of iterations $T$, numbers of components $K_r$, $K_c$, number of latent dimensions $L$.

Output: Variational posterior parameters $\Lambda^{\ast}$, prior parameters $\theta^{r\ast}$, $\theta^{c\ast}$.

Initialize: $\Lambda^{(0)}$, $\theta^{r(0)}$, $\theta^{c(0)}$.

for $t = 1$ to $T$ do
  for $i = 1$ to $N$, $l = 1$ to $L$ do
    for $s = 1$ to $S$ do
      Sample $\epsilon^{(s)}_{i,l} \sim \mathcal{N}(0, 1)$.
      Compute $U^{(s)}_{i,l} = \exp\big(a'^{(t-1)}_{i,l} + b'^{(t-1)}_{i,l}\,\epsilon^{(s)}_{i,l}\big)$.
    end for
  end for
  for $j = 1$ to $D$, $l = 1$ to $L$ do
    for $s = 1$ to $S$ do
      Sample $\phi^{(s)}_{j,l} \sim \mathcal{N}(0, 1)$.
      Compute $V^{(s)}_{j,l} = \exp\big(a^{(t-1)}_{j,l} + b^{(t-1)}_{j,l}\,\phi^{(s)}_{j,l}\big)$.
    end for
  end for
  Estimate $\mathcal{L}(\mathbf{X}; \theta^{r(t-1)}, \theta^{c(t-1)}, \Lambda^{(t-1)})$ with Monte Carlo in Equation (24), using the samples $\mathbf{U}^{(s)}, \mathbf{V}^{(s)}$ in place of the expectations $\mathbb{E}_{q_\Lambda}$.
  $\Lambda^{(t)}, \theta^{r(t)}, \theta^{c(t)} \leftarrow \mathrm{Adam}\big(\nabla_{(\theta^{r}, \theta^{c}, \Lambda)}\mathcal{L},\, \zeta\big)$.
end for
return $\Lambda^{(T)}, \theta^{r(T)}, \theta^{c(T)}$.
|
|
|
![7_image_0.png](7_image_0.png) |
|
|
|
Figure 2: Twin population priors induce robustness to prior selection. The held-out likelihood is sensitive to the choice of the prior hyper-parameters. GMF endowed with population priors on both row and column latent variables, TwinEB (GMF), achieves comparable or better results than other methods. The left sub-panel displays the held-out log-likelihood when adjusting the column prior variance with a fixed row prior, while the right sub-panel does the opposite, varying the row prior variance with a constant column prior. We show four datasets, from top to bottom: MovieLens 1M, Ru1322b-scRNAseq, UserArtists, and GoodBooks. In all datasets, we set $L = 15$. Similar results hold for other values of $L$ (see supplement).
|
|
|
## 4 Experiments |
|
|
|
We studied Algorithm 1 in several real-world matrix factorization settings: book ratings, movie ratings, artist preferences, and single-cell RNA sequence gene counts. For all datasets, we studied both Gaussian and Poisson matrix factorization. In all datasets, we set L = 15. We found that TwinEB performs as well or better than manually searching for a parameterized prior, and performs as well or better than setting a simple parameterized prior by empirical Bayes. |
|
|
|
## 4.1 Datasets |
|
|
|
In this section we first review four real-world datasets, ranging from user-preferences to genomics, and then explore the impact of twin population priors on the performance of Gaussian and Poisson matrix factorization. |
|
|
|
In our experiments, we fix the number of row and column mixture components to $K_r = 70$ and $K_c = 100$ respectively.
|
|
|
MovieLens 1M. This dataset comprises 1 million ratings from 6,000 users (rows) on 4,000 movies (columns) (Harper & Konstan, 2015). The ratings are on a scale of 1 to 5. Sparsity of this dataset, defined as the number of nonzero elements divided by the total number of entries, is 0.04. We can use matrix factorization to capture different aspects of user preferences and movie characteristics. Specifically, $U_{i,l}$ may signify user $i$'s affinity for aspect $l$ (e.g., genre), while $V_{j,l}$ may represent the degree to which movie $j$ exhibits aspect $l$.
|
|
|
Ru1322b. We analyze single-cell gene expression data from a patient with small cell lung cancer (Chan et al., 2021). This dataset comprises 4,000 highly variable genes (columns) across 5,308 cells (rows). For the GMF family, we applied a two-step transformation: first, we performed a log transformation on the counts after adding a pseudo-count of one; then, we standardized the non-zero elements. Sparsity of this dataset is 0.16. Each entry of the matrix denotes the number of transcripts of gene $j$ in cell $i$. We explain the gene expression matrix via $L$ gene modules, with $U_{i,l}$ as the activity of module $l$ in cell $i$, and $V_{j,l}$ as gene $j$'s contribution to module $l$. Matrix factorization can be used for exploratory data analysis (finding gene modules associated with malignancy in cancer) or as a component in a more complex analysis, such as causal inference (Wang & Blei, 2019). See the supplement for details on preprocessing of the sequencing data.

UserArtists. We use the data introduced in Cantador et al. (2011), comprising 92,834 user-artist listening relations across 1,892 users and 17,632 artists, with a maximum value of 352,698. Similar to MovieLens 1M, we use matrix factorization to explain different aspects of user preferences and artist groupings. Sparsity of this dataset is 0.003. We note that this dataset was analyzed in da Silva et al. (2023).
|
|
|
GoodBooks. Contains 6 million ratings across 51,288 users and 10,000 books¹. Ratings range from 1 to 5. Sparsity of this dataset is 0.01.
|
|
|
## 4.2 Evaluation Metric And Baselines |
|
|
|
We evaluate each model based on the likelihood of held-out data. For a given row-wise data split into held-in and held-out rows, let Xout = {Xout i} denote the set of Nout masked entries of the held-out rows. We estimate the held-out log-likelihood as follows: |
|
|
|
$$\frac{1}{N^{\mathrm{out}}}\sum_{i=1}^{N^{\mathrm{out}}}\log P(X_{i}^{\mathrm{out}}\mid \mathbf{X}^{\mathrm{in}}) \approx \frac{1}{N^{\mathrm{out}}}\sum_{i=1}^{N^{\mathrm{out}}}\log\frac{1}{M}\sum_{m=1}^{M}P(X_{i}^{\mathrm{out}}\mid\mathbf{U}^{(m)},\mathbf{V}^{(m)}), \qquad (27)$$

where $M$ is the number of Monte Carlo samples from $q_{\Lambda}(\mathbf{U},\mathbf{V})$.
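A minimal sketch of this estimator: for each held-out entry, average the Poisson likelihood over $M$ samples of the factors and take the log, computed with a log-sum-exp for numerical stability. The randomly generated factor samples below are placeholders for draws from the fitted $q_\Lambda$.

```python
import math
import torch
from torch.distributions import Poisson

def heldout_loglik(x_out, rows, cols, U_samples, V_samples):
    """(1 / N_out) * sum_i log (1/M) sum_m P(x_i | U^(m), V^(m)), as in Equation (27).

    x_out: (N_out,) held-out counts; rows, cols: (N_out,) their indices.
    U_samples: (M, N, L) and V_samples: (M, D, L), draws from q_Lambda.
    """
    M = U_samples.shape[0]
    rates = (U_samples[:, rows, :] * V_samples[:, cols, :]).sum(-1) + 1e-8   # (M, N_out)
    log_p = Poisson(rates).log_prob(x_out)                                   # (M, N_out)
    per_entry = torch.logsumexp(log_p, dim=0) - math.log(M)                  # log of the MC average
    return per_entry.mean()

# Toy usage with random placeholder posterior samples.
M, N, D, L = 500, 20, 30, 5
U_s, V_s = torch.rand(M, N, L), torch.rand(M, D, L)
print(heldout_loglik(torch.tensor([1.0, 0.0, 2.0]),
                     torch.tensor([0, 1, 2]), torch.tensor([3, 4, 5]), U_s, V_s))
```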
|
|
|
We randomly assign 20% of the rows as the test set and the rest as training data. We then mask 20% of the training entries at random and train the model on this training set using ten random restarts; we use the 20% masked entries as a *validation set*. At test time, we put aside 30% of the entries of the test rows at random (these entries constitute the *test set*), and we then train the model on 40% of the rest of the entries. This procedure measures *strong generalization* (Steck, 2019).
|
|
|
To compute the held-out likelihood, after training the model on the held-in data, we fix its column parameters and then learn the row parameters of the held-out rows. We then report the likelihood of the masked entries in the test set. For each model, we report the test held-out log-likelihood (HOLL) of the random restart that achieved the best validation HOLL. In our experiments, we set $M = 500$. See the supplement for the derivation of Equation (27) and more details on the experiments.

Baselines. We evaluate the performance of the PMF (GMF) model with TwinEB against multiple baselines, namely (i) TwinEB-Single and (ii) PMF (GMF). The former is a simple form of TwinEB where all latent dimensions share an identical prior, which we learn. The latter uses a prior of the same family as TwinEB's mixture components, but with fixed hyperparameters. We compare TwinEB against a large choice of fixed parameters, akin to hyperparameter selection.
|
|
|
¹ Accessed at https://github.com/zygmuntz/goodbooks-10k
|
|
|
## 4.3 Results: Gaussian Matrix Factorization |
|
|
|
We preprocess the data as follows. We standardize each column by subtracting the mean from non-zero entries and dividing the result by their standard deviation. We study two scenarios: (i) maintaining a fixed prior on row-wise variables while varying the prior on column-wise variables, and (ii) holding the prior on column-wise variables constant and adjusting the prior on row-wise variables. We set the variational family, as well as the mixture components for the population priors, to be Gaussian. We set the fixed prior to $\mathcal{N}(1.0, 1.0)$ and vary the variance of the non-fixed one over $\{0.001, 0.01, 0.1, 1.0, 10\}$.
|
|
|
We compared the GMF model to TwinEB (GMF) and TwinEB-Single (GMF). We treat zero entries as missing values. |
|
|
|
Figure 2 displays the outcomes for four real-world datasets. In GoodBooks and MovieLens-1M datasets, TwinEB achieves the highest test HOLL. For the UserArtists dataset, TwinEB does better than the fixed prior. Similar results were obtained by varying L to other values and are reported in Figures 7, 8 in Appendix D. |
|
|
|
We also compare to the method of Wang & Stephens (2021), using the corresponding package flashr, which currently supports GMF. We designed an imputation experiment where we held out 10% of the entries of a standardized matrix and compared reconstruction accuracy on non-zero entries. We applied flashr to the above four real-world datasets. flashr crashes on all but the least sparse dataset, Ru1322b. For the Ru1322b dataset it achieves a mean absolute error of 0.64, versus 0.51 for TwinEB. We note that it is not straightforward to compare our method to that of Jiang & Zhang (2009); the software implementation does not immediately support missing data and imputation.
|
|
|
## 4.4 Results: Poisson Matrix Factorization |
|
|
|
For the MovieLens 1M and GoodBooks datasets, we binarize the matrix, setting entries to one if a user has rated the item and to zero otherwise. For the Ru1322b dataset, we normalize the rows such that all row sums are equal; we then round each value to the nearest integer. This accounts for the effect of library size (Heumos et al., 2023). We treat zeros as missing data. In all models, we set the variational family to be Log-Normal, and the prior (or the mixture components for the population priors) to be Gamma. Similar to GMF, we vary the row or column prior parameters while keeping the other fixed. We parameterize the Gamma prior by its mean and variance, $\mathrm{Gamma}(\mu, \sigma^2)$, where $\mu = \alpha/\beta$ and $\sigma^2 = \alpha/\beta^2$. In each scenario, we set the fixed prior to $\mathrm{Gamma}(1, 10)$. For the varying prior, we set its mean to $\mu = 1$ and change its variance over $\{0.01, 0.1, 0.25, 1.0, 10.0, 100.0\}$. Figure 3 shows the results.
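For reference, inverting this parameterization gives $\beta = \mu/\sigma^2$ and $\alpha = \mu^2/\sigma^2$; a small helper (ours, for illustration):

```python
def gamma_shape_rate(mean: float, variance: float) -> tuple:
    """Map Gamma(mu, sigma^2) in mean/variance form to (shape alpha, rate beta)."""
    rate = mean / variance           # beta = mu / sigma^2
    shape = mean ** 2 / variance     # alpha = mu^2 / sigma^2
    return shape, rate

# The fixed prior Gamma(1, 10) above corresponds to shape 0.1 and rate 0.1.
print(gamma_shape_rate(1.0, 10.0))  # (0.1, 0.1)
```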
|
|
|
In all datasets except GoodBooks, TwinEB outperforms TwinEB-Single. In the UserArtists dataset, TwinEB outperforms all other methods. TwinEB, on average, takes about 1.5X the runtime of PMF (ranging from 1.1X to 2.1X). We point out that finding a good prior using grid search incurs the summed cost of evaluating the individual points in the grid, over 6.5X the running time of PMF with the TwinEB prior. We note that using a grid to find fixed priors (or, alternatively, learning a scalar prior) does not guarantee reaching the best test HOLL; indeed, in the UserArtists dataset, TwinEB yields the best test HOLL. We also compared TwinEB to the method of da Silva et al. (2023); given the data, it estimates values for the shape and rate parameters of the Gamma prior for both the row and column random variables. Table 1 displays the estimated hyperparameters and the resulting test HOLL.

PMF equipped with fixed prior values calculated from this method yields test HOLL that is on par with or slightly worse than the TwinEB method. This method yielded negative hyperparameter estimates for the MovieLens-1M and GoodBooks datasets. These estimates are not usable, since the parameters of a Gamma must be positive. Hence, we emphasize that, unlike ours, this method cannot be used on all datasets.
|
|
|
![10_image_0.png](10_image_0.png) |
|
|
|
Figure 3: Twin population priors induce robustness to prior selection. As in Figure 2, but for PMF. |
|
Table 1: Test held-out log-likelihood (HOLL) from running PMF with prior hyperparameters set using the method of da Silva et al. (2023). For the MovieLens-1M and GoodBooks datasets, da Silva et al. (2023) yielded inadmissible hyperparameters.
|
|
|
| Dataset | test HOLL, da Silva et al. (2023) | test HOLL, TwinEB |
|--------------|-----------------------------------|-------------------|
| Ru1322b | -9.53 | -9.46 |
| UserArtists | -1,583 | -1,481 |
| MovieLens-1M | NA | -3.04 |
| GoodBooks | NA | -3.95 |
|
|
|
## 4.5 Simulation: Complexity Of The Prior |
|
|
|
Here we examine the performance of the population priors as the number of mixture components are varied. To this end, we simulate a 1,000 by 1,500 dataset, with L = 64 dimensional row and column-wise r.v.s. We sample the row- and column-wise r.v.s from a mixture of 15 and 20 Gamma distributions respectively, the |
|
|
|
|
|
|
![11_image_0.png](11_image_0.png) |
|
|
|
Figure 4: Increasing the number of mixtures improves the model performance. Each line shows the average test-HOLL over 10 seeds for a fixed value for Kc, the number of column mixtures, and varying values of Kr, the number of row mixtures. Here we set L = 64 to its true simulated value. The results are similar for L = 32 (see supplement). |
|
|
|
rate and shape parameters of which are sampled from a $\mathrm{Gamma}(1, 1)$. The mixture weights are sampled from a $\mathrm{Dirichlet}(e_0, \ldots, e_0)$, where the concentration parameter is $e_0 = 10$.
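A sketch of this simulation (seed and implementation details are illustrative assumptions): component shapes and rates drawn from Gamma(1, 1), Dirichlet mixture weights, mixture-of-Gamma factors, and a Poisson data matrix.

```python
import torch
from torch.distributions import Gamma, Dirichlet, Categorical, Poisson

torch.manual_seed(0)
N, D, L = 1000, 1500, 64         # rows, columns, latent dimension
K_row, K_col, e0 = 15, 20, 10.0  # mixture sizes and Dirichlet concentration

def sample_factors(n, K):
    shape = Gamma(1.0, 1.0).sample((K, L))             # component shape parameters
    rate = Gamma(1.0, 1.0).sample((K, L))              # component rate parameters
    weights = Dirichlet(torch.full((K,), e0)).sample()
    assignment = Categorical(weights).sample((n,))     # component index per row/column
    return Gamma(shape[assignment], rate[assignment]).sample()   # (n, L) factors

U = sample_factors(N, K_row)   # row-wise latent variables
V = sample_factors(D, K_col)   # column-wise latent variables
X = Poisson(U @ V.T).sample()  # simulated count matrix
```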
|
|
|
Performance Increase with Number of Mixture Components. The more mixture components, the more flexible the EB prior. If learning an EB prior is beneficial, then we expect performance to increase with the number of mixture components. We run PMF with population priors with varying numbers of row and column mixture components, setting the dimension of the latent variable to its true value and, to avoid model misspecification, setting the mixture prior family to Gamma. We then report the average test HOLL across ten different seeds. In Figure 4, the log-likelihood of held-out test data increases with the number of mixture components in the row prior.
|
|
|
## 5 Discussion |
|
|
|
We introduced the twin population priors for probabilistic matrix factorization. We derived a method to estimate the corresponding posterior using Monte Carlo and variational inference. On real-world data, this method finds a prior as good as the best parametric prior chosen retrospectively. One area of further work is to extend this algorithm to tensor factorization (Kolda & Bader, 2009; Schein et al., 2015). While in matrix factorization each entry of the observed matrix is explained via two latent variables, tensor factorization models involve more. One detail to address is how to formally define the population distribution associated with each of these latent variables.
|
|
|
## Broader Impact Statement |
|
|
|
The authors do not foresee any negative impact of this work. |
|
|
|
## References |
|
|
|
Abhinav Agrawal and Justin Domke. Amortized variational inference for simple hierarchical models. *NeurIPS*, 34:21388–21399, 2021.
|
|
|
Joshua D Angrist, Peter D Hull, Parag A Pathak, and Christopher R Walters. Leveraging lotteries for school value-added: Testing and estimation. QJE, 132(2):871–919, 2017. |
|
|
|
David M Blei, Alp Kucukelbir, and Jon D McAuliffe. Variational inference: A review for statisticians. *JASA*, 112(518):859–877, 2017.
|
|
|
Jo Bovy, David W. Hogg, and Sam T. Roweis. Extreme deconvolution: Inferring complete distribution functions from noisy, heterogeneous and incomplete observations. *AOAS*, 5(2B):1657 - 1677, 2011. |
|
|
|
Hans Bühlmann and Alois Gisler. *A course in credibility theory and its applications*, volume 317. Springer, 2005. |
|
|
|
John Canny. GaP: a factor model for discrete data. In *SIGIR*, pp. 122–129, 2004.

Iván Cantador, Peter Brusilovsky, and Tsvi Kuflik. Second workshop on information heterogeneity and fusion in recommender systems (HetRec 2011). In *ACM RecSys*, pp. 387–388, 2011.
|
|
|
Ali Taylan Cemgil. Bayesian inference for nonnegative matrix factorisation models. *Computational Intelligence and Neuroscience*, 2009.
|
|
|
Joseph M Chan, Álvaro Quintanal-Villalonga, Vianne Ran Gao, Yubin Xie, Viola Allaj, Ojasvi Chaudhary, Ignas Masilionis, Jacklynn Egger, Andrew Chow, Thomas Walle, et al. Signatures of plasticity, metastasis, and immunosuppression in an atlas of human small cell lung cancer. *Cancer Cell*, 39(11):1479–1496, 2021. |
|
|
|
Eliezer de Souza da Silva, Tomasz Kuśmierczyk, Marcelo Hartmann, and Arto Klami. Prior specification for Bayesian matrix factorization via prior predictive matching. *JMLR*, 24(67):1–51, 2023.
|
|
|
David B Dunson and Amy H Herring. Bayesian latent variable models for mixed discrete outcomes. *Biostatistics*, 6(1):11–25, 2005. |
|
|
|
Bradley Efron. *Large-Scale Inference: Empirical Bayes Methods for Estimation, Testing, and Prediction*, volume 1. Cambridge University Press, 2012.
|
|
|
Peter A Frost and James E Savarino. An empirical Bayes approach to efficient portfolio selection. *JFQA*, 21(3):293–305, 1986.
|
|
|
Andrew Gelman. Prior distributions for variance parameters in hierarchical models (comment on article by Browne and Draper). *Bayesian Analysis*, 1(3):515–534, 2006.
|
|
|
Sean M Gerrish and David M Blei. Predicting legislative roll calls from text. In *ICML*, 2011.

Prem Gopalan, Jake M Hofman, and David M Blei. Scalable recommendation with hierarchical Poisson factorization. In *UAI*, pp. 326–335, 2015.
|
|
|
F Maxwell Harper and Joseph A Konstan. The movielens datasets: History and context. *TiiS*, 5(4):1–19, 2015. |
|
|
|
Lukas Heumos, Anna C. Schaar, et al. Best practices for single-cell analysis across modalities. *Nat. Rev. Genet.*, 24(8):550–572, 2023. ISSN 1471-0064.
|
|
|
Matthew D Hoffman and Matthew J Johnson. ELBO surgery: yet another way to carve up the variational evidence lower bound. 1(2), 2016. |
|
|
|
Nikolaos Ignatiadis and Stefan Wager. Confidence intervals for nonparametric empirical Bayes analysis. *JASA*, 117(539):1149–1166, 2022.
|
|
|
Mohsen Jamali and Martin Ester. A matrix factorization technique with trust propagation for recommendation in social networks. In *RecSys*, pp. 135–142, 2010. |
|
|
|
Wenhua Jiang and Cun-Hui Zhang. General maximum likelihood empirical Bayes estimation of normal means. *Ann. Stat.*, 2009. |
|
|
|
Hyunjik Kim and Andriy Mnih. Disentangling by factorising. In *ICML*, pp. 2649–2658. PMLR, 2018. |
|
|
|
Diederik P Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. *arXiv:1412.6980*, 2014. |
|
|
|
Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. *arXiv:1312.6114*, 2013.

Tamara G Kolda and Brett W Bader. Tensor decompositions and applications. *SIAM Review*, 51(3):455–500, 2009.
|
|
|
Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. *Computer*, 42(8):30–37, 2009.
|
|
|
Nan Laird. Nonparametric maximum likelihood estimation of a mixing distribution. *JASA*, 73(364):805–811, 1978. |
|
|
|
Hanna Mendes Levitin, Jinzhou Yuan, Yim Ling Cheng, Francisco JR Ruiz, Erin C Bush, Jeffrey N Bruce, Peter Canoll, Antonio Iavarone, Anna Lasorella, David M Blei, et al. De novo gene signature identification from single-cell RNA-seq with hierarchical Poisson factorization. *Molecular systems biology*, 15(2):e8557, 2019. |
|
|
|
Michael I Love, Wolfgang Huber, and Simon Anders. Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. *Genome biology*, 15(12):1–21, 2014. |
|
|
|
Andriy Mnih and Russ R Salakhutdinov. Probabilistic matrix factorization. *NeurIPS*, 20, 2007. |
|
|
|
Sumit Mukherjee, Bodhisattva Sen, and Subhabrata Sen. A mean field approach to empirical Bayes estimation in high-dimensional linear regression. *arXiv:2309.16843*, 2023. |
|
|
|
T Tin Nguyen, Hien D Nguyen, Faicel Chamroukhi, and Geoffrey J McLachlan. Approximation by finite mixtures of continuous density functions that vanish at infinity. *Cogent Mathematics & Statistics*, 7(1): 1750861, 2020. |
|
|
|
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. *NeurIPS*, 32, 2019. |
|
|
|
Rajesh Ranganath, Sean Gerrish, and David Blei. Black box variational inference. In *AISTATS*, pp. 814–822. PMLR, 2014.
|
|
|
John NK Rao and Isabel Molina. *Small area estimation*. John Wiley & Sons, 2015.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In *ICML*, pp. 1278–1286. PMLR, 2014.
|
|
|
Herbert E Robbins. *An empirical Bayes approach to statistics*. Springer, 1992.

Ruslan Salakhutdinov and Andriy Mnih. Bayesian probabilistic matrix factorization using Markov chain Monte Carlo. In *ICML*, pp. 880–887, 2008.
|
|
|
Aaron Schein, John Paisley, David M Blei, and Hanna Wallach. Bayesian poisson tensor factorization for inferring multilateral relations from sparse dyadic event counts. In *SIGKDD*, pp. 1045–1054, 2015. |
|
|
|
Mikkel N Schmidt, Ole Winther, and Lars Kai Hansen. Bayesian non-negative matrix factorization. In *ICA 2009*, pp. 540–547. Springer, 2009.
|
|
|
Gordon K Smyth. Limma: linear models for microarray data. In *Bioinformatics and computational biology* solutions using R and Bioconductor, pp. 397–420. Springer, 2005. |
|
|
|
Harald Steck. Embarrassingly shallow autoencoders for sparse data. In *WWW*, pp. 3251–3257, 2019.

David Michael Titterington, Adrian FM Smith, and Udi E Makov. Statistical analysis of finite mixture distributions. 1985.
|
|
|
Jakub Tomczak and Max Welling. VAE with a VampPrior. In *AISTATS*, pp. 1214–1223. PMLR, 2018.

Martin J Wainwright and Michael I Jordan. *Graphical models, exponential families, and variational inference*. Now Publishers, Inc., 2008.
|
|
|
Wei Wang and Matthew Stephens. Empirical Bayes matrix factorization. *JMLR*, 22(1):5332–5371, 2021. |
|
|
|
Yixin Wang and David M. Blei. The blessings of multiple causes. *JASA*, 114(528):1574–1596, 2019. Publisher: Taylor & Francis.
|
|
|
Yixin Wang, David Blei, and John P Cunningham. Posterior collapse and latent variable non-identifiability. *NeurIPS*, 34:5443–5455, 2021.
|
|
|
F Alexander Wolf, Philipp Angerer, and Fabian J Theis. SCANPY: large-scale single-cell gene expression data analysis. *Genome biology*, 19:1–5, 2018. |
|
|
|
Xinyi Zhong, Chang Su, and Zhou Fan. Empirical Bayes PCA in high dimensions. *J. R. Stat. Soc.*, 84(3):853–878, 2022.
|
|
|
## A Introduction |
|
|
|
This is the supplement for the manuscript titled "Population priors for matrix factorization". The figures in this document supplement Figures 2, 3, and 4 in the main text. We provide derivations for twin population priors for Gaussian matrix factorization, as well as computing the likelihood of the held-out data in section B. |
|
|
|
We then give more details about our experimental setup, including the parameters used in training (e.g., batch-size) in section C. Finally, we provide results for additional experiments in section D: for additional values of the latent dimension L for the datasets studied in the main text. |
|
|
|
Note that this document is accompanied by an archive file code.zip, that contains the source code, instructions to install and run the code, and scripts to recreate the experiments and plot the figures in the manuscript. All scripts have been de-identified. |
|
|
|
## B Derivations |
|
|
|
In this section, we give more details and derivations for the quantities defined in the main text. Specifically, in section B.1 we derive the twin population priors for the Gaussian matrix factorization (GMF). In section B.2, we derive the variational approximation to the twin population priors in GMF. Finally, in section B.3, we derive the expression for held-out likelihood in matrix factorization. |
|
|
|
## B.1 Twin Population Priors For Gaussian Matrix Factorization |
|
|
|
In this section we derive population priors for Gaussian matrix factorization (GMF). In the classical GMF, the likelihood and priors on the latent variables are Gaussian (Mnih & Salakhutdinov, 2007). The generative model is as follows: |
|
|
|
$$\begin{aligned}
W_{j,l} &\sim \mathcal{N}(0,\sigma_{W}^{2}), \quad j=1,\ldots,D,\; l=1,\ldots,L, \\
Z_{i,l} &\sim \mathcal{N}(0,\sigma_{Z}^{2}), \quad i=1,\ldots,N,\; l=1,\ldots,L, \\
X_{i,j}\mid\mathbf{Z}_{i},\mathbf{W}_{j} &\sim \mathcal{N}\Big(\sum_{l=1}^{L}Z_{i,l}W_{j,l},\,\sigma^{2}\Big), \quad i=1,\ldots,N,\; j=1,\ldots,D,
\end{aligned} \qquad (28)$$

where $\mathbf{Z}=[Z_{i,l}]\in\mathbb{R}^{N\times L}$ and $\mathbf{W}=[W_{j,l}]\in\mathbb{R}^{D\times L}$, with $\mathbf{Z}_{i}$ and $\mathbf{W}_{j}$ the $L$-vectors representing the row- and column-wise latent variables, and $\sigma$, $\sigma_{W}$, and $\sigma_{Z}$ constant.
|
|
|
For TwinEB, our goal is to learn the priors for the row and column latent variables, so, as in the main text, we construct mixture priors (Equations 14 and 15), where $P_{\boldsymbol{\mu}_k,\boldsymbol{\sigma}_k}$ and $P_{\boldsymbol{\nu}_k,\boldsymbol{\eta}_k}$ are Gaussians parameterized by their mean and scale parameters. That is,
|
|
|
$$P_{\theta^{r}}(\mathbf{U}_{i})=\sum_{k=1}^{K_{r}}\pi_{k}\mathcal{N}(\mathbf{U}_{i};\boldsymbol{\mu}_{k},\boldsymbol{\sigma}_{k}), \qquad (29)$$

$$P_{\theta^{c}}(\mathbf{V}_{j})=\sum_{k=1}^{K_{c}}\rho_{k}\mathcal{N}(\mathbf{V}_{j};\boldsymbol{\nu}_{k},\boldsymbol{\eta}_{k}), \qquad (30)$$
|
|
|
where $K_r$ and $K_c$ are the number of components in the mixtures, $\theta^{r} = \{\boldsymbol{\mu},\boldsymbol{\sigma},\boldsymbol{\pi}\}$ and $\theta^{c} = \{\boldsymbol{\nu},\boldsymbol{\eta},\boldsymbol{\rho}\}$. The locations $\boldsymbol{\mu}\in\mathbb{R}^{K_r\times L}$ and $\boldsymbol{\nu}\in\mathbb{R}^{K_c\times L}$, the scales $\boldsymbol{\sigma}\in\mathbb{R}^{K_r\times L}$ and $\boldsymbol{\eta}\in\mathbb{R}^{K_c\times L}$, and the mixture weights $\boldsymbol{\pi}\in\Delta^{K_r}$ and $\boldsymbol{\rho}\in\Delta^{K_c}$ are the parameters of the mixture priors.
|
|
|
## B.2 Posterior Inference In Gmf With Twin Population Priors |
|
|
|
Our goal is to calculate the posterior $P(\mathbf{U},\mathbf{V}\mid\mathbf{X};\theta^{r},\theta^{c})$, given data $\mathbf{X}$, which also depends on our choice of priors $\theta^{r}, \theta^{c}$. Our strategy is the same as in the main text, namely, to simultaneously optimize the parameters of the variational distribution and the TwinEB prior. We substitute the Log-Normal variational family of the main text with a Gaussian one, as follows:
|
|
|
$$\lambda^{r}_{i,l}:=(a'_{i,l},\,b'_{i,l}), \qquad \lambda^{c}_{j,l}:=(a_{j,l},\,b_{j,l}), \qquad q_{\lambda^{r}_{i,l}}(U_{i,l}):=\mathcal{N}(a'_{i,l},b'_{i,l}), \qquad q_{\lambda^{c}_{j,l}}(V_{j,l}):=\mathcal{N}(a_{j,l},b_{j,l}).$$
|
|
|
Each Gaussian is parameterized by its natural parameters $a$ and $b$:

$$\mathcal{N}(x;a,b)\propto\exp\left(ax+bx^{2}\right).$$

The log-likelihood of the data is:
|
|
|
$$\begin{aligned}
\log P(\mathbf{X}\mid\mathbf{Z},\mathbf{W}) &= \log\prod_{i=1}^{N}\prod_{j=1}^{D}P_{\sigma}(X_{i,j}\mid\mathbf{Z}_{i},\mathbf{W}_{j}) \\
&= \sum_{i=1}^{N}\sum_{j=1}^{D}\log\mathcal{N}\left(X_{i,j};\,\sum_{l=1}^{L}Z_{i,l}W_{j,l},\,\sigma^{2}\right) \\
&= -\sum_{i=1}^{N}\sum_{j=1}^{D}\left[\log\left(\sigma\sqrt{2\pi}\right)+\frac{1}{2}\left(\frac{X_{i,j}-\sum_{l=1}^{L}Z_{i,l}W_{j,l}}{\sigma}\right)^{2}\right].
\end{aligned}$$
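A minimal sketch evaluating this Gaussian log-likelihood for given factors; the sizes and noise scale are placeholders.

```python
import torch
from torch.distributions import Normal

torch.manual_seed(0)
N, D, L, sigma = 8, 6, 3, 1.0

Z = torch.randn(N, L)                # row factors
W = torch.randn(D, L)                # column factors
X = Normal(Z @ W.T, sigma).sample()  # simulated observations

# log P(X | Z, W) = sum_{i,j} log N(X_ij ; sum_l Z_il W_jl, sigma^2)
loglik = Normal(Z @ W.T, sigma).log_prob(X).sum()
print(loglik)
```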
|
|
|
## B.3 Held-Out Likelihood For Matrix Factorization |
|
|
|
We use the log likelihood of held-out data as the score for each model. Let Xout = {Xi,j} denote Nout entries of the held-out rows that were masked. We compute the likelihood of the masked entries via the posterior predictive distribution: |
|
|
|
$$\log P(\mathbf{X}^{\mathrm{out}}\mid \mathbf{X}^{\mathrm{in}})=\log\prod_{i=1}^{N^{\mathrm{out}}}P(X_{i}^{\mathrm{out}}\mid \mathbf{X}^{\mathrm{in}})=\sum_{i=1}^{N^{\mathrm{out}}}\log P(X_{i}^{\mathrm{out}}\mid \mathbf{X}^{\mathrm{in}}).$$

For Bayesian matrix factorization, we expand each summand as follows:
|
|
|
$$\begin{aligned}
\log P(X_{i}^{\mathrm{out}}\mid \mathbf{X}^{\mathrm{in}}) &\approx \log\iint P(X_{i}^{\mathrm{out}}\mid\mathbf{U},\mathbf{V})\,P(\mathbf{U},\mathbf{V}\mid \mathbf{X}^{\mathrm{in}})\,d\mathbf{U}\,d\mathbf{V} \\
&\approx \log\iint P(X_{i}^{\mathrm{out}}\mid\mathbf{U},\mathbf{V})\,q_{\Lambda}(\mathbf{U},\mathbf{V}\mid \mathbf{X}^{\mathrm{in}})\,d\mathbf{U}\,d\mathbf{V} \\
&= \log \mathbb{E}_{\mathbf{U},\mathbf{V}\sim q_{\Lambda}(\cdot)}\left[P(X_{i}^{\mathrm{out}}\mid\mathbf{U},\mathbf{V})\right] \\
&\approx \log\frac{1}{M}\sum_{m=1}^{M}P(X_{i}^{\mathrm{out}}\mid\mathbf{U}^{(m)},\mathbf{V}^{(m)}),
\end{aligned}$$

where $M$ is the number of Monte Carlo samples from $q_{\Lambda}(\mathbf{U},\mathbf{V}\mid \mathbf{X}^{\mathrm{in}})$. In the second line, we approximate the true posterior $P(\mathbf{U},\mathbf{V}\mid \mathbf{X}^{\mathrm{in}})$ with its variational counterpart $q_{\Lambda}$. The log-likelihood score for the entire held-out data is then:

$$\sum_{i=1}^{N^{\mathrm{out}}}\log P(X_{i}^{\mathrm{out}}\mid \mathbf{X}^{\mathrm{in}})\approx\sum_{i=1}^{N^{\mathrm{out}}}\log\frac{1}{M}\sum_{m=1}^{M}P(X_{i}^{\mathrm{out}}\mid\mathbf{U}^{(m)},\mathbf{V}^{(m)}).$$
|
|
|
|
|
|
## C Experimental Details |
|
|
|
In this section we give more details on our experimental studies in the main manuscript. In section C.1 we describe preprocessing for the gene expression in the Ru1322-scRNAseq dataset. In section C.2, we specify the parameters used during training. In section C.3 we give a brief description of the artifacts that accompany this supplementary material. |
|
|
|
## C.1 Preprocessing Of The Ru1322-Scrnaseq Gene Expression Dataset |
|
|
|
We use CellRanger version 6.0.1 to process the FASTQ files and generate the unique molecular identifier (UMI) count matrices. We use scanpy to preprocess the data, and the seuratv3 algorithm to select highly variable genes (Wolf et al., 2018).
|
|
|
## C.2 Training Details |
|
|
|
We set a batch size of 128 in all our experiments. We ran the Poisson and Gaussian matrix factorization experiments for a maximum of 20,000 iterations. By this step, all runs had converged.
|
|
|
We initialized the learning rates for the row and column variables, rlr and clr, separately, fixing both initial learning rates to 0.01. In the experiments in the main text, we use 10 Monte Carlo samples to approximate the ELBO, while in the supplemental experiments we use a single particle.
|
|
|
For the PMF experiments in the supplement, we subsample zeros as is standard (Gopalan et al., 2015). |
|
|
|
We uniformly randomly subsample the same number of zeros as non-zero values to estimate the likelihood. |
|
|
|
Concretely, let $\mathcal{L}(\mathbf{X}^{\mathrm{out}})$ denote the log-likelihood of the masked entries of the held-out rows, $\mathbf{X}^{\mathrm{out}} = \{X_{i,j}\}$. Then $\mathcal{L}(\mathbf{X}^{\mathrm{out}})$ can be decomposed as the sum over non-zero and zero entries $X_{i,j}$:
|
|
|
$$\mathcal{L}(\mathbf{X}^{\mathrm{out}})=\mathcal{L}(\mathbf{X}_{x_{i,j}\neq 0}^{\mathrm{out}})+\mathcal{L}(\mathbf{X}_{x_{i,j}=0}^{\mathrm{out}}), \qquad (44)$$

where $\mathbf{X}^{\mathrm{out}}_{x_{i,j}\neq 0}$ and $\mathbf{X}^{\mathrm{out}}_{x_{i,j}=0}$ have cardinalities $N^{\mathrm{out}}_{\mathrm{non\text{-}zero}}$ and $N^{\mathrm{out}}_{\mathrm{zero}}$ respectively.
|
|
|
We approximate Equation (44) by subsampling $N_{\mathrm{sub}} = \min(N^{\mathrm{out}}_{\mathrm{zero}}, N^{\mathrm{out}}_{\mathrm{non\text{-}zero}})$ of the zeros. Let $\mathbf{X}^{\mathrm{zero}}_{N_{\mathrm{sub}}}$ denote a multiset of zeros of cardinality $N_{\mathrm{sub}}$; then:

$$\hat{\mathcal{L}}(\mathbf{X}^{\mathrm{out}})\approx\mathcal{L}(\mathbf{X}^{\mathrm{out}}_{x_{i,j}\neq 0})+\frac{N^{\mathrm{out}}_{\mathrm{zero}}}{N_{\mathrm{sub}}}\,\mathcal{L}\big(\mathbf{X}^{\mathrm{zero}}_{N_{\mathrm{sub}}}\big). \qquad (45)$$
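A sketch of this subsampled estimator: evaluate the likelihood on all non-zero held-out entries and on a random subset of the zeros, scaling the latter by $N^{\mathrm{out}}_{\mathrm{zero}} / N_{\mathrm{sub}}$. The `loglik_fn` argument and the Poisson(1) placeholder model are illustrative assumptions.

```python
import torch
from torch.distributions import Poisson

def subsampled_loglik(values, loglik_fn):
    """L(X_out) ~= L(nonzero entries) + (N_zero / N_sub) * L(subsampled zeros)."""
    nonzero = values[values != 0]
    zeros = values[values == 0]
    n_sub = min(len(zeros), len(nonzero))
    if n_sub == 0:
        return loglik_fn(nonzero)
    idx = torch.randperm(len(zeros))[:n_sub]           # uniform subsample of the zeros
    return loglik_fn(nonzero) + (len(zeros) / n_sub) * loglik_fn(zeros[idx])

# Toy usage with a Poisson(1) likelihood as the placeholder per-entry model.
values = torch.tensor([0.0, 3.0, 0.0, 0.0, 1.0, 0.0, 2.0])
print(subsampled_loglik(values, lambda v: Poisson(torch.ones_like(v)).log_prob(v).sum()))
```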
|
|
|
We ran our experiment on a machine equipped with an NVIDIA A100 GPU with 80GB memory. We implemented all methods in pytorch (Paszke et al., 2019). |
|
|
|
## C.3 Implementation |
|
|
|
Please refer to the code directory for the source code and instructions on how to run the model. The file README.md contains instructions for installing the software, and running it. Under the data directory, we included preprocessed data for the MovieLens-100K dataset, a smaller version of the MovieLens-1M studied in the main text. In the notebooks directory, we put notebooks used for preprocessing the data, and plotting the figures in the manuscript. Finally, in the pipelines directory, we put nextflow scripts that recreate experiments that we have run for the manuscript. |
|
|
|
## D Additional Experiments |
|
|
|
In this section we present additional experimental results. We show results for additional values of the latent dimension $L$. Concretely, section D.1 studies the effects of the twin population priors on Poisson and Gaussian matrix factorization on three real-world datasets, while section D.2 examines the sensitivity of the twin population priors to the choice of its hyper-parameters. We find that these results corroborate those presented in the main manuscript: matrix factorization with traditional priors is sensitive to the choice of the prior's hyper-parameters, and the twin population prior is a robust way to set the prior in this family of models.
|
|
|
## D.1 Additional Values Of Latent Dimensions |
|
|
|
In the main text, we study the effects of twin population priors on Poisson and Gaussian matrix factorization over four real datasets, with the dimension of the latent variables set to $L = 15$. Here we present results for additional values of $L$; Figures 5, 6, 7, 8, and 9 show the results. In all cases, the twin population priors find a comparable or better prior, in the sense of higher held-out likelihood.
|
|
|
![18_image_0.png](18_image_0.png) |
|
|
|
## D.2 Performance Increase With Number Of Mixture Components |
|
|
|
We show the performance of TwinEB on simulated data as we vary the number of row and column mixture components. Here we add results for additional values of $L$, and also study the MovieLens 100K dataset. The results in Figure 9 suggest that as the number of mixture components increases, the performance of TwinEB improves. The gain is apparent when the number of row components $K_r$ increases. This is expected, as our evaluation procedure measures generalization to held-out rows, but not to held-out columns.
|
|
|
![19_image_0.png](19_image_0.png) |
|
|
|
Figure 6: As in Figure 5, but with additional values of latent dimension L. |
|
|
|
![20_image_0.png](20_image_0.png) |
|
|
|
Figure 7: As in Figure 3, but with additional values of latent dimension L. |
|
|
|
![21_image_0.png](21_image_0.png) |
|
|
|
Figure 8: As in Figure 7, but with additional values of latent dimension L. |
|
![22_image_1.png](22_image_1.png)

![22_image_0.png](22_image_0.png)
|
|
|
Figure 9: As in Figure 4, but with additional column components as well as latent dimension L = 32. |
|
|
|
|
|