
Variable Complexity Weighted-Tempered Gibbs Samplers For Bayesian Variable Selection

Anonymous authors

Paper under double-blind review

Abstract

A subset weighted-tempered Gibbs sampler (subset-wTGS) was recently introduced by Jankowiak to reduce the computational complexity per MCMC iteration in high-dimensional applications where the exact calculation of the posterior inclusion probabilities (PIPs) is not essential. However, the Rao-Blackwellized estimator associated with this sampler has a very high variance when the ratio between the signal dimension, P, and the number of conditional PIP estimations is large. In this paper, we design a new subset-wTGS where the expected number of conditional PIP computations per MCMC iteration can be much smaller than P. Different from the subset-wTGS and wTGS, our sampler has a variable complexity per MCMC iteration. We provide an upper bound on the variance of an associated Rao-Blackwellized estimator for this sampler at a finite number of iterations, T, and show that the variance is $O\big(\big(\frac{P}{S}\big)^{2}\frac{\log T}{T}\big)$ for any given dataset, where S is the expected number of conditional PIP computations per MCMC iteration.

1 Introduction

Markov chain Monte Carlo (MCMC) methods comprise a class of algorithms for sampling from a probability distribution. MCMC methods are primarily used for calculating numerical approximations of multi-dimensional integrals, for example in Bayesian statistics, computational physics (Kasim et al., 2019), computational biology (Gupta & Rawlings, 2014), and linear models (Truong, 2022). Monte Carlo algorithms have been very popular over the last decade (Hesterberg, 2002; Robert & Casella, 2005). Many practical problems in statistical signal processing, machine learning, and statistics demand fast and accurate procedures for drawing samples from probability distributions that exhibit arbitrary, non-standard forms (Andrieu et al., 2004; Fitzgerald, 2001; Read et al., 2012). Among the most popular Monte Carlo methods are the families of Markov chain Monte Carlo algorithms (Andrieu et al., 2004; Robert & Casella, 2005) and particle filters (Bugallo et al., 2007). Particle filters, or sequential Monte Carlo methods, are a set of Monte Carlo algorithms used to find approximate solutions to filtering problems for nonlinear state-space systems, such as those arising in signal processing and Bayesian statistical inference (Wills & Schön, 2023). MCMC techniques generate a Markov chain with a pre-established target probability density function as invariant density (Liang et al., 2010).

The Gibbs sampler (GS) is a Markov chain Monte Carlo (MCMC) algorithm for obtaining a sequence of observations from a specified multivariate probability distribution. This sequence can be used to approximate the joint distribution, the marginal distribution of one of the variables, or the marginal of some subset of the variables. It can also be used to compute the expected value (integral) of one of the variables (Bishop, 2006; Bolstad, 2010).

GS is applicable when the joint distribution is not known explicitly or is difficult to sample from directly, but the conditional distribution of each variable is known and is easy (or at least, easier) to sample from.

The GS algorithm generates an instance from the distribution of each variable in turn, conditional on the current values of the other variables. It can be shown that the sequence of samples constitutes a Markov chain, and the stationary distribution of that Markov chain is just the sought-after joint distribution.

GS is commonly used as a means of statistical inference, especially Bayesian inference. Pure Markov chain based schemes (i.e., ones which simulate from precisely the right target distribution, with no need for a subsequent importance sampling correction) have been far more successful than importance sampling in this setting, because MCMC methods are usually much more scalable to high-dimensional situations, whereas importance sampling weight variances tend to grow (often exponentially) with dimension. (Zanella & Roberts, 2019) proposed a natural way to combine the best of MCMC and importance sampling that is robust in high-dimensional contexts and ameliorates the slow mixing which plagues many Markov chain based schemes. The proposed scheme, called the Tempered Gibbs Sampler (TGS), involves a component-wise updating rule like GS, with improved mixing properties and associated importance weights which remain stable as dimension increases. Through an appropriately designed tempering mechanism, TGS circumvents the main limitations of standard GS, such as the slow mixing introduced by strong posterior correlations. It also avoids the requirement to visit all coordinates sequentially, instead iteratively making state-informed decisions as to which coordinate should be updated next.

TGS has been applied to the Bayesian Variable Selection (BVS) problem, where multiple orders of magnitude improvements over alternative Monte Carlo schemes have been observed (Zanella & Roberts, 2019). Since TGS updates each coordinate with the same frequency, in a BVS context this may be inefficient, as the resulting sampler would spend most iterations updating variables that have low or negligible posterior inclusion probability, especially when the number of covariates, P, gets large. A better solution, called weighted Tempered Gibbs Sampling (wTGS) (Zanella & Roberts, 2019), updates components with a larger inclusion probability more often, thus focusing the computational effort. However, despite the intuitive appeal of this approach to the BVS problem, approximating the resulting posterior distribution can be computationally challenging. A principal reason for this is the astronomical size of the model space whenever there are more than a few dozen covariates. To scale to the high-dimensional regime, (Jankowiak, 2023) has recently introduced an efficient MCMC scheme whose cost per iteration can be significantly reduced compared to wTGS. The main idea is to introduce an auxiliary variable S ⊂ {1, 2, · · · , P} that controls which conditional posterior inclusion probabilities (PIPs) are computed in a given MCMC iteration. By choosing the size S of S to be much less than P, the computational complexity can be reduced significantly. However, this scheme has some weaknesses: the Rao-Blackwellized estimator associated with this sampler has a very high variance when P/S is large and the number of MCMC iterations, T, is small, and generating the auxiliary random set, which is uniformly distributed over the size-S subsets of {1, 2, · · · , P}, requires a very long running time. In this paper, we design a new subset wTGS, called variable-complexity wTGS (VC-wTGS), and apply this algorithm to BVS in the linear regression model. More specifically, we consider the linear regression Y = Xβ + Z, where β = (β0, β1, . . . , βP−1)^T is controlled by an inclusion vector (γ0, γ1, · · · , γP−1). We design a Rao-Blackwellized estimator associated with VC-wTGS for the posterior inclusion probabilities (PIPs), where PIP(i) := p(γi = 1|D) ∈ [0, 1] and D = {X, Y} is the observed dataset. Experiments show that our scheme converges to the PIPs very fast for simulated datasets and that the variance of the Rao-Blackwellized estimator can be much smaller than that of the subset wTGS (Jankowiak, 2023) when P/S is very high for the MNIST dataset. More specifically, our contributions include:

  • We propose a new subset wTGS, called VC-wTGS, where the expected number of conditional PIP computations per MCMC iteration can be much smaller than the signal dimension.

  • We analyse the variance of an associated Rao-Blackwellized estimator at each finite number of MCMC iterations. We show that this variance is $O\big(\frac{\log T}{T}\big(\frac{P}{S}\big)^{2}\big)$ for any given dataset.

  • We provide experiments on a simulated dataset (a multivariate Gaussian design) and a real dataset (MNIST). Experiments show that our estimator can have a lower variance than the subset wTGS-based estimator (Jankowiak, 2023) at high P/S for the same number of MCMC iterations T.

Although we limit our application to the linear regression model, for simplicity of computing the conditional PIPs in the experiments, our subset wTGS can be applied to other BVS models. However, the method used to estimate the conditional PIPs has to be adapted to each model. See (148) and Appendix E for the method used to estimate the conditional PIPs for the linear regression model.

2 Preliminaries

2.1 Mathematical Background

Let $\{X_n\}_{n=1}^{\infty}$ be a Markov chain on a state space $\mathcal{S}$ with transition kernel $Q(x,dy)$ and initial state $X_1\sim\nu$, where $\mathcal{S}$ is a Polish space in $\mathbb{R}$. In this paper, we consider Markov chains which are irreducible and positive-recurrent, so the existence of a stationary distribution $\pi$ is guaranteed. An irreducible and recurrent Markov chain on an infinite state space is called a Harris chain (Tuominen & Tweedie, 1979). A Markov chain is called reversible if the following detailed balance condition is satisfied:

$$\pi(dx)Q(x,dy)=\pi(dy)Q(y,dx),\qquad\forall x,y\in\mathcal{S}. \tag{1}$$

Define

$$d(t):=\sup_{x\in\mathcal{S}}d_{\mathrm{TV}}(Q^{t}(x,\cdot),\pi), \tag{2}$$
$$t_{\mathrm{mix}}(\varepsilon):=\min\{t:d(t)\leq\varepsilon\}, \tag{3}$$

and

$$\tau_{\mathrm{min}}:=\inf_{0\leq\varepsilon\leq1}t_{\mathrm{mix}}(\varepsilon)\bigg(\frac{2-\varepsilon}{1-\varepsilon}\bigg)^{2},\quad t_{\mathrm{mix}}:=t_{\mathrm{mix}}(1/4). \tag{4}$$

Let $L_{2}(\pi)$ be the Hilbert space of complex-valued measurable functions on $\mathcal{S}$ that are square integrable w.r.t. $\pi$. We endow $L_{2}(\pi)$ with the inner product $\langle f,g\rangle_{\pi}:=\int fg^{*}\,d\pi$ and norm $\|f\|_{2,\pi}:=\langle f,f\rangle_{\pi}^{1/2}$. Let $E_{\pi}$ be the associated averaging operator defined by $E_{\pi}(x,y)=\pi(y)$ for all $x,y\in\mathcal{S}$, and

$$\lambda=\|Q-E_{\pi}\|_{L_{2}(\pi)\to L_{2}(\pi)}, \tag{5}$$

where $\|B\|_{L_{2}(\pi)\to L_{2}(\pi)}=\max_{v:\|v\|_{2,\pi}=1}\|Bv\|_{2,\pi}$. $Q$ can be viewed as a linear operator on $L_{2}(\pi)$, denoted by $\mathbf{Q}$, defined as $(\mathbf{Q}f)(x):=\mathbb{E}_{Q(x,\cdot)}(f)$, and reversibility is equivalent to the self-adjointness of $\mathbf{Q}$. The operator $Q$ acts on measures on the left, creating a measure $\mu Q$; that is, for every measurable subset $A$ of $\mathcal{S}$, $\mu Q(A):=\int_{x\in\mathcal{S}}Q(x,A)\mu(dx)$. For a Markov chain with stationary distribution $\pi$, we define the spectrum of the chain as

$$\mathcal{S}_{2}:=\big\{\xi\in\mathbb{C}:(\xi\mathbf{I}-\mathbf{Q})\ \text{is not invertible on}\ L_{2}(\pi)\big\}. \tag{6}$$

It is known that $\lambda=1-\gamma^{*}$ (Paulin, 2015), where

$$\gamma^{*}:=\begin{cases}1-\sup\{|\xi|:\xi\in\mathcal{S}_{2},\ \xi\neq1\},&\text{if eigenvalue }1\text{ has multiplicity }1,\\ 0,&\text{otherwise}\end{cases}$$

is the absolute spectral gap of the Markov chain. The absolute spectral gap can be related to the mixing time $t_{\mathrm{mix}}$ of the Markov chain by the following expression:

$$\left(\frac{1}{\gamma^{*}}-1\right)\log2\leq t_{\mathrm{mix}}\leq\frac{\log(4/\pi_{*})}{\gamma^{*}},$$

where $\pi_{*}=\min_{x\in\mathcal{S}}\pi_{x}$ is the minimum stationary probability, which is positive if $Q^{k}>0$ (entry-wise positive) for some $k\geq1$. See (Wolfer & Kontorovich, 2019) for more detailed discussions. In (Combes & Touati, 2019; Wolfer & Kontorovich, 2019), the authors provided algorithms to estimate $t_{\mathrm{mix}}$ and $\gamma^{*}$ from a single trajectory.
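As a toy illustration of these quantities (our own example with an arbitrary two-state chain, not taken from the paper), the following NumPy sketch computes the absolute spectral gap and evaluates both sides of the mixing-time bound displayed above:

```python
import numpy as np

# Toy reversible two-state chain: Q = [[1-a, a], [b, 1-b]].
a, b = 0.3, 0.1
Q = np.array([[1 - a, a], [b, 1 - b]])

# Stationary distribution pi solves pi Q = pi.
pi = np.array([b, a]) / (a + b)

# Eigenvalues of Q are 1 and 1 - a - b; the absolute spectral gap is
# gamma* = 1 - max{|xi| : xi an eigenvalue, xi != 1}.
eigvals = np.linalg.eigvals(Q)
lam = max(abs(e) for e in eigvals if not np.isclose(e, 1.0))
gamma_star = 1 - lam

# Two-sided bound on t_mix(1/4): (1/gamma* - 1) log 2 <= t_mix <= log(4/pi_min)/gamma*.
lower = (1 / gamma_star - 1) * np.log(2)
upper = np.log(4 / pi.min()) / gamma_star
print(gamma_star, lower, upper)
```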

Let M(S) be a measurable space on S and define

$$\mathcal{M}_{2}:=\left\{\nu\ \text{defined on}\ \mathcal{M}(\mathcal{S}):\nu\ll\pi,\ \left\|\frac{d\nu}{d\pi}\right\|_{2}<\infty\right\}, \tag{8}$$

where ∥ · ∥2 is the standard L2 norm in the Hilbert space of complex valued measurable functions on S.


2.2 Problem Set-Up

Consider the linear regression $Y=X\beta+Z\in\mathbb{R}^{N}$, where $\beta=(\beta_{0},\beta_{1},\ldots,\beta_{P-1})^{T}$, $Z=(Z_{0},Z_{1},\ldots,Z_{P-1})^{T}$, and $X\in\mathbb{R}^{N\times P}$ is a design matrix. Denote by $\gamma$ the vector $(\gamma_{0},\gamma_{1},\cdots,\gamma_{P-1})$, where each $\gamma_{i}\in\{0,1\}$ controls whether the coefficient $\beta_{i}$ and the $i$-th covariate are included ($\gamma_{i}=1$) or excluded ($\gamma_{i}=0$) from the model. Let $\beta_{\gamma}$ be the restriction of $\beta$ to the coordinates in $\gamma$ and $|\gamma|\in\{0,1,2,\cdots,P\}$ be the total number of included covariates. In addition, the following are assumed:

  • inclusion variables: γi ∼ Bern(h)

  • noise variance: $\sigma_{\gamma}^{2}\sim\mathrm{InvGamma}\big(\tfrac{1}{2}\nu_{0},\tfrac{1}{2}\nu_{0}\lambda_{0}\big)$

  • coefficients: $\beta_{\gamma}\sim\mathcal{N}(0,\sigma_{\gamma}^{2}\tau^{-1}I_{|\gamma|})$

  • noise distributions: $Z_{i}\sim\mathcal{N}(0,\sigma_{\gamma}^{2})$ for all $i=0,1,\cdots,P-1$.

The hyperparameter $h\in(0,1)$ controls the overall level of sparsity; in particular, $hP$ is the expected number of covariates included a priori. The $|\gamma|$ coefficients $\beta_{\gamma}\in\mathbb{R}^{|\gamma|}$ are governed by a zero-mean Gaussian prior with precision proportional to $\tau>0$.

An attractive feature of the model is that it explicitly reasons about variable inclusion and allows us to define posterior inclusion probabilities or PIPs, where

$$\mathrm{PIP}(i):=p(\gamma_{i}=1|\mathcal{D})\in[0,1], \tag{9}$$

and $\mathcal{D}=\{X,Y\}$ is the observed dataset.
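To make the generative model above concrete, here is a minimal NumPy sketch that draws one synthetic dataset from this prior; the hyperparameter values and variable names are illustrative choices of ours, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 200          # sample size and number of covariates (illustrative values)
h, tau = 0.05, 1.0       # prior inclusion probability and prior precision
nu0, lambda0 = 2.0, 1.0  # InvGamma hyperparameters

X = rng.standard_normal((N, P))                 # design matrix
gamma = rng.binomial(1, h, size=P)              # inclusion vector, gamma_i ~ Bern(h)
# sigma_gamma^2 ~ InvGamma(nu0/2, nu0*lambda0/2), sampled as the reciprocal of a Gamma draw.
sigma2 = 1.0 / rng.gamma(shape=0.5 * nu0, scale=2.0 / (nu0 * lambda0))
beta = np.zeros(P)
idx = np.flatnonzero(gamma)
beta[idx] = rng.normal(0.0, np.sqrt(sigma2 / tau), size=idx.size)  # beta_gamma ~ N(0, sigma^2/tau I)
Z = rng.normal(0.0, np.sqrt(sigma2), size=N)    # noise
Y = X @ beta + Z                                # observed responses, D = {X, Y}
```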

3 Main Results

3.1 Introduction To Subset wTGS

In this subsection, we review the subset wTGS proposed by (Jankowiak, 2023). Let $\mathcal{P}=\{1,2,\cdots,P\}$ and let $\mathcal{P}_{S}$ denote the set of all subsets of $\mathcal{P}$ of cardinality $S$. Consider the sample space $\mathcal{P}\times\{0,1\}^{P}\times\mathcal{P}_{S}$ and define the following (unnormalized) target distribution on this sample space:

$$f(\gamma,i,\mathcal{S}):=p(\gamma|\mathcal{D})\,\frac{\frac{1}{2}\eta(\gamma_{-i})}{p(\gamma_{i}|\gamma_{-i},\mathcal{D})}\,\mathcal{U}(\mathcal{S}|i,\mathcal{A}). \tag{10}$$

Here, $\mathcal{S}$ ranges over all the subsets of $\{1,2,\cdots,P\}$ of some size $S\in\{0,1,\cdots,P\}$ that also contain a fixed 'anchor' set $\mathcal{A}\subset\{1,2,\cdots,P\}$ of size $A<S$, and $\eta(\cdot)$ is some weighting function. Moreover, $\mathcal{U}(\mathcal{S}|i,\mathcal{A})$ is the uniform distribution over all size-$S$ subsets of $\{1,2,\cdots,P\}$ that contain both $i$ and $\mathcal{A}$.

In practice, the set A can be chosen during burn-in. Subset wTGS proceeds by defining a sampling scheme for the target distribution (10) that utilizes Gibbs updates w.r.t. i and S and Metropolized-Gibbs update w.r.t. γi.

  • **i-updates:** Marginalizing $i$ from (10) yields

$$f(\gamma,\mathcal{S})=p(\gamma|\mathcal{D})\,\phi(\gamma,\mathcal{S}), \tag{11}$$

where we define

$$\phi(\gamma,\mathcal{S}):=\sum_{i\in\mathcal{S}}\frac{\frac{1}{2}\eta(\gamma_{-i})}{p(\gamma_{i}|\gamma_{-i},\mathcal{D})}\,\mathcal{U}(\mathcal{S}|i,\mathcal{A}), \tag{12}$$

and have leveraged that $\mathcal{U}(\mathcal{S}|i,\mathcal{A})=0$ if $i\notin\mathcal{S}$. Crucially, computing $\phi(\gamma,\mathcal{S})$ is $\Theta(S)$ instead of $\Theta(P)$. We can do Gibbs updates w.r.t. $i$ using the distribution

$$f(i|\gamma,\mathcal{S})\propto\frac{\eta(\gamma_{-i})}{p(\gamma_{i}|\gamma_{-i},\mathcal{D})}\,\mathcal{U}(\mathcal{S}|i,\mathcal{A}). \tag{13}$$

  • **γ-updates:** Just as for wTGS, we utilize Metropolized-Gibbs updates w.r.t. $\gamma_{i}$ that result in deterministic flips $\gamma_{i}\to1-\gamma_{i}$. Likewise, the marginal $f(i)$ is proportional to PIP(i) + εP, so that the sampler focuses computational effort on large-PIP covariates (Jankowiak, 2023).

  • **S-updates:** $\mathcal{S}$ is updated with Gibbs moves, $\mathcal{S}\sim\mathcal{U}(\cdot|i,\mathcal{A})$. For the full algorithm, see Algorithm 1 below.

Algorithm 1: The Subset S-wTGS Algorithm

Input: Dataset $\mathcal{D}=\{X,Y\}$ with $P$ covariates; prior inclusion probability $h$; prior precision $\tau$; subset size $S$; anchor set size $A$; total number of MCMC iterations $T$; number of burn-in iterations $T_{\mathrm{burn}}$.
Output: Approximate weighted posterior samples $\{\rho^{(t)},\gamma^{(t)}\}_{t=T_{\mathrm{burn}}+1}^{T}$.
Initialization: $\gamma^{(0)}=(1,1,\cdots,1)$; choose $\mathcal{A}$ as the $A$ covariates exhibiting the largest correlations with $Y$; choose $i^{(0)}$ uniformly at random from $\{1,2,\cdots,P\}$ and $\mathcal{S}^{(0)}\sim\mathcal{U}(\cdot|i^{(0)},\mathcal{A})$.

for $t=1,2,\cdots,T$ do
  Estimate the $S$ conditional PIPs $p(\gamma_{j}^{(t-1)}|\gamma_{-j}^{(t-1)},\mathcal{D})$ for all $j\in\mathcal{S}^{(t-1)}$.
  $\phi(\gamma^{(t-1)},\mathcal{S}^{(t-1)})\leftarrow\sum_{j\in\mathcal{S}^{(t-1)}}\frac{\frac{1}{2}\eta(\gamma_{-j}^{(t-1)})}{p(\gamma_{j}^{(t-1)}|\gamma_{-j}^{(t-1)},\mathcal{D})}$.
  Estimate $f(j|\gamma^{(t-1)})\leftarrow\phi^{-1}(\gamma^{(t-1)},\mathcal{S}^{(t-1)})\frac{\frac{1}{2}\eta(\gamma_{-j}^{(t-1)})}{p(\gamma_{j}^{(t-1)}|\gamma_{-j}^{(t-1)},\mathcal{D})}$ for all $j\in[P]$.
  Sample $i^{(t)}\sim f(\cdot|\gamma^{(t-1)})$.
  $\gamma^{(t)}\leftarrow\mathrm{flip}(\gamma^{(t-1)}|i^{(t)})$, where $\mathrm{flip}(\gamma|i)$ flips the $i$-th coordinate of $\gamma$: $\gamma_{i}\leftarrow1-\gamma_{i}$.
  Sample $\mathcal{S}^{(t)}\sim\mathcal{U}(\cdot|i^{(t)},\mathcal{A})$.
  Compute the unnormalized weight $\tilde{\rho}^{(t)}\leftarrow\phi^{-1}(\gamma^{(t)},\mathcal{S}^{(t)})$.
  if $t\leq T_{\mathrm{burn}}$ then
    Adapt $\mathcal{A}$ using some adaptive scheme.
  end if
end for
for $t=1,2,\cdots,T$ do
  $\rho^{(t)}\leftarrow\frac{\tilde{\rho}^{(t)}}{\sum_{s>T_{\mathrm{burn}}}^{T}\tilde{\rho}^{(s)}}$.
end for
Output: $\{\rho^{(t)},\gamma^{(t)}\}_{t=1}^{T}$.
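Before turning to the associated estimator, note that the S-update above draws $\mathcal{S}\sim\mathcal{U}(\cdot|i,\mathcal{A})$. A minimal sketch of one way to draw such a subset is given below (our own helper; the function name and 0-based indexing are illustrative and not from the paper):

```python
import numpy as np

def sample_subset(P, S, i, anchor, rng):
    """Draw a uniform size-S subset of {0, ..., P-1} that contains i and the anchor set."""
    forced = np.unique(np.append(anchor, i))          # indices that must be included
    remaining = np.setdiff1d(np.arange(P), forced)    # candidates for the free slots
    extra = rng.choice(remaining, size=S - forced.size, replace=False)
    return np.sort(np.concatenate([forced, extra]))

rng = np.random.default_rng(1)
print(sample_subset(P=10, S=5, i=3, anchor=[0, 1], rng=rng))
```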

The details of this algorithm are described in ALG 1. The associated estimator for this sampler is defined as (Jankowiak, 2023):

$$\mathrm{PIP}(i):=\sum_{t=1}^{T}\rho^{(t)}\big(\mathbf{1}\{i\in\mathcal{S}^{(t)}\}\,p(\gamma_{i}^{(t)}=1|\gamma_{-i}^{(t)},\mathcal{D})+\mathbf{1}\{i\notin\mathcal{S}^{(t)}\}\,\gamma_{i}^{(t)}\big). \tag{14}$$

3.2 A Variable-Complexity wTGS Scheme

In the subset wTGS in Subsection 3.1, the number of conditional PIP computations per MCMC iteration is fixed, i.e., it is equal to S. In the following, we propose a variable complexity-based wTGS scheme (VC-wTGS), say ALG 2, where the only requirement is that the expected number of the conditional PIP computations per MCMC iteration is S. This means that E[St] = S, where St is the number of conditional PIP computations at the t-th MCMC iteration.

Compared with ALG 1, ALG 2 allows us to use different subset sizes across MCMC iterations. In ALG 2, the expected number of conditional PIP computations in each MCMC iteration is P × (S/P) + 0 × (1 − S/P) = S. Since we aim to bound the variance at each finite iteration T, we do not use a burn-in period Tburn in ALG 2; in practice, one usually removes some initial samples. We also use the following new version of the Rao-Blackwellized


estimator:

$$\mathrm{PIP}(i):=\sum_{t=1}^{T}\rho^{(t)}p(\gamma_{i}^{(t)}=1|\gamma_{-i}^{(t)},\mathcal{D}). \tag{15}$$

In ALG 2, Bernoulli random variables $\{Q^{(t)}\}_{t=1}^{T}$ are used to replace the random set $\mathcal{S}$ of ALG 1 (see Algorithm 2 below). There are two main reasons for this replacement: (1) generating a random set $\mathcal{S}$ uniformly from the subsets in $\mathcal{P}_{S}$ takes a very long running time for most pairs $(P,S)$; (2) the associated Rao-Blackwellized estimator usually has a smaller variance with ALG 2 than with ALG 1 at high $P/S$. See Section 4 for our simulation results.

Algorithm 2: A Variable-Complexity Based wTGS Algorithm

Input: Dataset $\mathcal{D}=\{X,Y\}$ with $P$ covariates; prior inclusion probability $h$; prior precision $\tau$; total number of MCMC iterations $T$; subset size $S$.
Output: Approximate weighted posterior samples $\{\rho^{(t)},\gamma^{(t)}\}_{t=1}^{T}$.
Initialization: $\gamma^{(0)}=(\gamma_{1},\gamma_{2},\cdots,\gamma_{P})$ where $\gamma_{j}\sim\mathrm{Bern}(h)$ for all $j\in[P]$.

for $t=1,2,\cdots,T$ do
  Set $Q^{(1)}=1$. Sample a Bernoulli random variable $Q^{(t)}\sim\mathrm{Bern}(S/P)$ if $t\geq2$.
  if $Q^{(t)}=1$ then
    Estimate the $P$ conditional PIPs $p(\gamma_{j}^{(t-1)}|\gamma_{-j}^{(t-1)},\mathcal{D})$ for all $j\in[P]$.
    $\phi(\gamma^{(t-1)})\leftarrow\sum_{j\in[P]}\frac{\frac{1}{2}\eta(\gamma_{-j}^{(t-1)})}{p(\gamma_{j}^{(t-1)}|\gamma_{-j}^{(t-1)},\mathcal{D})}$.
    Estimate $f(j|\gamma^{(t-1)})\leftarrow\phi^{-1}(\gamma^{(t-1)})\frac{\frac{1}{2}\eta(\gamma_{-j}^{(t-1)})}{p(\gamma_{j}^{(t-1)}|\gamma_{-j}^{(t-1)},\mathcal{D})}$ for all $j\in[P]$.
    Sample $i^{(t)}\sim f(\cdot|\gamma^{(t-1)})$.
    $\gamma^{(t)}\leftarrow\mathrm{flip}(\gamma^{(t-1)}|i^{(t)})$, where $\mathrm{flip}(\gamma|i)$ flips the $i$-th coordinate of $\gamma$: $\gamma_{i}\leftarrow1-\gamma_{i}$.
    Compute the unnormalized weight $\tilde{\rho}^{(t)}\leftarrow\phi^{-1}(\gamma^{(t)})$.
  else
    $\gamma^{(t)}\leftarrow\gamma^{(t-1)}$.
    $\tilde{\rho}^{(t)}\leftarrow\phi^{-1}(\gamma^{(t)})$.
  end if
end for
for $t=1,2,\cdots,T$ do
  $\rho^{(t)}\leftarrow\frac{\tilde{\rho}^{(t)}Q^{(t)}}{\sum_{s=1}^{T}\tilde{\rho}^{(s)}Q^{(s)}}$.
end for
Output: $\{\rho^{(t)},\gamma^{(t)}\}_{t=1}^{T}$.
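The following Python sketch shows the overall structure of a variable-complexity wTGS pass together with the Rao-Blackwellized estimator (15). It is a sketch only: cond_pip is an assumed placeholder for the conditional-PIP routine of Appendix E, the weighting follows (54), and, for simplicity, weights and conditional PIPs are recorded for the state at which they are computed (wTGS-style bookkeeping), so it is not a line-by-line transcription of ALG 2.

```python
import numpy as np

def vc_wtgs_pip(cond_pip, P, S, T, h, rng):
    """Sketch of a variable-complexity wTGS pass.

    cond_pip(j, gamma) is an assumed user-supplied routine returning
    p(gamma_j = 1 | gamma_{-j}, D).
    """
    gamma = rng.binomial(1, h, size=P).astype(float)
    num = np.zeros(P)   # accumulates rho_tilde^(t) * p(gamma_i^(t) = 1 | ., D)
    den = 0.0           # accumulates rho_tilde^(t) over the active iterations
    for t in range(T):
        Q = 1 if t == 0 else rng.binomial(1, S / P)   # Bernoulli(S/P) gate
        if Q == 0:
            continue                                   # no conditional PIP computations this iteration
        pip = np.array([cond_pip(j, gamma) for j in range(P)])  # P conditional PIPs
        cond = np.where(gamma == 1, pip, 1.0 - pip)    # p(gamma_j | gamma_{-j}, D)
        weights = 0.5 * pip / cond                     # 0.5 * eta(gamma_{-j}) / p(gamma_j | ., D), eta as in (54)
        phi = weights.sum()
        rho_tilde = 1.0 / phi                          # unnormalized importance weight
        num += rho_tilde * pip                         # Rao-Blackwellized contribution, cf. (15)
        den += rho_tilde
        i = rng.choice(P, p=weights / phi)             # i^(t) ~ f(. | gamma)
        gamma[i] = 1.0 - gamma[i]                      # deterministic Metropolized-Gibbs flip
    return num / den                                   # estimated PIP(0), ..., PIP(P-1)
```

Since $\rho^{(t)}=0$ whenever $Q^{(t)}=0$, accumulating only over the active iterations is equivalent to the full sum in (15).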

3.3 Theoretical Bounds For Algorithm 2

First, we prove the following result. The proof can be found in Appendix C.

Lemma 1. Let U and V be two positive random variables such that U/V ≤ M a.s. for some constant M.

In addition, assume that on an event $D$ with probability at least $1-\alpha$, we have

$$|U-\mathbb{E}[U]|\leq\varepsilon\,\mathbb{E}[U], \tag{16}$$
$$|V-\mathbb{E}[V]|\leq\varepsilon\,\mathbb{E}[V], \tag{17}$$

for some $0\leq\varepsilon<1$. Then, it holds that

$$\mathbb{E}\left[\left|\frac{U}{V}-\frac{\mathbb{E}[U]}{\mathbb{E}[V]}\right|^{2}\right]\leq\frac{4\varepsilon^{2}}{(1-\varepsilon)^{2}}\left(\frac{\mathbb{E}[U]}{\mathbb{E}[V]}\right)^{2}+\left[\max\left(M,\frac{\mathbb{E}[U]}{\mathbb{E}[V]}\right)\right]^{2}\alpha. \tag{18}$$

We also recall the following Hoeffding inequality for Markov chains:

Lemma 2. (Rao, 2018, Theorem 1.1) Let $\{Y_{i}\}_{i=1}^{\infty}$ be a stationary Markov chain with state space $[N]$, transition matrix $A$, stationary probability measure $\pi$, and averaging operator $E_{\pi}$, so that $Y_{1}$ is distributed according to $\pi$. Let $\lambda=\|A-E_{\pi}\|_{L_{2}(\pi)\to L_{2}(\pi)}$ and let $f_{1},f_{2},\cdots,f_{n}:[N]\to\mathbb{R}$ be such that $\mathbb{E}[f_{i}(Y_{i})]=0$ for all $i$ and $|f_{i}(\nu)|\leq a_{i}$ for all $\nu\in[N]$ and all $i$. Then, for $u\geq0$,

$$\mathbb{P}\bigg[\bigg|\sum_{i=1}^{n}f_{i}(Y_{i})\bigg|\geq u\bigg(\sum_{i=1}^{n}a_{i}^{2}\bigg)^{1/2}\bigg]\leq2\exp\bigg(-\frac{u^{2}(1-\lambda)}{64e}\bigg). \tag{19}$$

Now, the following result can be shown.

Lemma 3. Let

$$\phi(\gamma):=\sum_{j\in[P]}\frac{\frac{1}{2}\eta(\gamma_{-j})}{p(\gamma_{j}|\gamma_{-j},\mathcal{D})} \tag{20}$$

and define

$$f(\gamma):=\phi(\gamma)\,p(\gamma|\mathcal{D}). \tag{21}$$

Then, by ALG 2, the sequence $\{\gamma^{(t)},Q^{(t)}\}_{t=1}^{T}$ forms a reversible Markov chain with stationary distribution proportional to $f(\gamma)q(Q)$, where $q$ is the Bernoulli$(S/P)$ distribution. This Markov chain has transition kernel $K((\gamma,Q)\to(\gamma^{\prime},Q^{\prime}))=K^{*}(\gamma\to\gamma^{\prime})q(Q^{\prime})$, where

$$K^{*}(\gamma\to\gamma^{\prime})=\frac{S}{P}\sum_{j=1}^{P}f(j|\gamma)\,\delta(\gamma^{\prime}-\mathrm{flip}(\gamma|j))+\bigg(1-\frac{S}{P}\bigg)\delta(\gamma^{\prime}-\gamma). \tag{22}$$

In the classical wTGS (Zanella & Roberts, 2019), the sequence $\{\gamma^{(t)}\}_{t=1}^{T}$ also forms a Markov chain. This Markov chain is different from the Markov chain in Lemma 3; however, the two Markov chains have the same stationary distribution, which is proportional to $f(\gamma)$. See a detailed proof of Lemma 3 in Appendix B.

Lemma 4. For the Rao-Blackwellized estimator in (15) which is applied to the output sequence {ρ (t), γ(t)} T t=1 of ALG 2, it holds that

$$E_{i,T}:=\sum_{t=1}^{T}\rho^{(t)}p(\gamma_{i}^{(t)}=1|\gamma_{-i}^{(t)},\mathcal{D})\to\mathrm{PIP}(i) \tag{23}$$

as T → ∞.

Proof. By Lemma 3, $\{\gamma^{(t)},Q^{(t)}\}_{t=1}^{T}$ forms a reversible Markov chain with stationary distribution $f(\gamma)q(Q)/Z_{f}$, where $Z_{f}=\sum_{\gamma}f(\gamma)$. Hence, by the SLLN for Markov chains (Breiman, 1960), for any bounded function $h$, we have

$$\frac{1}{T}\sum_{t=1}^{T}\phi^{-1}(\gamma^{(t)})Q^{(t)}h(\gamma^{(t)})\to\mathbb{E}_{q\,f(\cdot)/Z_{f}}\big[\phi^{-1}(\gamma)h(\gamma)Q\big] \tag{24}$$
$$=\sum_{Q}q(Q)\sum_{\gamma}\frac{f(\gamma)}{Z_{f}}\phi^{-1}(\gamma)h(\gamma)Q \tag{25}$$
$$=\bigg(\sum_{Q}q(Q)Q\bigg)\bigg(\sum_{\gamma}\frac{f(\gamma)}{Z_{f}}\phi^{-1}(\gamma)h(\gamma)\bigg) \tag{26}$$
$$=\mathbb{E}_{q}[Q]\frac{1}{Z_{f}}\sum_{\gamma}p(\gamma|\mathcal{D})h(\gamma) \tag{27}$$
$$=\frac{S}{P}\frac{1}{Z_{f}}\sum_{\gamma}p(\gamma|\mathcal{D})h(\gamma), \tag{28}$$

where (27) follows from f(γ) = p(γ|D)ϕ(γ).

Similarly, we have

$$\frac{1}{T}\sum_{t=1}^{T}Q^{(t)}\phi^{-1}(\gamma^{(t)})\to\mathbb{E}_{q\,f(\cdot)/Z_{f}}\big[\phi^{-1}(\gamma)Q\big] \tag{29}$$
$$=\sum_{Q}q(Q)Q\sum_{\gamma}\frac{f(\gamma)}{Z_{f}}\phi^{-1}(\gamma) \tag{30}$$
$$=\mathbb{E}_{q}[Q]\sum_{\gamma}\frac{1}{Z_{f}}p(\gamma|\mathcal{D}) \tag{31}$$
$$=\frac{S}{P}\frac{1}{Z_{f}}, \tag{32}$$

where (31) also follows from f(γ) = p(γ|D)ϕ(γ).

From (28) and (32), we obtain

$$\frac{\frac{1}{T}\sum_{t=1}^{T}\phi^{-1}(\gamma^{(t)})Q^{(t)}h(\gamma^{(t)})}{\frac{1}{T}\sum_{t=1}^{T}Q^{(t)}\phi^{-1}(\gamma^{(t)})}\to\sum_{\gamma}p(\gamma|\mathcal{D})h(\gamma), \tag{33}$$

or equivalently

$$\sum_{t=1}^{T}\rho^{(t)}h(\gamma^{(t)})\to\sum_{\gamma}p(\gamma|\mathcal{D})h(\gamma) \tag{34}$$

as T → ∞.

Now, by setting h(γ) = p(γi = 1|γ−i, D), from (34), we obtain

$$\sum_{t=1}^{T}\rho^{(t)}p(\gamma_{i}^{(t)}=1|\gamma_{-i}^{(t)},\mathcal{D})\to\mathrm{PIP}(i) \tag{35}$$

for all $i\in[P]$, which completes the proof.

The following result bounds the variance of the PIP estimator at finite $T$.

Lemma 5. For any $\varepsilon\in[0,1]$, let $\nu$ and $\pi$ be the initial and stationary distributions of the reversible Markov sequence $\{\gamma^{(t)},Q^{(t)}\}$. Define

$$\hat{\phi}(\gamma):=\frac{\phi^{-1}(\gamma)}{\max_{\gamma}\phi^{-1}(\gamma)}, \tag{36}$$

and

$$\varepsilon_{0}=\frac{P}{\mathrm{PIP}(i)\,\mathbb{E}_{\pi}[\hat{\phi}(\gamma)]\,S}\sqrt{\frac{64e\log T}{(1-\lambda_{\gamma,Q})T}}. \tag{37}$$

Then, we have

$$\mathbb{E}\Bigg[\Bigg|\sum_{t=1}^{T}\rho^{(t)}p(\gamma_{i}^{(t)}=1|\gamma_{-i}^{(t)},\mathcal{D})-\mathrm{PIP}(i)\Bigg|^{2}\Bigg]\leq\frac{4\varepsilon_{0}^{2}}{(1-\varepsilon_{0})^{2}}\,\mathrm{PIP}^{2}(i)+\frac{4P}{S}\frac{1}{\min_{\gamma}\pi(\gamma)\,T}\to0, \tag{38}$$

as T → ∞ for fixed P, S and the dataset. Here, π(γ) is the marginal distribution of π(γ, Q).


Proof. See Appendix D.

Remark 6. As in the proof of Lemma 3, we have $\pi(\gamma)\propto f(\gamma)=\phi(\gamma)p(\gamma|\mathcal{D})$. Hence, it holds that

$$\min_{\gamma}\pi(\gamma)=\min_{\gamma}\frac{\phi(\gamma)p(\gamma|\mathcal{D})}{\sum_{\gamma^{\prime}}\phi(\gamma^{\prime})p(\gamma^{\prime}|\mathcal{D})}, \tag{39}$$

which does not depend on S.

Next, we provide a lower bound on $1-\lambda_{\gamma,Q}$. First, we recall the Dirichlet form and its connection to the spectral gap.

Definition 7. Let $f,g:\Omega\to\mathbb{R}$. The Dirichlet form associated with a reversible Markov chain $Q$ on $\Omega$ is defined by

$$\mathcal{E}(f,g)=\langle(\mathbf{I}-\mathbf{Q})f,g\rangle_{\pi} \tag{40}$$
$$=\sum_{x\in\Omega}\pi(x)[f(x)-\mathbf{Q}f(x)]g(x) \tag{41}$$
$$=\sum_{x,y\in\Omega\times\Omega}\pi(x)Q(x,y)g(x)(f(x)-f(y)). \tag{42}$$

Lemma 8. (Diaconis & Saloff-Coste, 1993) (Variational characterization) For a reversible Markov chain $Q$ with state space $\Omega$ and stationary distribution $\pi$, it holds that

$$1-\lambda=\inf_{g:\,\mathbb{E}_{\pi}[g]=0,\,\mathbb{E}_{\pi}[g^{2}]=1}\mathcal{E}(g,g), \tag{43}$$

where $\mathcal{E}(g,g):=\langle(\mathbf{I}-\mathbf{Q})g,g\rangle_{\pi}$.

Lemma 9. The spectral gap $1-\lambda_{\gamma,Q}$ of the reversible Markov chain $\{\gamma^{(t)},Q^{(t)}\}$ satisfies

$$1-\lambda_{\gamma,Q}\geq\frac{S}{P}\big(1-\lambda_{P}\big)+1-\frac{S}{P}\geq1-\frac{S}{P}, \tag{44}$$

where 1 − λP is the spectral gap of the reversible Markov chain {γ (t)} of the wTGS algorithm (i.e. S = P).

See Appendix F for a proof of this lemma. By combining Lemma 4, Lemma 5 and Lemma 9, we obtain the following theorem.

Theorem 10. For the variable-complexity subset wTGS-based estimator in (15) and a given dataset $(X,Y)$, it holds that

$$E_{i,T}:=\sum_{t=1}^{T}\rho^{(t)}p(\gamma_{i}^{(t)}=1|\gamma_{-i}^{(t)},\mathcal{D})\to\mathrm{PIP}(i) \tag{45}$$

as T → ∞ and

$$\mathbb{E}\Bigg[\Bigg|\sum_{t=1}^{T}\rho^{(t)}p(\gamma_{i}^{(t)}=1|\gamma_{-i}^{(t)},\mathcal{D})-\mathrm{PIP}(i)\Bigg|^{2}\Bigg]=O\Bigg(\frac{\log T}{T}\Bigg(\frac{P}{S}\Bigg)^{2}\Bigg(\frac{\max_{\gamma}\phi(\gamma)}{\min_{\gamma}\phi(\gamma)}\Bigg)^{2}\Bigg), \tag{46}$$

where

$$\phi(\gamma)=\frac{1}{2}\sum_{j\in[P]}\frac{p(\gamma_{j}=1|\gamma_{-j},\mathcal{D})}{p(\gamma_{j}|\gamma_{-j},\mathcal{D})}. \tag{47}$$

Proof. First, (45) is shown in Lemma 4. Now, we show (46) by using Lemma 5 and Lemma 9.

Observe that

$$\mathbb{E}_{\pi}[\hat{\phi}(\gamma)]=\mathbb{E}_{\pi}\left[\frac{\phi^{-1}(\gamma)}{\max_{\gamma}\phi^{-1}(\gamma)}\right]\geq\frac{\min_{\gamma}\phi(\gamma)}{\max_{\gamma}\phi(\gamma)}. \tag{48}$$

In addition, we have

$$\phi(\gamma)=\sum_{j\in[P]}\frac{\frac{1}{2}\eta(\gamma_{-j})}{p(\gamma_{j}|\gamma_{-j},\mathcal{D})} \tag{49}$$
$$=\frac{1}{2}\sum_{j\in[P]}\frac{p(\gamma_{j}=1|\gamma_{-j},\mathcal{D})}{p(\gamma_{j}|\gamma_{-j},\mathcal{D})}. \tag{50}$$

Now, note that

$$\frac{p(\gamma_{j}=1|\gamma_{-j},\mathcal{D})}{p(\gamma_{j}|\gamma_{-j},\mathcal{D})}=\begin{cases}1,&\gamma_{j}=1,\\ \frac{p(\gamma_{j}=1|\gamma_{-j},\mathcal{D})}{p(\gamma_{j}=0|\gamma_{-j},\mathcal{D})},&\gamma_{j}=0.\end{cases} \tag{51}$$

In Appendix E, we show how to estimate the conditional PIPs, i.e., $p(\gamma_{i}|\mathcal{D},\gamma_{-i})$, for the linear regression model.

More specifically, we have

p(γiD,γi)=p(γiD,γi)p(1γiD,γi)(1+p(γiD,γi)p(1γiD,γi))1.(52)p(\gamma_{i}|\mathcal{D},\gamma_{-i})=\frac{p(\gamma_{i}|\mathcal{D},\gamma_{-i})}{p(1-\gamma_{i}|\mathcal{D},\gamma_{-i})}\left(1+\frac{p(\gamma_{i}|\mathcal{D},\gamma_{-i})}{p(1-\gamma_{i}|\mathcal{D},\gamma_{-i})}\right)^{-1}.\tag{52}

Then, we can estimate the ratio $\frac{p(\gamma_{j}=1|\gamma_{-j},\mathcal{D})}{p(\gamma_{j}=0|\gamma_{-j},\mathcal{D})}$ based on the dataset. More specifically, let $\tilde{\gamma}_{1}$ be given by $\gamma_{-i}$ with $\gamma_{i}=1$ and $\tilde{\gamma}_{0}$ be given by $\gamma_{-i}$ with $\gamma_{i}=0$; then we can show that

$$\frac{p(\gamma_{j}=1|\gamma_{-j},\mathcal{D})}{p(\gamma_{j}=0|\gamma_{-j},\mathcal{D})}=\left(\frac{h}{1-h}\right)\sqrt{\tau\,\frac{\det(X_{\tilde{\gamma}_{0}}^{T}X_{\tilde{\gamma}_{0}}+\tau I)}{\det(X_{\tilde{\gamma}_{1}}^{T}X_{\tilde{\gamma}_{1}}+\tau I)}}\left(\frac{\|Y\|^{2}-\|\tilde{Y}_{\tilde{\gamma}_{0}}\|^{2}+\nu_{0}\lambda_{0}}{\|Y\|^{2}-\|\tilde{Y}_{\tilde{\gamma}_{1}}\|^{2}+\nu_{0}\lambda_{0}}\right)^{\frac{N+\nu_{0}}{2}}. \tag{53}$$

Here, $\|\tilde{Y}_{\gamma}\|^{2}=\tilde{Y}_{\gamma}^{T}\tilde{Y}_{\gamma}=Y^{T}X_{\gamma}(X_{\gamma}^{T}X_{\gamma}+\tau I)^{-1}X_{\gamma}^{T}Y$. Using this formula, if pre-computing $X^{T}X$ is not possible, the computational complexity per conditional PIP is $O(N|\gamma|^{2}+|\gamma|^{3}+P|\gamma|^{2})$. Otherwise, if pre-computing $X^{T}X$ is possible, the computational complexity per conditional PIP is $O(|\gamma|^{3}+P|\gamma|^{2})$.
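For illustration, the ratio (53) and the conditional PIP (52) can be evaluated naively (without the rank-one updates of Appendix E) as in the following NumPy sketch; the function names are ours and the code is a sketch, not the implementation used in the experiments:

```python
import numpy as np

def log_marginal_term(X, Y, gamma, tau, nu0, lam0):
    """Pieces of (53) for one inclusion vector: log-det and log-quadratic terms, plus the tau^{|gamma|/2} factor."""
    N = X.shape[0]
    Xg = X[:, gamma.astype(bool)]
    M = Xg.T @ Xg + tau * np.eye(Xg.shape[1])
    _, logdet = np.linalg.slogdet(M)
    quad = Y @ Y - Y @ Xg @ np.linalg.solve(M, Xg.T @ Y)   # ||Y||^2 - ||Ytilde_gamma||^2
    return -0.5 * logdet - 0.5 * (N + nu0) * np.log(quad + nu0 * lam0), 0.5 * Xg.shape[1] * np.log(tau)

def cond_pip(i, gamma, X, Y, h, tau, nu0, lam0):
    """p(gamma_i = 1 | gamma_{-i}, D) via (52)-(53)."""
    g1, g0 = gamma.copy(), gamma.copy()
    g1[i], g0[i] = 1, 0
    l1, t1 = log_marginal_term(X, Y, g1, tau, nu0, lam0)
    l0, t0 = log_marginal_term(X, Y, g0, tau, nu0, lam0)
    log_odds = np.log(h / (1 - h)) + (t1 - t0) + (l1 - l0)
    return 1.0 / (1.0 + np.exp(-log_odds))
```

Binding the data and hyperparameters, e.g. with functools.partial(cond_pip, X=X, Y=Y, h=h, tau=tau, nu0=nu0, lam0=lam0), yields a two-argument routine of the kind assumed in the earlier VC-wTGS sketch.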

Remark 11. As we can see in Appendix E, for the linear regression model in Section 2.2, if pre-computing $X^{T}X$ is not possible, the computational complexity for a conditional PIP is $O(N|\gamma|^{2}+|\gamma|^{3}+P|\gamma|^{2})$. Otherwise, if pre-computing $X^{T}X$ is possible, the computational complexity for a conditional PIP is $O(|\gamma|^{3}+P|\gamma|^{2})$.

Here, $|\gamma|\approx hP$. Hence, the average computational complexity per MCMC iteration of our algorithm is $O(S(N|\gamma|^{2}+|\gamma|^{3}+P|\gamma|^{2}))$ or $O(S(|\gamma|^{3}+P|\gamma|^{2}))$, depending on whether pre-computing $X^{T}X$ is possible. To reduce the computational complexity, we can reduce $S$; equivalently, we are mainly interested in the case where $P/S$ is large. This reduction in computational complexity is more meaningful if $|\gamma|\approx hP\ll P$, i.e., in sparse linear regression regimes. However, the variance of the associated Rao-Blackwellized estimator increases as $S$ becomes small. Hence, there is a trade-off between the computational complexity per MCMC iteration and the variance of the Rao-Blackwellized estimator. The most interesting fact is that the newly-designed Rao-Blackwellized estimator converges to the PIPs for any value of $S$. In practice, the choice of $S$ depends on the application and the availability of computational resources. We can choose $S$ very small (e.g., $S=2$) to obtain a low-complexity estimator with a low convergence rate, or $S\approx P$ for a high-complexity estimator with a high convergence rate. Furthermore, both our algorithm and Jankowiak's degenerate to wTGS (Zanella & Roberts, 2019) at $S\approx P$.

4 Experiments

In this section, we show by simulation that the PIP estimator converges as $T\to\infty$. In addition, we compare the variance of the associated Rao-Blackwellized estimators for VC-wTGS and subset wTGS on simulated and real datasets. To compute $p(\gamma_{i}|\gamma_{-i},Y)$, we use the same trick as (Zanella & Roberts, 2019, Appendix B.1) for the new setting; see our derivation of this posterior distribution in Appendix E. As in (Jankowiak, 2023), in ALG 1 and ALG 2 we choose

$$\eta(\gamma_{-i})=\mathbb{P}(\gamma_{i}=1|\gamma_{-i},\mathcal{D}). \tag{54}$$

4.1 Simulated Datasets

First, we perform a simulated experiment. Let $X\in\mathbb{R}^{N\times P}$ be a realization of a random multivariate Gaussian matrix. We consider the case $N=100$ and $P=200$, and we run $T=20000$ iterations. Fig. 1 shows the number of conditional PIP computations per MCMC iteration over the $T$ iterations. As we can see, our algorithm (Algorithm 2) has variable complexity: the number of conditional PIP computations per MCMC iteration is a random variable $Y$ taking values in $\{0,P\}$ with $\mathbb{P}(Y=P)=S/P$. For Jankowiak's algorithm, the number of conditional PIP computations per MCMC iteration is always fixed and equal to $S$.

Fig. 2 shows that the Rao-Blackwellized estimator in (15) converges to the value of the PIP as $T\to\infty$ for different values of $S$. Since the number of PIPs, $P$, is very large, we only run simulations for PIP(0) and PIP(1); their behavior is representative of the other PIPs. Since VC-wTGS converges very fast once $T$ is big enough, the variance of VC-wTGS is very small in the long term. In Fig. 3, we plot the estimators of VC-wTGS, subset wTGS, and wTGS for estimating PIP(0). It can be seen that our estimator converges to the wTGS estimator faster than the subset wTGS estimator does. This also means that the variance of VC-wTGS is smaller than the variance of subset wTGS for the same sample complexity $S$.


Figure 1: Computational Complexity Evolution


Figure 2: VC-wTGS Rao-Blackwellized Estimators (ALG 2)


Figure 3: Convergence of Rao-Blackwellized Estimators

4.2 Real Datasets

In this simulation, we run ALG 2 on the MNIST dataset.

As in Fig. 1, Fig. 4 shows the number of conditional PIP computations per MCMC iteration over the $T$ iterations.

It shows that our algorithm has a variable computational complexity per MCMC iteration, which is different from Jankowiak's algorithm. Fig. 5 plots PIP(0) and PIP(1) and the estimated variances of the Rao-Blackwellized estimator in (15) at different values of $S$. Here, PIP(0) and PIP(1) are defined in (9); they are the posterior inclusion probabilities that the components $\beta_{0}$ and $\beta_{1}$ affect the output. These plots show a trade-off between the computational complexity and the estimated variance for estimating PIP(0) and PIP(1).

Figure 4: Computational Complexity Evolution

The expected number of PIP computations is only $ST$ in ALG 2 but $TP$ in wTGS if we run $T$ MCMC iterations.

However, we suffer an increase in variance. By Theorem 10, the variance is $O\big(\big(\frac{P}{S}\big)^{2}\frac{\log T}{T}\big)$ for a given dataset, i.e., it increases by a factor of at most $(P/S)^{2}$. For many applications, we do not need to estimate the PIPs exactly, hence VC-wTGS can be used to reduce computational complexity, especially when $P$ is very large (millions of covariates). Fig. 6 shows that VC-wTGS outperforms subset wTGS (Jankowiak, 2023) at high values of $P/S$, which shows that our newly-designed Rao-Blackwellized estimator converges to the PIP faster than Jankowiak's estimator at high $P/S$.

5 Conclusion

This paper proposed a variable-complexity wTGS for Bayesian Variable Selection which can reduce the per-iteration computational complexity of the well-known wTGS. Experiments show that our Rao-Blackwellized estimator can have a smaller variance than its counterpart associated with the subset wTGS at high P/S.


Figure 5: The variance of VC-wTGS Rao-Blackwellized Estimators (ALG 2)


Figure 6: Comparing the variance between subset wTGS and VC-wTGS at S = 2.

References

Christophe Andrieu, Nando de Freitas, A. Doucet, and Michael I. Jordan. An introduction to MCMC for machine learning. Machine Learning, 50:5–43, 2004.

C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.

William M. Bolstad. Understanding Computational Bayesian Statistics. John Wiley, 2010.

L. Breiman. The strong law of large numbers for a class of Markov chains. Annals of Mathematical Statistics, 31:801–803, 1960.

Mónica F. Bugallo, Shanshan Xu, and Petar M. Djurić. Performance comparison of EKF and particle filtering methods for maneuvering targets. Digit. Signal Process., 17:774–786, 2007.

R. Combes and M. Touati. Computationally efficient estimation of the spectral gap of a Markov chain. Proceedings of the ACM on Measurement and Analysis of Computing Systems, 3:1–21, 2019.

Persi Diaconis and Laurent Saloff-Coste. Comparison theorems for reversible Markov chains. Annals of Applied Probability, 3:696–730, 1993.

William J. Fitzgerald. Markov chain Monte Carlo methods with applications to signal processing. Signal Process., 81:3–18, 2001.

Ankur Gupta and James B. Rawlings. Comparison of parameter estimation methods in stochastic chemical kinetic models: Examples in systems biology. AIChE Journal, 60(4):1253–1268, 2014.

Tim Hesterberg. Monte Carlo strategies in scientific computing. Technometrics, 44:403–404, 2002.

Martin Jankowiak. Bayesian variable selection in a million dimensions. In International Conference on Artificial Intelligence and Statistics, 2023.

Muhammad F. Kasim, A. F. A. Bott, Petros Tzeferacos, Donald Q. Lamb, Gianluca Gregori, and Sam M. Vinko. Retrieving fields from proton radiography without source profiles. Physical Review E, 100(3):033208, 2019.

Faming Liang, Chuanhai Liu, and Raymond J. Carroll. Advanced Markov Chain Monte Carlo Methods: Learning from Past Samples. John Wiley, 2010.

Daniel Paulin. Concentration inequalities for Markov chains by Marton couplings and spectral methods. Electronic Journal of Probability, 20(79):1–32, 2015.

Shravas Rao. A Hoeffding inequality for Markov chains. Electronic Communications in Probability, 2018.

Jesse Read, Luca Martino, and David Luengo. Efficient Monte Carlo methods for multi-dimensional learning with classifier chains. Pattern Recognit., 47:1535–1546, 2012.

Christian P. Robert and George Casella. Monte Carlo statistical methods. Technometrics, 47:243–243, 2005.

Lan V. Truong. On linear model with Markov signal priors. In AISTATS, 2022.

Pekka Tuominen and Richard L. Tweedie. Markov chains with continuous components. Proceedings of the London Mathematical Society, s3-38(1):89–114, 1979.

Adrian G. Wills and Thomas Bo Schön. Sequential Monte Carlo: A unified review. Annu. Rev. Control. Robotics Auton. Syst., 6:159–182, 2023.

G. Wolfer and A. Kontorovich. Estimating the mixing time of ergodic Markov chains. In 32nd Annual Conference on Learning Theory, 2019.

Giacomo Zanella and Gareth O. Roberts. Scalable importance tempering and Bayesian variable selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 81, 2019.

A Appendix

B Proof Of Lemma 3

The transition kernel for the sequence {γ (t)} can be written as

$$K^{*}(\gamma\to\gamma^{\prime})=\frac{S}{P}\sum_{j=1}^{P}f(j|\gamma)\,\delta(\gamma^{\prime}-\mathrm{flip}(\gamma|j))+\bigg(1-\frac{S}{P}\bigg)\delta(\gamma^{\prime}-\gamma). \tag{55}$$

This implies that for any pair (γ, γ′) such that γ ′ = flip(γ|i) for some i ∈ [P], we have

$$K^{*}(\gamma\to\gamma^{\prime})=\frac{S}{P}\sum_{j=1}^{P}f(j|\gamma)\,\delta(\gamma^{\prime}-\mathrm{flip}(\gamma|j)) \tag{56}$$
$$=\frac{S}{P}f(i|\gamma). \tag{57}$$

Now, by ALG 2, we also have

$$f(i|\gamma)=\phi^{-1}(\gamma)\frac{\frac{1}{2}\eta(\gamma_{-i})}{p(\gamma_{i}|\gamma_{-i},\mathcal{D})} \tag{58}$$

and

$$f(i|\gamma^{\prime})=\phi^{-1}(\gamma^{\prime})\frac{\frac{1}{2}\eta(\gamma_{-i}^{\prime})}{p(\gamma_{i}^{\prime}|\gamma_{-i}^{\prime},\mathcal{D})}. \tag{59}$$

From (58) and (59) and $\gamma_{-i}=\gamma_{-i}^{\prime}$, we obtain

$$\frac{K^{*}(\gamma\to\gamma^{\prime})}{K^{*}(\gamma^{\prime}\to\gamma)}=\frac{\frac{S}{P}f(i|\gamma)}{\frac{S}{P}f(i|\gamma^{\prime})} \tag{60}$$
$$=\frac{f(i|\gamma)}{f(i|\gamma^{\prime})} \tag{61}$$
$$=\frac{\phi(\gamma^{\prime})p(\gamma^{\prime}|\mathcal{D})}{\phi(\gamma)p(\gamma|\mathcal{D})} \tag{62}$$
$$=\frac{f(\gamma^{\prime})}{f(\gamma)}. \tag{63}$$

In addition, we also have $K^{*}(\gamma\to\gamma^{\prime})=K^{*}(\gamma^{\prime}\to\gamma)=0$ if $\gamma^{\prime}\neq\gamma$ and $\gamma^{\prime}\neq\mathrm{flip}(\gamma|i)$ for any $i\in[P]$. Furthermore, $K^{*}(\gamma\to\gamma^{\prime})=K^{*}(\gamma^{\prime}\to\gamma)=1-\frac{S}{P}$ if $\gamma=\gamma^{\prime}$.

By combining all these cases, it holds that

$$f(\gamma)K^{*}(\gamma\to\gamma^{\prime})=f(\gamma^{\prime})K^{*}(\gamma^{\prime}\to\gamma) \tag{64}$$

for all γ ′, γ.

This means that $\{\gamma^{(t)}\}_{t=1}^{T}$ forms a reversible Markov chain with stationary distribution $f(\gamma)/Z_{f}$, where

$$Z_{f}=\sum_{\gamma}f(\gamma). \tag{65}$$

Since $\{Q^{(t)}\}_{t=1}^{T}$ is an i.i.d. Bernoulli sequence with $q(1)=S/P$ and is independent of $\{\gamma^{(t)}\}_{t=1}^{T}$, $\{\gamma^{(t)},Q^{(t)}\}_{t=1}^{T}$ forms a Markov chain with transition kernel satisfying

$$K((\gamma,Q)\to(\gamma^{\prime},Q^{\prime}))=q(Q^{\prime})K^{*}(\gamma\to\gamma^{\prime}). \tag{66}$$

It follows from (66) that

$$q(Q)f(\gamma)/Z_{f}\,K((\gamma,Q)\to(\gamma^{\prime},Q^{\prime}))=[K^{*}(\gamma\to\gamma^{\prime})f(\gamma)/Z_{f}]\,q(Q)q(Q^{\prime}) \tag{67}$$

for any pair $(\gamma,Q)$ and $(\gamma^{\prime},Q^{\prime})$.

Finally, from (64) and (67), we have

$$q(Q)f(\gamma)/Z_{f}\,K((\gamma,Q)\to(\gamma^{\prime},Q^{\prime}))=q(Q^{\prime})f(\gamma^{\prime})/Z_{f}\,K((\gamma^{\prime},Q^{\prime})\to(\gamma,Q)). \tag{68}$$

This means that $\{\gamma^{(t)},Q^{(t)}\}_{t=1}^{T}$ forms a reversible Markov chain with stationary distribution $q(Q)f(\gamma)/Z_{f}$.

C Proof Of Lemma 1

Observe that with probability at least 1 − α, we have

$$(1-\varepsilon)\mathbb{E}[U]\leq U\leq(1+\varepsilon)\mathbb{E}[U], \tag{69}$$
$$(1-\varepsilon)\mathbb{E}[V]\leq V\leq(1+\varepsilon)\mathbb{E}[V]. \tag{70}$$

Hence, we have

$$\left(\frac{1-\varepsilon}{1+\varepsilon}\right)\frac{\mathbb{E}[U]}{\mathbb{E}[V]}\leq\frac{U}{V}\leq\left(\frac{1+\varepsilon}{1-\varepsilon}\right)\frac{\mathbb{E}[U]}{\mathbb{E}[V]}. \tag{71}$$

From (71), with probability at least $1-\alpha$, we have

$$\left|\frac{U}{V}-\frac{\mathbb{E}[U]}{\mathbb{E}[V]}\right|\leq\frac{2\varepsilon}{1-\varepsilon}\left(\frac{\mathbb{E}[U]}{\mathbb{E}[V]}\right). \tag{72}$$

It follows from (72) that

$$\mathbb{E}\left[\left|\frac{U}{V}-\frac{\mathbb{E}[U]}{\mathbb{E}[V]}\right|^{2}\right]=\mathbb{E}\left[\left|\frac{U}{V}-\frac{\mathbb{E}[U]}{\mathbb{E}[V]}\right|^{2}\,\bigg|\,D\right]\mathbb{P}(D)+\mathbb{E}\left[\left|\frac{U}{V}-\frac{\mathbb{E}[U]}{\mathbb{E}[V]}\right|^{2}\,\bigg|\,D^{c}\right]\mathbb{P}(D^{c}) \tag{73}$$
$$\leq\frac{4\varepsilon^{2}}{(1-\varepsilon)^{2}}\left(\frac{\mathbb{E}[U]}{\mathbb{E}[V]}\right)^{2}+\left[\max\left(M,\frac{\mathbb{E}[U]}{\mathbb{E}[V]}\right)\right]^{2}\alpha. \tag{74}$$

D Proof Of Lemma 5

First, by definition of ϕˆ(γ) in (36) we have

$$\rho^{(t)}=\frac{\hat{\phi}(\gamma^{(t)})Q^{(t)}}{\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})Q^{(t)}}. \tag{75}$$

In addition, observe that

$$0\leq\hat{\phi}(\gamma)\leq1. \tag{76}$$

Now, let $g:\{0,1\}^{P}\to\mathbb{R}_{+}$ be such that $g(\gamma)\leq1$ for all $\gamma$. Then, by applying Lemma 2 and a change of measure, with probability at least $1-2\frac{d\nu}{d\pi}\exp\big(-\frac{\zeta^{2}T(1-\lambda)}{64e}\big)$, we have

$$\frac{1}{T}\bigg|\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})g(\gamma^{(t)})Q^{(t)}-\mathbb{E}_{\pi}\bigg[\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})g(\gamma^{(t)})Q^{(t)}\bigg]\bigg|\leq\zeta \tag{77}$$

for any ζ > 0.

Similarly, by using Lemma 2, with probability at least $1-2\frac{d\nu}{d\pi}\exp\big(-\frac{\zeta^{2}T(1-\lambda)}{64e}\big)$, it holds that

$$\frac{1}{T}\left|\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})Q^{(t)}-\mathbb{E}_{\pi}\left[\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})Q^{(t)}\right]\right|\leq\zeta. \tag{78}$$

By using the union bound, with probability at least $1-4\frac{d\nu}{d\pi}\exp\big(-\frac{\zeta^{2}T(1-\lambda)}{64e}\big)$, it holds that

$$\frac{1}{T}\bigg|\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})g(\gamma^{(t)})Q^{(t)}-\mathbb{E}_{\pi}\bigg[\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})g(\gamma^{(t)})Q^{(t)}\bigg]\bigg|\leq\zeta, \tag{79}$$
$$\frac{1}{T}\bigg|\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})Q^{(t)}-\mathbb{E}_{\pi}\bigg[\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})Q^{(t)}\bigg]\bigg|\leq\zeta. \tag{80}$$

Now, by setting $\zeta=\zeta_{0}:=\frac{\varepsilon}{T}\min\big\{\mathbb{E}_{\pi}\big[\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})g(\gamma^{(t)})Q^{(t)}\big],\ \mathbb{E}_{\pi}\big[\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})Q^{(t)}\big]\big\}$ for some $\varepsilon>0$ (to be chosen later), with probability at least $1-4\frac{d\nu}{d\pi}\exp\big(-\frac{\zeta_{0}^{2}T(1-\lambda)}{64e}\big)$, it holds that

$$\frac{1}{T}\bigg|\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})g(\gamma^{(t)})Q^{(t)}-\mathbb{E}_{\pi}\bigg[\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})g(\gamma^{(t)})Q^{(t)}\bigg]\bigg|\leq\frac{\varepsilon}{T}\,\mathbb{E}_{\pi}\bigg[\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})g(\gamma^{(t)})Q^{(t)}\bigg], \tag{81}$$
$$\frac{1}{T}\bigg|\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})Q^{(t)}-\mathbb{E}_{\pi}\bigg[\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})Q^{(t)}\bigg]\bigg|\leq\frac{\varepsilon}{T}\,\mathbb{E}_{\pi}\bigg[\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})Q^{(t)}\bigg]. \tag{82}$$

Furthermore, by setting

$$U:=\frac{1}{T}\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})g(\gamma^{(t)})Q^{(t)}, \tag{83}$$
$$V:=\frac{1}{T}\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})Q^{(t)}, \tag{84}$$

we have

$$\frac{U}{V}=\frac{\sum_{t=1}^{T}\phi^{-1}(\gamma^{(t)})g(\gamma^{(t)})Q^{(t)}}{\sum_{t=1}^{T}\phi^{-1}(\gamma^{(t)})Q^{(t)}} \tag{85}$$
$$=\sum_{t=1}^{T}\rho^{(t)}g(\gamma^{(t)}) \tag{86}$$

and

$$M:=\sup(U/V)\leq1, \tag{87}$$

since $\sum_{t=1}^{T}\rho^{(t)}=1$ and $g(\gamma^{(t)})\leq1$ for all $\gamma^{(t)}$.

From (80)-(87), by Lemma 1, we have

$$\mathbb{E}\bigg[\bigg|\sum_{t=1}^{T}\rho^{(t)}g(\gamma^{(t)})-\frac{\mathbb{E}_{\pi}[U]}{\mathbb{E}_{\pi}[V]}\bigg|^{2}\bigg]\leq\frac{4\varepsilon^{2}}{(1-\varepsilon)^{2}}\bigg(\frac{\mathbb{E}_{\pi}[U]}{\mathbb{E}_{\pi}[V]}\bigg)^{2}+\bigg[\max\bigg(1,\frac{\mathbb{E}_{\pi}[U]}{\mathbb{E}_{\pi}[V]}\bigg)\bigg]^{2}\alpha, \tag{88}$$

where $\alpha:=4\frac{d\nu}{d\pi}\exp\Big(-\frac{\varepsilon^{2}T(1-\lambda_{\gamma,Q})\min\{\mathbb{E}_{\pi}[U],\mathbb{E}_{\pi}[V]\}^{2}}{64e}\Big)$ and $\lambda_{\gamma,Q}$ denotes the quantity in (5) for the reversible Markov chain $\{\gamma^{(t)},Q^{(t)}\}$. Now, by setting

$$\varepsilon=\varepsilon_{0}=\frac{1}{\min\{\mathbb{E}_{\pi}[U],\mathbb{E}_{\pi}[V]\}}\sqrt{\frac{64e\log T}{(1-\lambda_{\gamma,Q})T}}, \tag{89}$$

we have $\alpha=4\frac{d\nu}{d\pi}\frac{1}{T}$. Then, we obtain

$$\mathbb{E}\bigg[\bigg|\sum_{t=1}^{T}\rho^{(t)}g(\gamma^{(t)})-\frac{\mathbb{E}_{\pi}[U]}{\mathbb{E}_{\pi}[V]}\bigg|^{2}\bigg]\leq\frac{4\varepsilon_{0}^{2}}{(1-\varepsilon_{0})^{2}}\bigg(\frac{\mathbb{E}_{\pi}[U]}{\mathbb{E}_{\pi}[V]}\bigg)^{2}+\bigg[\max\bigg(1,\frac{\mathbb{E}_{\pi}[U]}{\mathbb{E}_{\pi}[V]}\bigg)\bigg]^{2}\alpha. \tag{90}$$

Now, observe that

$$\frac{\mathbb{E}_{\pi}[U]}{\mathbb{E}_{\pi}[V]}=\frac{\mathbb{E}_{\pi}\big[g(\gamma)Q\hat{\phi}(\gamma)\big]}{\mathbb{E}_{\pi}\big[\hat{\phi}(\gamma)Q\big]} \tag{91}$$
$$=\frac{\mathbb{E}_{\pi}\big[g(\gamma)Q\phi^{-1}(\gamma)\big]}{\mathbb{E}_{\pi}\big[\phi^{-1}(\gamma)Q\big]}. \tag{92}$$

On the other hand, by Lemma 3, we have $\pi(\gamma,Q)=q(Q)\frac{f(\gamma)}{Z_{f}}$, where $Z_{f}:=\sum_{\gamma}f(\gamma)$ and $f(\gamma)=p(\gamma|\mathcal{D})\phi(\gamma)$.

It follows that

$$\mathbb{E}_{\pi}\left[g(\gamma)Q\phi^{-1}(\gamma)\right]=\mathbb{E}_{q(Q)f(\gamma)/Z_{f}}\left[g(\gamma)Q\phi^{-1}(\gamma)\right] \tag{93}$$
$$=\sum_{\gamma}\sum_{Q}g(\gamma)Q\phi^{-1}(\gamma)\frac{f(\gamma)}{Z_{f}}q(Q) \tag{94}$$
$$=\frac{1}{Z_{f}}\sum_{\gamma}\sum_{Q}g(\gamma)q(Q)Q\,p(\gamma|\mathcal{D}) \tag{95}$$
$$=\frac{1}{Z_{f}}\mathbb{E}_{p(\gamma|\mathcal{D})}\left[g(\gamma)\right]\mathbb{E}_{q}[Q]. \tag{96}$$

Similarly, we have

$$\mathbb{E}_{\pi}\left[\phi^{-1}(\gamma)Q\right]=\mathbb{E}_{q(Q)f(\gamma)/Z_{f}}\left[\phi^{-1}(\gamma)Q\right] \tag{97}$$
$$=\sum_{Q}\sum_{\gamma}\phi^{-1}(\gamma)Q\frac{f(\gamma)}{Z_{f}}q(Q) \tag{98}$$
$$=\frac{1}{Z_{f}}\bigg(\sum_{\gamma}p(\gamma|\mathcal{D})\bigg)\mathbb{E}_{q}[Q]. \tag{99}$$

From (92), (96) and (99), we obtain

$$\mathbb{E}_{\pi}[U]=\mathbb{E}_{p(\gamma|\mathcal{D})}\left[g(\gamma)\right]. \tag{100}$$

For the given problem, by setting g(γ) = p(γi = 1|γ−i, D), from (100), we have

$$\mathbb{E}_{\pi}[U]=\mathrm{PIP}(i). \tag{101}$$

In addition, we have

$$\mathbb{E}_{\pi}[V]=\mathbb{E}_{\pi}\big[\hat{\phi}(\gamma)Q\big] \tag{102}$$
$$=\sum_{\gamma,Q}\hat{\phi}(\gamma)Q\frac{f(\gamma)}{Z_{f}}q(Q) \tag{103}$$
$$=\bigg(\sum_{\gamma}\hat{\phi}(\gamma)\frac{f(\gamma)}{Z_{f}}\bigg)\bigg(\sum_{Q}Qq(Q)\bigg) \tag{104}$$
$$=\mathbb{E}_{\pi}[\hat{\phi}(\gamma)]\,\mathbb{E}_{q}[Q] \tag{105}$$
$$=\frac{S}{P}\mathbb{E}_{\pi}[\hat{\phi}(\gamma)]. \tag{106}$$

Hence, we obtain

$$\min\{\mathbb{E}_{\pi}[U],\mathbb{E}_{\pi}[V]\}=\mathbb{E}_{\pi}[V]\min\left\{1,\frac{\mathbb{E}_{\pi}[U]}{\mathbb{E}_{\pi}[V]}\right\} \tag{107}$$
$$=\mathbb{E}_{\pi}[V]\min\left\{1,\mathrm{PIP}(i)\right\} \tag{108}$$
$$=\mathbb{E}_{\pi}[V]\,\mathrm{PIP}(i) \tag{109}$$
$$=\frac{S}{P}\mathbb{E}_{\pi}[\hat{\phi}(\gamma)]\,\mathrm{PIP}(i). \tag{110}$$

From (90), (101), and (110), we have

$$\mathbb{E}\left[\left|\sum_{t=1}^{T}\rho^{(t)}p(\gamma_{i}^{(t)}=1|\gamma_{-i}^{(t)},\mathcal{D})-\mathrm{PIP}(i)\right|^{2}\right]\leq\frac{4\varepsilon_{0}^{2}}{(1-\varepsilon_{0})^{2}}\,\mathrm{PIP}^{2}(i)+4\frac{d\nu}{d\pi}\frac{1}{T}, \tag{111}$$

and

$$\varepsilon_{0}=\frac{P}{\mathrm{PIP}(i)\,\mathbb{E}_{\pi}[\hat{\phi}(\gamma)]\,S}\sqrt{\frac{64e\log T}{(1-\lambda_{\gamma,Q})T}}. \tag{112}$$

Now, observe that

$$\frac{d\nu}{d\pi}(\gamma,Q)=\frac{p_{\gamma_{1},Q_{1}}(\gamma,Q)}{\pi(\gamma,Q)} \tag{113}$$
$$\leq\frac{1}{\pi(\gamma,Q)} \tag{114}$$
$$=\frac{1}{\pi(\gamma)q(Q)} \tag{115}$$
$$\leq\frac{P}{S}\frac{1}{\min_{\gamma}\pi(\gamma)}. \tag{116}$$

By combining (111) and (116), we have

$$\mathbb{E}\left[\left|\sum_{t=1}^{T}\rho^{(t)}p(\gamma_{i}^{(t)}=1|\gamma_{-i}^{(t)},\mathcal{D})-\mathrm{PIP}(i)\right|^{2}\right]\leq\frac{4\varepsilon_{0}^{2}}{(1-\varepsilon_{0})^{2}}\,\mathrm{PIP}^{2}(i)+\frac{4P}{S}\frac{1}{\min_{\gamma}\pi(\gamma)\,T}. \tag{117}$$

E Derivation Of $p(\gamma_{i}|\mathcal{D},\gamma_{-i})$

Observe that

$$p(\gamma_{i}|\mathcal{D},\gamma_{-i})=\frac{p(\gamma_{i}|\mathcal{D},\gamma_{-i})}{p(1-\gamma_{i}|\mathcal{D},\gamma_{-i})}\bigg(1+\frac{p(\gamma_{i}|\mathcal{D},\gamma_{-i})}{p(1-\gamma_{i}|\mathcal{D},\gamma_{-i})}\bigg)^{-1}. \tag{118}$$

In addition, we have

$$\frac{p(\gamma_{i}=1|\mathcal{D},\gamma_{-i})}{p(\gamma_{i}=0|\mathcal{D},\gamma_{-i})}=\frac{p(\gamma_{i}=1,\mathcal{D}|\gamma_{-i})}{p(\gamma_{i}=0,\mathcal{D}|\gamma_{-i})} \tag{119}$$
$$=\frac{p(\gamma_{i}=1|\gamma_{-i},X)}{p(\gamma_{i}=0|\gamma_{-i},X)}\cdot\frac{p(Y|\gamma_{i}=1,\gamma_{-i},X)}{p(Y|\gamma_{i}=0,\gamma_{-i},X)} \tag{120}$$
$$=\frac{p(\gamma_{i}=1)}{p(\gamma_{i}=0)}\cdot\frac{p(Y|\gamma_{i}=1,\gamma_{-i},X)}{p(Y|\gamma_{i}=0,\gamma_{-i},X)} \tag{121}$$
$$=\frac{h}{1-h}\cdot\frac{p(Y|\gamma_{i}=1,\gamma_{-i},X)}{p(Y|\gamma_{i}=0,\gamma_{-i},X)}. \tag{122}$$

On the other hand, for any tuple $\gamma=(\gamma_{1},\gamma_{2},\cdots,\gamma_{P})$ such that $\gamma_{i}=1$ (so $|\gamma|\geq1$), we have

$$p(Y|\gamma_{i}=1,\gamma_{-i},\beta_{\gamma},\sigma_{\gamma}^{2},X)=\frac{1}{\big(\sigma_{\gamma}\sqrt{2\pi}\big)^{N}}\exp\bigg(-\frac{\|Y-X_{\gamma}\beta_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}}\bigg). \tag{123}$$

It follows that

$$p(Y|\gamma_{i}=1,\gamma_{-i},X)=\int_{\beta_{\gamma}}\int_{\sigma_{\gamma}^{2}=0}^{\infty}\frac{1}{\big(\sigma_{\gamma}\sqrt{2\pi}\big)^{N}}\exp\bigg(-\frac{\|Y-X_{\gamma}\beta_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}}\bigg)p(\beta_{\gamma}|\gamma_{i}=1,\gamma_{-i})\,p(\sigma_{\gamma}^{2}|\gamma_{i}=1,\gamma_{-i})\,d\beta_{\gamma}\,d\sigma_{\gamma}^{2} \tag{124}$$
$$=\int_{\sigma_{\gamma}^{2}=0}^{\infty}\mathrm{InvGamma}\Big(\tfrac{1}{2}\nu_{0},\tfrac{1}{2}\nu_{0}\lambda_{0}\Big)\int_{\beta_{\gamma}}\frac{1}{\big(\sigma_{\gamma}\sqrt{2\pi}\big)^{N}}\exp\bigg(-\frac{\|Y-X_{\gamma}\beta_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}}\bigg)\frac{1}{\big(\sigma_{\gamma}\sqrt{2\pi\tau^{-1}}\big)^{|\gamma|}}\exp\bigg(-\frac{\|\beta_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}\tau^{-1}}\bigg)\,d\beta_{\gamma}\,d\sigma_{\gamma}^{2}. \tag{125}$$

Now, observe that

$$\|Y-X_{\gamma}\beta_{\gamma}\|^{2}+\tau\|\beta_{\gamma}\|^{2}=(Y-X_{\gamma}\beta_{\gamma})^{T}(Y-X_{\gamma}\beta_{\gamma})+\tau\beta_{\gamma}^{T}\beta_{\gamma} \tag{126}$$
$$=Y^{T}Y-2Y^{T}X_{\gamma}\beta_{\gamma}+\beta_{\gamma}^{T}X_{\gamma}^{T}X_{\gamma}\beta_{\gamma}+\tau\beta_{\gamma}^{T}\beta_{\gamma} \tag{127}$$
$$=Y^{T}Y-2Y^{T}X_{\gamma}\beta_{\gamma}+\beta_{\gamma}^{T}(X_{\gamma}^{T}X_{\gamma}+\tau I)\beta_{\gamma}. \tag{128}$$

Now, consider the eigenvalue decomposition (EVD) of the positive definite matrix $X_{\gamma}^{T}X_{\gamma}+\tau I$ (note that $\tau>0$):

$$X_{\gamma}^{T}X_{\gamma}+\tau I=U^{T}\Lambda U, \tag{129}$$

where $\Lambda$ is a diagonal matrix consisting of the (positive) eigenvalues of $X_{\gamma}^{T}X_{\gamma}+\tau I$. Let

$$\tilde{\beta}_{\gamma}:=\sqrt{\Lambda}\,U\beta_{\gamma}, \tag{130}$$
$$\tilde{Y}_{\gamma}:=\sqrt{\Lambda^{-1}}\,UX_{\gamma}^{T}Y. \tag{131}$$

Then, we have

$$\|Y-X_{\gamma}\beta_{\gamma}\|^{2}+\tau\|\beta_{\gamma}\|^{2}=Y^{T}Y-2Y^{T}X_{\gamma}\beta_{\gamma}+\beta_{\gamma}^{T}(X_{\gamma}^{T}X_{\gamma}+\tau I)\beta_{\gamma} \tag{132}$$
$$=Y^{T}Y-2Y^{T}X_{\gamma}\sqrt{\Lambda^{-1}}U^{T}\tilde{\beta}_{\gamma}+\tilde{\beta}_{\gamma}^{T}\tilde{\beta}_{\gamma} \tag{133}$$
$$=Y^{T}Y-2\tilde{Y}_{\gamma}^{T}\tilde{\beta}_{\gamma}+\tilde{\beta}_{\gamma}^{T}\tilde{\beta}_{\gamma} \tag{134}$$
$$=\|Y\|^{2}-\|\tilde{Y}_{\gamma}\|^{2}+\tilde{Y}_{\gamma}^{T}\tilde{Y}_{\gamma}-2\tilde{Y}_{\gamma}^{T}\tilde{\beta}_{\gamma}+\tilde{\beta}_{\gamma}^{T}\tilde{\beta}_{\gamma} \tag{135}$$
$$=\|Y\|^{2}-\|\tilde{Y}_{\gamma}\|^{2}+\|\tilde{Y}_{\gamma}-\tilde{\beta}_{\gamma}\|^{2}. \tag{136}$$

Hence, we have

$$d\beta_{\gamma}=\det(U^{T}\Lambda^{-1/2})\,d\tilde{\beta}_{\gamma} \tag{137}$$
$$=\det(X_{\gamma}^{T}X_{\gamma}+\tau I)^{-1/2}\,d\tilde{\beta}_{\gamma}. \tag{138}$$

Hence, we have

$$\int_{\beta_{\gamma}}\frac{1}{\big(\sigma_{\gamma}\sqrt{2\pi}\big)^{N}}\exp\bigg(-\frac{\|Y-X_{\gamma}\beta_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}}\bigg)\frac{1}{\big(\sigma_{\gamma}\sqrt{2\pi\tau^{-1}}\big)^{|\gamma|}}\exp\bigg(-\frac{\|\beta_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}\tau^{-1}}\bigg)\,d\beta_{\gamma} \tag{139}$$
$$=\int_{\tilde{\beta}_{\gamma}}\frac{1}{\big(\sigma_{\gamma}\sqrt{2\pi}\big)^{N}}\exp\bigg(-\frac{\|Y\|^{2}-\|\tilde{Y}_{\gamma}\|^{2}+\|\tilde{Y}_{\gamma}-\tilde{\beta}_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}}\bigg)\frac{1}{\big(\sigma_{\gamma}\sqrt{2\pi\tau^{-1}}\big)^{|\gamma|}}\det(X_{\gamma}^{T}X_{\gamma}+\tau I)^{-1/2}\,d\tilde{\beta}_{\gamma} \tag{140}$$
$$=\frac{1}{\big(\sigma_{\gamma}\sqrt{2\pi}\big)^{N}}\,\tau^{|\gamma|/2}\exp\bigg(-\frac{\|Y\|^{2}-\|\tilde{Y}_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}}\bigg)\det(X_{\gamma}^{T}X_{\gamma}+\tau I)^{-1/2}. \tag{141}$$

By combining (125) and (141), we obtain

$$p(Y|\gamma_{i}=1,\gamma_{-i},X)=\int_{\beta_{\gamma}}\int_{\sigma_{\gamma}^{2}=0}^{\infty}\frac{1}{\big(\sigma_{\gamma}\sqrt{2\pi}\big)^{N}}\exp\bigg(-\frac{\|Y-X_{\gamma}\beta_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}}\bigg)p(\beta_{\gamma}|\gamma_{i}=1,\gamma_{-i})\,p(\sigma_{\gamma}^{2}|\gamma_{i}=1,\gamma_{-i})\,d\beta_{\gamma}\,d\sigma_{\gamma}^{2} \tag{142}$$
$$=\int_{\sigma_{\gamma}^{2}=0}^{\infty}\mathrm{InvGamma}\Big(\tfrac{1}{2}\nu_{0},\tfrac{1}{2}\nu_{0}\lambda_{0}\Big)\frac{1}{\big(\sigma_{\gamma}\sqrt{2\pi}\big)^{N}}\,\tau^{|\gamma|/2}\exp\bigg(-\frac{\|Y\|^{2}-\|\tilde{Y}_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}}\bigg)\det(X_{\gamma}^{T}X_{\gamma}+\tau I)^{-1/2}\,d\sigma_{\gamma}^{2} \tag{143}$$
$$=\det(X_{\gamma}^{T}X_{\gamma}+\tau I)^{-1/2}\,\tau^{|\gamma|/2}(2\pi)^{-N/2}\int_{\sigma_{\gamma}^{2}=0}^{\infty}\mathrm{InvGamma}\Big(\tfrac{1}{2}\nu_{0},\tfrac{1}{2}\nu_{0}\lambda_{0}\Big)(\sigma_{\gamma}^{2})^{-N/2}\exp\bigg(-\frac{\|Y\|^{2}-\|\tilde{Y}_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}}\bigg)\,d\sigma_{\gamma}^{2} \tag{144}$$
$$=\det(X_{\gamma}^{T}X_{\gamma}+\tau I)^{-1/2}\,\tau^{|\gamma|/2}(2\pi)^{-N/2}\int_{\sigma_{\gamma}^{2}=0}^{\infty}\frac{(\tfrac{1}{2}\lambda_{0}\nu_{0})^{\frac{1}{2}\nu_{0}}}{\Gamma(\tfrac{1}{2}\nu_{0})}(1/\sigma_{\gamma}^{2})^{\frac{1}{2}\nu_{0}+1}\exp\big(-\tfrac{1}{2}\nu_{0}\lambda_{0}/\sigma_{\gamma}^{2}\big)(\sigma_{\gamma}^{2})^{-N/2}\exp\bigg(-\frac{\|Y\|^{2}-\|\tilde{Y}_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}}\bigg)\,d\sigma_{\gamma}^{2} \tag{145}$$
$$=\det(X_{\gamma}^{T}X_{\gamma}+\tau I)^{-1/2}\,\tau^{|\gamma|/2}(2\pi)^{-N/2}\frac{(\tfrac{1}{2}\lambda_{0}\nu_{0})^{\frac{1}{2}\nu_{0}}}{\Gamma(\tfrac{1}{2}\nu_{0})}\int_{\sigma_{\gamma}^{2}=0}^{\infty}(1/\sigma_{\gamma}^{2})^{\frac{1}{2}\nu_{0}+1+N/2}\exp\bigg(-\frac{\|Y\|^{2}-\|\tilde{Y}_{\gamma}\|^{2}+\nu_{0}\lambda_{0}}{2\sigma_{\gamma}^{2}}\bigg)\,d\sigma_{\gamma}^{2} \tag{146}$$
$$=\det(X_{\gamma}^{T}X_{\gamma}+\tau I)^{-1/2}\,\tau^{|\gamma|/2}(2\pi)^{-N/2}\frac{(\tfrac{1}{2}\lambda_{0}\nu_{0})^{\frac{1}{2}\nu_{0}}}{\Gamma(\tfrac{1}{2}\nu_{0})}\,\Gamma\Big(\frac{N+\nu_{0}}{2}\Big)\bigg(\frac{\|Y\|^{2}-\|\tilde{Y}_{\gamma}\|^{2}+\nu_{0}\lambda_{0}}{2}\bigg)^{-\frac{N+\nu_{0}}{2}}. \tag{147}$$

Let $\tilde{\gamma}_{1}$ be given by $\gamma_{-i}$ with $\gamma_{i}=1$ and $\tilde{\gamma}_{0}$ by $\gamma_{-i}$ with $\gamma_{i}=0$. It follows that

$$\frac{p(Y|\gamma_{i}=1,\gamma_{-i},X)}{p(Y|\gamma_{i}=0,\gamma_{-i},X)}=\sqrt{\tau}\sqrt{\frac{\det(X_{\tilde{\gamma}_{0}}^{T}X_{\tilde{\gamma}_{0}}+\tau I)}{\det(X_{\tilde{\gamma}_{1}}^{T}X_{\tilde{\gamma}_{1}}+\tau I)}}\left(\frac{\|Y\|^{2}-\|\tilde{Y}_{\tilde{\gamma}_{0}}\|^{2}+\nu_{0}\lambda_{0}}{\|Y\|^{2}-\|\tilde{Y}_{\tilde{\gamma}_{1}}\|^{2}+\nu_{0}\lambda_{0}}\right)^{\frac{N+\nu_{0}}{2}}. \tag{148}$$

On the other hand, we have

$$\|\tilde{Y}_{\gamma}\|^{2}=\tilde{Y}_{\gamma}^{T}\tilde{Y}_{\gamma} \tag{149}$$
$$=Y^{T}X_{\gamma}(X_{\gamma}^{T}X_{\gamma}+\tau I)^{-1}X_{\gamma}^{T}Y. \tag{150}$$

Hence, we finally have

$$\frac{p(Y|\gamma_{i}=1,\gamma_{-i},X)}{p(Y|\gamma_{i}=0,\gamma_{-i},X)}=\sqrt{\frac{\det(X_{\tilde{\gamma}_{0}}^{T}X_{\tilde{\gamma}_{0}}+\tau I)}{\det(X_{\tilde{\gamma}_{1}}^{T}X_{\tilde{\gamma}_{1}}+\tau I)}\Big(\frac{S_{\tilde{\gamma}_{0}}}{S_{\tilde{\gamma}_{1}}}\Big)^{N+\nu_{0}}}, \tag{151}$$

where

$$S_{\gamma}:=Y^{T}Y-Y^{T}X_{\gamma}(X_{\gamma}^{T}X_{\gamma}+\tau I)^{-1}X_{\gamma}^{T}Y+\nu_{0}\lambda_{0}. \tag{152}$$

Based on this, we can estimate

$$p(\gamma_{i}|\mathcal{D},\gamma_{-i})=\frac{p(\gamma_{i}|\mathcal{D},\gamma_{-i})}{p(1-\gamma_{i}|\mathcal{D},\gamma_{-i})}\bigg(1+\frac{p(\gamma_{i}|\mathcal{D},\gamma_{-i})}{p(1-\gamma_{i}|\mathcal{D},\gamma_{-i})}\bigg)^{-1}. \tag{153}$$

Denote the set of included variables in $\tilde{\gamma}_{0}$ as $I=\{j:\tilde{\gamma}_{0,j}=1\}$. Define $F=\big(X_{\tilde{\gamma}_{0}}^{T}X_{\tilde{\gamma}_{0}}+\tau I\big)^{-1}$, $\nu=X^{T}Y$ and $\nu_{\tilde{\gamma}_{0}}=(\nu_{j})_{j\in I}$. Also define $A=X^{T}X$ and $a_{i}=(A_{ji})_{j\in I}$. Then, by using the same arguments as (Zanella & Roberts, 2019, Appendix B1), we can show that

$$S(\tilde{\gamma}_{1})=S(\tilde{\gamma}_{0})-d_{i}\big(\nu_{\tilde{\gamma}_{0}}^{T}Fa_{i}-\nu_{i}\big)^{2}, \tag{154}$$

where $d_{i}=(A_{ii}+\tau-a_{i}^{T}Fa_{i})^{-1}$. In addition, we can compute $a_{i}^{T}Fa_{i}$ by using the Cholesky decomposition $F=LL^{T}$ and

$$a_{i}^{T}Fa_{i}=\|a_{i}^{T}L\|^{2} \tag{155}$$
$$=\sum_{j\in I}(BL)_{ij}^{2}, \tag{156}$$

where $B$ is the $P\times|\gamma|$ matrix made of the columns of $A$ corresponding to the variables included in $\gamma$.

In addition, we have

$$X_{\tilde{\gamma}_{1}}^{T}X_{\tilde{\gamma}_{1}}+\tau I=\begin{pmatrix}X_{\tilde{\gamma}_{0}}^{T}X_{\tilde{\gamma}_{0}}+\tau I&a_{i}\\ a_{i}^{T}&A_{ii}+\tau\end{pmatrix}. \tag{157}$$

Hence, by using Schur's formula for the determinant of a block matrix, it is easy to see that

$$\frac{\det(X_{\tilde{\gamma}_{0}}^{T}X_{\tilde{\gamma}_{0}}+\tau I)}{\det(X_{\tilde{\gamma}_{1}}^{T}X_{\tilde{\gamma}_{1}}+\tau I)}=d_{i}. \tag{158}$$

Using this algorithm, if pre-computing $X^{T}X$ is not possible, the computational complexity per conditional PIP is $O(N|\gamma|^{2}+|\gamma|^{3}+P|\gamma|^{2})$. Otherwise, if pre-computing $X^{T}X$ is possible, the computational complexity per conditional PIP is $O(|\gamma|^{3}+P|\gamma|^{2})$.
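As a sanity check of (154), the following NumPy sketch (our own toy example with illustrative dimensions) compares the rank-one update with a direct evaluation of (152):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, tau, nu0, lam0 = 50, 10, 1.0, 2.0, 1.0
X = rng.standard_normal((N, P))
Y = rng.standard_normal(N)
A = X.T @ X
nu = X.T @ Y

def S_of(cols):
    """S_gamma from (152) for the covariates indexed by cols."""
    Xg = X[:, cols]
    M = Xg.T @ Xg + tau * np.eye(len(cols))
    return Y @ Y - nu[cols] @ np.linalg.solve(M, nu[cols]) + nu0 * lam0

I = [0, 2, 5]            # variables included in gamma_tilde_0
i = 7                    # candidate variable to add (gamma_tilde_1 includes I and i)
F = np.linalg.inv(X[:, I].T @ X[:, I] + tau * np.eye(len(I)))
a_i = A[I, i]            # a_i = (A_{ji})_{j in I}
d_i = 1.0 / (A[i, i] + tau - a_i @ F @ a_i)
S1_update = S_of(I) - d_i * (nu[I] @ F @ a_i - nu[i]) ** 2   # rank-one update (154)
S1_direct = S_of(I + [i])                                    # direct evaluation of (152)
print(np.isclose(S1_update, S1_direct))   # True
```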

F Proof Of Lemma 9

From Lemma 8 and the fact that {γ (t), Q(t)} forms a reversible Markov chain with transition kernel K((γ, Q) → (γ ′, Q′)) = K∗(γ → γ ′)q(Q′), we have

$$1-\lambda_{\gamma,Q}=\inf_{g(\gamma,Q):\,\mathbb{E}_{\pi}[g]=0,\,\mathbb{E}_{\pi}[g^{2}]=1}\langle g,g\rangle_{\pi}-\langle Kg,g\rangle_{\pi} \tag{159}$$
$$=1-\sup_{g(\gamma,Q):\,\mathbb{E}_{\pi}[g]=0,\,\mathbb{E}_{\pi}[g^{2}]=1}\langle Kg,g\rangle_{\pi} \tag{160}$$
$$=1-\sup_{g}\sum_{\gamma,Q}Kg(\gamma,Q)g(\gamma,Q)\pi(\gamma,Q) \tag{161}$$
$$=1-\sup_{g}\sum_{\gamma,Q}\sum_{\gamma^{\prime},Q^{\prime}}K((\gamma,Q)\to(\gamma^{\prime},Q^{\prime}))g(\gamma^{\prime},Q^{\prime})g(\gamma,Q)\pi(\gamma,Q) \tag{162}$$
$$=1-\frac{S}{P}\sup_{g}\sum_{\gamma,Q}\sum_{\gamma^{\prime},Q^{\prime}}K^{*}(\gamma\to\gamma^{\prime})q(Q^{\prime})g(\gamma^{\prime},Q^{\prime})g(\gamma,Q)\pi(\gamma,Q) \tag{163}$$
$$=1-\frac{S}{P}\sup_{g}\sum_{\gamma,Q}\sum_{\gamma^{\prime},Q^{\prime}}K^{*}(\gamma\to\gamma^{\prime})\frac{f(\gamma)}{Z_{f}}q(Q)g(\gamma^{\prime},Q^{\prime})g(\gamma,Q)q(Q^{\prime}) \tag{164}$$
$$=1-\frac{S}{P}\sup_{g}\sum_{\gamma,\gamma^{\prime}}K^{*}(\gamma\to\gamma^{\prime})\frac{f(\gamma)}{Z_{f}}\sum_{Q,Q^{\prime}}g(\gamma^{\prime},Q^{\prime})g(\gamma,Q)q(Q)q(Q^{\prime}) \tag{165}$$
$$=1-\frac{S}{P}\sup_{g}\sum_{\gamma,\gamma^{\prime}}K^{*}(\gamma\to\gamma^{\prime})\pi(\gamma)\sum_{Q}g(\gamma,Q)q(Q)\sum_{Q^{\prime}}g(\gamma^{\prime},Q^{\prime})q(Q^{\prime}) \tag{166}$$
$$=1-\frac{S}{P}\sup_{g}\sum_{\gamma,\gamma^{\prime}}K^{*}(\gamma\to\gamma^{\prime})\pi(\gamma)h(\gamma)h(\gamma^{\prime}), \tag{167}$$

where the suprema in (161)-(167) are over $g(\gamma,Q)$ with $\mathbb{E}_{\pi}[g]=0$ and $\mathbb{E}_{\pi}[g^{2}]=1$, and where

$$\pi(\gamma)=\frac{f(\gamma)}{Z_{f}}, \tag{168}$$
$$Z_{f}=\sum_{\gamma}f(\gamma), \tag{169}$$
$$h(\gamma):=\sum_{Q}g(\gamma,Q)q(Q). \tag{170}$$

Observe that

$$\mathbb{E}_{\pi}[h(\gamma)]=\sum_{\gamma}h(\gamma)\pi(\gamma) \tag{171}$$
$$=\sum_{\gamma}\sum_{Q}g(\gamma,Q)q(Q)\pi(\gamma) \tag{172}$$
$$=\sum_{\gamma,Q}g(\gamma,Q)\pi(\gamma,Q) \tag{173}$$
$$=\mathbb{E}_{\pi}[g(\gamma,Q)] \tag{174}$$
$$=0. \tag{175}$$

On the other hand, we also have

$$\mathbb{E}_{\pi}\big[h^{2}(\gamma)\big]=\sum_{\gamma}\bigg(\sum_{Q}g(\gamma,Q)q(Q)\bigg)^{2}\pi(\gamma) \tag{176}$$
$$\leq\sum_{\gamma}\bigg(\sum_{Q}g(\gamma,Q)^{2}q(Q)\bigg)\pi(\gamma) \tag{177}$$
$$=\sum_{\gamma,Q}g(\gamma,Q)^{2}\pi(\gamma,Q) \tag{178}$$
$$=\mathbb{E}_{\pi}\big[g(\gamma,Q)^{2}\big] \tag{179}$$
$$=1, \tag{180}$$

where (177) follows from the convexity of the function $x^{2}$ on $[0,\infty)$.

From (175), (180), and (167), we obtain

$$1-\lambda_{\gamma,Q}\geq1-\frac{S}{P}\sup_{h(\gamma):\,\mathbb{E}_{\pi}[h]=0,\,\mathbb{E}_{\pi}[h^{2}]\leq1}\sum_{\gamma,\gamma^{\prime}}K^{*}(\gamma\to\gamma^{\prime})\pi(\gamma)h(\gamma)h(\gamma^{\prime}). \tag{181}$$

Now, note that $\mathbb{E}_{\pi}[h]=0$ is equivalent to $h\perp_{\pi}1$. Let $|\Omega|=2^{P+1}:=n$ and let $h_{1},h_{2},\cdots,h_{n}$ be eigenfunctions of $K^{*}$ corresponding to the decreasingly ordered eigenvalues $\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{n}$; they are orthogonal since $K^{*}$ is self-adjoint. Set $h_{1}=1$. Since $\|h\|_{2,\pi}=1$ and $h\perp_{\pi}1$, we have $h=\sum_{j=2}^{n}a_{j}h_{j}$, because $h$ is perpendicular to $h_{1}$ and so can be represented using only the remaining eigenfunctions. By taking the $\ell_{2}$-norm on both sides, we have $\sum_{j=2}^{n}a_{j}^{2}\leq1$, since $\langle h_{i},h_{j}\rangle_{\pi}=0$ for $i\neq j$ and $\langle h_{i},h_{i}\rangle_{\pi}=\|h_{i}\|_{2,\pi}^{2}=1$. Thus,

$$\sup_{h:\,\mathbb{E}_{\pi}[h]=0,\,\mathbb{E}_{\pi}[h^{2}]\leq1}\sum_{\gamma,\gamma^{\prime}}K^{*}(\gamma\to\gamma^{\prime})\pi(\gamma)h(\gamma)h(\gamma^{\prime})\leq\max_{a_{2},a_{3},\cdots,a_{n}:\,\sum_{j=2}^{n}a_{j}^{2}\leq1}\sum_{j=2}^{n}a_{j}^{2}\lambda_{j} \tag{182}$$
$$\leq\lambda_{2}\sum_{j=2}^{n}a_{j}^{2} \tag{183}$$
$$\leq\lambda_{2}, \tag{184}$$


where $\sum_{j=2}^{n}a_{j}^{2}\leq1$ and $\lambda_{j}\in\mathrm{spec}(P)$ with $\lambda_{2}\geq\lambda_{3}\geq\cdots\geq\lambda_{n}$. Hence, from (184), we obtain

$$1-\lambda_{\gamma,Q}\geq1-\frac{S}{P}\lambda_{2} \tag{185}$$
$$=\frac{S}{P}(1-\lambda_{P})+1-\frac{S}{P} \tag{186}$$
$$\geq1-\frac{S}{P}. \tag{187}$$