# Variable Complexity Weighted-Tempered Gibbs Samplers For Bayesian Variable Selection
Anonymous authors Paper under double-blind review
## Abstract
A subset weighted-tempered Gibbs sampler (subset-wTGS) has recently been introduced by Jankowiak to reduce the computational complexity per MCMC iteration in high-dimensional applications where the exact calculation of the posterior inclusion probabilities (PIPs) is not essential. However, the Rao-Blackwellized estimator associated with this sampler has a very high variance when the ratio between the signal dimension, $P$, and the number of conditional PIP estimations is large. In this paper, we design a new subset-wTGS where the expected number of computations of conditional PIPs per MCMC iteration can be much smaller than $P$. Different from the subset-wTGS and wTGS, our sampler has a variable complexity per MCMC iteration. We provide an upper bound on the variance of an associated Rao-Blackwellized estimator for this sampler at a finite number of iterations, $T$, and show that the variance is $O\big(\big(\frac{P}{S}\big)^2 \frac{\log T}{T}\big)$ for any given dataset, where $S$ is the expected number of conditional PIP computations per MCMC iteration.
## 1 Introduction
Markov chain Monte Carlo (MCMC) methods comprise a class of algorithms for sampling from a probability distribution. MCMC methods are primarily used for calculating numerical approximations of multi-dimensional integrals, for example in Bayesian statistics, computational physics (Kasim et al., 2019), computational biology (Gupta & Rawlings, 2014), and linear models (Truong, 2022). Monte Carlo algorithms have been very popular over the last decade (Hesterberg, 2002; Robert & Casella, 2005). Many practical problems in statistical signal processing, machine learning, and statistics demand fast and accurate procedures for drawing samples from probability distributions that exhibit arbitrary, non-standard forms (Andrieu et al., 2004; Fitzgerald, 2001; Read et al., 2012). Among the most popular Monte Carlo methods are the family of Markov chain Monte Carlo (MCMC) algorithms (Andrieu et al., 2004; Robert & Casella, 2005) and particle filters (Bugallo et al., 2007). Particle filters, or sequential Monte Carlo methods, are a set of Monte Carlo algorithms used to find approximate solutions for filtering problems in nonlinear state-space systems, such as signal processing and Bayesian statistical inference (Wills & Schön, 2023). MCMC techniques generate a Markov chain with a pre-established target probability density function as invariant density (Liang et al., 2010).
The Gibbs sampler (GS) is a Markov chain Monte Carlo (MCMC) algorithm for obtaining a sequence of observations from a specified multivariate probability distribution. This sequence can be used to approximate the joint distribution, the marginal distribution of one of the variables, or some subset of the variables. It can also be used to compute the expected value (integral) of one of the variables (Bishop, 2006; Bolstad, 2010).
GS is applicable when the joint distribution is not known explicitly or is difficult to sample from directly, but the conditional distribution of each variable is known and is easy (or at least, easier) to sample from.
The GS algorithm generates an instance from the distribution of each variable in turn, conditional on the current values of the other variables. It can be shown that the sequence of samples constitutes a Markov chain, and the stationary distribution of that Markov chain is just the sought-after joint distribution.
GS is commonly used as a means of statistical inference, especially Bayesian inference. However, pure Markov chain based schemes (i.e., ones which simulate from precisely the right target distribution with no need for subsequent importance sampling correction) have been far more successful. This is because MCMC methods are usually much more scalable to high-dimensional situations, whereas importance sampling weight variances tend to grow (often exponentially) with dimension. (Zanella & Roberts, 2019) proposed a natural way to combine the best of MCMC and importance sampling in a way that is robust in high-dimensional contexts and ameliorates the slow mixing which plagues many Markov chain based schemes. The proposed scheme, called the Tempered Gibbs Sampler (TGS), involves a component-wise updating rule like GS, with improved mixing properties and associated importance weights which remain stable as dimension increases. Through an appropriately designed tempering mechanism, TGS circumvents the main limitations of standard GS, such as the slow mixing introduced by strong posterior correlations. It also avoids the requirement to visit all coordinates sequentially, instead iteratively making state-informed decisions as to which coordinate should be updated next.
TGS has been applied to the Bayesian Variable Selection (BVS) problem, with multiple orders of magnitude improvements observed compared to alternative Monte Carlo schemes (Zanella & Roberts, 2019). Since TGS updates each coordinate with the same frequency, in a BVS context this may be inefficient, as the resulting sampler would spend most iterations updating variables that have low or negligible posterior inclusion probability, especially when the number of covariates, $P$, gets large. A better solution, called weighted Tempered Gibbs Sampling (wTGS) (Zanella & Roberts, 2019), updates more often the components with a larger inclusion probability, thus focusing the computational effort. However, despite the intuitive appeal of this approach to the BVS problem, approximating the resulting posterior distribution can be computationally challenging. A principal reason for this is the astronomical size of the model space whenever there are more than a few dozen covariates. To scale to the high-dimensional regime, (Jankowiak, 2023) has recently introduced an efficient MCMC scheme whose cost per iteration can be significantly reduced compared to wTGS. The main idea is to introduce an auxiliary variable $\mathcal{S} \subset \{1, 2, \cdots, P\}$ that controls which conditional posterior inclusion probabilities (PIPs) are computed in a given MCMC iteration. By choosing the size $S$ of $\mathcal{S}$ to be much less than $P$, we can reduce the computational complexity significantly. However, this scheme has some weaknesses: the Rao-Blackwellized estimator associated with this sampler has a very high variance when $P/S$ is large and the number of MCMC iterations, $T$, is small. In addition, generating the auxiliary random set, which is uniformly distributed over $\binom{P}{S}$ subsets, in the subset wTGS algorithm (Jankowiak, 2023) requires a very long running time. In this paper, we design a new subset wTGS called variable complexity wTGS (VC-wTGS) and apply this algorithm to BVS in the linear regression model. More specifically, we consider the linear regression $Y = X\beta + Z$ where $\beta = (\beta_0, \beta_1, \ldots, \beta_{P-1})^T$ is controlled by an inclusion vector $(\gamma_0, \gamma_1, \cdots, \gamma_{P-1})$. We design a Rao-Blackwellized estimator associated with VC-wTGS for *posterior inclusion probabilities* or PIPs, where $\mathrm{PIP}(i) := p(\gamma_i = 1|\mathcal{D}) \in [0, 1]$ and $\mathcal{D} = \{X, Y\}$ is the observed dataset. Experiments show that our scheme converges to the PIPs very fast for simulated datasets, and that the variance of the Rao-Blackwellized estimator can be much smaller than that of the subset wTGS (Jankowiak, 2023) when $P/S$ is very high for the MNIST dataset. More specifically, our contributions include:
- We propose a new subset wTGS, called VC-wTGS, where the expected number of conditional PIP computations per MCMC iteration can be much smaller than the signal dimension.
- We analyse the variance of an associated Rao-Blackwellized estimator at each finite number of MCMC iterations. We show that this variance is $O\big(\frac{\log T}{T}\big(\frac{P}{S}\big)^2\big)$ for any given dataset.
- We provide experiments on a simulated dataset (a multivariate Gaussian dataset) and a real dataset (MNIST). Experiments show that our estimator can have a smaller variance than the subset wTGS-based estimator (Jankowiak, 2023) at high $P/S$ for the same number of MCMC iterations $T$.
Although we limit our application to the linear regression model for the simplicity of computations of the conditional PIPs in experiments, our subset wTGS can be applied to other BVS models. However, we need to change the method to estimate the conditional PIPs for each model. See (148) and Appendix E for the method that is used to estimate the conditional PIPs for the linear regression model.
## 2 Preliminaries

## 2.1 Mathematical Backgrounds
Let $\{X_n\}_{n=1}^{\infty}$ be a Markov chain on a state space $\mathcal{S}$ with transition kernel $Q(x, dy)$ and initial state $X_1 \sim \nu$, where $\mathcal{S}$ is a Polish space in $\mathbb{R}$. In this paper, we consider Markov chains which are irreducible and positive-recurrent, so the existence of a stationary distribution $\pi$ is guaranteed. An irreducible and recurrent Markov chain on an infinite state-space is called a Harris chain (Tuominen & Tweedie, 1979). A Markov chain is called *reversible* if the following detailed balance condition is satisfied:
$$\pi(dx)Q(x,dy)=\pi(dy)Q(y,dx),\qquad\forall x,y\in\mathcal{S}.\tag{1}$$
Define
$$d(t):=\sup_{x\in\mathcal{S}}d_{\mathrm{TV}}(Q^{t}(x,\cdot),\pi),\tag{2}$$
$$t_{\mathrm{mix}}(\varepsilon):=\min\{t:d(t)\leq\varepsilon\},\tag{3}$$
and
$$\tau_{\min}:=\inf_{0\leq\varepsilon\leq1}t_{\mathrm{mix}}(\varepsilon)\bigg(\frac{2-\varepsilon}{1-\varepsilon}\bigg)^{2},\quad t_{\mathrm{mix}}:=t_{\mathrm{mix}}(1/4).\tag{4}$$
Let $L_2(\pi)$ be the Hilbert space of complex-valued measurable functions on $\mathcal{S}$ that are square integrable w.r.t. $\pi$. We endow $L_2(\pi)$ with the inner product $\langle f, g\rangle_\pi := \int f g^* d\pi$ and the norm $\|f\|_{2,\pi} := \langle f, f\rangle_\pi^{1/2}$. Let $E_\pi$ be the associated averaging operator defined by $(E_\pi)(x,y) = \pi(y)$ for all $x, y \in \mathcal{S}$, and
$$\lambda=\|Q-E_{\pi}\|_{L_{2}(\pi)\to L_{2}(\pi)},\tag{5}$$
where $\|B\|_{L_2(\pi)\to L_2(\pi)} = \max_{v:\|v\|_{2,\pi}=1} \|Bv\|_{2,\pi}$. $Q$ can be viewed as a linear operator on $L_2(\pi)$, denoted by $\mathbf{Q}$, defined as $(\mathbf{Q}f)(x) := \mathbb{E}_{Q(x,\cdot)}(f)$, and reversibility is equivalent to the self-adjointness of $\mathbf{Q}$. The operator $\mathbf{Q}$ acts on measures on the left, creating a measure $\mu Q$, that is, for every measurable subset $A$ of $\mathcal{S}$, $\mu Q(A) := \int_{x\in\mathcal{S}} Q(x, A)\mu(dx)$. For a Markov chain with stationary distribution $\pi$, we define the *spectrum* of the chain as
$$\mathcal{S}_{2}:=\big\{\xi\in\mathbb{C}:(\xi\mathbf{I}-\mathbf{Q})\ \text{is not invertible on}\ L_{2}(\pi)\big\}.\tag{6}$$
It is known that $\lambda = 1 - \gamma^*$ (Paulin, 2015), where
$$\gamma^{*}:=\begin{cases}1-\sup\{|\xi|:\xi\in\mathcal{S}_{2},\,\xi\neq1\},&\text{if eigenvalue }1\text{ has multiplicity }1,\\ 0,&\text{otherwise}\end{cases}\tag{7}$$
is the *absolute spectral gap* of the Markov chain. The absolute spectral gap can be bounded via the mixing time $t_{\mathrm{mix}}$ of the Markov chain by the following expression:
$$\left({\frac{1}{\gamma^{*}}}-1\right)\log2\leq t_{\mathrm{mix}}\leq{\frac{\log(4/\pi_{*})}{\gamma_{*}}},$$
where $\pi_* = \min_{x\in\mathcal{S}} \pi_x$ is the *minimum stationary probability*, which is positive if $Q^k > 0$ (entry-wise positive) for some $k \geq 1$. See (Wolfer & Kontorovich, 2019) for more detailed discussions. In (Combes & Touati, 2019; Wolfer & Kontorovich, 2019), the authors provided algorithms to estimate $t_{\mathrm{mix}}$ and $\gamma^*$ from a single trajectory.
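As a small numerical illustration of the quantities in (2)-(7) (our addition, with an arbitrarily chosen toy chain rather than anything from the paper), the following sketch computes the absolute spectral gap $\gamma^*$ and the worst-case total-variation distance $d(t)$ for a two-state reversible chain and checks the mixing-time bounds above.

```python
import numpy as np

# Illustrative two-state reversible chain (not from the paper).
Q = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Stationary distribution: normalized left eigenvector of Q for eigenvalue 1.
evals, evecs = np.linalg.eig(Q.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
pi = pi / pi.sum()

# Absolute spectral gap gamma* = 1 - sup{|xi| : xi eigenvalue, xi != 1}, cf. (7).
mags = np.sort(np.abs(np.linalg.eigvals(Q)))[::-1]
gamma_star = 1.0 - mags[1]

def d(t):
    """Worst-case TV distance d(t) = sup_x d_TV(Q^t(x, .), pi), cf. (2)."""
    Qt = np.linalg.matrix_power(Q, t)
    return max(0.5 * np.abs(Qt[x] - pi).sum() for x in range(Q.shape[0]))

t_mix = next(t for t in range(1, 10_000) if d(t) <= 0.25)  # t_mix = t_mix(1/4), cf. (3)-(4)
lower = (1.0 / gamma_star - 1.0) * np.log(2.0)
upper = np.log(4.0 / pi.min()) / gamma_star
print(gamma_star, t_mix, lower <= t_mix <= upper)          # here: 0.3, 3, True
```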
Let $\mathcal{M}(\mathcal{S})$ be a measurable space on $\mathcal{S}$ and define
$$\mathcal{M}_{2}:=\left\{\nu\ \text{defined on}\ \mathcal{M}(\mathcal{S}):\nu\ll\pi,\left\|\frac{d\nu}{d\pi}\right\|_{2}<\infty\right\},\tag{8}$$
where $\|\cdot\|_2$ is the standard $L_2$ norm in the Hilbert space of complex-valued measurable functions on $\mathcal{S}$.
## 2.2 Problem Set-Up
Consider the linear regression $Y = X\beta + Z \in \mathbb{R}^N$, where $\beta = (\beta_0, \beta_1, \ldots, \beta_{P-1})^T$, $Z = (Z_0, Z_1, \ldots, Z_{N-1})^T$, and $X \in \mathbb{R}^{N\times P}$ is a design matrix. Denote by $\gamma$ the vector $(\gamma_0, \gamma_1, \cdots, \gamma_{P-1})$, where each $\gamma_i \in \{0, 1\}$ controls whether the coefficient $\beta_i$ and the $i$-th covariate are included ($\gamma_i = 1$) or excluded ($\gamma_i = 0$) from the model. Let $\beta_\gamma$ be the restriction of $\beta$ to the coordinates in $\gamma$ and $|\gamma| \in \{0, 1, 2, \cdots, P\}$ be the total number of included covariates. In addition, the following are assumed:
- inclusion variables: $\gamma_i \sim \mathrm{Bern}(h)$
- noise variance: $\sigma_\gamma^2 \sim \mathrm{InvGamma}\big(\tfrac{1}{2}\nu_0, \tfrac{1}{2}\nu_0\lambda_0\big)$
- coefficients: $\beta_\gamma \sim \mathcal{N}(0, \sigma_\gamma^2 \tau^{-1} I_{|\gamma|})$
- noise distributions: $Z_i \sim \mathcal{N}(0, \sigma_\gamma^2)$

for all $i = 0, 1, \cdots, P-1$. The hyperparameter $h \in (0, 1)$ controls the overall level of sparsity; in particular, $hP$ is the expected number of covariates included a priori. The $|\gamma|$ coefficients $\beta_\gamma \in \mathbb{R}^{|\gamma|}$ are governed by a zero-mean Gaussian prior with precision proportional to $\tau > 0$.
An attractive feature of the model is that it explicitly reasons about variable inclusion and allows us to define *posterior inclusion probabilities* or PIPs, where
$$\mathrm{PIP}(i):=p(\gamma_{i}=1|\mathcal{D})\in[0,1],\tag{9}$$
and $\mathcal{D} = \{X, Y\}$ is the observed dataset.
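To make the generative model above concrete, the following minimal Python sketch (our illustration, not the authors' code) draws one dataset from the priors of Section 2.2; the hyperparameter values and the standard-Gaussian design are illustrative assumptions, and the inverse-gamma prior is taken in the usual shape-scale parameterization.

```python
import numpy as np
from scipy.stats import invgamma

def sample_bvs_dataset(N=100, P=200, h=0.05, tau=1.0, nu0=2.0, lam0=1.0, seed=0):
    """Draw (X, Y, gamma) from the linear-model prior of Section 2.2 (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((N, P))                    # design matrix (assumed Gaussian here)
    gamma = rng.binomial(1, h, size=P)                 # inclusion variables, gamma_i ~ Bern(h)
    sigma2 = invgamma.rvs(a=0.5 * nu0, scale=0.5 * nu0 * lam0, random_state=rng)  # noise variance
    beta = np.zeros(P)
    k = int(gamma.sum())
    beta[gamma == 1] = rng.normal(0.0, np.sqrt(sigma2 / tau), size=k)  # beta_gamma ~ N(0, sigma2/tau I)
    Y = X @ beta + rng.normal(0.0, np.sqrt(sigma2), size=N)            # Y = X beta + Z
    return X, Y, gamma

X, Y, gamma_true = sample_bvs_dataset()
```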
## 3 Main Results

## 3.1 Introduction to Subset wTGS
In this subsection, we review the subset wTGS which was proposed by (Jankowiak, 2023). Let $\mathcal{P} = \{1, 2, \cdots, P\}$ and let $\mathcal{P}_S$ be the set of all subsets of cardinality $S$ of $\mathcal{P}$. Consider the sample space $\mathcal{P} \times \{0, 1\}^P \times \mathcal{P}_S$ and define the following (unnormalized) target distribution on this sample space:
$$f(\gamma, i, \mathcal{S}) := p(\gamma|\mathcal{D})\, \frac{\frac{1}{2}\eta(\gamma_{-i})}{p(\gamma_i|\gamma_{-i}, \mathcal{D})}\, \mathcal{U}(\mathcal{S}|i, \mathcal{A}).\tag{10}$$
Here, $\mathcal{S}$ ranges over all the subsets of $\{1, 2, \cdots, P\}$ of some size $S \in \{0, 1, \cdots, P\}$ that also contain a fixed 'anchor' set $\mathcal{A} \subset \{1, 2, \cdots, P\}$ of size $A < S$, and $\eta(\cdot)$ is some weighting function. Moreover, $\mathcal{U}(\mathcal{S}|i, \mathcal{A})$ is the uniform distribution over all size-$S$ subsets of $\{1, 2, \cdots, P\}$ that contain both $i$ and $\mathcal{A}$.
In practice, the set A can be chosen during burn-in. Subset wTGS proceeds by defining a sampling scheme for the target distribution (10) that utilizes Gibbs updates w.r.t. i and S and Metropolized-Gibbs update w.r.t. γi.
- **$i$-updates:** Marginalizing $i$ from (10) yields
$$f(\gamma,\mathcal{S})=p(\gamma|\mathcal{D})\phi(\gamma,\mathcal{S}),\tag{11}$$
where we define
$$\phi(\gamma,\mathcal{S}):=\sum_{i\in\mathcal{S}}\frac{\frac{1}{2}\eta(\gamma_{-i})}{p(\gamma_{i}|\gamma_{-i},\mathcal{D})}\mathcal{U}(\mathcal{S}|i,\mathcal{A})\tag{12}$$
and have leveraged that $\mathcal{U}(\mathcal{S}|i, \mathcal{A}) = 0$ if $i \notin \mathcal{S}$. Crucially, computing $\phi(\gamma, \mathcal{S})$ is $\Theta(S)$ instead of $\Theta(P)$. We can do Gibbs updates w.r.t. $i$ using the distribution
$$f(i|\gamma,\mathcal{S})\propto\frac{\eta(\gamma_{-i})}{p(\gamma_{i}|\gamma_{-i},\mathcal{D})}\mathcal{U}(\mathcal{S}|i,\mathcal{A}).\tag{13}$$
- **$\gamma$-updates:** Just as for wTGS, we utilize Metropolized-Gibbs updates w.r.t. $\gamma_i$ that result in deterministic flips $\gamma_i \to 1 - \gamma_i$. Likewise, the marginal $f(i)$ is proportional to $\mathrm{PIP}(i) + \varepsilon_P$, so that the sampler focuses computational effort on large-PIP covariates (Jankowiak, 2023).
- **$\mathcal{S}$-updates:** $\mathcal{S}$ is updated with Gibbs moves, $\mathcal{S} \sim \mathcal{U}(\cdot|i, \mathcal{A})$. For the full algorithm, see Algorithm 1.
**Algorithm 1** The Subset S-wTGS Algorithm

**Input:** Dataset $\mathcal{D} = \{X, Y\}$ with $P$ covariates; prior inclusion probability $h$; prior precision $\tau$; subset size $S$; anchor set size $A$; total number of MCMC iterations $T$; number of burn-in iterations $T_{\mathrm{burn}}$.
**Output:** Approximate weighted posterior samples $\{\rho^{(t)}, \gamma^{(t)}\}_{t=T_{\mathrm{burn}}+1}^{T}$
**Initializations:** $\gamma^{(0)} = (1, 1, \cdots, 1)$, and choose $\mathcal{A}$ to be the $A$ covariates exhibiting the largest correlations with $Y$. Choose $i^{(0)}$ randomly from $\{1, 2, \cdots, P\}$ and $\mathcal{S}^{(0)} \sim \mathcal{U}(\cdot|i^{(0)}, \mathcal{A})$.
**for** $t = 1, 2, \cdots, T$ **do**
  Estimate $S$ conditional PIPs $p(\gamma_j^{(t-1)}|\gamma_{-j}^{(t-1)}, \mathcal{D})$ for all $j \in \mathcal{S}^{(t-1)}$
  $\phi(\gamma^{(t-1)}, \mathcal{S}^{(t-1)}) \leftarrow \sum_{j\in\mathcal{S}^{(t-1)}} \frac{\frac{1}{2}\eta(\gamma_{-j}^{(t-1)})}{p(\gamma_j^{(t-1)}|\gamma_{-j}^{(t-1)}, \mathcal{D})}$
  Estimate $f(j|\gamma^{(t-1)}) \leftarrow \phi^{-1}(\gamma^{(t-1)}, \mathcal{S}^{(t-1)})\frac{\frac{1}{2}\eta(\gamma_{-j}^{(t-1)})}{p(\gamma_j^{(t-1)}|\gamma_{-j}^{(t-1)}, \mathcal{D})}$ for all $j \in [P]$.
  Sample $i^{(t)} \sim f(\cdot|\gamma^{(t-1)})$
  $\gamma^{(t)} \leftarrow \mathrm{flip}(\gamma^{(t-1)}|i^{(t)})$, where $\mathrm{flip}(\gamma|i)$ flips the $i$-th coordinate of $\gamma$: $\gamma_i \leftarrow 1 - \gamma_i$.
  Sample $\mathcal{S}^{(t)} \sim \mathcal{U}(\cdot|i^{(t)}, \mathcal{A})$
  Compute the unnormalized weight $\tilde{\rho}^{(t)} \leftarrow \phi^{-1}(\gamma^{(t)}, \mathcal{S}^{(t)})$
  **if** $t \leq T_{\mathrm{burn}}$ **then**
    Adapt $\mathcal{A}$ using some adaptive scheme.
  **end if**
**end for**
**for** $t = 1, 2, \cdots, T$ **do**
  $\rho^{(t)} \leftarrow \tilde{\rho}^{(t)} / \sum_{s>T_{\mathrm{burn}}}^{T} \tilde{\rho}^{(s)}$
**end for**
**Output:** $\{\rho^{(t)}, \gamma^{(t)}\}_{t=1}^{T}$.
The details of this algorithm are described in ALG 1. The associated estimator for this sampler is defined as (Jankowiak, 2023):
$$\mathrm{PIP}(i):=\sum_{t=1}^{T}\rho^{(t)}\big(\mathbf{1}\{i\in\mathcal{S}^{(t)}\}p(\gamma_{i}^{(t)}=1|\gamma_{-i}^{(t)},\mathcal{D})+\mathbf{1}\{i\notin\mathcal{S}^{(t)}\}\gamma_{i}^{(t)}\big).\tag{14}$$
## 3.2 A Variable Complexity wTGS Scheme
In the subset wTGS in Subsection 3.1, the number of conditional PIP computations per MCMC iteration is fixed, i.e., it is equal to $S$. In the following, we propose a variable complexity-based wTGS scheme (VC-wTGS), say ALG 2, where the only requirement is that the *expected* number of conditional PIP computations per MCMC iteration is $S$. This means that $\mathbb{E}[S_t] = S$, where $S_t$ is the number of conditional PIP computations at the $t$-th MCMC iteration.

Compared with ALG 1, ALG 2 allows us to use different subset sizes across MCMC iterations. By ALG 2, the expected number of conditional PIP computations in each MCMC iteration is $P\times(S/P)+0\times(1-S/P)=S$. Since we aim to bound the variance at each finite iteration $T$, we do not include $T_{\mathrm{burn}}$ in ALG 2; in practice, we usually remove some initial samples. We also use the following new version of the Rao-Blackwellized estimator:
$$\mathrm{PIP}(i):=\sum_{t=1}^{T}\rho^{(t)}p(\gamma_{i}^{(t)}=1|\gamma_{-i}^{(t)},\mathcal{D}).\tag{15}$$
In ALG 2, Bernoulli random variables $\{Q^{(t)}\}_{t=1}^{T}$ are used to replace the random set $\mathcal{S}$ in ALG 1.

**Algorithm 2** A Variable-Complexity Based wTGS Algorithm

**Input:** Dataset $\mathcal{D} = \{X, Y\}$ with $P$ covariates; prior inclusion probability $h$; prior precision $\tau$; total number of MCMC iterations $T$; subset size $S$.
**Output:** Approximate weighted posterior samples $\{\rho^{(t)}, \gamma^{(t)}\}_{t=1}^{T}$
**Initializations:** $\gamma^{(0)} = (\gamma_1, \gamma_2, \cdots, \gamma_P)$ where $\gamma_j \sim \mathrm{Bern}(h)$ for all $j \in [P]$.
**for** $t = 1, 2, \cdots, T$ **do**
  Set $Q^{(1)} = 1$. Sample a Bernoulli random variable $Q^{(t)} \sim \mathrm{Bern}(\frac{S}{P})$ if $t \geq 2$.
  **if** $Q^{(t)} = 1$ **then**
    Estimate $P$ conditional PIPs $p(\gamma_j^{(t-1)}|\gamma_{-j}^{(t-1)}, \mathcal{D})$ for all $j \in [P]$
    $\phi(\gamma^{(t-1)}) \leftarrow \sum_{j\in[P]} \frac{\frac{1}{2}\eta(\gamma_{-j}^{(t-1)})}{p(\gamma_j^{(t-1)}|\gamma_{-j}^{(t-1)}, \mathcal{D})}$
    Estimate $f(j|\gamma^{(t-1)}) \leftarrow \phi^{-1}(\gamma^{(t-1)})\frac{\frac{1}{2}\eta(\gamma_{-j}^{(t-1)})}{p(\gamma_j^{(t-1)}|\gamma_{-j}^{(t-1)}, \mathcal{D})}$ for all $j \in [P]$.
    Sample $i^{(t)} \sim f(\cdot|\gamma^{(t-1)})$
    $\gamma^{(t)} \leftarrow \mathrm{flip}(\gamma^{(t-1)}|i^{(t)})$, where $\mathrm{flip}(\gamma|i)$ flips the $i$-th coordinate of $\gamma$: $\gamma_i \leftarrow 1 - \gamma_i$.
    Compute the unnormalized weight $\tilde{\rho}^{(t)} \leftarrow \phi^{-1}(\gamma^{(t)})$
  **else**
    $\gamma^{(t)} \leftarrow \gamma^{(t-1)}$
    $\tilde{\rho}^{(t)} \leftarrow \phi^{-1}(\gamma^{(t)})$
  **end if**
**end for**
**for** $t = 1, 2, \cdots, T$ **do**
  $\rho^{(t)} \leftarrow \tilde{\rho}^{(t)}Q^{(t)} / \sum_{s=1}^{T} \tilde{\rho}^{(s)}Q^{(s)}$
**end for**
**Output:** $\{\rho^{(t)}, \gamma^{(t)}\}_{t=1}^{T}$.

There are two main reasons for this replacement: (1) generating a random set $\mathcal{S}$ uniformly from the $\binom{P}{S}$ subsets of $[P]$ takes a very long running time for most pairs $(P, S)$; (2) the associated Rao-Blackwellized estimator usually has a smaller variance with ALG 2 than with ALG 1 at high $P/S$. See Section 4 for our simulation results.
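To make the control flow of ALG 2 and the estimator (15) concrete, here is a minimal Python sketch (our illustration, not the authors' reference implementation). The routine `cond_pip_fn(gamma)` is an assumed user-supplied function returning the vector of conditional PIPs $p(\gamma_j = 1|\gamma_{-j}, \mathcal{D})$ at the current state, e.g. via (53), and the weighting function $\eta$ is taken to be the conditional PIP itself, as chosen later in (54). Since $\gamma^{(t)}$ is unchanged while $Q^{(t)} = 0$, the conditional PIPs computed after a flip are cached and reused at the next active iteration, so the expected number of conditional PIP computations per iteration stays close to $S$.

```python
import numpy as np

def vc_wtgs(cond_pip_fn, P, S, T, h, seed=0):
    """Sketch of ALG 2 (VC-wTGS) with the Rao-Blackwellized PIP estimator (15).

    cond_pip_fn(gamma) is assumed to return the array [p(gamma_j = 1 | gamma_{-j}, D)]_j.
    """
    rng = np.random.default_rng(seed)
    gamma = rng.binomial(1, h, size=P)        # gamma^(0): gamma_j ~ Bern(h)
    pip = cond_pip_fn(gamma)                  # conditional PIPs at the current state (cached)
    numer = np.zeros(P)                       # running numerator of (15)
    denom = 0.0                               # running normalizer sum_t rho~^(t) Q^(t)

    for t in range(T):
        Q = 1 if t == 0 else rng.binomial(1, S / P)   # Q^(1) = 1, else Q^(t) ~ Bern(S/P)
        if Q == 0:
            continue        # gamma^(t) = gamma^(t-1); rho^(t) = 0 since it carries the factor Q^(t)
        # Gibbs step for i, with eta(gamma_{-j}) = p(gamma_j = 1 | gamma_{-j}, D):
        p_cur = np.where(gamma == 1, pip, 1.0 - pip)  # p(gamma_j | gamma_{-j}, D)
        terms = 0.5 * pip / p_cur                     # (1/2) eta / p(gamma_j | gamma_{-j}, D)
        i = rng.choice(P, p=terms / terms.sum())      # i^(t) ~ f(. | gamma^(t-1))
        gamma[i] = 1 - gamma[i]                       # deterministic flip
        # Weight and estimator use the new state gamma^(t):
        pip = cond_pip_fn(gamma)
        p_new = np.where(gamma == 1, pip, 1.0 - pip)
        rho_tilde = 1.0 / (0.5 * pip / p_new).sum()   # rho~^(t) = phi^{-1}(gamma^(t))
        numer += rho_tilde * pip                      # accumulate (15); only Q^(t)=1 terms contribute
        denom += rho_tilde

    return numer / denom                              # estimated PIP vector
```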
## 3.3 Theoretical Bounds For Algorithm 2
First, we prove the following result. The proof can be found in Appendix C.

Lemma 1. *Let $U$ and $V$ be two positive random variables such that $U/V \leq M$ a.s. for some constant $M$. In addition, assume that on a set $D$ with probability at least $1 - \alpha$, we have*
$$|U-\mathbb{E}[U]|\leq\varepsilon\mathbb{E}[U],\tag{16}$$
$$|V-\mathbb{E}[V]|\leq\varepsilon\mathbb{E}[V],\tag{17}$$
*for some $0 \leq \varepsilon < 1$. Then, it holds that*
$$\mathbb{E}\left[\left|\frac{U}{V}-\frac{\mathbb{E}[U]}{\mathbb{E}[V]}\right|^{2}\right]\leq\frac{4\varepsilon^{2}}{(1-\varepsilon)^{2}}\left(\frac{\mathbb{E}[U]}{\mathbb{E}[V]}\right)^{2}+\left[\max\left(M,\frac{\mathbb{E}[U]}{\mathbb{E}[V]}\right)\right]^{2}\alpha.\tag{18}$$

We also recall the following Hoeffding's inequality for Markov chains:
Lemma 2. *(Rao, 2018, Theorem 1.1) Let $\{Y_i\}_{i=1}^{\infty}$ be a stationary Markov chain with state space $[N]$, transition matrix $A$, stationary probability measure $\pi$, and averaging operator $E_\pi$, so that $Y_1$ is distributed according to $\pi$. Let $\lambda = \|A - E_\pi\|_{L_2(\pi)\to L_2(\pi)}$ and let $f_1, f_2, \cdots, f_n : [N] \to \mathbb{R}$ be such that $\mathbb{E}[f_i(Y_i)] = 0$ for all $i$ and $|f_i(\nu)| \leq a_i$ for all $\nu \in [N]$ and all $i$. Then for $u \geq 0$,*
$$\mathbb{P}\biggl[\biggl|\sum_{i=1}^{n}f_{i}(Y_{i})\biggr|\geq u\biggl(\sum_{i=1}^{n}a_{i}^{2}\biggr)^{1/2}\biggr]\leq2\exp\biggl(-\frac{u^{2}(1-\lambda)}{64e}\biggr).\tag{19}$$

Now, the following result can be shown.

Lemma 3. *Let*
$$\phi(\gamma):=\sum_{j\in[P]}\frac{\frac{1}{2}\eta(\gamma_{-j})}{p(\gamma_{j}|\gamma_{-j},\mathcal{D})}\tag{20}$$
*and define*
$$f(\gamma):=\phi(\gamma)p(\gamma|\mathcal{D}).\tag{21}$$
*Then, by ALG 2, the sequence $\{\gamma^{(t)}, Q^{(t)}\}_{t=1}^{T}$ forms a reversible Markov chain with stationary distribution proportional to $f(\gamma)q(Q)$, where $q$ is the Bernoulli$(S/P)$ distribution. This Markov chain has transition kernel $K((\gamma, Q) \to (\gamma', Q')) = K^*(\gamma \to \gamma')q(Q')$, where*
$$K^{*}(\gamma\to\gamma^{\prime})=\frac{S}{P}\sum_{j=1}^{P}f(j|\gamma)\delta(\gamma^{\prime}-\mathrm{flip}(\gamma|j))+\biggl(1-\frac{S}{P}\biggr)\delta(\gamma^{\prime}-\gamma).\tag{22}$$
In the classical wTGS (Zanella & Roberts, 2019), the sequence $\{\gamma^{(t)}\}_{t=1}^{T}$ also forms a Markov chain. That Markov chain is different from the Markov chain in Lemma 3; however, the two Markov chains still have the same stationary distribution, which is proportional to $f(\gamma)$. See a detailed proof of Lemma 3 in Appendix B.
Lemma 4. *For the Rao-Blackwellized estimator in* (15) *which is applied to the output sequence $\{\rho^{(t)}, \gamma^{(t)}\}_{t=1}^{T}$ of ALG 2, it holds that*
$$E_{i,T}:=\sum_{t=1}^{T}\rho^{(t)}p(\gamma_{i}^{(t)}=1|\gamma_{-i}^{(t)},\mathcal{D})\to \mathrm{PIP}(i)\tag{23}$$
*as $T \to \infty$.*
Proof. By Lemma 3, $\{\gamma^{(t)}, Q^{(t)}\}_{t=1}^{T}$ forms a reversible Markov chain with stationary distribution $f(\gamma)/Z_f\, q(Q)$, where $Z_f = \sum_\gamma f(\gamma)$. Hence, by the SLLN for Markov chains (Breiman, 1960), for any bounded function $h$, we have
$$\begin{aligned}
\frac{1}{T}\sum_{t=1}^{T}\phi^{-1}(\gamma^{(t)})Q^{(t)}h(\gamma^{(t)})
&\to\mathbb{E}_{qf(\cdot)/Z_{f}}\big[\phi^{-1}(\gamma)h(\gamma)Q\big] &(24)\\
&=\sum_{Q}q(Q)\sum_{\gamma}\frac{f(\gamma)}{Z_{f}}\phi^{-1}(\gamma)h(\gamma)Q &(25)\\
&=\bigg(\sum_{Q}q(Q)Q\bigg)\bigg(\sum_{\gamma}\frac{f(\gamma)}{Z_{f}}\phi^{-1}(\gamma)h(\gamma)\bigg) &(26)\\
&=\mathbb{E}_{q}[Q]\frac{1}{Z_{f}}\sum_{\gamma}p(\gamma|\mathcal{D})h(\gamma) &(27)\\
&=\frac{S}{P}\frac{1}{Z_{f}}\sum_{\gamma}p(\gamma|\mathcal{D})h(\gamma), &(28)
\end{aligned}$$
where (27) follows from $f(\gamma) = p(\gamma|\mathcal{D})\phi(\gamma)$.
Similarly, we have
$$\begin{aligned}
\frac{1}{T}\sum_{t=1}^{T} Q^{(t)}\phi^{-1}(\gamma^{(t)})
&\to\mathbb{E}_{qf(\cdot)/Z_f}\big[\phi^{-1}(\gamma)Q\big] &(29)\\
&=\sum_Q q(Q)Q\sum_\gamma\frac{f(\gamma)}{Z_f}\phi^{-1}(\gamma) &(30)\\
&=\mathbb{E}_q[Q]\sum_\gamma\frac{1}{Z_f}p(\gamma|\mathcal{D}) &(31)\\
&=\frac{S}{P}\frac{1}{Z_f}, &(32)
\end{aligned}$$
where (31) also follows from $f(\gamma) = p(\gamma|\mathcal{D})\phi(\gamma)$.
From (28) and (32), we obtain
$$\frac{\frac{1}{T}\sum_{t=1}^{T}\phi^{-1}(\gamma^{(t)})Q^{(t)}h(\gamma^{(t)})}{\frac{1}{T}\sum_{t=1}^{T}Q^{(t)}\phi^{-1}(\gamma^{(t)})}\to\sum_{\gamma}p(\gamma|\mathcal{D})h(\gamma),\tag{33}$$
or equivalently
$$\sum_{t=1}^{T}\rho^{(t)}h(\gamma^{(t)})\to\sum_{\gamma}p(\gamma|\mathcal{D})h(\gamma)\tag{34}$$
as $T \to \infty$.

Now, by setting $h(\gamma) = p(\gamma_i = 1|\gamma_{-i}, \mathcal{D})$, from (34), we obtain
$$\sum_{t=1}^{T}\rho^{(t)}p(\gamma_{i}^{(t)}=1|\gamma_{-i}^{(t)},\mathcal{D})\to\mathrm{PIP}(i)\tag{35}$$
for all $i \in [P]$.

The following result bounds the variance of the PIP estimator at finite $T$.
Lemma 5. *For any $\varepsilon \in [0, 1]$, let $\nu$ and $\pi$ be the initial and stationary distributions of the reversible Markov sequence $\{\gamma^{(t)}, Q^{(t)}\}$. Define*
$$\hat{\phi}(\gamma):=\frac{\phi^{-1}(\gamma)}{\max_{\gamma}\phi^{-1}(\gamma)}\tag{36}$$
*and*
$$\varepsilon_{0}=\frac{P}{\mathrm{PIP}(i)\,\mathbb{E}_{\pi}[\hat{\phi}(\gamma)]\,S}\sqrt{\frac{64e\log T}{(1-\lambda_{\gamma,Q})T}}.\tag{37}$$
*Then, we have*
$$\mathbb{E}\Bigg[\Bigg|\sum_{t=1}^{T}\rho^{(t)}p(\gamma_{i}^{(t)}=1|\gamma_{-i}^{(t)},\mathcal{D})-\mathrm{PIP}(i)\Bigg|^{2}\Bigg]\leq\frac{4\varepsilon_{0}^{2}}{(1-\varepsilon_{0})^{2}}\mathrm{PIP}^{2}(i)+\frac{4P}{S}\frac{1}{\min_{\gamma}\pi(\gamma)T}\to0\tag{38}$$
*as $T \to \infty$ for fixed $P$, $S$, and the dataset. Here, $\pi(\gamma)$ is the marginal distribution of $\pi(\gamma, Q)$.*

Proof. See Appendix D.
Remark 6. *As in the proof of Lemma 3, we have $\pi(\gamma) \propto f(\gamma) = \phi(\gamma)p(\gamma|\mathcal{D})$. Hence, it holds that*
$$\min_{\gamma}\pi(\gamma)=\min_{\gamma}\frac{\phi(\gamma)p(\gamma|\mathcal{D})}{\sum_{\gamma}\phi(\gamma)p(\gamma|\mathcal{D})},\tag{39}$$
*which does not depend on $S$.*
Next, we provide a lower bound for $1-\lambda_{\gamma,Q}$. First, we recall the Dirichlet form and its variational characterisation of the spectral gap.
Definition 7. *Let $f, g : \Omega \to \mathbb{R}$. The Dirichlet form associated with a reversible Markov chain $\mathbf{Q}$ on $\Omega$ is defined by*
$$\begin{aligned}
\mathcal{E}(f,g)&=\langle(\mathbf{I}-\mathbf{Q})f,g\rangle_{\pi} &(40)\\
&=\sum_{x\in\Omega}\pi(x)[f(x)-\mathbf{Q}f(x)]g(x) &(41)\\
&=\sum_{(x,y)\in\Omega\times\Omega}\pi(x)Q(x,y)g(x)(f(x)-f(y)). &(42)
\end{aligned}$$
Lemma 8. *(Diaconis & Saloff-Coste, 1993) (Variational characterisation) For a reversible Markov chain $\mathbf{Q}$ with state space $\Omega$ and stationary distribution $\pi$, it holds that*
$$1-\lambda=\inf_{g:\,\mathbb{E}_{\pi}[g]=0,\ \mathbb{E}_{\pi}[g^{2}]=1}\mathcal{E}(g,g),\tag{43}$$
*where $\mathcal{E}(g, g) := \langle(\mathbf{I}-\mathbf{Q})g, g\rangle_\pi$.*

Lemma 9. *The spectral gap $1 - \lambda_{\gamma,Q}$ of the reversible Markov chain $\{\gamma^{(t)}, Q^{(t)}\}$ satisfies*
$$1-\lambda_{\gamma,Q}\geq\frac{S}{P}\big(1-\lambda_{P}\big)+1-\frac{S}{P}\geq1-\frac{S}{P},\tag{44}$$
*where $1 - \lambda_P$ is the spectral gap of the reversible Markov chain $\{\gamma^{(t)}\}$ of the wTGS algorithm (i.e., $S = P$).*
See Appendix F for a proof of this lemma. By combining Lemma 4, Lemma 5, and Lemma 9, we obtain the following theorem.

Theorem 10. *For the variable-complexity subset wTGS-based estimator in* (15) *and a given dataset $(X, Y)$, it holds that*
$$E_{i,T}:=\sum_{t=1}^{T}\rho^{(t)}p(\gamma_{i}^{(t)}=1|\gamma_{-i}^{(t)},\mathcal{D})\to \mathrm{PIP}(i)\tag{45}$$
*as $T \to \infty$, and*
$$\mathbb{E}\Bigg[\Bigg|\sum_{t=1}^{T}\rho^{(t)}p(\gamma_{i}^{(t)}=1|\gamma_{-i}^{(t)},\mathcal{D})-\mathrm{PIP}(i)\Bigg|^{2}\Bigg]=O\Bigg(\frac{\log T}{T}\Bigg(\frac{P}{S}\Bigg)^{2}\Bigg(\frac{\max_{\gamma}\phi(\gamma)}{\min_{\gamma}\phi(\gamma)}\Bigg)^{2}\Bigg),\tag{46}$$
*where*
$$\phi(\gamma)=\frac{1}{2}\sum_{j\in[P]}\frac{p(\gamma_{j}=1|\gamma_{-j},\mathcal{D})}{p(\gamma_{j}|\gamma_{-j},\mathcal{D})}.\tag{47}$$
Proof. First, (45) is shown in Lemma 4. Now, we show (46) by using Lemma 5 and Lemma 9. Observe that
$$\mathbb{E}_{\pi}[\hat{\phi}(\gamma)]=\mathbb{E}_{\pi}\left[\frac{\phi^{-1}(\gamma)}{\max_{\gamma}\phi^{-1}(\gamma)}\right]\geq\frac{\min_{\gamma}\phi(\gamma)}{\max_{\gamma}\phi(\gamma)}.\tag{48}$$
In addition, we have
$$\begin{aligned}
\phi(\gamma)&=\sum_{j\in[P]}\frac{\frac{1}{2}\eta(\gamma_{-j})}{p(\gamma_{j}|\gamma_{-j},\mathcal{D})} &(49)\\
&=\frac{1}{2}\sum_{j\in[P]}\frac{p(\gamma_{j}=1|\gamma_{-j},\mathcal{D})}{p(\gamma_{j}|\gamma_{-j},\mathcal{D})}. &(50)
\end{aligned}$$
Now, note that
$$\frac{p(\gamma_{j}=1|\gamma_{-j},\mathcal{D})}{p(\gamma_{j}|\gamma_{-j},\mathcal{D})}=\begin{cases}1,&\gamma_{j}=1,\\ \frac{p(\gamma_{j}=1|\gamma_{-j},\mathcal{D})}{p(\gamma_{j}=0|\gamma_{-j},\mathcal{D})},&\gamma_{j}=0.\end{cases}\tag{51}$$
In Appendix E, we show how to estimate the conditional PIPs, i.e., $p(\gamma_i|\mathcal{D}, \gamma_{-i})$, for the linear regression model. More specifically, we have
$$p(\gamma_{i}|\mathcal{D},\gamma_{-i})=\frac{p(\gamma_{i}|\mathcal{D},\gamma_{-i})}{p(1-\gamma_{i}|\mathcal{D},\gamma_{-i})}\left(1+\frac{p(\gamma_{i}|\mathcal{D},\gamma_{-i})}{p(1-\gamma_{i}|\mathcal{D},\gamma_{-i})}\right)^{-1}.\tag{52}$$
Then, we can estimate $\frac{p(\gamma_i=1|\gamma_{-i},\mathcal{D})}{p(\gamma_i=0|\gamma_{-i},\mathcal{D})}$ based on the dataset. More specifically, let $\tilde{\gamma}_1$ be given by $\gamma_{-i}$ with $\gamma_i = 1$ and $\tilde{\gamma}_0$ be given by $\gamma_{-i}$ with $\gamma_i = 0$; then we can show that
$$\frac{p(\gamma_{i}=1|\gamma_{-i},\mathcal{D})}{p(\gamma_{i}=0|\gamma_{-i},\mathcal{D})}=\left(\frac{h}{1-h}\right)\sqrt{\tau\,\frac{\det(X_{\tilde{\gamma}_0}^{T}X_{\tilde{\gamma}_0}+\tau I)}{\det(X_{\tilde{\gamma}_1}^{T}X_{\tilde{\gamma}_1}+\tau I)}}\left(\frac{\|Y\|^{2}-\|\tilde{Y}_{\tilde{\gamma}_0}\|^{2}+\nu_{0}\lambda_{0}}{\|Y\|^{2}-\|\tilde{Y}_{\tilde{\gamma}_1}\|^{2}+\nu_{0}\lambda_{0}}\right)^{\frac{N+\nu_{0}}{2}}.\tag{53}$$
Here, $\|\tilde{Y}_{\gamma}\|^2 = \tilde{Y}_{\gamma}^T\tilde{Y}_{\gamma} = Y^T X_{\gamma}(X_{\gamma}^T X_{\gamma}+\tau I)^{-1}X_{\gamma}^T Y$.
Using this algorithm, if pre-computing $X^TX$ is not possible, the computational complexity per conditional PIP is $O(N|\gamma|^2 + |\gamma|^3 + P|\gamma|^2)$. Otherwise, if pre-computing $X^TX$ is possible, the computational complexity per conditional PIP is $O(|\gamma|^3 + P|\gamma|^2)$.
Remark 11. *As we can see in Appendix E, for the linear regression model in Section 2.2, if pre-computing $X^TX$ is not possible, the computational complexity for a conditional PIP is $O(N|\gamma|^2 + |\gamma|^3 + P|\gamma|^2)$. Otherwise, if pre-computing $X^TX$ is possible, the computational complexity for a conditional PIP is $O(|\gamma|^3 + P|\gamma|^2)$. Here, $|\gamma| \approx hP$. Hence, the average computational complexity of our algorithm is $O(S(N|\gamma|^2 + |\gamma|^3 + P|\gamma|^2))$ or $O(S(|\gamma|^3 + P|\gamma|^2))$, depending on whether pre-computing $X^TX$ is possible or not. To reduce the computational complexity, we can reduce $S$; hence we are mainly interested in the case where $P/S$ is large. This computational complexity reduction is more meaningful if $|\gamma| \approx Ph \ll P$, i.e., in the sparse linear regression regime. However, the variance of the associated Rao-Blackwellized estimator increases as $S$ becomes small. Hence, there is a trade-off between the computational complexity per MCMC iteration and the variance of the Rao-Blackwellized estimator. The most interesting fact is that the newly-designed Rao-Blackwellized estimator converges to the PIPs for any value of $S$. In practice, the choice of $S$ depends on the application and the availability of computational resources. We can choose $S$ very small (e.g., $S = 2$) for a low-complexity estimator with a low convergence rate, or $S \approx P$ for a high-complexity estimator with a high convergence rate. Furthermore, both our algorithm and Jankowiak's degenerate to wTGS (Zanella & Roberts (2019)) at $S \approx P$.*
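As one concrete instance of the conditional-PIP computation discussed above, the sketch below evaluates $p(\gamma_i = 1|\gamma_{-i}, \mathcal{D})$ for the linear model via (52)-(53), working in log space for numerical stability. It is our illustrative implementation (function names are ours, not the authors' code); a full-vector routine suitable for the VC-wTGS sketch in Section 3.2 can be obtained by calling it for each $i$, or more efficiently via the rank-one updates (154)-(155) of Appendix E.

```python
import numpy as np

def cond_pip_one(X, Y, gamma, i, h, tau, nu0, lam0):
    """p(gamma_i = 1 | gamma_{-i}, D) for the linear model of Section 2.2, via (52)-(53)."""
    N = X.shape[0]

    def logdet_and_logS(g):
        # log det(X_g^T X_g + tau I) and log S_g, with
        # S_g = Y^T Y - Y^T X_g (X_g^T X_g + tau I)^{-1} X_g^T Y + nu0 * lam0, cf. (152).
        Xg = X[:, g == 1]
        if Xg.shape[1] == 0:                       # empty model: det = 1, quadratic term = 0
            return 0.0, np.log(Y @ Y + nu0 * lam0)
        A = Xg.T @ Xg + tau * np.eye(Xg.shape[1])
        _, logdet = np.linalg.slogdet(A)
        b = Xg.T @ Y
        return logdet, np.log(Y @ Y - b @ np.linalg.solve(A, b) + nu0 * lam0)

    g0, g1 = gamma.copy(), gamma.copy()
    g0[i], g1[i] = 0, 1
    logdet0, logS0 = logdet_and_logS(g0)
    logdet1, logS1 = logdet_and_logS(g1)

    # log of the posterior odds (53): log[h/(1-h)] + (1/2) log tau
    #   + (1/2)(log det_0 - log det_1) + ((N + nu0)/2)(log S_0 - log S_1)
    log_odds = (np.log(h / (1.0 - h)) + 0.5 * np.log(tau)
                + 0.5 * (logdet0 - logdet1)
                + 0.5 * (N + nu0) * (logS0 - logS1))
    return 1.0 / (1.0 + np.exp(-log_odds))          # (52): odds / (1 + odds)
```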
## 4 Experiments
In this section, we show by simulation that the PIP estimator is convergent as $T \to \infty$. In addition, we compare the variance of the associated Rao-Blackwellized estimators for VC-wTGS and subset wTGS on simulated and real datasets. To compute $p(\gamma_i|\gamma_{-i}, Y)$, we use the same trick as (Zanella & Roberts, 2019, Appendix B.1) for the new setting. See our derivations of this posterior distribution in Appendix E. As in (Jankowiak, 2023), in ALG 1 and ALG 2, we choose
$$\eta(\gamma_{-i})=\mathbb{P}(\gamma_{i}=1|\gamma_{-i},\mathcal{D}).\tag{54}$$
## 4.1 Simulated Datasets
First, we perform a simulated experiment. Let $X \in \mathbb{R}^{N\times P}$ be a realization of a multivariate (random) Gaussian matrix. We consider the case $N = 100$ and $P = 200$, and we run $T = 20000$ iterations. Fig. 1 shows the number of conditional PIP computations per MCMC iteration over the $T$ iterations. As we can see, our algorithm (Algorithm 2) has variable complexity: the number of conditional PIP computations per MCMC iteration is a random variable $Y$ which takes values in $\{0, P\}$ with $\mathbb{P}(Y = P) = S/P$. For Jankowiak's algorithm, the number of conditional PIP computations per MCMC iteration is always fixed and equal to $S$.

Fig. 2 shows that the Rao-Blackwellized estimator in (15) converges to the value of the PIP as $T \to \infty$ for different values of $S$. Since the number of PIPs, $P$, is very large, we only run simulations for $\mathrm{PIP}(0)$ and $\mathrm{PIP}(1)$; their behavior is representative of the other PIPs. Since VC-wTGS converges very fast once $T$ is big enough, the variance of variable-complexity wTGS is very small in the long term. In Fig. 3, we plot the estimators of VC-wTGS, subset wTGS, and wTGS for estimating $\mathrm{PIP}(0)$. It can be seen that our estimator converges to the wTGS estimator faster than subset wTGS does. This also means that the variance of VC-wTGS is smaller than the variance of subset wTGS for the same sample complexity $S$.
![11_image_0.png](11_image_0.png)
Figure 1: Computational Complexity Evolution
![12_image_0.png](12_image_0.png)
Figure 2: VC-wTGS Rao-Blackwellized Estimators (ALG 2)
![12_image_1.png](12_image_1.png)
Figure 3: Convergence of Rao-Blackwellized Estimators
## 4.2 Real Datasets
In this simulation, we run ALG 2 on the MNIST dataset.

As in Fig. 1, Fig. 4 shows the number of conditional PIP computations per MCMC iteration over the $T$ iterations. It shows that our algorithm has variable computational complexity per MCMC iteration, which is different from Jankowiak's algorithm. Fig. 5 plots $\mathrm{PIP}(0)$ and $\mathrm{PIP}(1)$ and the estimated variances of the Rao-Blackwellized estimator in (15) at different values of $S$. Here, $\mathrm{PIP}(0)$ and $\mathrm{PIP}(1)$ are defined in (9); they are the posterior inclusion probabilities that the components $\beta_0$ and $\beta_1$ affect the output. These plots show a trade-off between the computational complexity and the estimated variance for estimating $\mathrm{PIP}(0)$ and $\mathrm{PIP}(1)$.
![13_image_0.png](13_image_0.png)
Figure 4: Computational Complexity Evolution
The expected number of conditional PIP computations is only $ST$ in ALG 2 but $TP$ in wTGS if we run $T$ MCMC iterations. However, we suffer an increase in variance: by Theorem 10, the variance is $O\big(\big(\frac{P}{S}\big)^2 \frac{\log T}{T}\big)$ for a given dataset, i.e., it increases by at most a factor of $(P/S)^2$. For many applications, we do not need to estimate the PIPs exactly, hence VC-wTGS can be used to reduce the computational complexity, especially when $P$ is very large (millions of covariates). Fig. 6 shows that VC-wTGS outperforms subset wTGS (Jankowiak, 2023) at high values of $P/S$, which shows that our newly-designed Rao-Blackwellized estimator converges to the PIP faster than Jankowiak's estimator at high $P/S$.
## 5 Conclusion
This paper proposed a variable complexity wTGS for Bayesian Variable Selection which can improve the computational complexity of the well-known wTGS. Experiments show that our Rao-Blackwellized estimator can give a smaller variance than its counterpart associated with the subset-wTGS at high P/S.
![14_image_0.png](14_image_0.png)
Figure 5: The variance of VC-wTGS Rao-Blackwellized Estimators (ALG 2)
![14_image_1.png](14_image_1.png)
Figure 6: Comparing the variance between subset wTGS and VC-wTGS at S = 2.
## References
Christophe Andrieu, Nando de Freitas, A. Doucet, and Michael I. Jordan. An introduction to MCMC for machine learning. Machine Learning, 50:5-43, 2004.
C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.

William M. Bolstad. Understanding Computational Bayesian Statistics. John Wiley, 2010.
L. Breiman. The strong law of large numbers for a class of Markov chains. Annals of Mathematical Statistics, 31:801-803, 1960.
Mónica F. Bugallo, Shanshan Xu, and Petar M. Djurić. Performance comparison of EKF and particle filtering methods for maneuvering targets. Digit. Signal Process., 17:774-786, 2007.
R. Combes and M. Touati. Computationally efficient estimation of the spectral gap of a Markov chain. Proceedings of the ACM on Measurement and Analysis of Computing Systems, 3:1-21, 2019.
Persi Diaconis and Laurent Saloff-Coste. Comparison theorems for reversible Markov chains. Annals of Applied Probability, 3:696-730, 1993.
William J. Fitzgerald. Markov chain Monte Carlo methods with applications to signal processing. Signal Process., 81:3–18, 2001.
Ankur Gupta and James B. Rawlings. Comparison of parameter estimation methods in stochastic chemical kinetic models: Examples in systems biology. *AIChE journal. American Institute of Chemical Engineers*,
60 4:1253–1268, 2014.
Tim Hesterberg. Monte Carlo strategies in scientific computing. *Technometrics*, 44:403-404, 2002.

Martin Jankowiak. Bayesian variable selection in a million dimensions. In *International Conference on Artificial Intelligence and Statistics*, 2023.
Muhammad F. Kasim, A. F. A. Bott, Petros Tzeferacos, Donald Q. Lamb, Gianluca Gregori, and Sam M.
Vinko. Retrieving fields from proton radiography without source profiles. *Physical review. E*, 100 3-1:
033208, 2019.
Faming Liang, Chuanhai Liu, and Raymond J. Carroll. Advanced Markov chain Monte Carlo methods:
Learning from past samples. 2010.
Daniel Paulin. Concentration inequalities for Markov chains by Marton couplings and spectral methods.
Electronic Journal of Probability, 20(79):1 - 32, 2015.
Shravas Rao. A Hoeffding inequality for Markov chains. *Electronic Communications in Probability*, 2018.
Jesse Read, Luca Martino, and David Luengo. Efficient Monte Carlo methods for multi-dimensional learning with classifier chains. *Pattern Recognit.*, 47:1535–1546, 2012.
Christian P. Robert and George Casella. Monte Carlo statistical methods. *Technometrics*, 47:243-243, 2005.
Lan V. Truong. On linear model with Markov signal priors. In *AISTATS*, 2022.

Pekka Tuominen and Richard L. Tweedie. Markov chains with continuous components. Proceedings of the London Mathematical Society, s3-38(1):89-114, 1979.
Adrian G. Wills and Thomas Bo Schön. Sequential Monte Carlo: A unified review. Annu. Rev. Control. Robotics Auton. Syst., 6:159-182, 2023.
G. Wolfer and A. Kontorovich. Estimating the mixing time of ergodic Markov chains. In 32nd Annual Conference on Learning Theory, 2019.
Giacomo Zanella and Gareth O. Roberts. Scalable importance tempering and Bayesian variable selection.
Journal of the Royal Statistical Society: Series B (Statistical Methodology), 81, 2019.
## A Appendix

## B Proof of Lemma 3
The transition kernel for the sequence $\{\gamma^{(t)}\}$ can be written as
$$K^{*}(\gamma\to\gamma^{\prime})=\frac{S}{P}\sum_{j=1}^{P}f(j|\gamma)\delta(\gamma^{\prime}-\mathrm{flip}(\gamma|j))+\bigg(1-\frac{S}{P}\bigg)\delta(\gamma^{\prime}-\gamma).\tag{55}$$
This implies that for any pair $(\gamma, \gamma')$ such that $\gamma' = \mathrm{flip}(\gamma|i)$ for some $i \in [P]$, we have
$$\begin{aligned}
K^{*}(\gamma\to\gamma^{\prime})&=\frac{S}{P}\sum_{j=1}^{P}f(j|\gamma)\delta(\gamma^{\prime}-\mathrm{flip}(\gamma|j)) &(56)\\
&=\frac{S}{P}f(i|\gamma). &(57)
\end{aligned}$$
Now, by ALG 2, we also have
$$f(i|\gamma)=\phi^{-1}(\gamma)\frac{\frac{1}{2}\eta(\gamma_{-i})}{p(\gamma_{i}|\gamma_{-i},\mathcal{D})}\tag{58}$$
and
$$f(i|\gamma^{\prime})=\phi^{-1}(\gamma^{\prime})\frac{\frac{1}{2}\eta(\gamma_{-i}^{\prime})}{p(\gamma_{i}^{\prime}|\gamma_{-i}^{\prime},\mathcal{D})}.\tag{59}$$
From (58) and (59) and $\gamma_{-i} = \gamma'_{-i}$, we obtain
$$\begin{aligned}
\frac{K^{*}(\gamma\to\gamma^{\prime})}{K^{*}(\gamma^{\prime}\to\gamma)}&=\frac{\frac{S}{P}f(i|\gamma)}{\frac{S}{P}f(i|\gamma^{\prime})} &(60)\\
&=\frac{f(i|\gamma)}{f(i|\gamma^{\prime})} &(61)\\
&=\frac{\phi(\gamma^{\prime})p(\gamma^{\prime}|\mathcal{D})}{\phi(\gamma)p(\gamma|\mathcal{D})} &(62)\\
&=\frac{f(\gamma^{\prime})}{f(\gamma)}. &(63)
\end{aligned}$$
In addition, we also have $K^*(\gamma \to \gamma') = K^*(\gamma' \to \gamma) = 0$ if $\gamma' \neq \gamma$ and $\gamma' \neq \mathrm{flip}(\gamma|i)$ for any $i \in [P]$. Furthermore, $K^*(\gamma \to \gamma') = K^*(\gamma' \to \gamma) = 1 - \frac{S}{P}$ if $\gamma = \gamma'$.
By combining all these cases, it holds that
$$f(\gamma)K^{*}(\gamma\to\gamma^{\prime})=f(\gamma^{\prime})K^{*}(\gamma^{\prime}\to\gamma)\tag{64}$$
for all $\gamma', \gamma$. This means that $\{\gamma^{(t)}\}_{t=1}^{T}$ forms a reversible Markov chain with stationary distribution $f(\gamma)/Z_f$, where
$$Z_{f}=\sum_{\gamma}f(\gamma).\tag{65}$$
Since $\{Q^{(t)}\}_{t=1}^{T}$ is an i.i.d. Bernoulli sequence with $q(1) = S/P$ and independent of $\{\gamma^{(t)}\}_{t=1}^{T}$, $\{\gamma^{(t)}, Q^{(t)}\}_{t=1}^{T}$ forms a Markov chain with the transition kernel satisfying:
$$K((\gamma,Q)\to(\gamma^{\prime},Q^{\prime}))=q(Q^{\prime})K^{*}(\gamma\to\gamma^{\prime}).\tag{66}$$
It follows from (66) that
$$q(Q)f(\gamma)/Z_{f}\,K((\gamma,Q)\to(\gamma^{\prime},Q^{\prime}))=[K^{*}(\gamma\to\gamma^{\prime})f(\gamma)/Z_{f}]\,q(Q)q(Q^{\prime})\tag{67}$$
for any pair $(\gamma, Q)$ and $(\gamma', Q')$.
Finally, from (64) and (67), we have
$$q(Q)f(\gamma)/Z_{f}\,K((\gamma,Q)\to(\gamma^{\prime},Q^{\prime}))=q(Q^{\prime})f(\gamma^{\prime})/Z_{f}\,K((\gamma^{\prime},Q^{\prime})\to(\gamma,Q)).\tag{68}$$
This means that $\{\gamma^{(t)}, Q^{(t)}\}_{t=1}^{T}$ forms a reversible Markov chain with stationary distribution $q(Q)f(\gamma)/Z_f$.
## C Proof Of Lemma 1
Observe that with probability at least $1 - \alpha$, we have
$$(1-\varepsilon)\mathbb{E}[U]\leq U\leq(1+\varepsilon)\mathbb{E}[U],\tag{69}$$
$$(1-\varepsilon)\mathbb{E}[V]\leq V\leq(1+\varepsilon)\mathbb{E}[V].\tag{70}$$
Hence, we have
$$\left(\frac{1-\varepsilon}{1+\varepsilon}\right)\frac{\mathbb{E}[U]}{\mathbb{E}[V]}\leq\frac{U}{V}\leq\left(\frac{1+\varepsilon}{1-\varepsilon}\right)\frac{\mathbb{E}[U]}{\mathbb{E}[V]}.\tag{71}$$
From (71), with probability at least $1 - \alpha$, we have
$$\left|\frac{U}{V}-\frac{\mathbb{E}[U]}{\mathbb{E}[V]}\right|\leq\frac{2\varepsilon}{1-\varepsilon}\bigg(\frac{\mathbb{E}[U]}{\mathbb{E}[V]}\bigg).\tag{72}$$
It follows from (72) that
$$\begin{aligned}
\mathbb{E}\left[\left|\frac{U}{V}-\frac{\mathbb{E}[U]}{\mathbb{E}[V]}\right|^{2}\right]&=\mathbb{E}\left[\left|\frac{U}{V}-\frac{\mathbb{E}[U]}{\mathbb{E}[V]}\right|^{2}\,\middle|\,D\right]\mathbb{P}(D)+\mathbb{E}\left[\left|\frac{U}{V}-\frac{\mathbb{E}[U]}{\mathbb{E}[V]}\right|^{2}\,\middle|\,D^{c}\right]\mathbb{P}(D^{c}) &(73)\\
&\leq\frac{4\varepsilon^{2}}{(1-\varepsilon)^{2}}\left(\frac{\mathbb{E}[U]}{\mathbb{E}[V]}\right)^{2}+\left[\max\left(M,\frac{\mathbb{E}[U]}{\mathbb{E}[V]}\right)\right]^{2}\alpha. &(74)
\end{aligned}$$
## D Proof Of Lemma 5
First, by the definition of $\hat{\phi}(\gamma)$ in (36), we have
$$\rho^{(t)}=\frac{\hat{\phi}(\gamma^{(t)})Q^{(t)}}{\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})Q^{(t)}}.\tag{75}$$
In addition, observe that
$$0\leq\hat{\phi}(\gamma)\leq1.\tag{76}$$
Now, let $g : \{0, 1\}^P \to \mathbb{R}_+$ be such that $g(\gamma) \leq 1$ for all $\gamma$. Then, by applying Lemma 2 and a change of measure, with probability at least $1 - 2\frac{d\nu}{d\pi}\exp\big(-\frac{\zeta^2 T(1-\lambda)}{64e}\big)$, we have
$$\frac{1}{T}\bigg|\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})g(\gamma^{(t)})Q^{(t)}-\mathbb{E}_{\pi}\bigg[\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})g(\gamma^{(t)})Q^{(t)}\bigg]\bigg|\leq\zeta\tag{77}$$
for any $\zeta > 0$.
Similarly, by using Lemma 2, with probability at least $1 - 2\frac{d\nu}{d\pi}\exp\big(-\frac{\zeta^2 T(1-\lambda)}{64e}\big)$, it holds that
$$\frac{1}{T}\left|\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})Q^{(t)}-\mathbb{E}_{\pi}\left[\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})Q^{(t)}\right]\right|\leq\zeta.\tag{78}$$
By using the union bound, with probability at least $1 - 4\frac{d\nu}{d\pi}\exp\big(-\frac{\zeta^2 T(1-\lambda)}{64e}\big)$, it holds that
$$\frac{1}{T}\bigg|\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})g(\gamma^{(t)})Q^{(t)}-\mathbb{E}_{\pi}\bigg[\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})g(\gamma^{(t)})Q^{(t)}\bigg]\bigg|\leq\zeta,\tag{79}$$
$$\frac{1}{T}\bigg|\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})Q^{(t)}-\mathbb{E}_{\pi}\bigg[\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})Q^{(t)}\bigg]\bigg|\leq\zeta.\tag{80}$$
Now, by setting $\zeta = \zeta_0 := \frac{\varepsilon}{T}\min\big\{\mathbb{E}_{\pi}\big[\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})g(\gamma^{(t)})Q^{(t)}\big], \mathbb{E}_{\pi}\big[\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})Q^{(t)}\big]\big\}$ for some $\varepsilon > 0$ (to be chosen later), with probability at least $1 - 4\frac{d\nu}{d\pi}\exp\big(-\frac{\zeta_0^2 T(1-\lambda)}{64e}\big)$, it holds that
$$\frac{1}{T}\bigg|\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})g(\gamma^{(t)})Q^{(t)}-\mathbb{E}_{\pi}\bigg[\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})g(\gamma^{(t)})Q^{(t)}\bigg]\bigg|\leq\frac{\varepsilon}{T}\,\mathbb{E}_{\pi}\bigg[\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})g(\gamma^{(t)})Q^{(t)}\bigg],\tag{81}$$
$$\frac{1}{T}\bigg|\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})Q^{(t)}-\mathbb{E}_{\pi}\bigg[\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})Q^{(t)}\bigg]\bigg|\leq\frac{\varepsilon}{T}\,\mathbb{E}_{\pi}\bigg[\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})Q^{(t)}\bigg].\tag{82}$$
Furthermore, by setting
$$U:=\frac{1}{T}\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})g(\gamma^{(t)})Q^{(t)},\tag{83}$$
$$V:=\frac{1}{T}\sum_{t=1}^{T}\hat{\phi}(\gamma^{(t)})Q^{(t)},\tag{84}$$
we have
$$\begin{aligned}
\frac{U}{V}&=\frac{\sum_{t=1}^{T}\phi^{-1}(\gamma^{(t)})g(\gamma^{(t)})Q^{(t)}}{\sum_{t=1}^{T}\phi^{-1}(\gamma^{(t)})Q^{(t)}} &(85)\\
&=\sum_{t=1}^{T}\rho^{(t)}g(\gamma^{(t)}) &(86)
\end{aligned}$$
and
$$M:=\sup(U/V)\leq1\tag{87}$$
since $\sum_{t=1}^{T}\rho^{(t)} = 1$ and $g(\gamma^{(t)}) \leq 1$ for all $\gamma^{(t)}$.
From (80)-(87), by Lemma 1, we have
$$\mathbb{E}\bigg[\bigg|\sum_{t=1}^{T}\rho^{(t)}g(\gamma^{(t)})-\frac{\mathbb{E}_{\pi}[U]}{\mathbb{E}_{\pi}[V]}\bigg|^{2}\bigg]\leq\frac{4\varepsilon^{2}}{(1-\varepsilon)^{2}}\bigg(\frac{\mathbb{E}_{\pi}[U]}{\mathbb{E}_{\pi}[V]}\bigg)^{2}+\bigg[\max\bigg(1,\frac{\mathbb{E}_{\pi}[U]}{\mathbb{E}_{\pi}[V]}\bigg)\bigg]^{2}\alpha,\tag{88}$$
where $\alpha := 4\frac{d\nu}{d\pi}\exp\Big(-\frac{\varepsilon^{2}T(1-\lambda_{\gamma,Q})\min\{\mathbb{E}_{\pi}[U],\mathbb{E}_{\pi}[V]\}^{2}}{64e}\Big)$ and $\lambda_{\gamma,Q}$ is the quantity $\lambda$ in (5) associated with the reversible Markov chain $\{\gamma^{(t)}, Q^{(t)}\}$ and its stationary distribution.
Now, by setting
$$\varepsilon=\varepsilon_{0}=\frac{1}{\min\{\mathbb{E}_{\pi}[U],\mathbb{E}_{\pi}[V]\}}\sqrt{\frac{64e\log T}{(1-\lambda_{\gamma,Q})T}},\tag{89}$$
we have $\alpha = 4\frac{d\nu}{d\pi}\frac{1}{T}$. Then, we obtain
$$\mathbb{E}\bigg[\bigg|\sum_{t=1}^{T}\rho^{(t)}g(\gamma^{(t)})-\frac{\mathbb{E}_{\pi}[U]}{\mathbb{E}_{\pi}[V]}\bigg|^{2}\bigg]\leq\frac{4\varepsilon_{0}^{2}}{(1-\varepsilon_{0})^{2}}\bigg(\frac{\mathbb{E}_{\pi}[U]}{\mathbb{E}_{\pi}[V]}\bigg)^{2}+\bigg[\max\bigg(1,\frac{\mathbb{E}_{\pi}[U]}{\mathbb{E}_{\pi}[V]}\bigg)\bigg]^{2}\alpha.\tag{90}$$
Now, observe that
$$\begin{aligned}
\frac{\mathbb{E}_{\pi}[U]}{\mathbb{E}_{\pi}[V]}&=\frac{\mathbb{E}_{\pi}\big[g(\gamma)Q\hat{\phi}(\gamma)\big]}{\mathbb{E}_{\pi}\big[\hat{\phi}(\gamma)Q\big]} &(91)\\
&=\frac{\mathbb{E}_{\pi}\big[g(\gamma)Q\phi^{-1}(\gamma)\big]}{\mathbb{E}_{\pi}\big[\phi^{-1}(\gamma)Q\big]}. &(92)
\end{aligned}$$
On the other hand, by Lemma 3, we have $\pi(\gamma, Q) = q(Q)\frac{f(\gamma)}{Z_f}$, where $Z_f := \sum_\gamma f(\gamma)$ and $f(\gamma) = p(\gamma|\mathcal{D})\phi(\gamma)$. It follows that
$$\begin{aligned}
\mathbb{E}_{\pi}\left[g(\gamma)Q\phi^{-1}(\gamma)\right]&=\mathbb{E}_{q(Q)f(\gamma)/Z_{f}}\left[g(\gamma)Q\phi^{-1}(\gamma)\right] &(93)\\
&=\sum_{\gamma}\sum_{Q}g(\gamma)Q\phi^{-1}(\gamma)\frac{f(\gamma)}{Z_{f}}q(Q) &(94)\\
&=\frac{1}{Z_{f}}\sum_{\gamma}\sum_{Q}g(\gamma)q(Q)Q\,p(\gamma|\mathcal{D}) &(95)\\
&=\frac{1}{Z_{f}}\mathbb{E}_{p(\gamma|\mathcal{D})}\left[g(\gamma)\right]\mathbb{E}_{q}[Q]. &(96)
\end{aligned}$$
Similarly, we have
$$\begin{aligned}
\mathbb{E}_{\pi}\left[\phi^{-1}(\gamma)Q\right]&=\mathbb{E}_{q(Q)f(\gamma)/Z_{f}}\left[\phi^{-1}(\gamma)Q\right] &(97)\\
&=\sum_{Q}\sum_{\gamma}\phi^{-1}(\gamma)Q\frac{f(\gamma)}{Z_{f}}q(Q) &(98)\\
&=\frac{1}{Z_{f}}\bigg(\sum_{\gamma}p(\gamma|\mathcal{D})\bigg)\mathbb{E}_{q}[Q]. &(99)
\end{aligned}$$
From (92), (96), and (99), we obtain
$$\frac{\mathbb{E}_{\pi}[U]}{\mathbb{E}_{\pi}[V]}=\mathbb{E}_{p(\gamma|\mathcal{D})}\left[g(\gamma)\right].\tag{100}$$
For the given problem, by setting $g(\gamma) = p(\gamma_i = 1|\gamma_{-i}, \mathcal{D})$, from (100), we have
$$\frac{\mathbb{E}_{\pi}[U]}{\mathbb{E}_{\pi}[V]}=\mathrm{PIP}(i).\tag{101}$$
In addition, we have
$$\begin{aligned}
\mathbb{E}_{\pi}[V]&=\mathbb{E}_{\pi}\big[\hat{\phi}(\gamma)Q\big] &(102)\\
&=\sum_{\gamma,Q}\hat{\phi}(\gamma)Q\frac{f(\gamma)}{Z_{f}}q(Q) &(103)\\
&=\bigg(\sum_{\gamma}\hat{\phi}(\gamma)\frac{f(\gamma)}{Z_{f}}\bigg)\bigg(\sum_{Q}Qq(Q)\bigg) &(104)\\
&=\mathbb{E}_{\pi}[\hat{\phi}(\gamma)]\,\mathbb{E}_{q}[Q] &(105)\\
&=\frac{S}{P}\mathbb{E}_{\pi}[\hat{\phi}(\gamma)]. &(106)
\end{aligned}$$
Hence, we obtain
$$\begin{aligned}
\min\{\mathbb{E}_{\pi}[U],\mathbb{E}_{\pi}[V]\}&=\mathbb{E}_{\pi}[V]\min\left\{1,\frac{\mathbb{E}_{\pi}[U]}{\mathbb{E}_{\pi}[V]}\right\} &(107)\\
&=\mathbb{E}_{\pi}[V]\min\left\{1,\mathrm{PIP}(i)\right\} &(108)\\
&=\mathbb{E}_{\pi}[V]\,\mathrm{PIP}(i) &(109)\\
&=\frac{S}{P}\mathbb{E}_{\pi}[\hat{\phi}(\gamma)]\,\mathrm{PIP}(i). &(110)
\end{aligned}$$
From (90), (101), and (110), we have
$$\mathbb{E}\left[\left|\sum_{t=1}^{T}\rho^{(t)}p(\gamma_{i}^{(t)}=1|\gamma_{-i}^{(t)},\mathcal{D})-\mathrm{PIP}(i)\right|^{2}\right]\leq\frac{4\varepsilon_{0}^{2}}{(1-\varepsilon_{0})^{2}}\mathrm{PIP}^{2}(i)+4\frac{d\nu}{d\pi}\frac{1}{T},\tag{111}$$
and
$$\varepsilon_{0}=\frac{P}{\mathrm{PIP}(i)\,\mathbb{E}_{\pi}[\hat{\phi}(\gamma)]\,S}\sqrt{\frac{64e\log T}{(1-\lambda_{\gamma,Q})T}}.\tag{112}$$
Now, observe that
$$\begin{aligned}
\frac{d\nu}{d\pi}(\gamma,Q)&=\frac{p_{\gamma_{1},Q_{1}}(\gamma,Q)}{\pi(\gamma,Q)} &(113)\\
&\leq\frac{1}{\pi(\gamma,Q)} &(114)\\
&=\frac{1}{\pi(\gamma)q(Q)} &(115)\\
&\leq\frac{P}{S}\frac{1}{\min_{\gamma}\pi(\gamma)}. &(116)
\end{aligned}$$
By combining (111) and (116), we have
$$\mathbb{E}\left[\left|\sum_{t=1}^{T}\rho^{(t)}p(\gamma_{i}^{(t)}=1|\gamma_{-i}^{(t)},\mathcal{D})-\mathrm{PIP}(i)\right|^{2}\right]\leq\frac{4\varepsilon_{0}^{2}}{(1-\varepsilon_{0})^{2}}\mathrm{PIP}^{2}(i)+\frac{4P}{S}\frac{1}{\min_{\gamma}\pi(\gamma)T}.\tag{117}$$
## E Derivation of $p(\gamma_i \mid \mathcal{D}, \gamma_{-i})$
Observe that
$$p(\gamma_{i}|\mathcal{D},\gamma_{-i})=\frac{p(\gamma_{i}|\mathcal{D},\gamma_{-i})}{p(1-\gamma_{i}|\mathcal{D},\gamma_{-i})}\bigg(1+\frac{p(\gamma_{i}|\mathcal{D},\gamma_{-i})}{p(1-\gamma_{i}|\mathcal{D},\gamma_{-i})}\bigg)^{-1}.\tag{118}$$
In addition, we have
$$\begin{aligned}
\frac{p(\gamma_{i}=1|\mathcal{D},\gamma_{-i})}{p(\gamma_{i}=0|\mathcal{D},\gamma_{-i})}&=\frac{p(\gamma_{i}=1,\mathcal{D}|\gamma_{-i})}{p(\gamma_{i}=0,\mathcal{D}|\gamma_{-i})} &(119)\\
&=\frac{p(\gamma_{i}=1|\gamma_{-i},X)}{p(\gamma_{i}=0|\gamma_{-i},X)}\,\frac{p(Y|\gamma_{i}=1,\gamma_{-i},X)}{p(Y|\gamma_{i}=0,\gamma_{-i},X)} &(120)\\
&=\frac{p(\gamma_{i}=1)}{p(\gamma_{i}=0)}\,\frac{p(Y|\gamma_{i}=1,\gamma_{-i},X)}{p(Y|\gamma_{i}=0,\gamma_{-i},X)} &(121)\\
&=\frac{h}{1-h}\,\frac{p(Y|\gamma_{i}=1,\gamma_{-i},X)}{p(Y|\gamma_{i}=0,\gamma_{-i},X)}. &(122)
\end{aligned}$$
On the other hand, for any tuple $\gamma = (\gamma_1, \gamma_2, \cdots, \gamma_P)$ such that $\gamma_i = 1$ (so $|\gamma| \geq 1$), we have
$$p(Y|\gamma_{i}=1,\gamma_{-i},\beta_{\gamma},\sigma_{\gamma}^{2},X)=\frac{1}{\left(\sigma_{\gamma}\sqrt{2\pi}\right)^{N}}\exp\bigg(-\frac{\|Y-X_{\gamma}\beta_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}}\bigg).\tag{123}$$
It follows that
$$\begin{aligned}
p(Y|\gamma_{i}&=1,\gamma_{-i},X)\\
&=\int_{\beta_{\gamma}}\int_{\sigma_{\gamma}^{2}=0}^{\infty}\frac{1}{\left(\sigma_{\gamma}\sqrt{2\pi}\right)^{N}}\exp\bigg(-\frac{\|Y-X_{\gamma}\beta_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}}\bigg)p(\beta_{\gamma}|\gamma_{i}=1,\gamma_{-i})p(\sigma_{\gamma}^{2}|\gamma_{i}=1,\gamma_{-i})\,d\beta_{\gamma}\,d\sigma_{\gamma}^{2} &(124)\\
&=\int_{\sigma_{\gamma}^{2}=0}^{\infty}\mathrm{InvGamma}\Big(\tfrac{1}{2}\nu_{0},\tfrac{1}{2}\nu_{0}\lambda_{0}\Big)\int_{\beta_{\gamma}}\frac{1}{\left(\sigma_{\gamma}\sqrt{2\pi}\right)^{N}}\exp\bigg(-\frac{\|Y-X_{\gamma}\beta_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}}\bigg)\frac{1}{\left(\sigma_{\gamma}\sqrt{2\pi\tau^{-1}}\right)^{|\gamma|}}\exp\bigg(-\frac{\|\beta_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}\tau^{-1}}\bigg)\,d\beta_{\gamma}\,d\sigma_{\gamma}^{2}. &(125)
\end{aligned}$$
Now, observe that
$$\begin{aligned}
\|Y-X_{\gamma}\beta_{\gamma}\|^{2}+\tau\|\beta_{\gamma}\|^{2}&=(Y-X_{\gamma}\beta_{\gamma})^{T}(Y-X_{\gamma}\beta_{\gamma})+\tau\beta_{\gamma}^{T}\beta_{\gamma} &(126)\\
&=Y^{T}Y-2Y^{T}X_{\gamma}\beta_{\gamma}+\beta_{\gamma}^{T}X_{\gamma}^{T}X_{\gamma}\beta_{\gamma}+\tau\beta_{\gamma}^{T}\beta_{\gamma} &(127)\\
&=Y^{T}Y-2Y^{T}X_{\gamma}\beta_{\gamma}+\beta_{\gamma}^{T}(X_{\gamma}^{T}X_{\gamma}+\tau I)\beta_{\gamma}. &(128)
\end{aligned}$$
Now, consider the eigenvalue decomposition (EVD) of the positive definite matrix $X_{\gamma}^{T}X_{\gamma}+\tau I$ (note that $\tau > 0$):
$$X_{\gamma}^{T}X_{\gamma}+\tau I=U^{T}\Lambda U,\tag{129}$$
where $\Lambda$ is a diagonal matrix consisting of the (all positive) eigenvalues of $X_{\gamma}^{T}X_{\gamma}+\tau I$. Let
$$\tilde{\beta}_{\gamma}:=\sqrt{\Lambda}U\beta_{\gamma},\tag{130}$$
$$\tilde{Y}_{\gamma}:=\sqrt{\Lambda^{-1}}UX_{\gamma}^{T}Y.\tag{131}$$
Then, we have
∥Y − Xγβγ∥ 2 + τ∥βγ∥ 2 = Y T Y − 2Y T Xγβγ + β T γ (XT γ Xγ + τ I)βγ (132) = Y T Y − 2Y T Xγ √ Λ−1U T β˜γ + β˜T γ β˜γ (133) = Y T Y − 2Y˜ T γ β˜γ + β˜T γ β˜γ (134) =∥Y ∥ 2 − ∥Y˜γ| 2+Y˜ T γ Y˜γ − 2Y˜ T γ β˜γ + β˜T γ β˜γ (135) =∥Y ∥ 2 − ∥Y˜γ| 2+ ∥Y˜γ − β˜γ∥ 2. (136)
In addition, since $\det(U)=\pm 1$ and $\det(\Lambda)=\det(X_{\gamma}^{T}X_{\gamma}+\tau I)$, the change of variables $\beta_{\gamma}=U^{T}\Lambda^{-1/2}\tilde{\beta}_{\gamma}$ gives
$$d\beta_{\gamma}=\det(U^{T}\Lambda^{-1/2})\,d\tilde{\beta}_{\gamma}\tag{137}$$
$$=\det(X_{\gamma}^{T}X_{\gamma}+\tau I)^{-1/2}\,d\tilde{\beta}_{\gamma}.\tag{138}$$
Hence, we have
$$\int_{\beta_{\gamma}}\frac{1}{\left(\sigma_{\gamma}\sqrt{2\pi}\right)^{N}}\exp\left(-\frac{\|Y-X_{\gamma}\beta_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}}\right)\frac{1}{\left(\sigma_{\gamma}\sqrt{2\pi\tau^{-1}}\right)^{|\gamma|}}\exp\left(-\frac{\|\beta_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}\tau^{-1}}\right)d\beta_{\gamma}\tag{139}$$
$$=\int_{\tilde{\beta}_{\gamma}}\frac{1}{\left(\sigma_{\gamma}\sqrt{2\pi}\right)^{N}}\exp\left(-\frac{\|Y\|^{2}-\|\tilde{Y}_{\gamma}\|^{2}+\|\tilde{Y}_{\gamma}-\tilde{\beta}_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}}\right)\frac{1}{\left(\sigma_{\gamma}\sqrt{2\pi\tau^{-1}}\right)^{|\gamma|}}\det(X_{\gamma}^{T}X_{\gamma}+\tau I)^{-1/2}\,d\tilde{\beta}_{\gamma}\tag{140}$$
$$=\frac{1}{\left(\sigma_{\gamma}\sqrt{2\pi}\right)^{N}}\,\tau^{|\gamma|/2}\exp\left(-\frac{\|Y\|^{2}-\|\tilde{Y}_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}}\right)\det(X_{\gamma}^{T}X_{\gamma}+\tau I)^{-1/2}.\tag{141}$$
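As a quick check of the last step (using only quantities already defined above), the Gaussian integral over $\tilde{\beta}_{\gamma}$ evaluates to
$$\int_{\tilde{\beta}_{\gamma}}\exp\left(-\frac{\|\tilde{Y}_{\gamma}-\tilde{\beta}_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}}\right)d\tilde{\beta}_{\gamma}=\big(\sigma_{\gamma}\sqrt{2\pi}\big)^{|\gamma|},$$
which cancels the factor $(\sigma_{\gamma}\sqrt{2\pi})^{-|\gamma|}$ contained in $(\sigma_{\gamma}\sqrt{2\pi\tau^{-1}})^{-|\gamma|}=(\sigma_{\gamma}\sqrt{2\pi})^{-|\gamma|}\tau^{|\gamma|/2}$ and leaves exactly the factor $\tau^{|\gamma|/2}$ appearing in (141).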
By combining (125) and (141), we obtain
$$p(Y|\gamma_{i}=1,\gamma_{-i},X)=\int_{\sigma_{\gamma}^{2}=0}^{\infty}\int_{\beta_{\gamma}}\frac{1}{\left(\sigma_{\gamma}\sqrt{2\pi}\right)^{N}}\exp\left(-\frac{\|Y-X_{\gamma}\beta_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}}\right)p(\beta_{\gamma}|\gamma_{i}=1,\gamma_{-i})\,p(\sigma_{\gamma}^{2}|\gamma_{i}=1,\gamma_{-i})\,d\beta_{\gamma}\,d\sigma_{\gamma}^{2}\tag{142}$$
$$=\int_{\sigma_{\gamma}^{2}=0}^{\infty}\mathrm{InvGamma}\Big(\tfrac{1}{2}\nu_{0},\tfrac{1}{2}\nu_{0}\lambda_{0}\Big)\frac{1}{\left(\sigma_{\gamma}\sqrt{2\pi}\right)^{N}}\,\tau^{|\gamma|/2}\exp\left(-\frac{\|Y\|^{2}-\|\tilde{Y}_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}}\right)\det(X_{\gamma}^{T}X_{\gamma}+\tau I)^{-1/2}\,d\sigma_{\gamma}^{2}\tag{143}$$
$$=\det(X_{\gamma}^{T}X_{\gamma}+\tau I)^{-1/2}\,\tau^{|\gamma|/2}(2\pi)^{-N/2}\int_{\sigma_{\gamma}^{2}=0}^{\infty}\mathrm{InvGamma}\Big(\tfrac{1}{2}\nu_{0},\tfrac{1}{2}\nu_{0}\lambda_{0}\Big)\,(\sigma_{\gamma}^{2})^{-N/2}\exp\left(-\frac{\|Y\|^{2}-\|\tilde{Y}_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}}\right)d\sigma_{\gamma}^{2}\tag{144}$$
$$=\det(X_{\gamma}^{T}X_{\gamma}+\tau I)^{-1/2}\,\tau^{|\gamma|/2}(2\pi)^{-N/2}\int_{\sigma_{\gamma}^{2}=0}^{\infty}\frac{(\tfrac{1}{2}\nu_{0}\lambda_{0})^{\nu_{0}/2}}{\Gamma(\nu_{0}/2)}\,(1/\sigma_{\gamma}^{2})^{\nu_{0}/2+1}\exp\left(-\frac{\nu_{0}\lambda_{0}}{2\sigma_{\gamma}^{2}}\right)(\sigma_{\gamma}^{2})^{-N/2}\exp\left(-\frac{\|Y\|^{2}-\|\tilde{Y}_{\gamma}\|^{2}}{2\sigma_{\gamma}^{2}}\right)d\sigma_{\gamma}^{2}\tag{145}$$
$$=\det(X_{\gamma}^{T}X_{\gamma}+\tau I)^{-1/2}\,\tau^{|\gamma|/2}(2\pi)^{-N/2}\,\frac{(\tfrac{1}{2}\nu_{0}\lambda_{0})^{\nu_{0}/2}}{\Gamma(\nu_{0}/2)}\int_{\sigma_{\gamma}^{2}=0}^{\infty}(1/\sigma_{\gamma}^{2})^{\nu_{0}/2+1+N/2}\exp\left(-\frac{\|Y\|^{2}-\|\tilde{Y}_{\gamma}\|^{2}+\nu_{0}\lambda_{0}}{2\sigma_{\gamma}^{2}}\right)d\sigma_{\gamma}^{2}\tag{146}$$
$$=\det(X_{\gamma}^{T}X_{\gamma}+\tau I)^{-1/2}\,\tau^{|\gamma|/2}(2\pi)^{-N/2}\,\frac{(\tfrac{1}{2}\nu_{0}\lambda_{0})^{\nu_{0}/2}}{\Gamma(\nu_{0}/2)}\,\Gamma\!\left(\frac{N+\nu_{0}}{2}\right)\left(\frac{\|Y\|^{2}-\|\tilde{Y}_{\gamma}\|^{2}+\nu_{0}\lambda_{0}}{2}\right)^{-\frac{N+\nu_{0}}{2}}.\tag{147}$$
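For completeness, the last step uses the standard integral (substitute $u=1/\sigma_{\gamma}^{2}$)
$$\int_{0}^{\infty}\big(1/\sigma_{\gamma}^{2}\big)^{a+1}\exp\left(-\frac{b}{\sigma_{\gamma}^{2}}\right)d\sigma_{\gamma}^{2}=\int_{0}^{\infty}u^{a-1}e^{-bu}\,du=\Gamma(a)\,b^{-a},$$
applied with $a=\frac{N+\nu_{0}}{2}$ and $b=\frac{\|Y\|^{2}-\|\tilde{Y}_{\gamma}\|^{2}+\nu_{0}\lambda_{0}}{2}$, which yields (147) from (146).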
Let $\tilde{\gamma}_{1}$ be the tuple obtained from $\gamma_{-i}$ by setting $\gamma_{i}=1$, and let $\tilde{\gamma}_{0}$ be the tuple obtained by setting $\gamma_{i}=0$. It follows that
$$\frac{p(Y|\gamma_{i}=1,\gamma_{-i},X)}{p(Y|\gamma_{i}=0,\gamma_{-i},X)}=\sqrt{\tau}\sqrt{\frac{\det(X_{\tilde{\gamma}_{0}}^{T}X_{\tilde{\gamma}_{0}}+\tau I)}{\det(X_{\tilde{\gamma}_{1}}^{T}X_{\tilde{\gamma}_{1}}+\tau I)}}\left(\frac{\|Y\|^{2}-\|\tilde{Y}_{\tilde{\gamma}_{0}}\|^{2}+\nu_{0}\lambda_{0}}{\|Y\|^{2}-\|\tilde{Y}_{\tilde{\gamma}_{1}}\|^{2}+\nu_{0}\lambda_{0}}\right)^{\frac{N+\nu_{0}}{2}}.\tag{148}$$
On the other hand, we have
$$\|\tilde{Y}_{\gamma}\|^{2}=\tilde{Y}_{\gamma}^{T}\tilde{Y}_{\gamma}\tag{149}$$
$$=Y^{T}X_{\gamma}(X_{\gamma}^{T}X_{\gamma}+\tau I)^{-1}X_{\gamma}^{T}Y.\tag{150}$$
Hence, we finally have
$$\frac{p(Y|\gamma_{i}=1,\gamma_{-i},X)}{p(Y|\gamma_{i}=0,\gamma_{-i},X)}=\sqrt{\tau\,\frac{\det(X_{\tilde{\gamma}_{0}}^{T}X_{\tilde{\gamma}_{0}}+\tau I)}{\det(X_{\tilde{\gamma}_{1}}^{T}X_{\tilde{\gamma}_{1}}+\tau I)}\left(\frac{S_{\tilde{\gamma}_{0}}}{S_{\tilde{\gamma}_{1}}}\right)^{N+\nu_{0}}},\tag{151}$$
where
$$S_{\gamma}:=Y^{T}Y-Y^{T}X_{\gamma}(X_{\gamma}^{T}X_{\gamma}+\tau I)^{-1}X_{\gamma}^{T}Y+\nu_{0}\lambda_{0}.\tag{152}$$
Recall from (118) that
$$p(\gamma_{i}|{\mathcal{D}},\gamma_{-i})={\frac{p(\gamma_{i}|{\mathcal{D}},\gamma_{-i})}{p(1-\gamma_{i}|{\mathcal{D}},\gamma_{-i})}}\bigg(1+{\frac{p(\gamma_{i}|{\mathcal{D}},\gamma_{-i})}{p(1-\gamma_{i}|{\mathcal{D}},\gamma_{-i})}}\bigg)^{-1}.\tag{153}$$
Based on this, we can estimate $p(\gamma_{i}|\mathcal{D},\gamma_{-i})$.
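As an illustration only, the following minimal Python sketch (synthetic data and assumed hyper-parameter values for $\tau$, $\nu_{0}$, $\lambda_{0}$, $h$; not the authors' implementation) evaluates $p(\gamma_{i}=1|\mathcal{D},\gamma_{-i})$ directly from (148), (122), and (153):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 40, 8
tau, nu0, lam0, h = 1.0, 1.0, 1.0, 0.2  # assumed hyper-parameter values
X = rng.standard_normal((N, P))
Y = rng.standard_normal(N)

def S_gamma(idx):
    # S_gamma from (152); the projection term vanishes for the empty model.
    if len(idx) == 0:
        return Y @ Y + nu0 * lam0
    Xg = X[:, idx]
    M = Xg.T @ Xg + tau * np.eye(len(idx))
    v = Xg.T @ Y
    return Y @ Y - v @ np.linalg.solve(M, v) + nu0 * lam0

def logdet(idx):
    # log det(X_gamma^T X_gamma + tau I); equals 0 for the empty model.
    if len(idx) == 0:
        return 0.0
    Xg = X[:, idx]
    return np.linalg.slogdet(Xg.T @ Xg + tau * np.eye(len(idx)))[1]

def conditional_inclusion_prob(i, gamma):
    # p(gamma_i = 1 | D, gamma_{-i}) via (148), (122), and (153).
    idx0 = [j for j in range(P) if j != i and gamma[j] == 1]
    idx1 = idx0 + [i]
    log_bf = (0.5 * np.log(tau)
              + 0.5 * (logdet(idx0) - logdet(idx1))
              + 0.5 * (N + nu0) * (np.log(S_gamma(idx0)) - np.log(S_gamma(idx1))))
    log_odds = np.log(h / (1.0 - h)) + log_bf
    return 1.0 / (1.0 + np.exp(-log_odds))

gamma = np.array([1, 0, 1, 0, 0, 0, 1, 0])  # hypothetical current state
print(conditional_inclusion_prob(3, gamma))
```

Working on the log scale avoids overflow in the ratio $(S_{\tilde{\gamma}_{0}}/S_{\tilde{\gamma}_{1}})^{N+\nu_{0}}$; the sketch recomputes the determinants and $S_{\gamma}$ from scratch, whereas the efficient updates are described next.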
Denote the set of included variables in $\tilde{\gamma}_{0}$ as $I=\{j:\tilde{\gamma}_{0,j}=1\}$. Define $F=(X_{\tilde{\gamma}_{0}}^{T}X_{\tilde{\gamma}_{0}}+\tau I)^{-1}$, $\nu=X^{T}Y$, and $\nu_{\tilde{\gamma}_{0}}=(\nu_{j})_{j\in I}$. Also define $A=X^{T}X$ and $a_{i}=(A_{ji})_{j\in I}$. Then, by using the same arguments as (Zanella & Roberts, 2019, Appendix B1), we can show that
$$S_{\tilde{\gamma}_{1}}=S_{\tilde{\gamma}_{0}}-d_{i}\big(\nu_{\tilde{\gamma}_{0}}^{T}Fa_{i}-\nu_{i}\big)^{2},\tag{154}$$
where $d_{i}=(A_{ii}+\tau-a_{i}^{T}Fa_{i})^{-1}$. In addition, we can compute $a_{i}^{T}Fa_{i}$ by using the Cholesky decomposition $F=LL^{T}$ and
$$a_{i}^{T}Fa_{i}=\|a_{i}^{T}L\|^{2}\tag{155}$$
$$=\sum_{j\in I}(BL)_{ij}^{2},\tag{156}$$
where $B$ is the $P\times|\tilde{\gamma}_{0}|$ matrix made of the columns of $A$ corresponding to the variables included in $\tilde{\gamma}_{0}$.
In addition, we have
$$X_{\tilde{\gamma}_{1}}^{T}X_{\tilde{\gamma}_{1}}+\tau I=\begin{pmatrix}X_{\tilde{\gamma}_{0}}^{T}X_{\tilde{\gamma}_{0}}+\tau I&a_{i}\\ a_{i}^{T}&A_{ii}+\tau\end{pmatrix}.\tag{157}$$
Hence, by using Schur's formula for the determinant of a block matrix, it is easy to see that
$$\frac{\det(X_{\tilde{\gamma}_{0}}^{T}X_{\tilde{\gamma}_{0}}+\tau I)}{\det(X_{\tilde{\gamma}_{1}}^{T}X_{\tilde{\gamma}_{1}}+\tau I)}=d_{i}.\tag{158}$$
Using this algorithm, if pre-computing $X^{T}X$ is not possible, the computational complexity per conditional PIP is $O(N|\gamma|^{2}+|\gamma|^{3}+P|\gamma|^{2})$. Otherwise, if pre-computing $X^{T}X$ is possible, the computational complexity per conditional PIP is $O(|\gamma|^{3}+P|\gamma|^{2})$.
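To make the update concrete, here is a small numerical sketch (synthetic data and hypothetical index choices; not the authors' code) that checks the rank-one update (154) and the determinant ratio (158) against direct computation:

```python
import numpy as np

rng = np.random.default_rng(1)
N, P, tau, nu0, lam0 = 50, 10, 1.0, 1.0, 1.0  # assumed values
X = rng.standard_normal((N, P))
Y = rng.standard_normal(N)
A = X.T @ X            # pre-computed Gram matrix A = X^T X
nu = X.T @ Y           # nu = X^T Y

def S_direct(idx):
    # S_gamma = Y^T Y - Y^T X_g (X_g^T X_g + tau I)^{-1} X_g^T Y + nu0*lam0, cf. (152).
    Xg = X[:, idx]
    M = Xg.T @ Xg + tau * np.eye(len(idx))
    v = Xg.T @ Y
    return Y @ Y - v @ np.linalg.solve(M, v) + nu0 * lam0

I = [0, 2, 5]   # variables included in gamma_tilde_0 (hypothetical choice)
i = 7           # variable flipped on to obtain gamma_tilde_1
F = np.linalg.inv(A[np.ix_(I, I)] + tau * np.eye(len(I)))
a_i = A[I, i]                                       # a_i = (A_{ji})_{j in I}
d_i = 1.0 / (A[i, i] + tau - a_i @ F @ a_i)

# Rank-one update (154) versus direct evaluation of S for gamma_tilde_1.
S1_update = S_direct(I) - d_i * (nu[I] @ F @ a_i - nu[i]) ** 2
print(np.allclose(S1_update, S_direct(I + [i])))    # expected: True

# Determinant ratio (158): det0 / det1 = d_i.
det0 = np.linalg.det(A[np.ix_(I, I)] + tau * np.eye(len(I)))
J = I + [i]
det1 = np.linalg.det(A[np.ix_(J, J)] + tau * np.eye(len(J)))
print(np.allclose(det0 / det1, d_i))                # expected: True
```

The sketch recomputes $F$ and the determinants from scratch for clarity, so it does not reflect the per-coordinate costs quoted above.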
## F Proof Of Lemma 9
From Lemma 8 and the fact that $\{\gamma^{(t)},Q^{(t)}\}$ forms a reversible Markov chain with transition kernel $K((\gamma,Q)\to(\gamma',Q'))=K^{*}(\gamma\to\gamma')q(Q')$, we have
$$1-\lambda_{\gamma,Q}=\inf_{g(\gamma,Q):\,\mathbb{E}_{\pi}[g]=0,\,\mathbb{E}_{\pi}[g^{2}]=1}\big\{\langle g,g\rangle_{\pi}-\langle Kg,g\rangle_{\pi}\big\}\tag{159}$$
$$=1-\sup_{g(\gamma,Q):\,\mathbb{E}_{\pi}[g]=0,\,\mathbb{E}_{\pi}[g^{2}]=1}\langle Kg,g\rangle_{\pi}\tag{160}$$
$$=1-\sup_{g(\gamma,Q):\,\mathbb{E}_{\pi}[g]=0,\,\mathbb{E}_{\pi}[g^{2}]=1}\sum_{\gamma,Q}Kg(\gamma,Q)\,g(\gamma,Q)\,\pi(\gamma,Q)\tag{161}$$
$$=1-\sup_{g(\gamma,Q):\,\mathbb{E}_{\pi}[g]=0,\,\mathbb{E}_{\pi}[g^{2}]=1}\sum_{\gamma,Q}\sum_{\gamma',Q'}K((\gamma,Q)\to(\gamma',Q'))\,g(\gamma',Q')\,g(\gamma,Q)\,\pi(\gamma,Q)\tag{162}$$
$$=1-\frac{S}{P}\sup_{g(\gamma,Q):\,\mathbb{E}_{\pi}[g]=0,\,\mathbb{E}_{\pi}[g^{2}]=1}\sum_{\gamma,Q}\sum_{\gamma',Q'}K^{*}(\gamma\to\gamma')\,q(Q')\,g(\gamma',Q')\,g(\gamma,Q)\,\pi(\gamma,Q)\tag{163}$$
$$=1-\frac{S}{P}\sup_{g(\gamma,Q):\,\mathbb{E}_{\pi}[g]=0,\,\mathbb{E}_{\pi}[g^{2}]=1}\sum_{\gamma,Q}\sum_{\gamma',Q'}K^{*}(\gamma\to\gamma')\,\frac{f(\gamma)}{Z_{f}}\,q(Q)\,g(\gamma',Q')\,g(\gamma,Q)\,q(Q')\tag{164}$$
$$=1-\frac{S}{P}\sup_{g(\gamma,Q):\,\mathbb{E}_{\pi}[g]=0,\,\mathbb{E}_{\pi}[g^{2}]=1}\sum_{\gamma,\gamma'}K^{*}(\gamma\to\gamma')\,\frac{f(\gamma)}{Z_{f}}\sum_{Q,Q'}g(\gamma',Q')\,g(\gamma,Q)\,q(Q)\,q(Q')\tag{165}$$
$$=1-\frac{S}{P}\sup_{g(\gamma,Q):\,\mathbb{E}_{\pi}[g]=0,\,\mathbb{E}_{\pi}[g^{2}]=1}\sum_{\gamma,\gamma'}K^{*}(\gamma\to\gamma')\,\pi(\gamma)\bigg(\sum_{Q}g(\gamma,Q)q(Q)\bigg)\bigg(\sum_{Q'}g(\gamma',Q')q(Q')\bigg)\tag{166}$$
$$=1-\frac{S}{P}\sup_{g(\gamma,Q):\,\mathbb{E}_{\pi}[g]=0,\,\mathbb{E}_{\pi}[g^{2}]=1}\sum_{\gamma,\gamma'}K^{*}(\gamma\to\gamma')\,\pi(\gamma)\,h(\gamma)\,h(\gamma'),\tag{167}$$
where
$$\pi(\gamma)=\frac{f(\gamma)}{Z_{f}},\tag{168}$$
$$Z_{f}=\sum_{\gamma}f(\gamma),\tag{169}$$
$$h(\gamma):=\sum_{Q}g(\gamma,Q)q(Q).\tag{170}$$
Observe that
$$\mathbb{E}_{\pi}[h(\gamma)]=\sum_{\gamma}h(\gamma)\pi(\gamma)\tag{171}$$
$$=\sum_{\gamma}\sum_{Q}g(\gamma,Q)q(Q)\pi(\gamma)\tag{172}$$
$$=\sum_{\gamma,Q}g(\gamma,Q)\pi(\gamma,Q)\tag{173}$$
$$=\mathbb{E}_{\pi}[g(\gamma,Q)]\tag{174}$$
$$=0.\tag{175}$$
On the other hand, we also have
$$\mathbb{E}_{\pi}\big[h^{2}(\gamma)\big]=\sum_{\gamma}\bigg(\sum_{Q}g(\gamma,Q)q(Q)\bigg)^{2}\pi(\gamma)\tag{176}$$
$$\leq\sum_{\gamma}\bigg(\sum_{Q}g(\gamma,Q)^{2}q(Q)\bigg)\pi(\gamma)\tag{177}$$
$$=\sum_{\gamma,Q}g(\gamma,Q)^{2}\pi(\gamma,Q)\tag{178}$$
$$=\mathbb{E}_{\pi}\big[g(\gamma,Q)^{2}\big]\tag{179}$$
$$=1,\tag{180}$$
where (177) follows from the convexity of the function $x^{2}$ on $[0,\infty)$ (Jensen's inequality applied to the distribution $q$).
From (175), (180), and (167), we obtain
$$1-\lambda_{\gamma,Q}\geq1-\frac{S}{P}\sup_{h(\gamma):\mathbb{E}_{\pi}[h]=0,\mathbb{E}_{\pi}[h^{2}]\leq1}\sum_{\gamma,\gamma^{\prime}}K^{*}(\gamma\to\gamma^{\prime})\pi(\gamma)h(\gamma)h(\gamma^{\prime}).\tag{181}$$
Now, note that $\mathbb{E}_{\pi}[h]=0$ is equivalent to $h\perp_{\pi}\mathbf{1}$. Let $|\Omega|=2^{P+1}:=n$ and let $h_{1},h_{2},\cdots,h_{n}$ be eigenfunctions of $K^{*}$ corresponding to the eigenvalues in decreasing order $\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{n}$; since $K^{*}$ is self-adjoint, they can be taken orthonormal in $\langle\cdot,\cdot\rangle_{\pi}$. Set $h_{1}=\mathbf{1}$. Since $\|h\|_{2,\pi}=1$ and $h\perp_{\pi}\mathbf{1}$, we can write $h=\sum_{j=2}^{n}a_{j}h_{j}$, because $h$ is orthogonal to $h_{1}$ and hence lies in the span of the remaining eigenfunctions. Taking the norm on both sides gives $\sum_{j=2}^{n}a_{j}^{2}\leq1$, since $\langle h_{i},h_{j}\rangle_{\pi}=0$ for $i\neq j$ and $\langle h_{i},h_{i}\rangle_{\pi}=\|h_{i}\|_{2,\pi}^{2}=1$. Thus,
$$\sup_{h:\mathbb{E}_{\pi}[h]=0,\mathbb{E}_{\pi}[h^{2}]\leq1}\sum_{\gamma,\gamma^{\prime}}K^{*}(\gamma\to\gamma^{\prime})\pi(\gamma)h(\gamma)h(\gamma^{\prime})\leq\max_{a_{2},a_{3},\cdots,a_{n}:\,\sum_{j=2}^{n}a_{j}^{2}\leq1}\sum_{j=2}^{n}a_{j}^{2}\lambda_{j}\tag{182}$$
$$\leq\lambda_{2}\sum_{j=2}^{n}a_{j}^{2}\tag{183}$$
$$=\lambda_{2},\tag{184}$$
where $\sum_{j=2}^{n}a_{j}^{2}\leq1$ and $\lambda_{j}\in\mathrm{spec}(K^{*})$ with $\lambda_{2}\geq\lambda_{3}\geq\cdots\geq\lambda_{n}$. Hence, from (184), we obtain
$$1-\lambda_{\gamma,Q}\geq1-\frac{S}{P}\lambda_{2}\tag{185}$$
$$=\frac{S}{P}(1-\lambda_{2})+1-\frac{S}{P}\tag{186}$$
$$\geq1-\frac{S}{P}.\tag{187}$$
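As a numerical illustration of the Rayleigh-quotient argument behind (182)-(184), the following minimal sketch (a toy reversible chain, not the paper's sampler) checks that $\langle K^{*}h,h\rangle_{\pi}\leq\lambda_{2}$ for a random $h$ satisfying $\mathbb{E}_{\pi}[h]=0$ and $\mathbb{E}_{\pi}[h^{2}]=1$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8                                   # toy state space size
f = rng.uniform(0.5, 2.0, size=n)
pi = f / f.sum()                        # target distribution pi = f / Z_f

# Reversible Metropolis kernel K* with uniform proposal (self-adjoint w.r.t. pi).
K = np.zeros((n, n))
for x in range(n):
    for y in range(n):
        if y != x:
            K[x, y] = min(1.0, pi[y] / pi[x]) / n
    K[x, x] = 1.0 - K[x].sum()

# Spectrum of K* in the pi-weighted inner product, read off via symmetrization.
D = np.diag(np.sqrt(pi))
Dinv = np.diag(1.0 / np.sqrt(pi))
lams = np.sort(np.linalg.eigvalsh(D @ K @ Dinv))[::-1]
lam2 = lams[1]                          # second-largest eigenvalue

# Random h with E_pi[h] = 0 and E_pi[h^2] = 1; check <K*h, h>_pi <= lambda_2.
h = rng.standard_normal(n)
h -= np.dot(pi, h)                      # center under pi
h /= np.sqrt(np.dot(pi, h**2))          # normalize under pi
rayleigh = np.dot(pi * h, K @ h)        # <K*h, h>_pi = sum_x pi(x) h(x) (K*h)(x)
print(rayleigh <= lam2 + 1e-12)         # expected: True
```

The symmetrization $D^{1/2}K^{*}D^{-1/2}$ with $D=\mathrm{diag}(\pi)$ is only a convenient way to read off the spectrum of a reversible kernel; it is an implementation choice, not part of the proof.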