context (string, 100–12k chars) | A (string, 100–5.1k chars) | B (string, 100–6.02k chars) | C (string, 100–4.6k chars) | D (string, 100–4.68k chars) | label (4 classes)
---|---|---|---|---|---
In this work, we proposed a new approach to VAR modeling and forecasting by generating trends as well as model parameters using an LSTM network and the associated deep learning methodology for exact maximum likelihood estimation. A simulation study demonstrated the effectiveness of the proposed approach.
|
The DeepVARwT model outperformed the other models for the federal funds rate. It gave the best prediction intervals for the GDP gap over h=1:4 and h=1:8, while placing second in prediction accuracy. Its performance on inflation is similar, with a slight drop to second place in terms of SIS over h=1:4.
|
Section 4 shows results of model fitting to three data sets and comparisons with alternative models in terms of forecasting accuracy.
|
Three examples with real data are provided to show that it competes well with existing models in terms of prediction performance.
|
From Fig. 9, we can observe obvious trends in the three series. [11] assumed that the trends in the Northern and Southern Hemispheres series are deterministic and modelled the local changes in data using a vector shifting-mean autoregressive model with order p=3. We continue to fit a DeepVARwT(3) model to the three series and make predictions h=1,2,...,6 steps ahead of T=147. As with our first real data application, this is repeated 19 times, each time moving the training sample forward by one time point.
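To make the rolling-origin evaluation concrete, here is a minimal sketch of the scheme described above; the simulated data and the naive last-value forecaster are placeholders (the study itself fits a DeepVARwT(3) model to the hemispheric temperature series):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=(147 + 6 + 18, 3))     # placeholder for the three temperature series
T, H, n_rep = 147, 6, 19                   # initial training length, horizon, replications

errors = np.zeros((n_rep, H, y.shape[1]))
for r in range(n_rep):
    train = y[: T + r]                     # training window moved forward by one point
    test = y[T + r : T + r + H]            # next H observations
    forecast = np.tile(train[-1], (H, 1))  # naive stand-in for the DeepVARwT(3) forecasts
    errors[r] = test - forecast

print(np.sqrt((errors ** 2).mean(axis=(0, 2))))  # RMSE per forecast horizon
```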
|
C
|
$$\frac{2\hat{\sigma}^{2}\left(\alpha\,\mathrm{KL}\!\left(P_{W,Z_{i}}\,\|\,\hat{P}_{W,Z_{i}}\right)+(1-\alpha)\,\mathrm{KL}\!\left(P_{W}P_{Z_{i}}\,\|\,\hat{P}_{W,Z_{i}}\right)\right)}{\alpha(1-\alpha)}.$$
|
We have presented the Auxiliary Distribution Method, a novel approach for deriving information-theoretic upper bounds on the generalization error within the context of supervised learning problems. Our method offers the flexibility to recover existing bounds while also enabling the derivation of new bounds grounded in the α-JS and α-Rényi information measures. Notably, our upper bounds, which are rooted in the α-JS information measure, are finite, in contrast to mutual information-based bounds. Moreover, our upper bound based on α-Rényi information, for α ∈ (0,1), remains finite when considering a deterministic learning process. An intriguing observation is that our newly introduced α-JS information measure can, in certain regimes, yield tighter bounds compared to existing approaches. We also discuss the existence of algorithms under α-JS-regularized and α-Rényi-regularized empirical risk minimization problems and provide upper bounds on excess risk of these algorithms, where the upper bound on the excess risk under α-JS-regularized empirical risk minimization is tighter than other well-known upper bounds on excess risk. Furthermore, we provide an upper bound on generalization error in a mismatch scenario, where the distributions of test and training datasets are different, via our auxiliary distribution method.
|
We propose a Lemma connecting certain KL divergences to the α-JS information.
|
We next compare the upper bounds based on α-JS information, Theorem 2, with the upper bounds based on α-Rényi information, Theorem 3. The next proposition showcases that the α-JS information bound can be tighter than the α-Rényi-based upper bound under certain conditions. The proof details are deferred to Appendix C.
|
Akin to Proposition 1, the result in Proposition 5 paves the way to offer a new, tighter expected generalization error upper bound by ADM. We next offer a Lemma connecting certain KL divergences to the α-Rényi information [35, Theorem 30].
|
B
|
We list some assumptions for later reference. We briefly explain the meaning and implications of each of them. In what follows, $k$ is a kernel, $\{k_{\lambda} : \lambda \in \Lambda\}$ a family of kernels (which might come from different parametric families), and $\mathrm{P}$ and $\mathrm{Q} \in \mathcal{M}_{\mathrm{p}}(\mathcal{X})$, Borel probability measures defined on a space $\mathcal{X}$. In what follows we use the standard notation in functional analysis and operator theory; for $k_{1}$ and $k_{2}$ positive definite kernels on $\mathcal{X}$, we denote $k_{1} \ll k_{2}$ if and only if $k_{2}-k_{1}$ is a positive definite kernel; see [2, Part I.7].
|
To present the contributions of this paper, we briefly refer to some important, mutually related, technical notions. As emphasized in [6], Reproducing Kernel Hilbert Spaces (RKHS in short) provide an excellent environment to construct helpful transformations in several statistical problems. Given a topological space $\mathcal{X}$ (in many applications $\mathcal{X}$ is a subset of a Hilbert space), a kernel $k$ is a real non-negative semidefinite symmetric function on $\mathcal{X} \times \mathcal{X}$. The RKHS associated with $k$, denoted in the following by $\mathcal{H}_{k}$, is the Hilbert space generated by finite linear combinations of type $\sum_{j} \alpha_{j}\, k(x_{j}, \cdot)$; see Section 2 for additional details.
|
Regularity assumption. $\mathcal{X}$ is a separable metric space and each kernel is continuous as a real function of one variable (with the other kept fixed).
|
Let H𝐻Hitalic_H be a real and separable Hilbert space. Let us consider a linear and continuous operator T:H→𝒞b(𝒳):𝑇→𝐻subscript𝒞𝑏𝒳T:H\rightarrow\mathcal{C}_{b}(\mathcal{X})italic_T : italic_H → caligraphic_C start_POSTSUBSCRIPT italic_b end_POSTSUBSCRIPT ( caligraphic_X ), where 𝒞b(𝒳)subscript𝒞𝑏𝒳\mathcal{C}_{b}(\mathcal{X})caligraphic_C start_POSTSUBSCRIPT italic_b end_POSTSUBSCRIPT ( caligraphic_X ) is the space of real bounded continuous functions on 𝒳𝒳\mathcal{X}caligraphic_X endowed with the supremum norm. If BHsubscript𝐵𝐻B_{H}italic_B start_POSTSUBSCRIPT italic_H end_POSTSUBSCRIPT is the unit ball in H𝐻Hitalic_H, then the class B=T(BH)𝐵𝑇subscript𝐵𝐻B=T\left(B_{H}\right)italic_B = italic_T ( italic_B start_POSTSUBSCRIPT italic_H end_POSTSUBSCRIPT ) is universal Donsker.
|
Let $\mathcal{M}_{\mathrm{p}}(\mathcal{X})$ be the set of (Borel) probability measures on $\mathcal{X}$. Under mild assumptions on $k$, the functions in $\mathcal{H}_{k}$ are measurable and $\mathrm{P}$-integrable, for each $\mathrm{P} \in \mathcal{M}_{\mathrm{p}}(\mathcal{X})$. Moreover, it can be checked that the function $\mu_{\mathrm{P}}$ in (1) belongs to $\mathcal{H}_{k}$. The transformation $\mathrm{P} \mapsto \mu_{\mathrm{P}}$ from $\mathcal{M}_{\mathrm{p}}(\mathcal{X})$ to $\mathcal{H}_{k}$ is called the (kernel) mean embedding; see [28] and [6, Chapter 4]. The mean embedding of $\mathrm{P}$ can be viewed as a smoothed version of the distribution of $\mathrm{P}$ through the kernel $k$ within the RKHS. This is evident when $\mathrm{P}$ is absolutely continuous with density $f$ and $k(x,y) = K(x-y)$, for some real function $K$. In this situation, $\mu_{\mathrm{P}}$ is the convolution of $f$ and $K$. On the other hand, mean embeddings appear, under the name of potential functions, in some other mathematical fields (such as functional analysis); see [20, p. 15].
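As a concrete illustration, a minimal sketch of the empirical mean embedding $\hat{\mu}_{\mathrm{P}}(\cdot)=\frac{1}{n}\sum_{i} k(X_i,\cdot)$ with a Gaussian kernel; the sample, bandwidth, and evaluation grid are illustrative choices, not taken from the paper:

```python
import numpy as np

def gaussian_kernel(x, t, bandwidth=0.5):
    # Shift-invariant kernel k(x, t) = K(x - t) with K a Gaussian bump
    return np.exp(-((x - t) ** 2) / (2 * bandwidth ** 2))

rng = np.random.default_rng(1)
sample = rng.normal(size=200)            # i.i.d. draws from P

# Empirical mean embedding on a grid: a kernel-smoothed image of P
grid = np.linspace(-4, 4, 9)
mu_hat = np.array([gaussian_kernel(sample, t).mean() for t in grid])
print(np.round(mu_hat, 3))
```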
|
B
|
Hierarchical MA-IDM (car $\boldsymbol{\theta}_{\#273}$)
|
[18.930, 3.416, 2.585, 0.383, 1.649]
|
∼10 sec as input. However, most existing probabilistic calibration methods are developed based on a simple assumption—the errors are independent and identically distributed (i.i.d.), and as a result, ignoring the autocorrelation in the residuals will lead to biased calibration [8, 9]. For example, Fig. 1 shows the residual process in acceleration ($a$), speed ($v$), and gap ($s$) when calibrating an IDM model with i.i.d. noise, and we can see that the residuals have strong serial correlation (i.e., autocorrelation). Essentially, two classes of methods are developed in the literature to model serial correlation. One approach is to directly process the time series data to eliminate serial correlations. For instance, Hoogendoorn and Hoogendoorn [3] adopted a difference transformation (see [10]) to eliminate the serial correlation. However, they used empirical correlation coefficients to perform the transformation instead of jointly learning the IDM parameters and the correlations. Another approach is to explicitly model the serial correlations (e.g., by stochastic processes); for example, Treiber and Kesting [11] introduced the Wiener process to model the temporally correlated error process.
|
In general, there are two ways to perform model estimation in the presence of serial correlations: (1) by directly processing the nonstationary data and eliminating serial correlations (e.g., performing the differencing operation), so that one can safely ignore the model inadequacy function and obtain stationary time series; or (2) by explicitly modeling the serial correlations based on specific model inadequacy functions. For instance, Hoogendoorn [3] performed a differencing transformation to eliminate serial correlations, after which the autocorrelation coefficients did not differ significantly from zero under the previously mentioned Durbin–Watson test [18]. However, the information conveyed by the serial correlations is directly discarded, which prevents us from modeling the generative processes of observations. Another way is to explicitly model the formation of serial correlations based on further assumptions. For example, dynamic regression models combine linear regression and autoregressive integrated moving average (ARIMA) models into a single regression model to forecast time series data [19]. For the main scope of this paper, i.e., calibration and simulation of car-following models, we use GPs [20] to model serially correlated errors (i.e., for the model inadequacy part). GPs provide a solid statistical solution to learn the autocorrelation structure, and more importantly, they allow us to understand the temporal effect in driving behavior through the lengthscale parameter $l$, which partially explains the memory effect (see [21]) of human driving behaviors.
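A minimal sketch of this idea, assuming a squared-exponential GP kernel for the residual process (kernel form, noise level, and lengthscale are illustrative, not the paper's calibrated values):

```python
import numpy as np

def se_kernel(t1, t2, sigma=0.3, lengthscale=2.0):
    # Squared-exponential covariance; the lengthscale l controls how long the
    # autocorrelation of the residual process persists (the "memory" effect).
    return sigma ** 2 * np.exp(-0.5 * ((t1[:, None] - t2[None, :]) / lengthscale) ** 2)

t = np.arange(0.0, 20.0, 0.1)                   # 10 Hz time stamps over 20 seconds
K = se_kernel(t, t) + 1e-8 * np.eye(t.size)     # jitter for numerical stability
rng = np.random.default_rng(2)
gp_residuals = rng.multivariate_normal(np.zeros(t.size), K)   # serially correlated errors
iid_residuals = rng.normal(0.0, 0.3, size=t.size)             # the usual i.i.d. assumption
print(np.round(gp_residuals[:5], 3), np.round(iid_residuals[:5], 3))
```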
|
A fundamental question in performing probabilistic calibration is how to define the probabilistic model and data generation process. Most existing car-following models seek parsimonious structures by simply taking observations from the most recent (i.e., only one) step as input to generate acceleration/speed as output for the current step. However, given physical inertia, delay in reaction, and missing important covariates (e.g., car-following models, in general, do not take the status of the braking light of the leading vehicle as an input variable), we should expect the unexplained behavior (i.e., the discrepancy between predicted acceleration/speed and observed acceleration/speed) to be temporally correlated [6, 7]. For instance, Wang et al. [7] show that the best-performing deep learning model essentially takes states/observations from the most recent
|
A
|
$$+\,\log(1+\omega)-\frac{\log(\omega)}{\kappa}-\frac{2\omega}{1+\omega}+\frac{\log(\eta)}{\kappa}.$$
|
$P(\sigma^{2})$
|
$\overline{C}^{\prime}(\sigma^{2})$
|
$\overline{C}(\sigma^{2})$
|
$\overline{C}(\sigma^{2})$
|
A
|
We use $\|\cdot\|_{\mathcal{U}}$ to denote the norm associated with a space $\mathcal{U}$.
|
We use $\|\cdot\|_{\mathcal{U}}$ to denote the norm associated with a space $\mathcal{U}$.
|
defined as $\|L\|_{\mathcal{U}\to\mathcal{V}} = \max_{u:\|u\|_{\mathcal{U}}\leq 1}\|Lu\|_{\mathcal{V}}$. For convenience, sometimes we simply use $\|\cdot\|$ when the
|
meaning is clear from the context. For a finite-dimensional matrix we use $\|\cdot\|_{F}$ and $\|\cdot\|_{2}$ to denote its Frobenius norm and operator norm
|
The operator norm for a linear operator $L$ from space $\mathcal{U}\to\mathcal{V}$ is
|
D
|
[Figure: percentage of images over the threshold (%) versus Boomerang distance $t_{\text{Boom}}/T$ (%).]
|
As Boomerang distance ($t_{\text{Boom}}$) increases,
|
to measure the variability of Boomerang-generated images as $t_{\text{Boom}}$ is changed. As an expression of this variability, we consider the distribution of samples generated through the Boomerang procedure conditioned on the associated noisy input image at step $t_{\text{Boom}}$, i.e., $p_{\phi}(\boldsymbol{x}_{0}^{\prime} \mid \boldsymbol{x}_{t_{\text{Boom}}})$.
|
$\mathcal{N}(\sqrt{\alpha_{t_{\text{Boom}}}}\,\mathbf{x}_{0},\, (1-\alpha_{t_{\text{Boom}}})\mathbf{I})$, with $\mathbf{x}_{0} \sim p(\mathbf{x}_{0})$, which is equal to the forward diffusion process distribution at step $t_{\text{Boom}}$, denoted as $q(\mathbf{x}_{t_{\text{Boom}}} \mid \mathbf{x}_{0})$ (recall Equation 2). Given that the diffusion model is well-trained, we can expect that its output matches the original image distribution regardless of the step at which the reverse process is initiated, as long as the same forward diffusion process noise schedule is used. Equation 7 suggests that the density of Boomerang-generated images is proportional to the density of a Gaussian distribution with covariance $(1-\alpha_{t_{\text{Boom}}})\boldsymbol{I}$ times the clean image density $p(\boldsymbol{x}_{0})$. In other words, the resulting density will have very small values far away from the mean of the Gaussian distribution. In addition, the high probability region of $p_{\phi}(\boldsymbol{x}_{0}^{\prime} \mid \boldsymbol{x}_{t_{\text{Boom}}})$ grows as $1-\alpha_{t_{\text{Boom}}}$ becomes larger. This quantity monotonically increases as $t_{\text{Boom}}$ goes from one to $T$ since $\alpha_{t} = \prod_{i=1}^{t}(1-\beta_{i})$ and $\beta_{i} \in (0,1)$. As a result, we expect the variability in Boomerang-generated images to increase as we run Boomerang for larger $t_{\text{Boom}}$ steps.
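A small numerical sketch of this monotonicity, using a generic linear $\beta$ schedule (an assumption here; the actual schedule depends on the pretrained model):

```python
import numpy as np

T = 1000
beta = np.linspace(1e-4, 0.02, T)      # assumed linear noise schedule
alpha = np.cumprod(1.0 - beta)         # alpha_t = prod_{i<=t} (1 - beta_i)

def forward_diffuse(x0, t_boom, rng):
    # x_{t_Boom} ~ N(sqrt(alpha_t) * x0, (1 - alpha_t) * I): the Boomerang input
    a = alpha[t_boom - 1]
    return np.sqrt(a) * x0 + np.sqrt(1.0 - a) * rng.standard_normal(x0.shape)

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))       # toy stand-in for a clean image
for t_boom in (50, 250, 750):
    print(t_boom, round(float(1.0 - alpha[t_boom - 1]), 3))   # variance grows with t_Boom
```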
|
Increasing $t_{\text{Boom}}$ also increases the
|
D
|
$A \in \mathbb{R}^{n_{r} \times n_{c}}$
|
$BiMMDF(n, 2, \Pi_{r}, \Pi_{c}, \alpha_{\mathrm{in}}, \alpha_{\mathrm{out}}, \mathcal{F})$, $\alpha_{\mathrm{in}}$ and $\alpha_{\mathrm{out}}$ range in $(0,+\infty)$ when $\mathcal{F}$ is the Poisson distribution. Setting $\gamma=1$ in Equation (2) gives
|
$\|M\|_{2\rightarrow\infty}$
|
$BiMMDF(n, 2, \Pi_{r}, \Pi_{c}, \alpha_{\mathrm{in}}, \alpha_{\mathrm{out}}, \mathcal{F})$, $\alpha_{\mathrm{in}}$ and $\alpha_{\mathrm{out}}$ range in $(-\infty,+\infty)$, and we also have $\rho = \max(|p_{\mathrm{in}}|, |p_{\mathrm{out}}|) = \frac{\log(n)}{n}\max(|\alpha_{\mathrm{in}}|, |\alpha_{\mathrm{out}}|)$
|
$BiMMDF(n, 2, \Pi_{r}, \Pi_{c}, \alpha_{\mathrm{in}}, \alpha_{\mathrm{out}}, \mathcal{F})$, we set $\rho P$ as
|
B
|
In this work, we perform a comprehensive empirical study (the code repository can be accessed here: github.com/divyat09/cate-estimator-selection) over 78 datasets to understand the efficacy of 34 surrogate metrics for conditional average treatment effect (CATE) model selection, where the model selection task is made challenging by training a large number of estimators (415 CATE estimators) for each dataset. Our evaluation framework encourages unbiased evaluation of surrogate metrics by proper tuning of their nuisance models using AutoML (Wang et al., 2021), which were chosen in a limited manner even in recent benchmarking studies (Curth & van der Schaar, 2023). We also provide a novel two-level model selection strategy based on careful hyperparameter selection for each class of meta-estimators, and causal ensembling, which improves the performance of several surrogate metrics significantly.
|
To ensure we have reliable conclusions, unlike prior works, we also make use of recent advances in generative modeling for causal inference (Neal et al., 2020) to include realistic benchmarks in our analysis. Further, we introduce several new surrogate metrics inspired by other related strands of the literature such as TMLE, policy learning, calibration, and uplift modeling.
|
This has led to several techniques that estimate flexible and accurate models of heterogeneous treatment effects. These approaches range from adapting neural networks (Shi et al., 2019) to random forests (Wager & Athey, 2018), along with frameworks like double machine learning (Chernozhukov et al., 2016; Foster & Syrgkanis, 2019; Nie & Wager, 2021), instrumental variables (Hartford et al., 2017), meta learners (Künzel et al., 2019), etc. But how do we select between the different estimators? While in some situations we can choose between the estimators based on domain knowledge and application requirements, it is desirable to have a model-free approach for model selection. Further, the commonly used practice of cross-validation in supervised learning problems (Bengio et al., 2013) cannot be used for model selection in causal inference, as we never observe both of the potential outcomes for an individual (fundamental problem of causal inference (Holland, 1986)).
|
We also propose a variety of new metrics that are based on blending ideas from other strands of the literature and which have not been examined in prior works. The primary reason for including these new metrics was to have a more comprehensive evaluation, not necessarily to beat the prior metrics.
|
We work with the ACIC 2016 (Dorie et al., 2019) benchmark, where we discard datasets that have variance in the true CATE lower than 0.01 to ensure heterogeneity, which leaves us with 75 datasets from the ACIC 2016 competition. Further, we incorporate three realistic datasets, LaLonde PSID, LaLonde CPS (LaLonde, 1986), and Twins (Louizos et al., 2017), using RealCause. For each dataset, the CATE estimator population comprises 7 different types of meta-learners, where the nuisance models ($\hat{\eta}$) are learned using AutoML (Wang et al., 2021). For the CATE predictor ($\hat{f}$) in direct meta-learners, we allow for multiple choices with variation across the regression model class and hyperparameters, resulting in a diverse collection of estimators for each direct meta-learner. Even the most recent benchmarking study by Curth & van der Schaar (2023) did not consider a large range of hyperparameters for direct meta-learners, while we make the task of model selection more challenging with a larger grid of hyperparameters. For the set of surrogate metrics, we incorporate all the metrics used in prior works and go beyond them to consider various modifications, along with the novel metrics described in Section 4. As stated before in Section 3, we use AutoML for selecting the nuisance models ($\check{\eta}$) of surrogate metrics on the validation set. More details regarding the experiment setup can be found in Appendix C.
|
A
|
Statistical learning methods based on kernels are successfully applied in many fields, including biology [56], social sciences [20], physics [44], and astronomy [14, 62]. Given their nonparametric nature and many theoretical guarantees, kernel methods allow for principled modeling of complex relationships in real-world data.
|
These methods have also sparked extensive methodological research in statistics, leading to the development of kernel-based tools for feature selection [22, 25], causal inference [68], hypothesis testing [24, 37, 58, 54], and privacy [5]. In particular, cornerstones of machine learning, such as kernel ridge regression, Gaussian processes [50], kernel principal component analysis [55], support vector machines [10], and neural tangent kernels [23] are based on kernels.
|
Our work also overlaps with the literature on kernel learning [69, 59, 39, 35, 32, 30, 16]. Nonetheless, kernel learning is often based on alignment objectives, which seek to produce a meaningful representation of the data using kernels [59]. In this scenario, even after learning the kernel, users would need to train a kernel-based predictive model to produce outputs for given labeled data. Within the literature on kernel learning, Automated Spectral Kernel Learning (ASKL) [35] has the most resemblance to RFFNet. However, ASKL does not update the lengthscales of the kernel it seeks to learn and thus falls short of removing the influence of irrelevant features. Consequently, it is not possible to substantiate interpretative claims based on ASKL-learned spectral density frequencies.
|
For regression, we compared RFFNet to kernel ridge regression (KRR) with an isotropic Gaussian kernel, approximate kernel ridge regression with Fastfood [34] and Nyström [65, 64, 13] feature maps for the isotropic Gaussian kernel, Gaussian Processes Regression (GPR) [64] with an ARD Gaussian kernel, EigenPro regressor [40] with a Gaussian isotropic kernel, and to Sparse Random Fourier Features (SRFF) [18] with a Gaussian kernel.
|
Automatic Relevance Determination (ARD) kernels [43] are widely used for variable selection in Bayesian regression (e.g., Gaussian processes [50, 12], sparse Bayesian learning [61]), support vector machines [27, 17, 1], and self-penalizing objectives [25, 51]. This family of kernels, which includes the usual Gaussian, Laplace, Cauchy, and Matérn kernels, is generated by introducing continuous feature weights in shift-invariant kernels, adding a layer of interpretability atop them by controlling how features contribute to the kernel value.
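To illustrate, a minimal sketch of one common ARD parametrization of the Gaussian kernel (the weights and inputs are made up for the example):

```python
import numpy as np

def ard_gaussian_kernel(x, y, weights):
    # ARD Gaussian kernel k(x, y) = exp(-sum_j (w_j * (x_j - y_j))^2):
    # a weight w_j near zero effectively removes feature j from the kernel value.
    return float(np.exp(-np.sum((weights * (x - y)) ** 2)))

x = np.array([0.2, 1.5, -0.7])
y = np.array([0.1, -2.0, -0.7])
print(ard_gaussian_kernel(x, y, np.array([1.0, 1.0, 1.0])))   # all features contribute
print(ard_gaussian_kernel(x, y, np.array([1.0, 0.0, 1.0])))   # feature 2 marked irrelevant
```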
|
A
|
We consider a multivariate Gaussian model $\mathbb{y}_{i} \sim N(\boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i})$, where $\boldsymbol{\mu}_{i}$ and $\boldsymbol{\Sigma}_{i}$ are controlled by the linear predictors vector $\boldsymbol{\eta}_{i}$, as described in Section 2.1, and each element of $\boldsymbol{\eta}_{i}$ is modelled via (1).
|
The proposed model has $q=119$ linear predictors and each of them could be modelled via any of the covariates described above. Hence, model selection is challenging. As explained in Section 3.2, we use 2014-2016 data to generate a long list of candidate covariate effects, ordered in terms of decreasing importance. We choose the number of effects to add to the final multivariate Gaussian model, that is, where to stop along the ordered effect list, by maximising the out-of-sample predictive performance on 2017 net-demand. Having chosen the model structure, in Section 3.3 we explore the model output, and in Sections 3.4 and 3.5 we evaluate the accuracy of the resulting forecasts on 2018 data.
|
a user-defined importance threshold. In contrast, we use the out-of-sample predictive performance to determine the number of effects to include in the final model and we fit the latter using the methods from Section 2.2, rather than boosting.
|
The rest of the paper is structured as follows. Section 2 introduces, in a general setting, the proposed multivariate Gaussian model structure and fitting methodology. It also summarises the inferential framework and motivates the use of ALEs for model output exploration. Section 3 focuses on the regional net-demand modelling application. In particular, the data is introduced in Section 3.1, while Section 3.2 describes the bespoke, boosting-based model selection approach proposed here. The output of the final model is explored in Section 3.3, while the forecasting performance of the proposed model is assessed in Sections 3.4 and 3.5. Section 4 summarises the main results.
|
the set of candidate effects that could be used to model each parameter. Then, we use gradient boosting (Friedman, 2001) to order the effects on the basis of how much they improve the fit, and we choose the number of effects modelling the MCD elements by maximising the forecasting performance on a validation set. The results show that the semi-automatic effect selection procedure just outlined leads to satisfactory predictive performance and to model selection decisions that are largely in agreement with intuition (e.g., wind speed and solar irradiance are selected to model net-demand variability in, respectively, Scotland and the South of England).
|
A
|
$$e := \frac{\delta\,(Y-\mathbf{X}^{\top}\boldsymbol{\beta})}{\sqrt{\mathbb{E}_{\mathbb{P}}\!\left[(Y-\mathbf{X}^{\top}\boldsymbol{\beta})^{2}\right]}}, \qquad (\mathbf{X}, Y) \sim \mathbb{P}.$$
|
Note that the worst-case distribution is an element of $B^{\textrm{SLOPE}}_{\delta}(\mathbb{P})$.
|
One aspect of Corollary 1 that is worth emphasizing is that the testing distribution that attains the worst out-of-sample performance is an additive perturbation of the baseline training distribution. The perturbation has a low-dimensional structure where a one-dimensional error, $e$, which is proportional to the prediction error, $Y - \mathbf{X}^{\top}\boldsymbol{\beta}$, is added to $\mathbf{X}$ using loadings that depend on the subgradient of $\rho$ at $\boldsymbol{\beta}$ and also on the conjugate of $\rho$.
|
It is worth mentioning that the set $B^{\sqrt{\mathrm{LASSO}}}_{\delta}(\mathbb{P})$ contains different versions of $(\mathbf{X}, Y)$ measured with error. For example, any additive measurement error model of the form
|
The worst-case mean-squared error of $\sqrt{\mathrm{LASSO}}$ is attained at distributions where there is a (possibly correlated) measurement error that has a factor structure. Note that the worst-case distribution is an element of (21).
|
D
|
More recently, an exact high-dimensional analysis for generalized linear models was carried out in [LL20, MM19]. These works focus on the regime of interest in this paper: $n$ and $d$ growing at a proportional rate $\delta \coloneqq \lim \frac{n}{d}$. This sharp analysis allows for the optimization of the preprocessing function: the choice of $\mathcal{T}$ minimizing the value of $\delta$ (and, hence, the amount of data) needed to achieve a strictly positive overlap was provided in [MM19]; furthermore, the choice of $\mathcal{T}$ maximizing the overlap was provided in [LAL19]. Going beyond the proportional regime in which $n$ is linear in $d$, bounds on the sample complexity required for moment methods (including spectral) to achieve non-vanishing overlap were recently obtained in [DPVLB24]. The aforementioned analyses assume a Gaussian design matrix.
|
Let $O \sim \operatorname{Haar}(\mathbb{O}(d))$ be a matrix sampled uniformly from the orthogonal group $\mathbb{O}(d)$ and independent of everything else.
|
Going beyond this assumption, [DBMM20] provides precise asymptotics for design matrices sampled from the Haar distribution, and [MKLZ22] studies rotationally invariant designs.
|
Thus, Theorem I.6 provides us with a characterization of the limiting spectral distribution of $\widetilde{E}_{i}$:
|
A finite-sample analysis which allows the number of iterations to grow roughly as $\log n$ ($n$ being the ambient dimension) was put forward in [RV18], and the recent paper [LW22] improves this guarantee to a linear (in $n$) number of iterations. This could potentially allow the study of settings in which $\delta = n/d$ approaches the spectral threshold. The works on AMP discussed above all assume i.i.d. Gaussian matrices. A number of recent papers have proposed generalizations of AMP for the much broader class of rotationally invariant matrices, e.g., [OCW16, MP17, RSF19, Tak20, ZSF21, Fan22, MV21b, VKM22].
|
B
|
$$\bar{Y}(x) = \frac{Y(x)-\hat{y}(x)}{\hat{e}(x)} \sim \mathcal{N}(0,1)$$
|
is uniquely determined from a countable set of points (such as $X \cap \mathbb{Q}^{n}$),
|
In the extremely unlikely case where some configurations $x$ belong to $S \cap T$ after generating $T$, they should not be used, or $T$ should be resampled, as
|
Measure the set $T$ and build $\bar{T}$ by recording the measured values,
|
Choose a set $T$ such that $T \cap S = \emptyset$ and $T$ contains 50 elements
|
D
|
It is observed that the estimate $\hat{\rho}_{m}$ is significantly biased compared to the true density. However, by utilizing the density surrogate that incorporates both moments, we achieve an estimate with a considerably reduced error. This outcome is remarkable since the density surrogate only requires 10 parameters and does not rely on any knowledge of $\rho_{x_{t+1} \mid \mathcal{Y}_{t}}(x)$, such as the number of modes or the specific functional form.
|
The probability density functions of the Gumbel and Gaussian distributions are illustrated in Figure 4. The probability density function of the Gumbel distribution is given by
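For reference, under the standard location-scale parametrization (an assumption here; the paper's exact parametrization may differ), the Gumbel density reads
$$f(x;\mu,\beta) = \frac{1}{\beta}\exp\!\left(-\frac{x-\mu}{\beta} - \exp\!\left(-\frac{x-\mu}{\beta}\right)\right).$$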
|
Now that the power and generalized logarithmic moments are defined, we give the following definition to characterize the equivalence of two densities in the sense of the two types of moments.
|
In the next example, we simulate a mixture of generalized logistic densities, which is known to be challenging to estimate accurately. Specifically, Example 2 represents a mixture of two type-I generalized logistic densities with the probability density function given by
|
In the final example, Example 3, we consider a mixture of two Laplacian densities. The probability density function is defined as follows:
|
D
|
With this solution, we can leverage Eqs. 5, 6, and 8 to quantitatively assess the mean and variance of system outputs. This integrated approach allows us to directly account for the impact of uncertainty in our input variables.
|
In the preceding system of equations, the matrix $[A]$ is referred to as the design matrix because it contains information about the polynomial values at the design samples Kumar et al. (2016). In least-squares-based regression, the design matrix $[A]$ plays a pivotal role. Therefore, the sampling method and the number of samples significantly affect the construction of the design matrix. Various sampling strategies, such as the Latin hypercube Helton and Davis (2003), the Sobol sequence Sobol' (1967), and random sampling Etikan and Bala (2017), can be used to build the design matrix. The influence of various sampling techniques and the number of total samples was studied in previous works Kumar et al. (2020b); Hosder et al. (2006); Kumar et al. (2021, 2022), and it was concluded that more than twice as many samples, $2(P+1)$, as coefficients are needed to ensure accuracy.
|
As mentioned in Sections 2.1 and 2.2, the sampling number should be greater than $2(P+1)$ to ensure accuracy. In this problem, there are 4 input variables ($n$) and the polynomial order ($p$) is 3. Hence, the required sample number can be computed:
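Assuming the common total-degree truncation of the PCE basis (an assumption here, consistent with the cited methodology), the number of coefficients and the resulting sample requirement are
$$P + 1 = \frac{(n+p)!}{n!\,p!} = \frac{(4+3)!}{4!\,3!} = 35, \qquad 2(P+1) = 70,$$
so at least 70 design samples would be needed.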
|
The technique of Polynomial Chaos Expansion (PCE) is an established method for uncertainty quantification (UQ) that has proven effective for stochastic simulations, as highlighted in various studies Daróczy et al. (2016); Tang et al. (2020). PCE allows for the characterization of variables and the solution output through mean, variance, higher-order moments, and probability density functions Kumar et al. (2016, 2020a). It is important to note that our previous work Kobayashi et al. (2023c) has already laid out the foundational methodology for constructing multi-dimensional polynomials (Section 2.1), as well as the regression technique for estimating Polynomial Chaos coefficients (Section 2.2).
|
DT can be categorized by purpose, but the components are the same: visualization, data processing, system update, prediction, and decision-making Yu et al. (2022); Chen et al. (2022); Bondarenko and Fukuda (2020). In this context, “visualization” is used for preparing the virtual asset of a physical system and visualizing it on the computer. It is the foundation for building DT. It is similar to conventional simulation; imagine commercial software such as computer-aided design (CAD). Second, “data processing” is responsible for transferring the physical system’s sensor data to the digital assets on the computer. Large systems such as nuclear reactors are expected to have a large number of sensors and data size and, therefore, require constructing an appropriate database. The third “system update” clearly differs between DT and traditional simulation. A typical simulation predicts the system state at a given time, assuming the system parameters are known. However, the objective of DT is to monitor and predict system conditions over a long time scale from the start of operation of that system to its shutdown (e.g., from months to years). It means that the system parameters must be treated as a function of their system operation time, and their values must be updated as time evolves. Obtaining the system parameters at a given time is generally classified as an inverse problem, and the “update” handles the sequence of updating the obtained values at the next time step. The “prediction” predicts the system state using the above-mentioned updated system parameters. This process is a forward problem to solve for the system state, and the user can select any solvers according to the information they want to obtain Kobayashi and Alam (2024a); Kobayashi et al. (2024). For example, for industrial products such as automobiles and aircraft, commercial FEA tools such as ABAQUS and ANSYS or in-house codes owned by each vendor would be an option. The final “decision-making” makes decisions about system maintenance, modifications, requests for maintenance, etc., based on the results of the previous forecasting module Kobayashi et al. (2023a); Kobayashi and Alam (2022). It is challenging since it involves not only the design values of the system but also the restrictions imposed by national and international conventions. For example, consider an automobile as a system. The vehicle must meet each country’s exhaust gas emission regulations even if the driving performance is at the expected value. Therefore, even if the same system is adopted, changing the decision-making module becomes a point of caution. All of these modules are essential, but in a nuclear system, the prediction tools are independent for each of their applications. In this study, the nuclear fuel performance evaluation code BISON is assumed to be one of the prediction tools in nuclear DT, and its potential applications are explored.
|
A
|
$$\mathrm{SHD}(c, c^{\prime}) := \sum_{i>j} \mathbf{1}\left(c_{ij}+c_{ji} \neq c^{\prime}_{ij}+c^{\prime}_{ji} \ \text{ or } \ c_{ij} \neq c^{\prime}_{ij}\right).$$
|
In order to aggregate SHD values over different data regimes, we introduce the area under the SHD curve (AUSHD):
|
and the ground truth graph as the main metric. SHD between two directed graphs is defined as the number of edges that need to be added, removed, or reversed in order to transform one graph into the other.
|
where m𝑚mitalic_m is the used method, T𝑇Titalic_T is the number of interventional data batches, cgtsubscript𝑐𝑔𝑡c_{gt}italic_c start_POSTSUBSCRIPT italic_g italic_t end_POSTSUBSCRIPT is the ground truth graph, and cm,tsubscript𝑐𝑚𝑡c_{m,t}italic_c start_POSTSUBSCRIPT italic_m , italic_t end_POSTSUBSCRIPT is the graph fitted by the method m𝑚mitalic_m using t𝑡titalic_t interventional data batches. Intuitively, for small to moderate values of T𝑇Titalic_T, AUSHD captures a method’s speed of convergence: the faster the SHD converges to 00, the smaller the area. For large values of T𝑇Titalic_T, AUSHD measures the asymptotic convergence.
|
We run the experiment on synthetic graphs with 25 nodes and we run for 25 acquisition rounds. We present the AUSHD values in Table 2 and full SHD curves in Appendix F.3.
|
A
|
$$\begin{split}\int_{\Omega}|f_{L}(x)-f(x)|^{p}\,dx &\leq \int_{\Omega}\max_{i\in\mathcal{K}(x)}|f_{i}(x)-f(x)|^{p}\,dx \leq \int_{\Omega}\sum_{i\in\mathcal{K}(x)}|f_{i}(x)-f(x)|^{p}\,dx\\ &\leq \sum_{i=1}^{m}\|f_{i}-f\|^{p}_{L_{p}(\Omega_{l^{*}_{i},\epsilon})}\\ &\leq Cn^{-sp}.\end{split}$$
|
We remark that the constant in Theorem 2 can be chosen uniformly in $r$. Note that the width $W = 25d+31$ of our networks is fixed as $L \rightarrow \infty$, but scales linearly with the input dimension $d$. We remark that a linear scaling with the input dimension is necessary since if $d \geq W$, then the set of deep ReLU networks is known to not be dense in $C(\Omega)$ [32]. The next Theorem gives a lower bound which shows that the rates in Theorems 1 and 2 are sharp in terms of the number of parameters.
|
The rest of the paper is organized as follows. First, in Section 2 we describe a variety of deep ReLU neural network constructions which will be used to prove Theorem 1. Many of these constructions are trivial or well-known, but we collect them for use in the following Sections. Then, in Section 3 we prove Theorem 4 which gives an optimal representation of sparse vectors using deep ReLU networks and will be key to proving superconvergence in the non-linear regime p>q𝑝𝑞p>qitalic_p > italic_q. In Section 4 we give the proof of the upper bounds in Theorems 1 and 2. Finally, in Section 5 we prove the lower bound Theorem 3 and also prove the optimality of Theorem 4. We remark that throughout the paper, unless otherwise specified, C𝐶Citalic_C will represent a constant which may change from line to line, as is standard in analysis. The constant C𝐶Citalic_C may depend upon some parameters and this dependence will be made clear in the presentation.
|
In this section, we study lower bounds on the approximation rates that deep ReLU neural networks can achieve on Sobolev spaces. Our main result is to prove Theorem 3, which shows that the construction of Theorem 1 is optimal in terms of the number of parameters. In addition, we show that the representation of sparse vectors proved in Theorem 4 is optimal.
|
In this section, we prove the main technical result which enables the efficient approximation of Sobolev and Besov functions in the non-linear regime when $q < p$. Specifically, we have the following Theorem showing how to optimally represent sparse integer vectors using deep ReLU neural networks.
|
C
|
However, the constant-width property of standard conformal prediction intervals can be overly restrictive. The variance of a conditional random variable Y𝑌Yitalic_Y is often heterogeneous (i.e., dependent on the value of the conditioning variable 𝐱𝐱\mathbf{x}bold_x). For instance, time series data is often heteroskedastic with the variance increasing along with the horizon due to the accumulation of uncertainty over time. Constant-width conditional prediction intervals computed for data with heterogeneous variance tend to be inefficient, meaning that they are wider than necessary.
|
where $c_{\delta}$ is chosen to achieve coverage of at least $1-\delta$ on the calibration set, as in conformal prediction and CQR. Because the conformal correction is applied to the cumulative probability rather than the estimated conditional quantile, we refer to our method as Probability-space Conformalized Quantile Regression (PCQR).
|
In an effort to achieve more efficient intervals while retaining marginal validity, Romano et al. introduced an elegant method known as Conformalized Quantile Regression (CQR; Romano et al., 2019). CQR borrows techniques from both quantile regression and conformal prediction by applying a conformalized “correction” to the standard quantile regression interval. The resulting prediction is the corrected interval,
|
where $\hat{Q}_{Y|\mathbf{x}}(\alpha)$ is the conditional $\alpha$-quantile estimate (the quantile regression prediction), and $c_{\delta}$ is a conformity score selected to achieve at least $1-\delta$ coverage over a calibration set as in conformal prediction. CQR intervals offer the heterogeneous widths of quantile regression intervals as well as the marginal coverage guarantees of conformal prediction intervals.
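A minimal sketch of the split-conformal correction used by CQR (the finite-sample quantile rule follows the standard recipe; variable names and toy data are illustrative):

```python
import numpy as np

def cqr_interval(q_lo_cal, q_hi_cal, y_cal, q_lo_test, q_hi_test, delta=0.1):
    # Conformity scores measure how far calibration targets fall outside the
    # quantile-regression band; the (1 - delta) score quantile widens the test band.
    scores = np.maximum(q_lo_cal - y_cal, y_cal - q_hi_cal)
    n = len(y_cal)
    k = int(np.ceil((1.0 - delta) * (n + 1)))
    c_delta = np.sort(scores)[min(k, n) - 1]
    return q_lo_test - c_delta, q_hi_test + c_delta

# Toy usage with simulated quantile-regression outputs on a calibration set.
rng = np.random.default_rng(0)
y_cal = rng.normal(size=500)
lo, hi = cqr_interval(np.full(500, -1.5), np.full(500, 1.5), y_cal,
                      np.array([-1.5]), np.array([1.5]))
print(lo, hi)
```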
|
To overcome the invalidity of quantile regression estimates, many modern solutions are derived instead from the technique of conformal prediction (Vovk et al., 2005). Conformal prediction is a distribution-free frequentist approach that provides prediction intervals with finite-sample guarantees for marginal coverage probability. The more computationally-attractive versions hold out a “calibration set” in a style referred to as split conformal prediction. Conformal prediction strategies exist for classification and regression tasks. In the case of regression, conformal prediction typically involves regressing the conditional expectation, as in standard regression, and then forming a symmetric, constant-width prediction band around the point estimate,
|
B
|
$M_{\delta} = M_{f} + \frac{M_{S}}{\delta} = \mathcal{O}\!\left(\frac{1}{\varepsilon^{4}}\right)$ with the choice
|
$\delta = \varepsilon^{4}$.
|
and $\delta = \varepsilon^{4}$
|
$\delta = \varepsilon^{4}$.
|
and $\delta = \varepsilon^{4}$.
|
A
|
$$G := \mathbb{E}_{Z \sim \mathcal{N}(\mathbf{0}, \sigma^{2}I)}\left[\begin{bmatrix}\nabla_{z}\mathcal{D}\big(I(x_{1}), I(x_{1}+Z)\big)\\ \vdots\\ \nabla_{z}\mathcal{D}\big(I(x_{n}), I(x_{n}+Z)\big)\end{bmatrix}\right].$$
|
To implement the UPI-PCA scheme for generating universal perturbations, we propose a stochastic optimization method which can efficiently converge to the top singular vector of first-order interpretation-targeting perturbations. Finally, we demonstrate our numerical results of applying the UPI-Grad and UPI-PCA methods to standard image recognition datasets and neural network architectures. Our numerical results reveal the vulnerability of commonly-used gradient-based feature maps to universal perturbations which can significantly alter the interpretation of neural networks. The empirical results show the satisfactory convergence of the proposed stochastic optimization method to the top singular vector of the attack scheme, and further indicate the proper generalization of the designed attack vector to test samples unseen during the optimization of the universal perturbation. We can summarize the contributions of this work as follows:
|
The above result suggests using the top principal component of the gradient matrix $G$ as the UPI perturbation. Hence, we propose the principal component analysis (PCA)-based UPI-PCA in Algorithm 2 as a stochastic power method for computing the top right singular vector of the matrix $G$.
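For intuition, a generic stochastic power-iteration sketch for the top right singular vector of a stream of gradient rows; this is only a simplified stand-in, not the paper's Algorithm 2, and the batch size, dimensions, and data are invented:

```python
import numpy as np

def stochastic_power_method(grad_batches, dim, n_iter=50, rng=None):
    # Power iterations on G^T G using one mini-batch of gradient rows per step,
    # converging (approximately) to the top right singular vector of G.
    rng = np.random.default_rng() if rng is None else rng
    v = rng.standard_normal(dim)
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        G_batch = next(grad_batches)           # rows: per-sample perturbation gradients
        v = G_batch.T @ (G_batch @ v)
        v /= np.linalg.norm(v)
    return v

# Toy stream of gradient rows with one dominant direction.
rng = np.random.default_rng(3)
u = rng.standard_normal(100)
u /= np.linalg.norm(u)
def batches():
    while True:
        yield rng.standard_normal((32, 100)) + 3.0 * rng.standard_normal((32, 1)) * u
print(abs(stochastic_power_method(batches(), dim=100, rng=rng) @ u))  # close to 1
```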
|
Furthermore, in order to handle the difficult non-convex nature of the formulated optimization problem, we develop a principal component analysis (PCA)-based approach called UPI-PCA to approximate the solution to this problem using the top singular vector of fast gradient method (FGM) perturbations to the interpretation vectors. We demonstrate that the spectral UPI-PCA scheme yields the first-order approximation of the solution to the UPI-Grad optimization problem.
|
Consider the optimization problem in Equation (8) for the $\ell_{2}$-norm case. Then, the solution to the optimization problem is the top right singular vector of the following matrix:
|
B
|
Limitations. Efficiency and scalability were two concerns raised by E4. The former refers to the required computation time to render all views. However, this does not threaten interactivity as long as everything gets parallelized and/or pre-computed beforehand [15]. For the latter case, he pointed out the tool’s limitation to visualize a much larger data set with more difficult-to-predict instances due to the increased space demand for the zone-based matrix. A simple solution to this problem could be filtering, which applies to scenarios where some metamodel pairs are performing poorly. As E2 stated, the tool works solely with binary classification problems and does not support alternative hyperparameter optimization techniques [11]. E1 referred to the important role that metamodels’ confidence plays in the data exposition, but instead of being aggregated as in our tool, it could be beneficial to use individual visual representations of spread. He continued to say that it is necessary to visualize the data distribution on demand to better relate to the underlying explanation of why some instances are constantly misclassified. In the future, we plan to improve MetaStackVis to overcome such limitations.
|
In this paper, we presented MetaStackVis, a visualization tool that enables users to visually assess the performance of metamodels in stacking ensemble learning. It allows users to tune HDBSCAN and apply metamodels to different cluster compositions of base models. Users can also compare the metamodels based on seven validation metrics and their average predicted probability, observe the performance similarities with the underlying base models, and check for powerful pairwise combinations of metamodels that hint at the possible benefit of introducing an extra stacking layer. The applicability and effectiveness of MetaStackVis were evaluated using a real-world healthcare data set and interviews with four experts, who suggested that the comparison of alternative metamodels with our tool is promising. Finally, they helped us recognize the current limits of MetaStackVis, which we will work on in the future.
|
The zone-based matrix in MetaStackVis: Visually-Assisted Performance Evaluation of Metamodels(d) is inspired by the scatterplot matrix [4], and it provides a more comprehensive perspective of the metamodels’ performance. We designed three different zones: the matrix diagonal, the lower triangular part, and the upper triangular part. A bar chart in the matrix diagonal visualizes the metric-based performance of the validation metrics individually as a bar. Color and text convey the confidence (Conf.) of each metamodel, ranked from the best- to the worst-performing one, as already explained for view (b) in MetaStackVis: Visually-Assisted Performance Evaluation of Metamodels. Black denotes the highest confidence value, while light gray is the lowest possible. The remaining zones allow users to perform pairwise comparisons between all combinations of metamodels. The lower triangular part demonstrates the union of all misclassified test instances by at least one metamodel pair (20 in our example). The points in the grid are sorted according to the sum of predicted probabilities for all combinations, leading to the easiest-to-classify test samples always being on top (in white, if correctly classified by both metamodels) and the hardest-to-classify at the bottom (in yellow color, if wrongly classified by both metamodels). As a reference model, we apply the soft majority voting strategy [2] (i.e., predicted probabilities being used) with dark red when the row-wise metamodels are unable to overcome the wrong prediction of the blue metamodels and light red in case these metamodels are correct and their confidence surpasses the other metamodel. Thus, more prominent colors highlight the points and demonstrate the failure of metamodels to predict these points correctly. On the contrary, the upper triangular part is about the “theoretically achievable maximum” predictive performance if the optimal metamodel was selected for all the test instances (140 in our case). The gauge charts represent the average of all validation metrics’ performance in orange (and in the black text below) and the higher or lower confidence value compared to this metric-based performance in green or purple colors, respectively. The exploration of metamodel pairs in MetaStackVis: Visually-Assisted Performance Evaluation of Metamodels(d) aims to indicate the available room for other schemas, such as establishing an extra stacking layer to aggregate the predictions of this layer.
|
The UMAP plot in MetaStackVis: Visually-Assisted Performance Evaluation of Metamodels(c) enables the visual exploration of the base models belonging to the active cluster selected before and the 11 metamodels summarizing their predictions. Hence, offering a deeper behavioral analysis of all metamodels in contrast to the base models. Each point is one model, with base models being smaller in size, while the opposite is true for the metamodels. The UMAP projects the high-dimensional predicted probabilities calculated for the provided data set into two dimensions. In our example, groups of points represent clusters of models that perform similarly according to 140 test instances (which is the 20% testing set). A summary of the performance of each model according to the average value computed from the seven validation metrics is designated as Metric-Based Performance in MetaStackVis: Visually-Assisted Performance Evaluation of Metamodels(c) and is being color-encoded using the Viridis colormap [16]. The legend on the left-hand side of this visualization maps the different algorithms as 11 distinguishable symbols for each ML algorithm. For example, the right-pointing arrows are the models constructed from random forest and the left-pointing arrows from extra trees. The opacity of the models is used for the confidence previously introduced, with a higher value forcing the ML model to be more opaque and vice versa.
|
The stacked bar chart in MetaStackVis: Visually-Assisted Performance Evaluation of Metamodels(b) presents the best-performing metamodel in each cluster, including all base models and the group of outliers for seven different validation metrics also supported by StackGenVis. This visualization provides an overview of performance (in percentage % format) for the best candidate from the 11 metamodels created in every cluster, using the following metrics: Accuracy, Precision, Recall, ROC AUC, Geometric Mean, Matthews Correlation Coefficient (CorrCoeff), F1 Score, and Confidence. The last metric is the average predicted probability for all test instances. Additionally, we convert Matthews CorrCoeff to an absolute value ranging from 0 to 100%. The average of all seven validation metrics plus the confidence is then divided by 2 in order to compute the Overall Performance that defines the ranking of the clusters from top to bottom in this visualization (i.e., from best to worst). Therefore, Confidence is multiplied seven times to capture the same space as all validation metrics because users should be able to compare the two main components of overall performance globally. The legend for this view maps the metrics to the different color encodings. If a user deems a metric useless for the given problem, they can deselect this metric and temporarily hide it. If we compare the total length of the stacked bars in MetaStackVis: Visually-Assisted Performance Evaluation of Metamodels(b), cluster_0 contains only 10 instead of 55 base models and reaches the highest overall performance with Linear Discriminant Analysis as the metamodel.
|
A
|
Another approach to describing points that in a certain sense maximize the probability under a measure is seeking minimizers of its Onsager–Machlup functional, which plays the role of a generalized negative log-density for measures without a Lebesgue density. In our context, it is defined as follows.
|
Note that the covariance operator of any Borel probability measure on a separable Hilbert space is self-adjoint, positive, and trace class, see [12, Prop. 1.8].
|
Let us again consider a probability measure $\mu$ on the Borel $\sigma$-algebra of a separable Banach space $X$.
|
Now, we consider a Bayesian posterior distribution $\mu^{y}$ on the Borel $\sigma$-algebra $\mathcal{B}(X)$ of a separable Hilbert space $X$ whose density with respect to a Gaussian prior distribution is given by Bayes’ formula.
|
Let us consider a probability measure $\mu$ on the Borel $\sigma$-algebra $\mathcal{B}(X)$ of a separable Banach space $X$.
|
B
|
For the nonconvex-concave minimax problem (P) with linearly coupled equality or inequality constraints, we first prove strong duality with respect to $y$ under some feasibility assumption. Prior to our results, strong duality with respect to $y$ had only been available for convex minimax problems, not for nonconvex minimax problems with coupled linear constraints.
|
In this paper, we considered nonsmooth nonconvex-(strongly) concave and nonconvex-linear minimax problems with coupled linear constraints which are widely used in many fields such as machine learning, signal processing, etc. For the nonconvex-concave minimax problem (P) with linearly coupled equality or inequality constraints, we first proved strong duality with respect to $y$ under some feasibility assumption, which is the first strong duality result with respect to $y$ for solving nonconvex minimax problems with coupled linear constraints. Based on the strong duality, we then proposed a single-loop primal-dual alternating proximal gradient (PDAPG) algorithm for solving problem (P) under nonconvex-(strongly) concave settings. We demonstrated that the iteration complexities of the two algorithms are $\mathcal{O}\left(\varepsilon^{-2}\right)$ (resp. $\mathcal{O}\left(\varepsilon^{-4}\right)$) under the nonconvex-strongly concave (resp. nonconvex-concave) setting to obtain an $\varepsilon$-stationary point. To the best of our knowledge, it is the first algorithm with iteration complexity guarantees for solving nonconvex minimax problems with coupled linear constraints. Moreover, the proposed PDAPG algorithm is optimal for solving nonconvex-strongly concave minimax problems with coupled linear constraints. Numerical experiments for adversarial attacks on network flow problems showed the efficiency of the proposed algorithms.
|
Based on the strong duality, we then propose a primal-dual alternating proximal gradient (PDAPG) algorithm for solving problem (P) under nonconvex-(strongly) concave settings. It is a single-loop algorithm.
|
By the strong duality shown in Theorem 1, instead of solving (P), we propose a primal-dual alternating proximal gradient (PDAPG) algorithm for solving (D).
|
The rest of this paper is organized as follows. In Section 2, we first establish the strong duality with respect to y𝑦yitalic_y under some feasibility assumption for nonconvex-concave minimax problem (P) with linearly coupled equality or inequality constraints. Then, we propose a primal-dual alternating proximal gradient (PDAPG) algorithm for nonsmooth nonconvex-(strongly) concave minimax problem with coupled linear constraints, and then prove its iteration complexity. In Section 3, we propose another primal-dual proximal gradient (PDPG-L) algorithm for nonsmooth nonconvex-linear minimax problem with coupled linear constraints, and also establish its iteration complexity. Numerical results in Section 4 show the efficiency of the two proposed algorithms. Some conclusions are made in the last section.
|
B
|
The generalized FAS is thus $FAS=[0.5,1]$ and does not contain $\beta$.
|
of $Z_{1}$ and $Z_{2}$, only identifies $\beta$ when $Z_{2}$ is
|
where $Z_{2}$ violates the exclusion assumption and $Z_{1}$ and
|
Because $Z_{2}$ is here an endogenous explanatory variable, and because
|
that are themselves endogenous explanatory variables with $\gamma_{\ell}\alpha_{\ell}\neq 0$.
|
C
|
$\Omega^{*}_{(3)}=\Omega^{*}_{(4)}$. This is the ideal scenario for our method
|
$D\in\mathbb{R}^{p\times p}$ is a diagonal matrix with diagonal entries
|
of $\Omega^{*}$ are nonzero, one could consider the estimator
|
2, only one $p/4\times p/4$ diagonal block is nonzero in each covariance
|
Model 2. The $\Omega^{*}_{(h)}$ are block diagonal with each
|
C
|
Optimizing the so-called bias-variance trade-off is also crucial for regression problems.
|
In the case of the target variable, we conceived the idea of separating the independent variable of each data set.
|
Papers categorized as others on the target variable group concern ML settings in which no target variable is available. These are mostly related to DR and clustering problems.
|
In ML, the target (or response) variable is the characteristic known during the learning phase that has to be predicted for new data by the learned model. In classification problems, it can take a binary value for two-class problems, a single label for multi-class problems, or even a set of labels for multi-label problems. In regression problems, it is generally a continuous variable.
|
Further interesting cases mainly include competing categories from the same group. For example, model-agnostic techniques contradict model-specific techniques, because they consider different visualization granularities for a given ML model. 2D and 3D oppose each other as typically only one of them exists in a visualization approach. Moreover, techniques that focus on data exploration, explanation, and manipulation related to the in-processing phases of an ML pipeline are very different compared to systems that monitor the results in the post-processing phase of an ML model. The strong negative correlation between multi-class and other target variables might point to an effect that comes from our own categorization procedure: when papers could not be mapped to a concrete target variable (multi-class, for instance), then the other category has been assigned, e.g., to show the irrelevance of the target variable for a visualization technique. The category domain experts is negatively correlated to managing models during the in-processing ML phase, which makes sense as they often do not know much about how models work. Similarly, developers and ML experts together are weakly but negatively correlated with domain experts, confirming the previous observation. Other insights are that beginners do not usually use selection as an interaction technique and domain experts do not work with diagnosing/debugging ML models, as they do not have the experience and/or knowledge, in line with the previous inference.
|
B
|
From a practitioner’s perspective this allows the modeling flexibility of Gaussian processes via the kernel, while ensuring that conditioning on observations of the sample paths through a linear operator is possible.
|
(Hennig et al., 2015; Cockayne et al., 2019b; Oates and Sullivan, 2019; Owhadi et al., 2019; Hennig et al., 2022).
|
(Hennig et al., 2015; Cockayne et al., 2019b; Oates and Sullivan, 2019; Owhadi et al., 2019; Hennig et al., 2022), which frames numerical problems as statistical estimation tasks.
|
most widely-used throughout the literature (Graepel, 2003; Särkkä, 2011; Särkkä et al., 2013; Cockayne et al., 2017; Raissi et al., 2017; Agrell, 2019; Albert, 2019; Krämer et al., 2022).
|
(see e.g. Graepel (2003); Rasmussen and Williams (2006); Särkkä (2011); Särkkä et al. (2013); Cockayne et al. (2017); Raissi et al. (2017); Agrell (2019); Albert (2019); Krämer et al. (2022)),
|
D
|
It is worth noting that even though we focus on handling DTRs problems, the proposed method is applicable and can be generalized to other sequential decision-making problems beyond biomedical research. One example is the promotion recommendation in E-commerce, where the goal is to learn a personalized strategy that maximizes customers' buying willingness at a tolerable loss of revenue (Goldenberg et al., 2021; Wang et al., 2023). In this application, multiple waves of promotions are scheduled to be delivered to customers in a cycle (Chen et al., 2022) and BR-DTRs can be applied to learn the optimal strategies at each stage. In Appendix C.3, an additional simulation study mimicking such a promotion recommendation problem has been conducted for $T=4$ and the results indicate that the BR-DTRs method still performs well. Moreover, even though we assumed treatments to be dichotomous and only one risk constraint is imposed at each stage in BR-DTRs, our method can also be extended to problems with more treatment options and risk constraints at each stage. One can achieve this by imposing multiple smooth risk constraints to multicategory learning algorithms, such as angle-based learning methods (Qi et al., 2020; Ma et al., 2023). However, verifying the Fisher consistency of generalized problems is not trivial and is beyond the scope of this work. In addition, for many real world applications, finding the most influential feature variables that drive the optimal decisions is as important as obtaining the explicit rules that maximize the beneficial reward under the constraints. Thus, BR-DTRs can also be extended to incorporate feature selection during the estimation. For example, when the RKHS is generated by the linear kernel, the optimal decision boundary is linear, and one can introduce an additional penalty term with a group structure to impose sparsity over feature variables.
|
Our contributions are two-fold: first, we propose a general framework to estimate the optimal DTRs under the stagewise risk constraints. We note that the proposed framework reduces to the outcome weighted learning for DTRs in Zhao et al. (2015) when there is no risk constraint and reduces to the method in Wang et al. (2018) when there is only one stage. When stagewise risk restrictions are imposed, we show that the backward induction technique adopted in Zhao et al. (2015) along with the single-stage framework proposed in Wang et al. (2018) can be jointly used to solve the optimal DTRs under the stagewise risk constraints. We note that such extension is nontrivial since the treatment of each stage is entangled with unknown treatments of the previous stage through risk constraints when the backward induction technique is used. Hence, additional theoretical justification is needed to rigorously prove that the problem can be decomposed into a series of constrained optimal treatment regimen problems of the current stage under acute risk assumption. Second, our work establishes the non-asymptotic results for the estimated DTRs for both value and risk functions, and such results have never been given before. In particular, we show that support vector machines still yield Fisher consistent treatment rules under a range of risk constraints. Our theory also shows that the convergence rate of the predicted value function is in the order of the cubic root of the sample size, and the convergence rate for the risk control has the order of the square root of the sample size.
|
There are several limitations of the proposed method. One limitation is that the proposed method may not perform well for a very large number of horizons. For example, the uncertainty for the objective maximization is accumulated over stages in the backward algorithm, so it will increase for large $T$. In contrast, as shown in Theorem 5, the uncertainty for the risk control at each stage will remain independent of $T$. Consequently, the risk constraint will mainly drive the decision rules for large $T$, which may not be the ideal solution in practice. Possible extensions can be to impose appropriate parametric assumptions on the DTRs, or less strict control on the risk function. Another limitation is the acute risk assumption, which requires the stagewise risk to be solely determined by the most recent action. However, this assumption may be violated in some applications when risks are expected to be affected by earlier actions. For example, the stagewise risks can be defined as the total number of the most toxic treatments received since the beginning of the treatments. Therefore, further extensions are necessary when the delayed risks exist.
|
When there is no risk constraint, (3) will reduce to the standard OWL framework which is guaranteed to yield optimal solutions for the unconstrained problem following the similar idea as the Bellman equation and Q-learning (Bellman, 1966; Qian and Murphy, 2011). However, extending the backward induction technique to risk-constrained DTRs problems is nontrivial, and the backward induction usually does not yield the optimal solutions for the general problem since the estimation of the treatment of each stage is entangled with unknown treatments from previous stages via the risk constraints. As one of our major contributions, our later proof for Theorem 2 shows that the backward algorithm (3) leads to the optimal solutions of the BR-DTRs problem. To the best of our knowledge, our work is the first to provide the necessary conditions for the optimality of the implementation of the backward induction for stagewise risk-constrained DTRs problems.
|
To address the real-world challenge of treating chronic diseases, in this work, we consider the problem of learning the optimal DTRs in a multistage study, subject to different acute risk constraints at each stage. We develop a general framework, namely benefit-risk DTRs (BR-DTRs), using the finding that under additional acute risk assumption, the stagewise benefit-risk DTRs can be decomposed into a series of single-stage benefit-risk problem only involving the risk restriction of the current stage. Numerically, we propose a backward procedure to estimate the optimal treatment rules: at each stage, we maximize the expected value function under the risk constraint imposed at the current stage, where the solution can be obtained by solving a constrained support vector machine problem. Theoretically, we show that the resulting DTRs are Fisher consistent when some proper surrogate functions are used to replace the objective function and risk constraints. We further derive the non-asymptotic error bounds for the cumulative reward and stagewise risks associated with the estimated DTRs.
|
B
|
More research is still needed to investigate the efficiency issue with respect to assertions of interest. Perhaps, investigations along the line of mathematical decision theory are a possibility. The current practice of IMs is more or less intuition based. For example, we choose centered predictive random sets such as $\mathcal{S}(U)=[-|U|,|U|]$ when $U\sim\mathcal{N}(0,1)$ so that the resulting plausibility intervals are efficient in terms of interval length. Such centered predictive random sets are referred to as “default” predictive random sets.
|
The IM framework for the current problem repeats the three steps introduced in the previous section.
|
In the case when point estimation is of interest, which is often the starting-point for frequentist methods, our results (4.19) for the general $n$ case can be used to provide an alternative local shrinkage or smoothing scheme, in the same manner as described in Section 3.3.
|
Again, the basic IM framework for the current problem repeats the three steps introduced in the previous section.
|
As seen in the previous section for the simple one-point case, the basic IMs framework is similar to the frequentist pivotal method of constructing confidence intervals.
|
D
|
In Sec. V.1 and Sec. V.2, we will consider cases where labels originate from a vector field $\mathbf{y}=\pmb{\mathcal{F}}(\mathbf{x})$ and $\hat{\mathbf{y}}=\pmb{\Psi}(\mathbf{x})$. In Sec. V.1 and Sec. V.2a, we will consider analytically computed $\pmb{\mathcal{F}}(\mathbf{x})$, whereas in Sec. V.2b it will be computed numerically, following a Newton’s quotient rule. Lastly, in Sec. V.3, we will discuss training with labels derived from noisy trajectories: $\mathbf{y}=I[\pmb{\mathcal{F}},\mathbf{x},\mathbf{A},t_{1},t_{2}]+\pmb{\varepsilon}$ and predicted labels defined as $\hat{\mathbf{y}}=I[\pmb{\Psi},\mathbf{x},\mathbf{A},t_{1},t_{2}]$.
|
Here $\pmb{\varepsilon}$ denotes observational noise, and numerical integration is denoted as
|
The dataset derived from time series could furthermore be sampled at potentially irregular intervals and be subject to observational noise: assuming additive noise, the observed signal is $\mathbf{z}[\mathbf{x}(t)]=\mathbf{x}(t)+\pmb{\varepsilon}(t)$. If $\delta_{r}$ is time-varying, numerically approximating the derivative $\dot{\mathbf{x}}$ may introduce large numerical errors, especially if the signal is noisy. As an alternative, one could learn $\pmb{\Psi}$ at a much higher frequency $\Delta t$ to allow estimates $\mathbf{x}(t_{r})$, i.e. $\hat{\mathbf{x}}(t_{r})$ from $\mathbf{x}(t_{r-1})$ using numerical integration. We set $\delta_{r}\gg\Delta t$ so that $\frac{\Delta t}{\delta_{r}}\approx 0$ $\forall r$. This procedure is illustrated in Fig. 7a).
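A minimal sketch of this resampling idea, assuming forward-Euler integration with a small internal step and a hypothetical learned model psi standing in for $\pmb{\Psi}$:

import numpy as np

def integrate_learned_field(psi, x_prev, delta_r, dt=1e-3):
    # Estimate x(t_r) from x(t_{r-1}) by stepping the learned field psi with an
    # internal step dt much smaller than the (possibly irregular) observation gap delta_r.
    n_steps = max(1, int(round(delta_r / dt)))
    x = np.asarray(x_prev, dtype=float)
    for _ in range(n_steps):
        x = x + dt * psi(x)   # forward Euler; a higher-order scheme could be substituted
    return x

# Toy usage with a known field (harmonic oscillator) in place of a trained network:
psi = lambda x: np.array([x[1], -x[0]])
x_hat = integrate_learned_field(psi, x_prev=[1.0, 0.0], delta_r=0.5)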
|
To test the effect of noise, we trained another neural network using the same data with observational noise removed. We then considered a test error $\mathbf{E}_{\mathcal{D}:\mathbf{x}\sim\phi_{x_{0}}(\mathbf{x})}\left[\lVert\hat{\mathbf{x}}(\delta)-\mathbf{x}(\delta)\rVert_{1}\right]$
|
Note that the time step $\mathrm{d}\tau$ in the numerical integration may be different for the true and the predicted labels.
|
A
|
J.I. conceived the proof of concept, developed the theoretical formalism, developed the neural network code base, wrote the majority of the manuscript, performed the simulations and analyses. M.T. wrote the majority of the case-base sampling section 3.1. M.T. and S.B. provided key insights into the core sampling technique (case-base). R.S. provided multiple edits to the manuscript structure and key insights. All authors discussed the results and contributed to the final manuscript.
|
During the preparation of this work the author(s) used Microsoft 365 (Word) and ProWritingAid in order to improve grammar, assess typos and improve general sentence structure. After using these tools, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.
|
(Terry M. Therneau and Patricia M. Grambsch, 2000). The Prostate dataset is available as part of the asaur package in R (Moore, 2016).
|
The complex simulation requires a method that can learn both time-varying interactions and have a flexible baseline hazard. Based on our complex simulation results (Figure 2 A, E and Table 2 A), CBNN outperforms the competitors. This simulation shows how all models perform under ideal conditions with minimal noise in the data, while the three case studies assess their performance in realistic conditions. In the MM case study, flexibility in both interaction modeling and baseline hazard improves the performance of CBNN over the other models, suggesting that this flexibility aids calibration (Figure 2 B, F and Table 2 B). Upon examination of the FLC case study, CBNN demonstrates a small improvement to performance compared to the linear models and DeepHit for both IPA and AUC (Figure 2 C, G and Table 2 C). In the Prostate case study, the linear models outperform the neural network ones, while CBNN and DeepHit alternate their positions depending on the follow-up time of interest and DeepSurv maintains last place (Figure 2 C, G and Table 2 C). We attribute this to potential over-parameterization in the neural network models, as we did not test for fewer nodes in each hidden layer, even with dropout. Though the ranking places the linear models above the neural network ones, their overall performance falls within a small range of IPA and AUC values aside from DeepSurv.
|
The subcontracts from UM1DK078616 and R01HL151855 to R.S supported this work. The work was also supported as part of the Congressionally Directed Medical Research Programs (CDMRP) award W81XWH-17-1-0347.
|
D
|
Create algorithms to model sensor location optimization, sensor degradation, re-calibration, and sensor signal reconstruction Kabir et al. (2010) to understand their impact on overall system degradation predictions.
|
Figure 2: Developed update module with an unscented Kalman filter (UKF) and ML method for an intelligent digital twin framework Kobayashi et al. (2023).
|
This section explains recent developments by the authors in digital twin research with simple illustrations: (a) an update module in the digital twin for temporal synchronization on the fly (Section 2.1), (b) a faster prediction module for operator learning (Section 2.2), and (c) a digital twin framework (Section 2.3). This section then justifies the important role of AI/ML in the DT framework, which requires explainability to understand the prediction better. The next section discusses explainable and interpretable AI.
|
Build an online learning algorithm that can continuously update the digital twin models and parameters to maintain temporal synchronization with the physical asset.
|
The scope of this section is to explain the concept of a system in DT in terms of the application of ML, and to justify the use of XAI for DT update systems. The method for updating a DT involves two approaches: (1) using the Bayesian filtering algorithm for estimating the parameters and states, and (2) considering the temporal evolution of the system. To continuously update the model within the DT, online and sequential learning algorithms are necessary.
|
C
|
$3.000\times 10^{-2}$
|
$2.342\times 10^{-2}$
|
$2.027\times 10^{-2}$
|
$2.003\times 10^{-2}$
|
$2.932\times 10^{-2}$
|
A
|
$\Delta^{(1)}_{\gamma}:S^{d}\to\mathbb{R}_{+}$ is indeed monotone.
|
Our goal is now to apply Theorem 8 to $S^{d}$ in order to obtain sufficient and generically necessary conditions for large-sample and catalytic matrix majorization. Here and throughout this subsection, let us write
|
As mentioned previously, majorization in large samples implies catalytic majorization and thus the conditions in (5) are sufficient for catalytic majorization as well (and they are still generically necessary). Strengthening this, we show in Theorem 22 of Section 3 that asymptotic catalytic majorization is possible if and only if the conditions in (6) are met. More precisely, we show that the following two statements are equivalent:
|
As we will see, majorization in large samples implies catalytic majorization, and this follows from a known general construction (see e.g., [10]). Sufficient and generically necessary conditions for majorization in large samples in the case $d=2$ were determined by Mu et al. in [21], and analogous conditions for the case of general $d$ were conjectured. In this work, we prove a minor variation of their conjecture (see Remark 20 for the difference). This provides sufficient and generically necessary conditions for matrix majorization in large samples in general. Our proof uses the real-algebraic machinery derived by one of the authors in [12, 13], namely the theory of preordered semirings. According to these results, the ordering in large samples on certain types of preordered semirings can be characterized in terms of inequalities involving monotone homomorphisms to $\mathbb{R}_{+}$ and a number of similar monotone maps.
|
Now that we have provided sufficient and generically necessary conditions for catalytic matrix majorization, we note that as a corollary using the same preorder we can also obtain sufficient conditions for asymptotic catalytic matrix majorization.
|
A
|
$\widetilde{\operatorname{Var}}(\tilde{p})\stackrel{\mathrm{def}}{=}\sum_{h=1}^{H}w_{h}^{2}\left(\widetilde{\operatorname{Var}}(\hat{p}_{h})+\operatorname{Var}(e_{h})\right)$
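A small sketch of how this stratified combination can be computed, with hypothetical per-stratum values (var_e denotes the known variance of the injected privacy noise in each stratum):

import numpy as np

def stratified_private_variance(weights, var_p_hat, var_e):
    # Var(p_tilde) = sum_h w_h^2 * (Var(p_hat_h) + Var(e_h))
    w = np.asarray(weights, dtype=float)
    return float(np.sum(w**2 * (np.asarray(var_p_hat) + np.asarray(var_e))))

# Hypothetical three-stratum example:
print(stratified_private_variance([0.5, 0.3, 0.2],
                                  [1e-3, 2e-3, 4e-3],
                                  [5e-4, 5e-4, 5e-4]))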
|
Theoretical results including privacy guarantees and asymptotic properties are established. With proper conditions on the relation between the privacy budget and sample sizes, as stated in the theorems, the resulting confidence intervals will achieve the desired coverage asymptotically, and the width tends to be that of a non-private confidence interval when the sample sizes go to infinity.
|
In the following, we extend Algorithm 1 to serve the needs of privacy protection of sample sizes by adding noise at the stratum level.
|
In section 3.2, we additionally propose adding noise at the stratum level when sample sizes are private.
|
This approach is formulated in Algorithm 1 which we call StrNz-PubSz (adding noise at the stratum level with public sample sizes). The theoretical results regarding privacy level and asymptotic coverage are provided in Theorems 4.1 and 4.2.
|
D
|
TABLE IV: Clustering performance comparison (by spectral clustering) in terms of normalized mutual information (NMI). “-” indicates the corresponding measures cannot be extended to multivariate time series or fail to obtain meaningful results. The best performance is in bold; the second best performance is underlined.
|
TABLE IV: Clustering performance comparison (by spectral clustering) in terms of normalized mutual information (NMI). “-” indicates the corresponding measures cannot be extended to multivariate time series or fail to obtain meaningful results. The best performance is in bold; the second best performance is underlined.
|
TABLE V: Clustering performance comparison (by $k$-medoids) in terms of normalized mutual information (NMI). “-” indicates the corresponding measures cannot be extended to multivariate time series or fail to obtain meaningful results. The best performance is in bold; the second best performance is underlined.
|
2) there is no obvious winner for univariate time series; all methods can achieve competitive performance, which makes sense, as DTW, MSM, TWED and TCK are all established methods; 3) our conditional CS divergence has an obvious performance gain for multivariate time series; it is also generalizable to Traffic and UCLA, in which the dimension is significantly larger than the length; 4) the performance of our conditional CS divergence is stable in the sense that our measure does not have a failing case; by contrast, DTW gets very low NMI values in Robot failure LP1-LP5, whereas TCK completely fails in Traffic and UCLA.
|
We use normalized mutual information (NMI) as the clustering evaluation metric. Please refer to [68] for detailed definitions of NMI. Table IV and Table V summarize the clustering results using, respectively, spectral clustering and $k$-medoids. We can summarize a few observations: 1) the clustering performances in terms of two different clustering methods roughly remain consistent;
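A minimal sketch of this evaluation protocol, assuming a precomputed pairwise divergence matrix that is converted to an affinity for spectral clustering (variable names are hypothetical):

import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics import normalized_mutual_info_score

def cluster_and_score(divergences, true_labels, n_clusters, gamma=1.0):
    # Turn pairwise divergences into similarities, cluster, and score by NMI.
    affinity = np.exp(-gamma * np.asarray(divergences))
    model = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                               random_state=0)
    pred = model.fit_predict(affinity)
    return normalized_mutual_info_score(true_labels, pred)

# Hypothetical usage, given a precomputed conditional-divergence matrix D_cs:
# nmi = cluster_and_score(D_cs, labels, n_clusters=len(set(labels)))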
|
B
|
As we increase the missing probability of group 0, (our upper bound estimate of) $\mathsf{FairFront}$ decreases since it becomes more difficult to accurately predict outcomes for group 0. This in turn affects the overall model performance, since the fairness constraint requires that the model performs similarly for both groups. We also observe the fairness-accuracy curves of Reduction decrease as the missing data for group 0 become more prevalent. In other words, as the missing data for group 0 increase, it becomes more difficult to maintain both high accuracy and fairness in the model’s prediction.
|
We train a classifier that approximates the Bayes optimal and use it as a basis for both Reduction and FairProjection, which are SOTA fairness interventions. We then apply these two fairness interventions to the entire dataset and evaluate their performance on the same dataset. Figure 1 shows that in this infinite sample regime, the fairness-accuracy curves produced by Reduction and FairProjection can approach our upper bound estimate of $\mathsf{FairFront}$. This result not only demonstrates the tightness of our approximation (recall that Algorithm 1 gives an upper bound of $\mathsf{FairFront}$ and existing fairness interventions give lower bounds) but also shows that SOTA fairness interventions have already achieved near-optimal fairness-accuracy curves.
|
The past years have witnessed a growing line of research introducing various group fairness-intervention algorithms. Most of these interventions focus on optimizing model performance subject to group fairness constraints. Though comparing and benchmarking these methods on various datasets is valuable (e.g., see benchmarks in Friedler et al.,, 2019; Bellamy et al.,, 2019; Wei et al.,, 2021), this does not reveal if there is still room for improvement in their fairness-accuracy curves, or if existing methods approach the information-theoretic optimal limit when infinite data is available. Our results address this gap by introducing the fairness Pareto frontier, which measures the highest possible accuracy under a set of group fairness constraints. We precisely characterize the fairness Pareto frontier using Blackwell’s conditions and present a greedy improvement algorithm that approximates it from data. Our results show that the fairness-accuracy curves produced by SOTA fairness interventions are very close to the fairness Pareto frontier on standard datasets.
|
Recently, many strategies have been proposed to reduce the tension between group fairness and model performance by investigating properties of the data distribution. For example, Blum and Stangl, (2019); Suresh and Guttag, (2019); Fogliato et al., (2020); Wang et al., (2020); Mehrotra and Celis, (2021); Fernando et al., (2021); Wang and Singh, (2021); Zhang and Long, (2021); Tomasev et al., (2021); Jacobs and Wallach, (2021); Kallus et al., (2022); Jeong et al., (2022) studied how noisy or missing data affect fairness and model accuracy. Dwork et al., (2018); Ustun et al., (2019); Wang et al., (2021) considered training a separate classifier for each subgroup when their data distributions are different. Another line of research introduces data pre-processing techniques that manipulate data distribution for reducing its bias (e.g., Calmon et al.,, 2017; Kamiran and Calders,, 2012). Among all these works, the closest one to ours is Chen et al., (2018), which decomposed group fairness measures into bias, variance, and noise (see their Theorem 1) and proposed strategies for reducing each term. Compared with Chen et al., (2018), the main difference is that we characterize a fairness Pareto frontier that depends on fairness metrics and a performance measure, giving a complete picture of how the data distribution influences fairness and accuracy.
|
We refer to the answer as the fairness Pareto frontier. This frontier delineates the optimal performance achievable by a classifier when unlimited data and computing power are available. For a fixed data distribution, the fairness Pareto frontier represents the ultimate, information-theoretic limit for accuracy and group fairness beyond which no model can achieve. Characterizing this limit enables us to (i) separate sources of discrimination and create strategies to control them accordingly; (ii) evaluate the effectiveness of existing fairness interventions for reducing epistemic discrimination; and (iii) inform the development of data collection methods that promote fairness in downstream tasks.
|
B
|
In Section 5.3, we define various related notions of stability more formally, and consider the implications of our main result for these alternative definitions of stability.
|
Classical bagging (Breiman, 1996a; Breiman, 1996b) samples $m$ indices with replacement from $[n]=\{1,\dots,n\}$.
|
Bagging has a rich history in the machine learning literature (Breiman, 1996a; Dietterich, 2000; Valentini and Masulli, 2002) and is widely used in a variety of practical algorithms; random forests are a notable example (Breiman, 2001).
|
Anticipating various benefits of stability, Breiman (1996a, 1996b) proposed bagging as an ensemble meta-algorithm to stabilize any base learning algorithm. Bagging, short for bootstrap aggregating, refits the base algorithm to many perturbations of the training data and averages the resulting predictions. Breiman’s vision of bagging as an off-the-shelf stabilizer motivates our main question: How stable is bagging on an arbitrary base algorithm, placing no assumptions on the data generating distribution?
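A minimal sketch of classical bagging as just described, with an arbitrary base algorithm passed in as a fitting callable (names are illustrative, not the paper's code):

import numpy as np

def bagged_predict(fit, X_train, y_train, X_test, n_bags=100, m=None, seed=0):
    # Refit the base algorithm on resamples drawn with replacement and average predictions.
    rng = np.random.default_rng(seed)
    n = len(y_train)
    m = n if m is None else m
    preds = []
    for _ in range(n_bags):
        idx = rng.integers(0, n, size=m)   # m indices with replacement from [n]
        preds.append(fit(X_train[idx], y_train[idx]).predict(X_test))
    return np.mean(preds, axis=0)

# Hypothetical usage with a scikit-learn base learner:
# from sklearn.tree import DecisionTreeRegressor
# y_hat = bagged_predict(lambda X, y: DecisionTreeRegressor().fit(X, y), X_tr, y_tr, X_te)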
|
Stability guarantees are central in a variety of contexts, despite the fact that many widely-used practical algorithms are not stable (Xu et al., 2011). For instance, Bousquet and Elisseeff (2002) establish generalization bounds for stable learning algorithms, and Mukherjee et al. (2006) show that stability is necessary and sufficient for empirical risk minimization to be consistent; related works include (Poggio et al., 2004; Kutin and Niyogi, 2002; Freund et al., 2004). Shalev-Shwartz et al. (2010) identify stability as a necessary and sufficient condition for learnability. Stability is further relevant to differential privacy guarantees; assuming worst-case stability (often called “sensitivity” in the privacy literature) is a standard starting point for constructing differentially private algorithms (Dwork, 2008). In the field of conformal prediction, distribution-free coverage guarantees rely upon the stability of the underlying estimators (e.g., Steinberger and Leeb, 2016, 2023; Ndiaye, 2022; Barber et al., 2021). We now discuss applications of algorithmic stability to generalization and conformal inference in greater detail.
|
B
|
Table 5: Performance comparison on Tabular datasets at $\epsilon=1$ and $\delta=10^{-5}$. The average over five independent runs. For DP-KIP FC-NTK, 10 images per class are distilled and for private synthetic methods, as many samples as the original dataset contains are generated.
|
Here, we show the performance of KIP and DP-KIP over different real world datasets. In Sec. 5.1 we follow previous data distillation work and focus our study on grayscale and color image datasets. In addition, we also test DP-KIP performance on imbalanced tabular datasets with numerical and categorical features in Sec. 5.2. All experiments were implemented using JAX (Bradbury et al., 2018), except for the KIP e-NTK experiments, where we used the autograd.grad function implemented in PyTorch (Paszke et al., 2019). All the experiments were run on a single NVIDIA V100 GPU. Our code is publicly available at: https://anonymous.4open.science/r/DP-KIP/
|
For datasets with binary labels, we use the area under the receiver operating characteristic curve (ROC) and the area under the precision recall curve (PRC) as evaluation metrics, and for multi-class datasets, we use the F1 score. Table 5 shows the average over the classifiers (averaged again over the 5 independent runs) trained on the privately generated synthetic samples for DP-CGAN (Torkzadehmahani et al., 2019), DP-GAN (Xie et al., 2018), DP-MERF (Harder et al., 2021), DP-HP (Vinaroz et al., 2022) and DP-NTK (Yang et al., 2023), and trained on the privately distilled samples for DP-KIP FC-NTK, under the same privacy budget $\epsilon=1$ and $\delta=10^{-5}$. Details on hyperparameter settings and classifiers used in evaluation can be found in Sec. D.2 in Appendix.
|
In the following we present DP-KIP results applied to eight different tabular datasets for imbalanced data. These datasets contain both numerical and categorical input features and are described in detail in Sec. C in Appendix. To evaluate the utility of the distilled samples, we train 12 commonly used classifiers on the distilled data samples and then evaluate their performance on real data for 5 independent runs.
|
We propose a DP data distillation framework based on KIP, which uses DP-SGD for privacy guarantees in the resulting distilled data. This itself is a mere application of DP-SGD to an existing data distillation method. However, motivated by the unbearable computational costs in using the infinite-width convolutional NTKs, we look for alternative features and empirically observe that the features from ScatterNets are the most useful in distilling image datasets, evaluated on classification tasks.
|
C
|
$errOOB=\int_{\tau_{1}}^{\tau_{2}}\frac{1}{N}\sum_{i=1}^{N}\hat{\omega}_{i}(t)\left\{I(T_{i}\leq t,\delta_{i}=k)-\hat{\pi}_{ik}(t)\right\}^{2}dt$
|
$\hat{\pi}_{\star}=\frac{1}{|\mathcal{O}_{\star}|}\sum_{b\in\mathcal{O}_{\star}}\hat{\pi}^{h_{\star}^{b}}$
|
$errOOB=\int_{\tau_{1}}^{\tau_{2}}\frac{1}{N}\sum_{i=1}^{N}\hat{\omega}_{i}(t)\left\{I(T_{i}\leq t,\delta_{i}=k)-\hat{\pi}_{ik}(t)\right\}^{2}dt$
|
$\hat{\omega}(t)$ the estimated weights using Inverse Probability of
|
$\hat{\pi}_{\star}(s)=\frac{1}{B}\sum_{b=1}^{B}\hat{\pi}^{h_{\star}^{b}}(s)$
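As a small illustration of these two quantities, here is a sketch that averages the per-tree out-of-bag predictions and approximates the integrated error on a time grid with the trapezoidal rule (all arrays are hypothetical):

import numpy as np

def ensemble_pi(pi_b):
    # pi_star(s) = (1/B) * sum_b pi^{h_b}(s), averaging over the B predictions.
    return np.mean(pi_b, axis=0)

def err_oob(times, weights, event_indicator, pi_hat):
    # Integrated, weighted squared-error criterion over [tau_1, tau_2].
    # weights, event_indicator, pi_hat: arrays of shape (N, len(times)).
    avg = np.mean(weights * (event_indicator - pi_hat) ** 2, axis=0)
    dt = np.diff(times)
    return float(np.sum(0.5 * (avg[:-1] + avg[1:]) * dt))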
|
C
|
$A_{\mathrm{AM}}=\min\left\{1,\frac{\pi(\bm{\Theta}^{*},\bm{\Phi}\mid\mathbf{D})}{\pi(\bm{\Theta},\bm{\Phi}\mid\mathbf{D})}\right\}.$
|
$\log(\bm{U}_{-\varphi})\mid\bm{U}_{j,-\varphi}\sim\mathcal{N}(\log(\bm{U}_{j,-\varphi}),\Delta_{U})$
|
Components of $\bm{\Theta}$ are either transformed into log-scale or logit-scale based on their priors.
|
We group our model parameters as $\bm{\Theta}=(\beta_{0},\lambda_{c},\bm{\Gamma},\Sigma_{0})$, $\bm{\Phi}=\{(\mathbf{c}_{j}^{u},\bm{U}_{j},\Sigma_{j})\}_{j=1}^{N_{U}}$. $\bm{\Theta}$ is the parameter block that is always present in our models while $\bm{\Phi}$ is the trans-dimensional part. $\bm{\Phi}$ has varying cardinality as $N_{U}$ is random. We construct a blocked-Gibbs algorithm by first sampling $\bm{\Theta}$ using an adaptive Metropolis update (Haario et al., 2001; Roberts and Rosenthal, 2009) and then sampling $\bm{\Phi}$ through an improved birth-death-move update based on Geyer and Møller (1994); Møller and Torrisi (2005). The adaptive Metropolis ensures $\bm{\Theta}$ does not jeopardize the mixing performance of the entire algorithm. We also construct specialized proposal distributions for $\bm{\Phi}$ to improve upon the mixing performance of the algorithm based on Geyer and Møller (1994); Møller and Torrisi (2005).
|
Adaptive Metropolis (AM) For $\bm{\Theta}$, we sample a new state $\bm{\Theta}^{\prime}$ from the conditional posterior $\pi(\bm{\Theta}\mid\bm{\Phi},\mathbf{D})\propto\pi(\bm{\Theta},\bm{\Phi}\mid\mathbf{D})$
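A minimal sketch of one such Metropolis-within-Gibbs update for $\bm{\Theta}$, using a random-walk proposal and the acceptance ratio $A_{\mathrm{AM}}$ above; the log-posterior and the adaptively tuned proposal covariance are hypothetical stand-ins:

import numpy as np

def metropolis_step(theta, log_post, prop_cov, rng):
    # log_post: callable Theta -> log pi(Theta, Phi | D), up to a normalizing constant.
    theta_prop = rng.multivariate_normal(theta, prop_cov)
    log_ratio = log_post(theta_prop) - log_post(theta)   # log of the acceptance ratio
    if np.log(rng.uniform()) < min(0.0, log_ratio):      # accept with prob min{1, ratio}
        return theta_prop, True
    return theta, False

# Toy usage targeting a standard normal (stand-in for the conditional posterior):
rng = np.random.default_rng(0)
theta, accepted = metropolis_step(np.zeros(2), lambda t: -0.5 * t @ t,
                                  0.1 * np.eye(2), rng)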
|
B
|
The proof of Theorem 2 relies on the decomposition of the estimating equation and the control of the resulting terms using the assumptions on the nuisance process estimators. The cross-fitting procedure plays a crucial role in ensuring that the bias terms are asymptotically negligible.
|
A key challenge in establishing the asymptotic properties of doubly robust estimators with continuous-time nuisance parameters lies in the need for stronger assumptions on the total variation of the estimated processes. We discuss these assumptions and their implications, highlighting the differences between the continuous-time setting and the classical theory for doubly robust estimators.
|
In this paper, we have developed a general asymptotic theory for doubly robust estimators with continuous-time nuisance parameters. We considered a broad class of estimating equations involving stochastic processes and Riemann-Stieltjes integrals, which encompass a wide range of applications in various fields, such as survival analysis and causal inference. We established the consistency and asymptotic normality of the model doubly robust estimator under suitable assumptions on the uniform convergence and asymptotic linearity of the nuisance process estimators. These assumptions are specific to the continuous-time setting and differ from those in the classical Z-estimation theory. We also provided a rigorous theoretical foundation for the use of rate doubly robust estimators, which allow for flexible machine learning methods to estimate the nuisance processes, under the condition that their combined convergence rate is faster than the parametric rate.
|
Theorem 2 establishes the consistency and asymptotic normality of the rate doubly robust estimator when the product of the convergence rates of the nuisance process estimators is faster than the parametric rate $n^{-1/2}$. This condition allows for the use of flexible machine learning methods to estimate the nuisance processes, as long as their combined convergence rate is sufficiently fast.
|
In this short communication, we aim to fill this gap by developing a general asymptotic theory for a class of doubly robust estimating equations involving stochastic processes and Riemann-Stieltjes integrals. We introduce generic assumptions on the nuisance parameter estimators that ensure the consistency and asymptotic normality of the resulting doubly robust estimator. Our results cover both the model doubly robust estimator, which relies on parametric or semiparametric models, and the rate doubly robust estimator, which allows for flexible machine learning methods.
|
B
|
To test a single (simple) hypothesis, it is well known that the Neyman-Pearson test [NP33] based on the likelihood ratio is the most powerful: it minimizes the Type II error rate (the probability of falsely accepting a non-null) while controlling the Type I error rate (the probability of rejecting a true null) at a prescribed level $\alpha$. Moving to multiple testing, a natural question is to find the optimal decision rule under a meaningful objective function.
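For completeness, the likelihood-ratio form of this test for $H_{0}:X\sim p_{0}$ versus $H_{1}:X\sim p_{1}$ can be written (ignoring randomization at the boundary) as
\[
\phi(x)\;=\;\mathbf{1}\!\left\{\frac{p_{1}(x)}{p_{0}(x)}>\tau\right\},
\qquad \tau \text{ chosen so that } \mathbb{E}_{p_{0}}[\phi(X)]=\alpha .
\]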
|
To this end, analogous to Type I and II error rates, Genovese and Wasserman [GW02] introduced a dual quantity of the FDR, called the false non-discovery rate (FNR), which is the expectation of the false non-discovery proportion (FNP):
|
Large-scale hypothesis testing has been widely applied in a variety of fields such as genetics, astronomy and brain imaging, in which hundreds or thousands of tests are carried out simultaneously, with the primary goal of identifying the non-null hypotheses while controlling the false discoveries. One of the most popular figures of merit in multiple testing is the false discovery rate (FDR), formally introduced by Benjamini and Hochberg in 1995 [BH95].
|
To test a single (simple) hypothesis, it is well known that the Neyman-Pearson test [NP33] based on the likelihood ratio is the most powerful: it minimizes the Type II error rate (the probability of falsely accepting a non-null) while controlling the Type I error rate (the probability of rejecting a true null) at a prescribed level $\alpha$. Moving to multiple testing, a natural question is to find the optimal decision rule under a meaningful objective function.
|
[GW02] considered asymptotic approximations to the FDR and FNR when the decision rules are restricted to those that threshold $p$-values with a fixed cutoff. Later in [Sto03, SC07], these approximations were referred to as the marginal false discovery rate (mFDR) and marginal false non-discovery rate (mFNR), formally defined as
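Up to minor variations across papers, these marginal quantities take the form
\[
\mathrm{mFDR}=\frac{\mathbb{E}\big[\#\{\text{false discoveries}\}\big]}{\mathbb{E}\big[\#\{\text{discoveries}\}\big]},
\qquad
\mathrm{mFNR}=\frac{\mathbb{E}\big[\#\{\text{false non-discoveries}\}\big]}{\mathbb{E}\big[\#\{\text{non-discoveries}\}\big]}.
\]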
|
A
|
Good candidates for estimating this reference are MC estimators, which estimate the expected number of failures locally w.r.t. the IM.
|
In this paper, we will deal with limited sets of binary data and will consider the log-normal model in a Bayesian framework. In this sense, this paper will mainly address equipment problems for which only binary results of seismic qualification tests (e.g., tests of electrical relays, etc.) or empirical data such as presented in [12] are available. However, the methodology developed here could perfectly be applied to simulation-based approaches as well. The Bayesian perspective focuses on the impact of the prior on the estimations of parametric fragility curves, as part of the SPRA framework. With a limited data set, the impact of the choice of the prior on the posterior distribution cannot be neglected and, consequently, neither can its impact on the estimation of any key asset related to the fragility curves. In this study, the goal is to choose the prior while eliminating, insofar as it is possible, any subjectivity which would unavoidably lead to open questions regarding the impact of the prior on the final results. The reference prior theory defines relevant metrics for determining whether a prior can be called “objective” [42]. This allows us to focus on the well-known Jeffreys prior, the asymptotic optimum of the “mutual information” w.r.t. the size of the data set [43], which will be explicitly derived for the first time in this study. Of course, from a subjectivity perspective, the choice of a parametric model for the fragility curve is debatable. However, numerical experiments based on the seismic responses of mechanical systems suggest that the choice of an appropriate IM makes it possible to reduce the potential biases between reference fragility curves (that can be obtained by massive Monte-Carlo methods) and their log-normal estimations [35]. This observation is reinforced by recent studies on the impact of IMs on fragility curves [28, 44, 45]. In this paper, we will ensure the relevance of the estimations by comparing them to the results of massive Monte-Carlo methods on academic examples. Although the numerical results are illustrated with the PGA, the proposed methodology is independent of the choice of the IM, and it can be implemented with any IM of interest, without additional complexity.
|
The fragility curve estimations are shown in Figure 6. They are obtained from $L=5000$ samples of $\theta$ generated with the implemented statistical methods (see Section 5), which are based on two samples of nonlinear dynamical simulations of sizes $k=20$ and $k=30$. Although the nature of the two intervals compared is different (credibility interval for the Bayesian framework, confidence interval for the MLE), these results clearly illustrate the advantage of the Bayesian framework over the MLE for small samples. With the MLE, irregularities characterized by null estimates of $\beta$ appear, resulting in "vertical" confidence intervals. In A, we established that the likelihood is easily maximized at $\beta=0$ when samples are partitioned into two disjoint subsets when classified according to IM values: one subset for which there is no failure and one for which there is failure. Moreover, when few failures are observed in the initial sample, the bootstrap technique can lead to the generation of a large number of samples that maximize the likelihood at $\beta=0$. This is better evidenced by an examination of the raw values of $\theta$ generated in Figure 7. The degenerate $\beta$ values resulting from the MLE appear clearly but, although it should theoretically also be affected, the Bayesian framework shows no evidence of a similar phenomenon for this type of samples.
|
In this section, we will first present the Bayesian estimation tools and the MC reference method used to evaluate the relevance of the log-normal model when the amount of data allows it. We will then present two competing approaches, implemented in order to evaluate the performance of the Jeffreys prior in practical cases. On the one hand, we will apply the MLE method, widely used in the literature, coupled with a bootstrap technique. On the other hand, we will apply a Bayesian technique implemented with the prior introduced by Straub and Der Kiureghian [12]. For a fair comparison, this study proposes to calibrate the latter according to the results of Figure 1, which illustrates that its distribution in $\alpha$ is similar to the PGA distribution of the artificial and real signals. It would indeed be easy to calibrate it in such a way as to skew comparisons, for instance by considering too large a variance. Finally, we will define performance evaluation metrics.
|
First, we need to divide the IM values into sub-intervals and estimate the probability of failure for each. Sub-intervals of regular size should be avoided because the observed IMs are not uniformly distributed. We will therefore consider clusters of IMs, defined through K-means, as suggested by Trevlopoulos et al. [22].
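A minimal sketch of this clustering step is given below; it is illustrative only, the function names and the use of scikit-learn are assumptions rather than the authors' code.

```python
import numpy as np
from sklearn.cluster import KMeans

def empirical_failure_probabilities(im_values, failures, n_clusters=10, seed=0):
    """Cluster observed IM values with K-means and estimate the failure
    probability within each cluster.

    im_values : 1-D NumPy array of intensity measures (e.g. PGA) of the runs
    failures  : binary NumPy array, 1 if the corresponding run led to failure
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    labels = km.fit_predict(im_values.reshape(-1, 1))
    centers = km.cluster_centers_.ravel()
    order = np.argsort(centers)
    # Empirical failure rate within each cluster, ordered by cluster centre.
    probs = np.array([failures[labels == c].mean() for c in order])
    return centers[order], probs
```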
|
D
|
Variational Inequalities (VI) have a rather long history of research. Their first applications were the equilibrium problems in economics and game theory [72, 34, 29].
|
Whereas research on minimization was actively developed separately, research on saddle point problems was often coupled with the development of the theory of variational inequalities. This trend continues even now.
|
Variational inequalities were seen as a universal paradigm, involving other attractive problems, including minimization, Saddle Point Problems (SPP), fixed point problems, and others.
|
This is partly because methods designed for minimization problems are not always suitable for SPPs and VIs. Moreover, from a theoretical point of view, these methods give relatively weak, or even no, convergence guarantees.
|
Variational Inequalities (VI) have a rather long history of research. Their first applications were the equilibrium problems in economics and game theory [72, 34, 29].
|
B
|
$A\not\!\perp\!\!\!\perp B\mid\mathbf{S}$ for any $\mathbf{S}\subseteq V\setminus\{A,B\}$ s.t. $C\notin\mathbf{S}$.
|
(iv) OF holds for all unshielded triples in $\mathcal{G}^{*}$.
|
(iii) For every edge $A\text{---}B\notin\mathcal{G}^{*}$, $\exists\,\mathbf{S}\subseteq(\mathrm{Ne}_{\mathcal{U}}(A)\cup\mathrm{Ne}_{\mathcal{U}}(B))$
|
Given a DAG $\mathcal{G}(\mathbf{V},\mathbf{E})$, OF holds for an unshielded triple $A\text{---}C\text{---}B$ (where $A\text{---}B\notin E$) iff
|
(ii) OF holds for every unshielded triple $A\text{---}X\text{---}B\in\mathcal{G}$.
|
D
|
TL (Ding et al., 2022b) (one hyperparameter): For algorithms with only one hyperparameter, TL is used.
|
Syndicated (Ding et al., 2022b) (multiple hyperparameters): For GLOC and SGD-TS (two hyperparameters), the Syndicated framework is utilized for comparison.
|
Note our work is the first one to consider model selection for bandits with a continuous candidate set, and the regret analysis for online model selection in the bandit setting (Foster et al., 2019) is intrinsically difficult. For example, regret bounds of the algorithm CORRAL (Agarwal et al., 2017) for model selection and Syndicated (Ding et al., 2022b) for bandit hyperparameter tuning are (sub)linearly dependent on the number of candidates, which would be infinitely large and futile in our case.
|
In this section, we show by experiments that our hyperparameter tuning framework outperforms the theoretical hyperparameter setting and other tuning methods with various (generalized) linear bandit algorithms. We utilize seven state-of-the-art bandit algorithms: two of them (LinUCB (Li et al., 2010), LinTS (Agrawal & Goyal, 2013)) are linear bandits, and the other five algorithms (UCB-GLM (Li et al., 2017), GLM-TSL (Kveton et al., 2020), Laplace-TS (Chapelle & Li, 2011), GLOC (Jun et al., 2017), SGD-TS (Ding et al., 2021)) are GLBs. Note that all these bandit algorithms except Laplace-TS contain an exploration rate hyperparameter, while GLOC and SGD-TS further require an additional learning parameter; Laplace-TS only depends on one stepsize hyperparameter for a gradient descent optimizer.
|
We compare our CDT framework with the theoretical setting, OP (Bouneffouf & Claeys, 2020), TL (Ding et al., 2022b) (for one hyperparameter), and Syndicated (Ding et al., 2022b) (for multiple hyperparameters). Their details are given as follows:
|
A
|
In order to leverage the result in (2) to obtain confidence intervals in practice, it is required to estimate the variance term (and the expectation) appearing in (2). We construct a sub-sampling based online estimator for those terms in Section 3. We show in Theorem 3.1 that the additional error incurred in estimating these unknown parameters is negligible in comparison to the rates in (2). Finally, to correct for the non-trivial bias incurred in the high-dimensional setting, we propose in Section 4 a two-step bias correction methodology, which is also fully online. This provides the first fully data-driven, online procedure for practical high-dimensional algorithmic inference with stochastic optimization algorithms like SGD.
|
In order to leverage the result in (2) to obtain confidence intervals in practice, it is required to estimate the variance term (and the expectation) appearing in (2). We construct a sub-sampling based online estimator for those terms in Section 3. We show in Theorem 3.1 that the additional error incurred in estimating these unknown parameters is negligible in comparison to the rates in (2). Finally, to correct for the non-trivial bias incurred in the high-dimensional setting, we propose in Section 4 a two-step bias correction methodology, which is also fully online. This provides the first fully data-driven, online procedure for practical high-dimensional algorithmic inference with stochastic optimization algorithms like SGD.
|
Using Stein’s identity in the context of single-index models can be traced back to the works of [13, 58]. Recent works in this direction include [74, 96, 95, 40, 41]. Our novelty in this work lies in using Stein’s identity in the context of SGD, thereby providing an algorithmic approach for estimating the indices, as opposed to the above-mentioned references that focus on non-algorithmic “arg-min” type estimators. Finally, we remark that the condition that $\mu\neq 0$ is very mild and is satisfied in many cases, including one-bit compressed sensing and logistic regression; see, for example, [73, 74, 96].
|
In order to estimate the true parameter $\beta^{*}$, assuming that $\epsilon$ has a finite variance ($\sigma^{2}<\infty$), the natural procedure is to minimize the population least-squares stochastic optimization problem $\min_{\theta\in\mathbb{R}^{d}}\mathbb{E}[(Y-g(\langle X,\theta\rangle))^{2}]$ by deriving an online SGD algorithm for solving it. However, under the Gaussian input assumption, we propose to use the SGD updates in (1), originally designed for the linear model, to estimate $\beta^{*}$ under the single-index model as well. The intuition for this proposal is motivated by the widely used Gaussian Stein’s identity [87, 21], which states that a random vector $X\in\mathbb{R}^{d}$ is distributed as $N(0,A)$ if and only if $\mathbb{E}[Xr(X)]=A\,\mathbb{E}[\nabla r(X)]$ for all “sufficiently smooth” functions $r:\mathbb{R}^{d}\to\mathbb{R}$. The (population) gradient from which (1) is derived is given by $h(\theta):=-\mathbb{E}[X(Y-\langle X,\theta\rangle)]$. Substituting the above single-index model for $Y$ and using the Gaussian Stein’s identity, we have that $h(\theta)=A\theta-\mathbb{E}[g(X^{T}\beta^{*})X]=A\theta-\mu A\beta^{*}$. This motivates our regularity condition that $\mu\neq 0$. Note that for the linear model we have $\mu=1$. Hence, the population gradient $h(\theta)$ stays almost the same under the single-index model for $Y$ as under the linear model, except for the scaling factor $\mu$. Recall that in single-index models we are interested in estimating the direction; hence, in our analysis, this scaling does not affect the final rates (and only affects the constants). We now present the high-dimensional central limit theorem in the single-index model setting. The online variance estimation procedure and the corresponding theoretical bound in Theorem 3.1 similarly follow in the single-index model setting.
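To make the recursion concrete, here is a minimal sketch of the linear-model SGD iterates referred to as (1), applied unchanged to single-index data; the step-size schedule and function names are assumptions for illustration, not the paper's exact choices.

```python
import numpy as np

def online_sgd_linear(data_stream, d, step=lambda t: 0.5 / (t + 1) ** 0.505):
    """Online SGD for the linear-model least-squares loss.

    data_stream : iterable of (x, y) pairs with x a length-d NumPy array.
    """
    theta = np.zeros(d)
    for t, (x, y) in enumerate(data_stream):
        # Stochastic gradient of (y - <x, theta>)^2 / 2 is -x (y - <x, theta>).
        theta = theta + step(t) * x * (y - x @ theta)
    return theta

# Under a single-index model y = g(<x, beta*>) + eps with Gaussian inputs,
# theta targets mu * beta*, so the estimated direction is theta / ||theta||.
```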
|
In Section 5, we next extend our main results to the case when the data is generated from a class of single-index models. This extension is based on leveraging Gaussian Stein’s identity in the context of online SGD. In particular, for the class of single-index models we consider, running the same iterates as in (1) (which is developed originally for linear models) also provides estimates for the direction of the true index parameter. While Gaussian Stein’s identity has been used previously for estimation in index models, our novelty lies in using it in the algorithmic context of SGD. We verify our theoretical contributions via numerical simulations in Section 6.
|
D
|
In the rest of this section, we will review our approaches and clarify how we employ various strategies to achieve our goal.
|
The runtime of our algorithm is nearly linear in the verification time: given a set of observed entries $\Omega$, it takes $O(k)$ time to verify an entry of $P_{\Omega}(\widehat{U}\widehat{V}^{\top})$, hence a total of $O(|\Omega|k)$ time. [24] achieves a similar runtime behavior with an improved sample complexity of $|\Omega|=\widetilde{O}(nk^{2+o(1)})$. It is worth noting that most popular practical algorithms for matrix completion are based on either alternating minimization or gradient descent, since they are easy to implement and certain steps can be sped up via fast solvers. By contrast, the machinery of [24] is much more complicated. In short, they need to decompose the update into a “short” progress matrix and a “flat” noise component whose singular values are relatively close to each other. To achieve this goal, their algorithm requires complicated primitives, such as approximating singular values and spectral norms [36], Nesterov’s accelerated gradient descent [40] (which is known to be hard to realize for practical applications), and a complicated post-processing procedure. While it is entirely possible that these subroutines can be made practically efficient, empirical studies seem necessary to justify their practical performance. In contrast, our algorithm can be interpreted as providing a theoretical foundation for why fast alternating minimization works so well in practice: most fast alternating minimization implementations rely on quick, approximate solvers (for instance, [35, 33]), yet most of their analyses assume every step of the algorithm is computed exactly. From this perspective, one can view our robust analytical framework as “completing the picture” for all these variants of alternating minimization. Moreover, if one can further sharpen the dependence on $k$ and the condition number $\kappa$ in the sample complexity of alternating minimization, matching that of [24], we automatically obtain an algorithm with the same (asymptotic) complexity as theirs. We leave improving the sample complexity of alternating minimization as a future direction.
|
Before diving into the details of our algorithm and analysis, let us review the alternating minimization proposed in [21]. The algorithm can be described succinctly: given the sampled indices $\Omega$, the algorithm starts by partitioning $\Omega$ into $2T+1$ groups, denoted by $\Omega_{0},\ldots,\Omega_{2T}$. The algorithm first computes a top-$k$ SVD of the matrix $\frac{1}{p}P_{\Omega_{0}}(M)$, where $p$ is the sampling probability. It then proceeds to trim all rows of the left singular matrix $U$ with large row norms (this step is often referred to as clipping). It then optimizes the factors $U$ and $V$ alternately. At iteration $t$, the algorithm first fixes $U$ and solves for $V$ with a multiple-response regression using entries in $\Omega_{t+1}$, then it fixes the newly obtained $V$ and solves for $U$ with entries in $\Omega_{T+t+1}$. After iterating over all groups in the partition, the algorithm outputs the final factors $U$ and $V$. Here, we use independent samples across different iterations to ensure that each iteration is independent of the previous ones (in terms of the randomness used). If one would like to drop the uniform and independent sampling across iterations (see, e.g., [32]), convergence has only been shown under additional assumptions and in terms of critical points of certain non-convex programs.
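The procedure can be sketched as follows; this is an illustrative reimplementation under the description above, not the code of [21], and the clipping of heavy rows after the spectral initialization is omitted for brevity.

```python
import numpy as np

def alt_min_completion(M_obs, mask_groups, k, p):
    """Sample-splitting alternating minimization sketch.

    M_obs       : observed matrix with zeros at unobserved entries
    mask_groups : list of 2T+1 boolean masks forming a partition of the observed entries
    k, p        : target rank and sampling probability
    """
    T = (len(mask_groups) - 1) // 2
    # Spectral initialization from the first group of samples.
    U, _, _ = np.linalg.svd((mask_groups[0] * M_obs) / p, full_matrices=False)
    U = U[:, :k]
    V = None
    for t in range(T):
        V = solve_factor(M_obs, mask_groups[t + 1], U, transpose=True)       # fix U, update V
        U = solve_factor(M_obs, mask_groups[T + t + 1], V, transpose=False)  # fix V, update U
    return U, V

def solve_factor(M_obs, mask, F, transpose):
    """Row-wise least squares restricted to the observed entries of each row/column."""
    A = M_obs.T if transpose else M_obs
    S = mask.T if transpose else mask
    out = np.zeros((A.shape[0], F.shape[1]))
    for i in range(A.shape[0]):
        idx = S[i].astype(bool)
        if idx.any():
            out[i], *_ = np.linalg.lstsq(F[idx], A[i, idx], rcond=None)
    return out
```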
|
5: $\widehat{U}_{0}\leftarrow\textsc{Init}(U_{\phi})$ ▷ Algorithm 4
|
Algorithm 1 Alternating minimization for matrix completion. The Init procedure clips the rows with large norms, then performs a Gram-Schmidt process.
|
D
|
$\bm{\theta}=(\pi_{1},\ldots,\pi_{g-1},\bm{\omega}_{1}^{\top},\ldots,\bm{\omega}_{g}^{\top})^{\top}$.
|
In the sequel, we assume that the class-conditional densities of $\bm{y}$ are multivariate Gaussian with
|
Under the model (3) for MAR labels in the case of the two-class homoscedastic Gaussian model, Ahfock & McLachlan (2020) derived the following theorem that motivates the development of a package to implement this semi-supervised learning approach for possibly multiple classes with multivariate Gaussian distributions.
|
Ahfock & McLachlan (2020) noted that it is common in practice for unlabelled images (that is, the features with missing labels) to fall in regions of the feature space where there is class overlap. This finding led them to argue that the unlabelled observations can carry additional information that can be used to improve the efficiency of the parameter estimation of $\bm{\theta}$. Additional theoretical motivation is available in Appendix 5. They noted that in these situations the difficulty of classifying an observation can be quantified using the Shannon entropy of an entity with feature vector $\bm{y}$, which is defined by
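In its usual form (stated here for reference, with $\tau_{i}(\bm{y};\bm{\theta})$ denoting the posterior probability of membership of class $i$, and up to sign conventions), this entropy is
\[
e(\bm{y};\bm{\theta})\;=\;-\sum_{i=1}^{g}\tau_{i}(\bm{y};\bm{\theta})\,\log\tau_{i}(\bm{y};\bm{\theta}).
\]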
|
The R package gmmsslm implements the semi-supervised learning approach proposed by Ahfock & McLachlan (2020) for estimating the Bayes’ classifier from a partially classified training sample in which some of the feature vectors have missing labels. It uses a generative model approach whereby the joint distribution of the feature vector and its ground-truth label is adopted. Each of the $g$ pre-specified classes to which a feature vector can belong has a multivariate Gaussian distribution. The conditional probability that a feature vector has a missing label is formulated in a framework in which the missingness mechanism models this probability as depending on the entropy of the feature vector via a logistic model. The parameters in the Bayes’ classifier are estimated by ML via an ECM algorithm. The package applies to classes with equal or unequal covariance matrices in their multivariate Gaussian distributions. In an application to a real-world medical dataset, the estimated error rate of the Bayes’ classifier based on the partially classified training sample is lower than that of the Bayes’ classifier formed from a completely classified sample.
|
A
|
To clarify the difference, it is perhaps useful to rephrase our claims in terms of sample complexity.
|
While previous works show that the target function can be learnt with $O(d)$
|
the target function to be well approximated (in the $d,m\to\infty$ limit),
|
We then show that the products $\langle u_{i},u_{j}\rangle$ can be eliminated, with an
|
activation and target function, for which not all the components of $\varphi$ are actually learnt.
|
A
|
We can see that the running time on the highD dataset is shorter than on the NGSIM dataset. This is because the highD dataset has a smaller grid size. The computational time of STH-LRTC is considerably higher compared to other methods. For instance, on highD data, it takes approximately 20 to 60 times longer than the P-GP-rotated method and 15 to 30 times longer than the ASM computation. Moreover, the computational efficiency of STH-LRTC drops significantly with a lower penetration rate. This is mainly due to the increase in the spatiotemporal delay embedding lengths ($\tau_{s}$ and $\tau_{t}$), which impacts the computation time substantially. As a result, the computational cost of STH-LRTC becomes extremely high under such scenarios. However, it is essential to note that this trend might not always hold, and a slight change in the parameters of the delay embedding in STH-LRTC could alter the trend.
|
Our proposed method consistently outperforms the ASM benchmark. An advantage of ASM is that it considers the traffic propagation of both congestion and free flow. Therefore, ASM could produce a more natural traffic flow pattern, as demonstrated in the high-speed region (top left corner) of Fig. 2 (c). However, the ASM is not as good as the GP-rotated in estimating small shockwaves during the first 500 seconds (bottom left corner). The proposed GP-rotated outperforms ASM in most cases (except for the highD dataset with 5% penetration rate). The gaps between ASM and GP-rotated increase with a larger CV penetration rate.
|
As presented here, the most similar research to our work is the adaptive smoothing interpolation (ASM) by Treiber et al. (2011). The authors used the anisotropic features of traffic waves and developed a smoothing method to estimate the traffic speed profile. The interpolation of ASM is a weighted sum of a free-flow component and a congested component. Schreiter et al. (2010) proposed two fast implementations of ASM by efficient matrix operations and Fast Fourier Transform (FFT), bringing improvements in computation time by two orders of magnitude. The classical ASM lacks a well-defined method to determine the model parameters. Yang et al. (2022) reformulate ASM using matrix completion, which can estimate the weight parameter by the Alternating Direction Method of Multipliers (ADMM) algorithm. Yang et al. (2023) proposed a neural network model based on ASM, which can learn its parameters from sparse data of road sensors. In our study, the proposed GP approach is a probabilistic model that can learn the parameters and uncertainties from the data. Besides, the proposed method can be applied to the TSE problem on a continuous space without defining grids.
|
We observe that the computational time of ASM, GP-rotated, and P-GP-rotated methods increases as the penetration rate increases. This is understandable as more data needs to be processed, leading to higher computation costs due to the increased traffic information. It’s worth highlighting that both ASM and GP-based methods can benefit from using a locality approximation that excludes distant points in the filters/covariance matrices to speed up the computation (e.g., Gramacy and Apley, 2015). Besides, our testing only employs a naive implementation of ASM. Faster implementations of ASM exist, leveraging efficient matrix operations and the Fast Fourier Transform (FFT) (Schreiter et al., 2010), which can reduce computation time by two orders of magnitude. Considering these, while P-GP-rotated demonstrates satisfactory computational efficiency, ASM can be significantly faster with proper implementations.
|
We can see that the running time on the highD dataset is shorter than on the NGSIM dataset. This is because the highD dataset has a smaller grid size. The computational time of STH-LRTC is considerably higher compared to other methods. For instance, on highD data, it takes approximately 20 to 60 times longer than the P-GP-rotated method and 15 to 30 times longer than the ASM computation. Moreover, the computational efficiency of STH-LRTC drops significantly with a lower penetration rate. This is mainly due to the increase in the spatiotemporal delay embedding lengths ($\tau_{s}$ and $\tau_{t}$), which impacts the computation time substantially. As a result, the computational cost of STH-LRTC becomes extremely high under such scenarios. However, it is essential to note that this trend might not always hold, and a slight change in the parameters of the delay embedding in STH-LRTC could alter the trend.
|
C
|
\[
R_{\infty}(\widehat{f}_{k})-R_{\infty}^{*}\leq 2\sup_{h\in\mathcal{H}}|R_{\infty}-R_{t_{n,k}}|(h\circ\theta)+2\sup_{h\in\mathcal{H}}|R_{t_{n,k}}-\widehat{R}_{k}|(h\circ\theta)+R_{\infty}(h_{\mathcal{H}}\circ\theta)-\inf_{h\text{ measurable}}R_{\infty}(h\circ\theta).
\]
|
A key feature of Algorithm 1 is that its training step involves the angular component of extremes solely. It returns a prediction function $\widehat{f}$ which only depends on the angular component $\theta(X)$ of a new input $X$. This apparently arbitrary choice turns out to be fully justified under regular variation assumptions, which are introduced and discussed in the following subsections. To wit, the main theoretical advantage of considering an angular prediction function is to ensure the convergence of the conditional risk $R_{t}$ as $t\to+\infty$. In practice, rescaling all extremes (in the training set and in new examples) onto a bounded set allows a drastic increase in the density of available training examples and a clear extrapolation method beyond the envelope of observed examples.
|
An upper confidence bound for the excess of $R_{\infty}$-risk of a solution of (4) is established, when
|
As is generally the case in statistics of extremes, two types of bias terms are involved in the upper bound (20) of Corollary 1.
|
The corollary below summarizes the main results of Sections 3.1 and 3.2 in the form of an upper confidence bound for the excess of $R_{\infty}$-risk for any solution $\hat{f}_{k}$ of the problem
|
C
|
Upon convergence of the chain, this process yields a sample from the joint posterior distribution of $\boldsymbol{Z}$, $\boldsymbol{b}$, and $\sigma^{2}$.
|
Here we present the details of how to obtain the full conditional distributions used in our Gibbs sampler (see panel in Section 2.2) for the model given in (3). Similarly, one can obtain the full conditional distributions for the model in (4).
|
Numerical experiments were carried out with synthetic data in order to assess the main properties of the proposed model. In Section 3.1, we describe details about the synthetic data generation and the Gibbs sampler implementation. Section 3.2 presents the performance metrics used for model evaluation. Then, in Section 3.3, we present the results of our numerical experiments.
|
This paper is organized as follows. Section 2 presents the proposed Bayesian model for variable selection in FOSR, while Section 3 shows the design and results of several numerical experiments involving the proposed methodology. Also, in Section 3, we present a comparative study between the proposed model and the methods group LASSO, group SCAD, group MCP and BGLSS. In Section 4, we conduct a study to evaluate the performance of the proposed model in a functional regression problem involving some socioeconomic data and COVID-19 data from the Federal District and Brazilian states. Finally, Section 5 provides some general conclusions about this work.
|
This section exhibits the results of two different types of numerical experiments. Firstly, the proposed method is tested with synthetic data without replications. Secondly, simulations show the performance of the procedure under a scheme with replications. The diagnostic analysis based on the method proposed by [12] attested to the convergence of the chains of the partial functional coefficients in all experiments and in all model configurations evaluated after the burn-in period. It is also important to note that B-splines were used for the basis expansion of all partial functional coefficients.
|
B
|
To analyze functional data with serial correlations, such as the Texas temperature data in our study, one approach is to model the functions directly by extending autoregressive models to functional data; see Bosq (2000) and Kokoszka and Reimherr (2013).
|
To analyze functional data with serial correlations, such as the Texas temperature data in our study, one approach is to model the functions directly by extending autoregressive models to functional data; see Bosq (2000) and Kokoszka and Reimherr (2013).
|
This approach, however, is not applicable when the functions are only sparsely observed as in the Texas temperature data.
|
The aforementioned works, however, only considered one-dimensional functional time series, and cannot be simply extended to model 2-dimensional functional time series with missing data and observed on an irregular 2-dimensional domain, such as the Texas temperature data.
|
In this work, we propose a unified approach to model serially correlated 2-dimensional functional data and analyze the Texas temperature data with an FPCA model.
|
B
|
The likely core reason for these problems is the inverse scaling with the "squared" gradient, which has previously been pointed out as problematic for methods using empirical estimates of the Fisher information matrix as preconditioners (Kunstner et al., 2019). It is likely that better normalization or adaptive control of a possibly element-wise $\alpha^{2}$ will resolve these issues, but we do not have good solutions ready yet. The current experiments validate that the metric has potential, but further development is needed to provide a robust practical method.
|
We note that in the main experiments we purposefully ignored the samplers’ different running times to ensure that they have equal storage cost and computational cost during evaluation, without requiring method-specific thinning intervals or other tricks that would make the interpretation of the results harder. As shown by Tables 2 and 3, the Shampoo metric takes, depending on the case, 1.3-2.4 times longer to compute than the Identity metric. In Appendix A.6 we provide additional empirical results where all samplers are restricted to use the same total computation time, to provide an alternative perspective on sampler efficiency in terms of wall-clock time. We observe that despite some differences, the Monge and Shampoo metrics still retain their advantages over the previous methods.
|
We will later observe that the performance depends strongly on the choice of $\alpha^{2}$, and hence the metric introduces a new hyperparameter that needs to be selected carefully. When $\alpha^{2}=0$ the metric reduces to the identity, and it differs increasingly from the Euclidean one for larger values. Note that the $\alpha$ value here does not have the same interpretation as in Lagrangian Monte Carlo, as there is an implicit scaling depending on the number of data points in the training set. The validity of the sampler for all $\alpha^{2}\geq 0$ is shown by the following theorem, with the proof provided in Appendix A.2.
|
The experimental results are shown in Table 3. Similar to the previous experiments, the Shampoo metric is the best in both cases and the computational overhead over the alternatives is manageable. For the Monge metric the optimal choice here is always to resort to the identity metric with $\alpha^{2}=0$, and hence no improvement is observed. In terms of posterior difficulty, we observe that switching to correlated priors makes the curvature slightly lower, matching the hypothesis of Fortuin et al. (2022), while also improving the log-probability and accuracy in terms of all metrics. Shampoo results in the best performance in both cases, while Wenzel and RMSprop are worse than using the identity metric. While the standard deviations are larger than in the MNIST case, Shampoo still yields consistent improvements over the identity metric, especially when using a correlated Gaussian prior, with log-probability roughly equal to that of the identity metric plus two times the standard deviation.
|
This plot is shown as a function of the iteration. The Shampoo metric takes around 1.3-2.4 times longer per iteration, but all other metrics share roughly the same cost.
|
A
|
We introduce a novel partial order for the class of pdRCON models that coincides with the model inclusion order if two models are model inclusion comparable but that also includes order relationships between certain models which are model inclusion incomparable. We show that the class of pdRCON models forms a complete lattice also with respect to this order, that we call the twin lattice. The twin lattice is distributive and its exploration is more efficient than that of the model inclusion lattice. Hence, the twin lattice can be used to improve the efficiency of procedures, either Bayesian and frequentist, which explore the model space moving between neighbouring models. More specifically, the focus of this paper is on stepwise greedy search procedures, and we show how the twin lattice can be exploited to improve efficiency in the identification of neighbouring submodels.
|
One way to increase the efficiency of greedy search procedures is by applying the, so-called, principle of coherence that is used as a strategy for pruning the search space. The latter was introduced in Gabriel (1969) where it is stated that: “in any procedure involving multiple comparisons no hypothesis should be accepted if any hypothesis implied by it is rejected”. We remark that, for convenience, we say “accepted” instead of the more correct “non-rejected”. Consider some goodness-of-fit test for testing models at a given level α𝛼\alphaitalic_α so that for every model in a given class we can apply the test and determine whether the model is rejected or accepted. In graphical modelling, the principle of coherence is typically implemented by requiring that we should not accept a model while rejecting a larger model; see, among others,
|
We have considered the problem of structure learning of GGMs for paired data by focusing on the family of RCON models defined by coloured graphs named pdCGs. The main results of this paper provide insight into the structure of the model inclusion lattice of pdCGs. We have introduced an alternative representation of these graphs that facilitates the computation of neighbouring models. Furthermore, this alternative representation is naturally associated with a novel order relationship that has led to the construction of the twin lattice, whose structure resembles that of the well-known set inclusion lattice, and that facilitates the exploration of the search space. These results can be applied in the implementation of both greedy and Bayesian model search procedures. Here, we have shown how they can be used to improve the efficiency of stepwise backward elimination procedures. This has also made it clear that the use of the twin lattice facilitates the correct application of the principle of coherence. Finally, we have applied our procedure to learn a brain network on 36 variables. This model dimension could be regarded as somewhat small, compared with the number of variables that can be dealt with by penalized likelihood methods. This is due to the fact that, as shown in Section 6, the number of pdRCON models is much larger than that of GGMs and the same is true of the number of neighbouring submodels that need to be identified at every step of the algorithm. Furthermore, for every model considered, the computation of the maximum likelihood estimate is not available in closed form, but it involves an iterative procedure. Efficiency improvement is the object of current research and could be achieved, for instance, by both implementing a procedure that deals with candidate submodels in parallel, and a procedure for the computation of maximum likelihood estimates explicitly designed for pdRCON models.
|
One way to increase the efficiency of greedy search procedures is by applying the, so-called, principle of coherence (Gabriel, 1969) that is used as a strategy for pruning the search space. We show that for the family of pdRCON models the twin lattice allows a more straightforward implementation of the principle of coherence.
|
We introduce a novel partial order for the class of pdRCON models that coincides with the model inclusion order if two models are model inclusion comparable but that also includes order relationships between certain models which are model inclusion incomparable. We show that the class of pdRCON models forms a complete lattice also with respect to this order, that we call the twin lattice. The twin lattice is distributive and its exploration is more efficient than that of the model inclusion lattice. Hence, the twin lattice can be used to improve the efficiency of procedures, either Bayesian and frequentist, which explore the model space moving between neighbouring models. More specifically, the focus of this paper is on stepwise greedy search procedures, and we show how the twin lattice can be exploited to improve efficiency in the identification of neighbouring submodels.
|
C
|
\[
\begin{split}
\|\mu_{\widehat{Z}_{i}}-\mu_{Z_{i}}\|_{\mathcal{H}}^{2}&=\Big\|\frac{1}{m}\sum_{j=1}^{m}\big(K_{\bm{x}_{ij}}-\mu_{Z_{i}}\big)\Big\|_{\mathcal{H}}^{2}\\
&=\frac{1}{m^{2}}\sum_{j=1}^{m}\|K_{\bm{x}_{ij}}-\mu_{Z_{i}}\|_{\mathcal{H}}^{2}+\frac{1}{m^{2}}\sum_{j\neq j^{\prime}}\big\langle K_{\bm{x}_{ij}}-\mu_{Z_{i}},\,K_{\bm{x}_{ij^{\prime}}}-\mu_{Z_{i}}\big\rangle_{\mathcal{H}}.
\end{split}
\]
|
$\frac{1}{m^{2}}\sum_{j\neq j^{\prime}}\langle K_{\bm{x}_{ij}}-\mu_{Z_{i}},K_{\bm{x}_{ij^{\prime}}}-\mu_{Z_{i}}\rangle_{\mathcal{H}}$ had $0$ expectation, and this will not be the case under the assumed dependence of the samples. Instead we will show that the expectation of the sum of the cross-terms will also be $O(1/m)$ under the restriction on the $\beta$-mixing coefficients (uniform geometric decay).
|
We note that in the proof of the theorems for the i.i.d. case the samples $\bm{x}_{ij}$ only play a part in the excess risk term $R_{n}^{0}$ of (4.4), and the error bounds on $R_{n}^{0}$ depend on $\bm{x}_{ij}$ exclusively through the result in Lemma 2 on the distances between the mean embeddings of the true and observed distributions. So it is enough to show that the bound in Lemma 2 holds for this dependent setup.
|
It is now evident that (2.5) is a direct generalization of standard Gaussian process regression to distribution-valued covariates. The induced kernel $\mathbb{K}$ on the distribution space is simply the double expectation of the original kernel $K$ over the pair of distributions. We refer to (2.5) as the
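As an illustration of this construction (a sketch only; the helper names and the RBF choice are assumptions, not the paper's code), the empirical version of the induced kernel between two sampled distributions is simply the average of the base kernel over all pairs of samples:

```python
import numpy as np

def induced_kernel(X_i, X_j, base_kernel):
    """Empirical induced kernel between two distributions represented by samples.

    X_i, X_j    : arrays of shape (m_i, d) and (m_j, d) of samples from Z_i, Z_j
    base_kernel : function k(x, y) on the original input space
    """
    G = np.array([[base_kernel(x, y) for y in X_j] for x in X_i])
    return G.mean()   # double average of k = inner product of empirical mean embeddings

# Example base kernel: Gaussian RBF with bandwidth s.
rbf = lambda x, y, s=1.0: np.exp(-np.sum((x - y) ** 2) / (2 * s ** 2))
```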
|
We will now show that the expectation of the cross terms in (S2) is 0. Using the RKHS property and applying (2.6),
|
D
|
He (2019) and Cao and Lee (2020) investigate certain asymptotic properties of their proposed posterior for $\theta$, but they focus exclusively on (a) logistic regression and (b) results concerning the marginal posterior $\pi^{n}$ for the configuration $S$. Here we extend the analysis beyond the logistic regression case to arbitrary GLMs as described above, with arbitrary link functions, and establish conditions under which our proposed posterior distribution $\Pi^{n}$ for $\theta$ concentrates around the true $\theta^{\star}$ at (nearly) the optimal rate (e.g., Rigollet, 2012), adaptive to the unknown sparsity level $|S(\theta^{\star})|$.
|
Walker (2017) proposed the idea of using the data to properly center the prior. The motivation is that the tails of the prior should not matter if the prior is strategically centered, so then the computationally simpler conjugate normal priors could still be used. Centering the model-specific conjugate normal priors on the corresponding least-squares estimators makes the approach empirical Bayes, in a certain sense, and the previous authors show that the corresponding empirical Bayes posterior has optimal asymptotic concentration properties and has strong empirical performance compared to existing Bayesian and non-Bayesian methods. In other words, the double-use of data—in the prior and in the likelihood—does not hurt the method’s performance in any way; in fact, one could argue that the double-use of data actually helps. Beyond the normal linear model (Martin, Mess and
|
The goal here is to develop the aforementioned empirical Bayes strategy for the case of high-dimensional GLMs. In Section 2, we introduce the set up of the GLM problem and review the empirical Bayes approach for linear regression. In Section 3, we present our empirical Bayes GLM, including the particular choice of data-driven prior, the corresponding empirical Bayes posterior, and our proposed computational strategy. The key challenge in the present GLM case compared to previous efforts in the linear model setting is that the models are sufficiently complicated that there is no conjugacy and, therefore, no posterior computations can be done in closed form. Here the “informativeness” of the data-driven prior allows for some simple and accurate approximations. In Section 4, we offer theoretical support for our proposed solution. In particular, we have two basic kinds of posterior concentration results: those for the GLM coefficients, which are relevant to estimation, and those for the so-called configuration, or active set, which are relevant to variable selection. In the former case, we give sufficient conditions for the posterior to concentrate around the true (sparse) coefficient vector at rates equivalent to those established in, e.g., Jeong and Ghosal (2021), which agree with the minimax optimal rates in the linear model setting. In the latter case, we give sufficient conditions (e.g., on the size of the smallest non-zero coefficient), comparable to those in Narisetty, Shen and
|
The appearance of an additional term (the sparse singular value) depending on $X$ is expected since the response $y$ depends directly on $X\theta$, not on $\theta$ itself. This is easy to see in the linear model case where the Hellinger distance is proportional to the $\ell_{2}$-norm between fitted values. So to strip the $X$ away and investigate the posterior concentration directly in terms of $\theta$ requires some conditions on $X$, which are baked into the effect the $\phi$ term has on the rate. For example, if the $\phi$ term in (13) is bounded away from 0, which amounts to a condition on $X$, then that term can be absorbed into the constant $M$ and the $\ell_{2}$-rate agrees with the Hellinger rate above. In any case, the result here is the same as that proved in Jeong and Ghosal (2021), so the reader interested in details about $\phi$ can refer to their discussion.
|
He (2019) and Cao and Lee (2020). These can be roughly classified as conditions on the dimension of the problem and on the design matrix $X$. The third condition concerns the hyperparameters in our empirical prior.
|
D
|
Given the severe target imbalance and extrapolation induced by using URI to estimate the ATE, one has a few options. First, one can attempt to use other methods that retain the target estimand, such as MRI or other weighting or matching methods (possibly in combination with MRI or URI). Second, one can change the target estimand of interest to one that is better supported by the data, though this choice should be determined primarily by substantive considerations (see Greifer and Stuart 2021). We use both of these approaches below, changing the target estimand to the study’s more natural estimand, the ATT, and investigating the performance of MRI and an initial matching step to reduce extrapolation and dependence on the outcome model.
|
Below, we use the MRI to target the ATT. The call to lmw() is almost identical, except this time we set method = "MRI" and estimand = "ATT":
|
Although the usual target estimand with the Lalonde data is the ATT, here we target the ATE to demonstrate the features of lmw that can be used to diagnose balance, representativeness, and extrapolation. We then follow with an example targeting the ATT, an example using a multi-valued treatment, and an example using 2SLS with an instrumental variable.
|
The syntax for multi-valued treatments is essentially the same as with binary treatments except that the focal argument needs to be supplied when estimand = "ATT" to identify which treatment level is to be considered the “treated” or “focal” level. (When URI is used, an additional contrast argument is required to identify a pair of groups to be contrasted, since each contrast will receive its own set of weights, which will be computed one at a time by lmw().)
|
Given the severe target imbalance and extrapolation induced by using URI to estimate the ATE, one has a few options. First, one can attempt to use other methods that retain the target estimand, such as MRI or other weighting or matching methods (possibly in combination with MRI or URI). Second, one can change the target estimand of interest to one that is better supported by the data, though this choice should be determined primarily by substantive considerations (see Greifer and Stuart 2021). We use both of these approaches below, changing the target estimand to the study’s more natural estimand, the ATT, and investigating the performance of MRI and an initial matching step to reduce extrapolation and dependence on the outcome model.
|
A
|
Fig. S11: The empirical sizes of the score-matching-based Wald test for significance levels ranging from 0.01 to 0.30 under the setting of the vMF auto model, for $n$ varying in $\{200, 500, 1000\}$. The benchmark represents the ideal case when the percentage of rejections from 1000 replications is equal to the significance level.
|
In this section, we develop generalized score matching for ordinal data. For the sake of simplicity, we only present generalized score matching for independent ordinal data, where Section 3.1 and Section 3.2 cover univariate and multivariate ordinal data, respectively. The extension to dependent data follows a similar idea to what is presented in Section 3.2.
|
In this section, we will present the theoretical properties of our proposed generalized score matching for INID multivariate ordinal data $\bm{y}$.
|
In this article, we extend score matching beyond continuous IID models. Specifically, we propose a novel generalized score matching approach for ordinal data. The proposed generalized score matching approach goes beyond previous research in that it can be applied to univariate and multivariate ordinal data. The simulations and real data analysis support our theoretical results. Furthermore, our proposed generalized score matching technique has the potential to make significant contributions to the fields of Bayesian statistics and deep learning based on the subsequent studies in [26, 25]. By deriving the consistency and asymptotic normality of the proposed estimators under the independence assumption, we establish the theoretical foundation for score-matching-based inference for general models. Additionally, we propose a novel auto model for spherical data and develop a score-matching-based Wald test to test the spatial independence. This illustration also shows that the extension of score matching beyond the IID case advances the fields of both statistical estimation and statistical modeling.
|
Generalized score matching for multivariate ordinal data has analogous theoretical properties to those in the univariate ordinal case. Detailed discussion of these properties can be found in Section S3 of the Supplementary Material.
|
B
|
A more accurate approximation of a non-linear Bayesian filter than the advanced EKF variants is possible through derivative-free Gaussian sigma-point KFs (SPKFs). These filters generate deterministic points and propagate them through the non-linear functions to approximate the mean and covariance of the posterior density. While EKF is applicable to differentiable functions, SPKFs handle discontinuities. A popular SPKF is the unscented KF (UKF) [15], which utilizes unscented transform to generate sigma points and approximates the mean and covariance of a Gaussian distribution under non-linear transformation. The basic intuition of unscented transform is that it is easier to approximate a probability distribution than it is to approximate an arbitrary non-linear function [15]. EKF, on the other hand, considers a linear approximation for the non-linear functions. The corresponding inverse UKF (I-UKF) was proposed in our recent work [16, 17].
|
The SPKF performance is further improved by employing better numerical integration techniques to calculate the recursive integrals in Bayesian estimation. For example, cubature KF (CKF) [18] and quadrature KF (QKF) [19, 20] numerically approximate the multidimensional integral based on, respectively, cubature and Gauss-Hermite quadrature rules. The cubature-quadrature KF (CQKF)[21] uses the cubature and quadrature rules together while central difference KF[19] considers polynomial interpolation methods with central difference approximation of the derivatives. In practice, a non-linear filter’s performance also depends on the system itself. Selecting the most appropriate filter for a given application typically entails striking a balance between estimation precision and computational complexity [22].
|
Contributions: In this paper, we develop inverse filters based on the afore-referenced efficient numerical integration techniques, namely, inverse CKF (I-CKF), inverse QKF (I-QKF), and inverse CQKF (I-CQKF). To this end, similar to the inverse cognition framework in [5, 10], we assume perfect system information. These methods can also be readily generalized to non-Gaussian, continuous-time state evolution or complex-valued system cases. When the system model is not known, our prior works [14, 17] addressed this case by employing parameter learning in the reproducing kernel Hilbert space (RKHS). In this paper, we develop RKHS-CKF based on the cubature rules. We then derive the stability conditions for the proposed I-CKF and show that the recursive estimates are also consistent. Our theoretical analyses show that the forward filter’s stability is sufficient to guarantee the same for the inverse filter under mild conditions imposed on the system. In the process, we also obtain improved stability results, hitherto unreported in the literature, for the forward CKF. Our numerical experiments demonstrate the proposed methods’ performance compared to the recursive Cramér-Rao lower bound (RCRLB) [26].
|
The (forward) CKF generates a set of $2n_{x}$ cubature points deterministically about the state estimate based on the third-degree spherical-radial cubature rule to numerically compute a standard Gaussian weighted non-linear integral [18]. Similarly, the ($m$-point) QKF employs an $m$-point Gauss-Hermite quadrature rule to generate $m^{n_{x}}$ quadrature points [19]. In [20], QKF was reformulated using statistical linear regression, wherein the linearized function is more accurate in a statistical sense than EKF's first-order Taylor series approximation. The CQKF generalizes CKF by efficiently employing the cubature and quadrature integration rules together [21]. In particular, the $n_{x}$-dimensional recursive Bayesian integral is decomposed into a surface and line integral approximated using third-degree spherical cubature and one-dimensional Gauss-Laguerre quadrature rules, respectively.
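To make the third-degree spherical-radial rule concrete, here is a minimal sketch (Python/NumPy; the function and variable names are illustrative and not taken from the cited papers) of how the $2n_{x}$ cubature points can be generated and propagated through a non-linearity to approximate the transformed mean and covariance:

import numpy as np

def cubature_points(mean, cov):
    # Third-degree spherical-radial rule: 2n points at +/- sqrt(n) along the
    # columns of a covariance square root, each with weight 1/(2n).
    n = mean.size
    S = np.linalg.cholesky(cov)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # (n, 2n) unit directions
    pts = mean[:, None] + S @ xi                            # (n, 2n) cubature points
    w = np.full(2 * n, 1.0 / (2 * n))
    return pts, w

def propagate(f, mean, cov):
    # Approximate E[f(x)] and Cov[f(x)] for x ~ N(mean, cov).
    pts, w = cubature_points(mean, cov)
    fx = np.column_stack([f(pts[:, i]) for i in range(pts.shape[1])])
    m = fx @ w
    d = fx - m[:, None]
    P = (d * w) @ d.T
    return m, P

# Example: propagate a 2-D Gaussian through a mild non-linearity.
m, P = propagate(lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])]),
                 np.array([1.0, 0.2]), np.diag([0.1, 0.05]))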
|
We developed I-CKF, I-QKF, and I-CQKF to estimate the defender’s state, given noisy measurements of the attacker’s actions in highly non-linear systems. On the other hand, RKHS-CKF, as both forward and inverse filters, provides the desired state and parameter estimates even without any prior system model information. In the case of the perfect system model information, our developed methods can be further generalized to systems with non-Gaussian noises, continuous-time state evolution, or complex-valued states and observations. The proposed filters exploit the cubature and quadrature rules to approximate the recursive Bayesian integrals. The I-CKF’s stability conditions are easily achieved for a stable forward CKF. The developed inverse filters are also consistent, provided that the initial estimate pair is consistent. Numerical experiments show that I-CKF and I-QKF outperform I-UKF even when they incorrectly assume the true form of the forward filter. However, I-QKF is computationally expensive, while I-CQKF provides reasonable estimates at lower computational costs. The non-trivial upshot of this result is that the forward filter does not need to be known exactly to the defender.
|
A
|
\[
\mathbf{E}_{3}^{(m^{\prime},n^{\prime})}=\begin{bmatrix}\mathbf{P}^{(m^{\prime},n^{\prime},k^{\prime})}(\mathbf{c}_{1}-\mathbf{c}_{1})&\dots&\mathbf{P}^{(m^{\prime},n^{\prime},k^{\prime})}(\mathbf{c}_{1}-\mathbf{c}_{Q})\\ \vdots&\ddots&\vdots\\ \mathbf{P}^{(m^{\prime},n^{\prime},k^{\prime})}(\mathbf{c}_{Q}-\mathbf{c}_{1})&\dots&\mathbf{P}^{(m^{\prime},n^{\prime},k^{\prime})}(\mathbf{c}_{Q}-\mathbf{c}_{Q})\end{bmatrix}.
\]
|
To find the optimal values of the regularization parameters in (44a), we employ the following criteria based on the expected value of the dual norm to select the hyperparameters:
|
We rewrite the first conditions of the dual certificate (31) and (32) as the following linear system of equations
|
The existence of dual polynomials guarantees that the optimal solution of the primal problem (23) is the pair $\{\mathbf{X}_{r},\mathbf{X}_{c}\}$ based on the derivation of the dual certificate. We first find the polynomials $\mathbf{f}_{r}(\mathbf{r})$ and $\mathbf{f}_{c}(\mathbf{c})$ that satisfy the following conditions to ensure maximum modulus of the dual polynomial occurs at the true parameter values:
|
Proposition 4 below states the conditions for the exact recovery of the radar and communications channel parameters.
|
B
|
\[
\exists\,z\in\{b\}^{\perp}:\ \|A_{I^{C}}^{\top}(b/\|b\|+z)\|_{\infty}<\lambda.
\]
|
For $\lambda=\frac{2}{\sqrt{5}}$, we find that $\bar{x}=0$ is a solution
|
Note that for $I=\emptyset$ (i.e., $\bar{x}=0$), condition (i) is vacuously
|
The former corresponds to the fact that for $\bar{x}=0$, we have
|
For $\lambda=\sqrt{2}$, we find that $\bar{x}=0$ is a solution (with
|
C
|
Under the generative model associated with the Risk Ratio (as stated in Corollary 2), the conditional Risk Ratio captures the treatment effect, but the Risk Ratio computed on the overall population depends on the baseline: the Risk Ratio is unable to disentangle the treatment effect from the baseline both at a strata level and at the population level. Finally Appendix D proposes a comment about the logistic regression model which is usually used in applied statistics.
|
While any CATE $\tau(x)$ is able to disentangle the treatment effect from the baseline provided a suitable generative model, it is not the case for the ATE. In fact, among all collapsible measures, only linear causal measures are able to disentangle the baseline from the treatment effect modification both at a conditional level (CATE) and for the overall population (ATE).
|
Lemma 5 seems to suggest that all causal measures are equivalent, in the sense that they can all capture the conditional treatment effect modification, provided a suitable generative model.
|
The proof can be found in Appendix C.5.1. Lemma 5 shows that for any causal measure, there exists an appropriate generative model such that, under this model, the conditional causal measure captures the treatment effect $m(x)$.
|
one can make explicit the generative model associated to any causal measure. In particular, the Conditional Odds Ratio equals the treatment effect in the logistic model (see Section D).
|
B
|
$\bm{1}_{\mathrm{condition}}$
|
See section 4.4 in the graphics bundle documentation (http://www.ctan.org/tex-archive/macros/latex/required/graphics/grfguide.ps)
|
Set subtraction, i.e., the set containing the elements of $\mathbb{A}$ that are not in $\mathbb{B}$
|
The parents of $\mathrm{x}_{i}$ in $\mathcal{G}$
|
There will be a strict upper limit of 9 pages for the main text of the initial submission, with unlimited additional pages for citations.
|
A
|
Typically, the chemical kinetics capturing detailed chemistry of methane-air ignition is computationally expensive due to hundreds of associated reactions.
|
Staley and Yue [15] proposed that the positive definiteness of the FIM is the necessary and sufficient condition for the parameters to be considered practically identifiable.
|
To model the reaction chemistry, consider the classical 2-step mechanism proposed by Westbrook and Dryer [50] that accounts for the incomplete oxidation of methane.
|
Such priors result in pre-exponential factors in the order similar to those reported by Westbrook and Dryer [50], and are therefore considered suitable for the study.
|
Typically, the chemical kinetics capturing detailed chemistry of methane-air ignition is computationally expensive due to hundreds of associated reactions.
|
B
|
Similarly, we applied the feature selection techniques and ultimately incorporated the same six variables into the model, with an additional one being insulin use. We adjusted for the five effect modifiers in the $P(Y\,|\,do(T),\bm{C})$ component in the frugal parameterization as outlined in Section 4.1. With the optimal power $\eta$ adaptively set at 0.6 on average, the estimated ATE is $-1.33\%$ with a 95% credible interval of $[-2.41\%,\,-0.19\%]$, a 19.5% reduction in width from the confidence interval from analysing PIONEER 6 alone. Thus, leveraging augmented RWD, we have concluded a statistically significant reduction in 1-year MACE risk.
|
In this paper, we present a novel power likelihood approach for effectively augmenting RCTs with observational data to improve the estimation of heterogeneous treatment effects. The remainder of this paper is organized as follows: Section 2 reviews data fusion methods proposed in recent literature. In Section 3 we outline the problem setup and main assumptions, before moving on to our proposed power likelihood approach in Section 4. In Section 5 we conduct a simulation study to illustrate the effectiveness of our approach.
|
The feature selection methods outlined in Section 6.1 led to the inclusion of six key variables in our model: age, serum albumin level, serum creatinine level, LDL, HDL and history of heart failure. To address missingness in the outcome, we employed multiple imputation (Gelman et al., 1995). Pooling the posterior samples from five sets of imputations, the ATE estimated from our proposed power likelihood method is $-1.36\%$ with a 95% credible interval of $[-2.42\%,\,-0.23\%]$. This represents a 20.5% reduction from the width of the confidence interval from the unadjusted analysis of the PIONEER 6 data. Notably, this enhanced efficiency resulted in the reduction in 1-year MACE risk becoming statistically significant, despite the minimal shift in the point estimate.
|
Assumption 3 states that the CATE of interest in the RWD is identical to the CATE in the RCT population. It is important to highlight that Assumption 3 does not stipulate the mean of the potential outcomes to be the same across $\mathcal{D}_{e}$ and $\mathcal{D}_{o}$. Therefore, Assumption 3 is strictly weaker than assuming $\mathbb{E}[Y(t)\,|\,S=0]=\mathbb{E}[Y(t)\,|\,S=1]$ for $t\in(0,1)$; this gives flexibility when the absolute level of the outcome differs between the RCT and RWD, for reasons such as different time windows, regions or standards of care. In practice, to improve adherence to this assumption, we can follow the target trial emulation framework, as outlined in Hernán and Robins (2016) and Hernán et al. (2022). This approach systematically emulates the design of a target RCT using observational data, thereby enhancing the comparability between the RCT and RWD, as demonstrated in the PIONEER 6 data study in Section 6.
|
In this case study, as shown in Figure 3, the two data fusion designs yielded consistent results: the ATE’s point estimates closely aligned with the mean difference observed in PIONEER 6, yet the confidence intervals effectively shrank by approximately 20%, demonstrating the effectiveness of our method in robustly augmenting the RCT to increase power.
|
D
|
We also consider mixing in MCMC algorithms by examining the effective sample size per second (ES/sec), or the rate at which independent samples are generated by the MCMC algorithm. Larger values of ES/sec are indicative of faster mixing Markov chains. Note that the PICAR-Z approach generates a faster mixing MCMC algorithm than the ‘reparameterized’ approach (Christensen and Waagepetersen, 2002), a method specifically designed to improve mixing for SGLMMs. For model parameters $\beta_{1o}$, $\beta_{2o}$, $\beta_{1p}$, and $\beta_{2p}$, PICAR-Z yields an ES/sec of 218.89, 214.15, 44.00, and 43.83, respectively. The ‘reparameterized’ approach returns an ES/sec of 0.66, 0.63, 0.26, and 0.25, respectively. For the spatial random effects $\mathbf{W}_{o}(s)$ and $\mathbf{W}_{p}(s)$, the median ES/sec is 53.09 and 28.70 for the PICAR-Z approach and 0.71 and 0.40 for the ‘reparameterized’ approach, an improvement by a factor of roughly 74.3 and 72.5. Across all four model classes, the PICAR-Z approach has shorter walltimes to run 150,000 iterations of the Metropolis-Hastings algorithm than low-rank (bisquare) and the ‘reparameterized’ approach (Table 1). Against the ‘reparameterized’ approach, PICAR-Z exhibits a speed-up factor of roughly 152.4, 121.2, 203.9, and 177.4 for the count hurdle, semi-continuous hurdle, count mixture, and semi-continuous mixture models, respectively. We also conducted a sensitivity analysis regarding the proportion of zeros within the sample and various model performance metrics. Results (see supplement) indicate that datasets with a low proportion of zeros have lower AUC (poor classification) than datasets with a larger proportion of zeros. Low proportions of zeros are linked with shorter model-fitting walltimes. Boxplots of the relevant metrics - Total RMSPE, Non-Zero RMSPE, AUC for the zero-valued observations, and walltimes - are also provided in the supplement.
|
Table 3: West Antarctica Ice Thickness Results: Results are grouped by two-part model (hurdle vs. mixture) and approach (PICAR-Z vs. low-rank with bisquare basis functions). Results include the root mean squared prediction error (rmspe) for the entire validation dataset and the non-zero data. For zero- vs. non-zero classification, we report the area under the ROC curve (AUC). Model-fitting walltimes are reported in minutes.
|
Table 2: Bivalve Species Results: Results are grouped by two-part model (hurdle vs. mixture) and approach (PICAR-Z, PICAR-Z with cross-correlation, and low-rank approach using bisquare basis functions). Results include the root mean squared prediction error (rmspe) for the entire validation dataset and the non-zero data. For zero- vs. non-zero classification, we report the area under the ROC curve (AUC). Model-fitting walltimes are reported in minutes.
|
Table 1 contains the out-of-sample prediction results for the entire validation sample (rmspe), positive-valued observations (rmspe), and zero vs. non-zero values (AUC) as well as the average model-fitting walltimes. Results of the simulation study suggest that PICAR-Z outperforms both competing approaches in prediction across all four classes of two-part models (see Table 1). All approaches perform comparably for binary classification of the zero vs. non-zero cases, as corroborated by similar AUC values. However, the PICAR-Z methods (with and without cross-correlation) provide more accurate predictions for the non-zero (i.e., positive-valued) observations, in comparison to the other two methods. Estimating the correlation parameter does not strongly affect accuracy, save for the semi-continuous hurdle case. Note that the PICAR-Z approach outperformed the ‘reparameterized’ approach in predictive performance, which is consistent with results from past studies that examined basis representations of spatial latent fields (Bradley et al., 2019; Lee and Haran, 2022). Figure 1 provides a visual representation of the latent probability $\pi(\mathbf{s})$ and log-intensity $\log(\theta(\mathbf{s}))$ surfaces.
|
Table 1: Simulation Study Results: Median values for all 100 samples in the simulation study. Results are grouped by two-part model (hurdle vs. mixture), data type (counts vs. semi-continuous), and approach (PICAR-Z, PICAR-Z with cross-correlation, low-rank (bisquare), and the ‘reparameterized’ approach). Results include the root mean squared prediction error (rmspe) for the entire validation dataset and the non-zero data. For zero- vs. non-zero classification, we report the area under the ROC curve (AUC). Model-fitting walltimes are reported in minutes.
|
D
|
$(\mathbb{E}[X],\mathbb{E}[f(X)])\in\overline{\mathrm{Conv}(\mathcal{G}(f))}$ are of algebraic nature and have little to do with measure or integration theory.
|
Hence, the graph convex hull bounds can be extended to a broader class of linear operators beyond expected values, so-called Markov operators, as well as to conditional expectations, which are well-known facts in the case of Jensen’s inequality, see e.g. (Bakry et al., 2014, Equation (1.2.1)) and (Dudley, 2002, Theorem 10.2.7).
|
Such operators are known as Markov operators and play a crucial role in the analysis of time evolution phenomena and dynamical systems (Bakry et al., 2014).
|
They are well-known to satisfy Jensen’s inequality (Bakry et al., 2014, Equation (1.2.1)), yet we significantly broaden the setting in which it applies, cf. Remark 4.2.
|
It is well-known that Jensen’s inequality also holds for conditional expectations (Dudley, 2002, Theorem 10.2.7).
|
A
|
\[
\begin{split}
W&\sim N(0,\Sigma_{100\times 100}),\quad\Sigma_{ij}=\begin{cases}1,&i=j\\ 0.1\,|i-j|^{-1.8},&\text{otherwise}\end{cases}\\
A\,|\,W&\sim\mathrm{Bernoulli}\left(\mathrm{logit}^{-1}\left(\tfrac{1}{4}(W_{1}+W_{2}+W_{3})\right)\right)\\
Y\,|\,A,W&\sim\mathrm{Bernoulli}\left(\mathrm{logit}^{-1}\left(1-2A+\sum_{j=1}^{5}W_{j}+\left(A-\tfrac{1}{2}\right)\sum_{j=1}^{5}W_{j}\right)\right).
\end{split}
\]
|
treatment indicator. Here, $C^{(a)}$ and $T^{(a)}$ correspond, respectively, to
|
Here, $\Sigma$ is a $100\times 100$ Toeplitz matrix, so that the pre-treatment
|
\[
\begin{split}
W&\sim N(0,\Sigma_{100\times 100}),\quad\Sigma_{ij}=\begin{cases}1,&i=j\\ 0.1\,|i-j|^{-1.8},&\text{otherwise}\end{cases}\\
A\,|\,W&\sim\mathrm{Bernoulli}\left(\mathrm{logit}^{-1}\left(\tfrac{1}{4}(W_{1}+W_{2}+W_{3})\right)\right)\\
Y\,|\,A,W&\sim\mathrm{Bernoulli}\left(\mathrm{logit}^{-1}\left(1-2A+\sum_{j=1}^{5}W_{j}+\left(A-\tfrac{1}{2}\right)\sum_{j=1}^{5}W_{j}\right)\right).
\end{split}
\]
|
Here, $\alpha$ is the average treatment effect. Thus, $\Psi^{F}(P_{X,0})$ can
|
B
|
Since $A^{-1}$ is in fact the exact posterior variance,
|
$\hat{\sigma}_{d}^{-2}-\accentset{*}{\sigma}_{d}^{-2}=O_{p}(N^{-1/2})$ and
|
The $\hat{\eta}$ of Equation 8 is an estimate of $\accentset{*}{\eta}$ insofar
|
iteration, to estimate $\accentset{*}{\eta}$. The ADVI algorithm, which we will sometimes
|
$N$ is, in contrast to $\accentset{*}{\sigma}$, which can be a poor estimate of
|
D
|
Various neural network architectures have been explored for forecasting, ranging from recurrent neural networks to convolutional networks to graph neural networks.
|
Long-term forecasting, which is to predict several steps into the future given a long context or look-back, is one of the most fundamental problems in time series analysis, with broad applications in energy, finance, and transportation. Deep learning models (Wu et al., 2021; Nie et al., 2022) have emerged as a popular approach for forecasting rich, multivariate, time series data, often outperforming classical statistical approaches such as ARIMA or GARCH (Box et al., 2015). In several forecasting competitions such as the M5 competition (Makridakis et al., 2020) and IARAI Traffic4cast contest (Kreil et al., 2020), deep neural networks performed quite well.
|
For sequence modeling tasks in domains such as language, speech and vision, Transformers (Vaswani et al., 2017) have emerged as the most successful model, even outperforming recurrent neural networks (LSTMs)(Hochreiter and Schmidhuber, 1997).
|
Recent work has improved the efficacy of RNNs (Kag et al., 2020; Lukoševičius and Uselis, 2022; Rusch and Mishra, 2020; Li et al., 2019b) and applied parameter efficient SSMs (Gu et al., ; Gupta et al., 2022) to modeling long range dependencies in sequences. They have demonstrated improvement over some transformer based architectures on sequence modeling benchmarks including speech and 1-D pixel level image classification tasks. We compare our method to S4 model (Gu et al., ), which is the only such method that has been applied to global univariate and global multivariate forecasting.
|
Various neural network architectures have been explored for forecasting, ranging from recurrent neural networks to convolutional networks to graph neural networks.
|
B
|
$4.9689\cdot 10^{-4}$
|
$4.9803\cdot 10^{-4}$
|
$4.9689\cdot 10^{-4}$
|
$3.485\cdot 10^{-4}$
|
$3.4428\cdot 10^{-4}$
|
A
|
Let $M=L^{-1}(0)=\bigcap M_{i}$, where $M_{i}=f_{i}^{-1}(0)$, be the locus of global minima of $L$. If each $M_{i}$ is a smooth codimension 1 submanifold of $\mathbb{R}^{n}$, $M$ is nonempty, and the $M_{i}$ intersect transversally at every point of $M$, then at every point $m\in M$, the Hessian evaluated at $m$ has $d$ positive eigenvalues and $n-d$ eigenvalues equal to 0.
|
Without losing the generality, suppose our shallow neural network is in the setting of Corollary 2.6.
|
Consider now a shallow neural net as in Corollary 2.6 or Corollary 2.9. Then we have the following Corollary of Proposition 2.7 :
|
Now we will extend the interpolation property from shallow neural networks to the class of all deep feedforward neural networks. More precisely, we have the following result
|
The nonemptiness of $M$ follows from Corollary 2.6. Each $M_{i}$ is smooth of codimension 1, again by Corollary 2.6 for $d=1$. It remains to prove that the intersection of the $M_{i}$ is transversal. Let $m=(w,b)\in M$. We assume that the intersection at $m$ is not transversal. This means the tangent space $T_{m}M_{1}=T_{m}M_{i}$ for all $i$. From our notations, we have that
|
B
|
$\widehat{\Delta}_{t-1,a}=\max_{a^{\prime}}\hat{\mu}_{t-1,a^{\prime}}-\hat{\mu}_{t-1,a}$ is the empirical suboptimality gap of arm $a$, and $\sigma$ is the subgaussian parameter of the reward distribution of all arms.
|
For sub-Gaussian reward distributions, MS enjoys asymptotic optimality in the special case of Gaussian rewards
|
While the regret guarantee of KL-MS is not asymptotically optimal for the general $[0,1]$ bounded reward setting, it is nevertheless a better regret guarantee than naively viewing this problem as a sub-Gaussian bandit problem and applying sub-Gaussian bandit algorithms to it. To see this, note that any reward distribution supported on $[0,1]$ is $\frac{1}{4}$-sub-Gaussian; therefore, standard sub-Gaussian bandit algorithms will yield an asymptotic regret of $(1+o(1))\sum_{a\in[K]:\Delta_{a}>0}\frac{\ln T}{2\Delta_{a}}$. This is always no better than the asymptotic regret provided by Eq. (5), in view of Pinsker's inequality that $\mathsf{kl}(\mu_{a},\mu_{1})\geq 2\Delta_{a}^{2}$.
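As a quick numerical illustration of this comparison (a minimal Python sketch; it assumes Bernoulli rewards, so that $\mathsf{kl}$ is the Bernoulli KL divergence, and that the per-arm leading term in Eq. (5) takes the familiar $\Delta_{a}\ln T/\mathsf{kl}(\mu_{a},\mu_{1})$ form):

import numpy as np

def bernoulli_kl(p, q):
    # KL divergence between Bernoulli(p) and Bernoulli(q).
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

mu_1, mu_a, T = 0.9, 0.6, 10_000            # best arm mean, suboptimal arm mean, horizon
delta_a = mu_1 - mu_a

subgaussian_term = np.log(T) / (2 * delta_a)                     # ln T / (2 * Delta_a)
kl_term = delta_a * np.log(T) / bernoulli_kl(mu_a, mu_1)         # Delta_a * ln T / kl(mu_a, mu_1)

print(subgaussian_term)   # larger leading regret term
print(kl_term)            # smaller, since kl(mu_a, mu_1) >= 2 * Delta_a^2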
|
Of these, Maillard sampling (MS) Maillard (2013); Bian and Jun (2022), a Gaussian adaptation of the Minimum Empirical Divergence (MED) algorithm Honda and Takemura (2011) originally designed for finite-support reward distributions, provides a simple algorithm for the sub-Gaussian bandit setting that computes $p_{t}$ in a closed form:
|
Our second corollary is that KL Maillard sampling achieves a tight asymptotic regret guarantee for the special case of Bernoulli rewards:
|
A
|
This creates new challenges to data storage and analysis. A standard approach is based on data reduction, or subsampling, where one selects a portion of the data to extract useful information. This is a crucial step in big data analysis. For massive data, subsampling techniques are popular to mitigate computational burden by reducing the data size considerably and bringing it back to a doable size.
|
The input of our algorithm is the subdata, say the set $\textbf{S}=(\textbf{s}_{i}),\,i=1,2,\ldots,k$, obtained by some approach, i.e., the OSS or the IBOSS approach, and so the implementation of our algorithm requires the implementation of the corresponding algorithm first. Our goal is to identify and interchange data points selected by the corresponding algorithm with those that were not selected, say the set $\textbf{D}=(\textbf{d}_{r\cdot}),\,r=1,2,\ldots,n-k$. The criterion that allows this interchange is the increase of the generalized variance, denoted as $V$. At first, we need to mention that we do not take into consideration all data points that were not selected by the corresponding algorithm (set D) as candidate data points of the final subdata. Therefore, a set of candidate points should be obtained. Such a set, denoted by F, is a subset of the data points that have not already been selected by the corresponding algorithm, that is, $\textbf{F}\subset\textbf{D}$. To explain how F is obtained: data points in F are selected in a manner similar to how the algorithm of the IBOSS approach selects data points, with a notable difference. We reiterate (see Section 3.1) that the algorithm of the IBOSS approach selects data points with the smallest as well as the largest values of all covariates sequentially, given that previously selected data points are excluded. So, in order to obtain F, we select data points from D with the smallest as well as the largest values of all covariates sequentially, but previously selected data points are not excluded. This is the difference in the selection of data points between the algorithm of the IBOSS approach and the construction of F. A consequence of such a construction of F is that a data point could be selected more than once, which may occur when a data point qualifies through its values in multiple covariates. To clarify, and without loss of generality, a data point in D can be selected twice for the construction of F if its value in the first covariate is among the smallest ones and its value in the second covariate is among the largest ones. This situation may arise because the selected data point is not excluded from D when we select data points based on the values of the first covariate; the already selected data point still remains in D, and so it can be selected again when we select data points based on the values of the second covariate. Thus, such a method of selecting data points can lead to duplicated points in F, and so only one copy of each is kept, that is, we keep only unique data points in F. Also, we are not able to know the final size of F, say $N_{\text{F}}$, in advance, since it is not feasible to know in advance whether duplicated data points exist. However, the maximum final size of F is equal to $Kp$, where $K$ is an even number of data points selected for each covariate. Note that the value of $K$ is user-selected.
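A minimal sketch of how such a candidate set F could be assembled (Python/NumPy; the names and the toy data are illustrative assumptions, not the authors' code):

import numpy as np

def candidate_set(D, K):
    # D: (n - k, p) array of data points not selected by the base algorithm.
    # For each covariate, take the K/2 smallest and K/2 largest values,
    # WITHOUT excluding points already chosen for another covariate,
    # then keep only one copy of each duplicated row index.
    half = K // 2
    chosen = []
    for j in range(D.shape[1]):
        order = np.argsort(D[:, j])
        chosen.extend(order[:half])       # smallest values of covariate j
        chosen.extend(order[-half:])      # largest values of covariate j
    idx = np.unique(chosen)               # duplicates kept only once
    return D[idx], idx                    # at most K * p candidate points

# Example with synthetic leftover data.
rng = np.random.default_rng(0)
F, idx = candidate_set(rng.normal(size=(1000, 2)), K=10)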
|
However, it is not always feasible to fully analyze the whole data, since the sample size $n$ of the full data can be too large. Thus, an approach is to gain useful information from the full data given that computational resources are limited. An effective investigation can be focused on selecting a subset of the full data.
|
Consider the problem of regression where the sample size $n$ is quite large and one needs to fit a model with the standard least squares approach. In many circumstances, this can create a lot of computational issues, since the standard least squares approach involves big matrices that perhaps do not fit in memory. Working with less data is an option as far as the reduced dataset keeps as much information as possible. In most problems, while picking the necessary data with pure randomness is an option, improved approaches can be used to select subdata in an optimal way.
|
Figure 8 provides an insight into each approach in the case of a real big data example. Our approach dominates the other approaches in terms of maximizing the volume generated by the selected subdata, even in the case of an extraordinarily large dataset. Also, it is evident that the choice of a subsampling approach is particularly important in cases of real big data, since each approach poses its own risks. Figure 8 is very revealing about the way the algorithms work. The algorithm of the IBOSS approach selects extreme data points from both covariates, and so it ignores the shape of the data to a great extent. In the plotted case the shape is quite large and the algorithm of the IBOSS approach fails to account for that. The algorithm of the OSS approach selects data points in the corners of a rectangle and, because of the data, it also fails to see the shape. Our proposed algorithms, starting from the data points selected by the OSS approach, exchange data points in order to increase the convex hull, and hence they end up with a shape closer to the true one of the full data. This way the shaded area is quite large and comes closer to that of the full data.
|
C
|
Therefore, we can just use $J$ in (3) as an objective function for estimating the score function $s_{0}$. When a random sample is available, we use a sample version of $J$ as the empirical objective function. Since $J$ involves the partial derivatives of $s(x)$, we need to compute the derivatives of the functions in $\mathcal{F}$ during estimation. And we need to analyze the properties of $\mathcal{F}$ and their derivatives to develop the learning theories. In particular, if we take $\mathcal{F}$ to be a class of deep neural network functions, we need to study the properties of their derivatives in terms of estimation and approximation.
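For intuition, here is a minimal sketch of a sample version of such an objective (assuming $J$ is the usual Hyvärinen score-matching objective $\mathbb{E}[\operatorname{tr}\nabla s(x)+\tfrac{1}{2}\|s(x)\|^{2}]$; the network and data are illustrative, not the estimator studied here), written in PyTorch so that the partial derivatives of the network output are obtained by automatic differentiation:

import torch

def score_matching_loss(score_net, x):
    # Empirical version of J(s) = E[ tr(d s / d x) + 0.5 * ||s(x)||^2 ].
    x = x.detach().requires_grad_(True)
    s = score_net(x)                                  # (n, d) estimated score
    norm_term = 0.5 * (s ** 2).sum(dim=1)
    trace_term = torch.zeros(x.shape[0])
    for j in range(x.shape[1]):                       # sum of diagonal partial derivatives
        grad_j = torch.autograd.grad(s[:, j].sum(), x, create_graph=True)[0][:, j]
        trace_term = trace_term + grad_j
    return (trace_term + norm_term).mean()

# Illustrative use with a small fully connected network and toy data.
net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 2))
loss = score_matching_loss(net, torch.randn(128, 2))
loss.backward()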
|
In isotonic regression, we assume that $f_{0}\in\mathcal{F}_{0}$.
|
Isotonic regression is a technique that fits a regression model to observations such that the fitted
|
We propose a penalized deep isotonic regression (PDIR) approach using RePU networks, which encourages the partial derivatives of the estimated regression function to be nonnegative. We establish non-asymptotic excess risk bounds for PDIR under the assumption that the target regression function $f_{0}$ is $C^{s}$ smooth. Moreover, we show that PDIR achieves the minimax optimal rate of convergence for non-parametric regression. We also show that PDIR can mitigate the curse of dimensionality when data concentrate near a low-dimensional manifold. Furthermore, we show that with tuning parameters tending to zero, PDIR is consistent even when the target function is not isotonic.
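A minimal sketch of how such a monotonicity penalty could look in practice (PyTorch; the penalty form $\sum_{j}\lambda_{j}\,\mathbb{E}[\max(0,-\partial_{j}\hat f(x))]$, the ReLU network, and all names here are illustrative assumptions rather than the exact PDIR objective):

import torch

def pdir_loss(f_net, x, y, lam):
    # Squared-error loss plus a penalty on negative partial derivatives,
    # pushing the fitted function towards being isotonic in each argument.
    x = x.detach().requires_grad_(True)
    pred = f_net(x).squeeze(-1)
    mse = ((pred - y) ** 2).mean()
    grads = torch.autograd.grad(pred.sum(), x, create_graph=True)[0]   # (n, d) partial derivatives
    penalty = (lam * torch.relu(-grads)).mean()
    return mse + penalty

net = torch.nn.Sequential(torch.nn.Linear(3, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
x, y = torch.rand(256, 3), torch.rand(256)
loss = pdir_loss(net, x, y, lam=torch.tensor([1.0, 1.0, 0.0]))   # lambda_j = 0 for a non-isotonic argument
loss.backward()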
|
It is worth noting that if the target function is isotonic, then the misspecification error vanishes, reducing the scenario to that of isotonic regression. However, the convergence rate based on Lemma 26 is slower than that in Lemma 20. The reason is that Lemma 26 is general and holds without prior knowledge of the monotonicity of the target function. If knowledge is available about the non-isotonicity of the $j$th argument of the target function $f_{0}$, setting the corresponding $\lambda_{j}=0$ decreases the misspecification error and helps improve the upper bound.
|
B
|
Therefore, under the targeted stopping setting, we could justify that the CLT CI is not too far from a valid one in terms of both upper bound and lower bound.
|
As mentioned before, under the targeted stopping setting, Chernoff’s CI is no longer easy to analyze. Thus, in this subsection, we will only cover the analyses for the CLT, Wilson’s and B-E CIs. For the first two, as the formulas of the CIs are the same as the standard setting, we simply present the results here. For the B-E CI, the main idea of the derivations is similar to the standard setting, but there are differences in the technical details. In particular, now we are able to get a non-trivial lower bound.
|
This paper is organized as follows. Section 2 describes the problem setting and the motivating challenges. Section 3 overviews the existing and new CIs, and Section 4 summarizes our main results. Then, in Sections 5 and 6, we present the details of the derivation and analyses of these intervals. After that, Section 7 reports some numerical results to visualize our comparisons. Section 8 concludes this paper with our findings and recommendations. All missing proofs can be found in the appendix.
|
While the interval is easy to compute numerically, it is not easy to analyze. Similar to the standard setting, we will relax the confidence region (6) to construct valid CIs respectively via inverting the Chernoff’s inequality and the B-E theorem. We leave the details of developing these two new CIs to Section 6.1 and only present the formulas here.
|
To close this paper, we perform some numerical experiments to visualize the differences among the CIs.
|
D
|
We also do not require sample splitting thus the test can rely on the power of the whole sample size, which can be vital in datasets of smaller size.
|
However, the abilities to fit pure noise are increasing as well. We propose a method to test whether a model is only fitting noise.
|
We will use the tennis serve dataset in order to demonstrate an application of the permutation test to real life data.
|
Our findings are supported through an application to the tennis serve dataset. In this case, it gave evidence that a seemingly well-fitting model is not necessarily trustworthy.
|
Our method is not restricted to linear models since it is not a test for specific parameters in the model.
|
C
|
In Lemma 3, we only need the lower bound in (A1), not the upper bound. This uniform lower bound in Condition (A1) plays an important role in our analysis as it allows a uniform control on the Voronoi cell diameters. Some refinement might be possible given the recent progress in $k$-NN regression for covariates with unbounded support (Kohler et al., 2006) and $k$-NN classification using some tail assumption on the covariates (Gadat et al., 2016). Extending the present analysis to such general measures is left for further research.
|
This section gathers some preliminary considerations on distributions on metric spaces and a bound on moments of nearest-neighbor distances before stating the main theoretical properties of the control neighbor estimates (1) and (2).
|
The outline of the paper is as follows. The mathematical foundations of nearest neighbor estimates are gathered in Section 2 with a formal introduction of two different nearest neighbors estimates. The theoretical properties of the control neighbor estimates are stated in Section 3. Finally, Section 4 reports on several numerical experiments along with some practical remarks on the implementation of the proposed estimates, and Section 5 concludes.
|
The main result of this section provides a finite-sample bound on the root mean squared error of the two control neighbors estimates in (1) and (2). The bound depends on the regularity of the integrand.
|
Using this property and Lemma 1, we obtain that the root mean squared distance between the leave-one-out version and the proposed estimate is of the order $\mathcal{O}(n^{-1/2-s/d})$ as $n\to\infty$; see Proposition 3 in the Supplementary material for a precise statement. Therefore, the two estimates share the same convergence rate.
|
C
|
\[
\ell(\boldsymbol{\theta})=\sum_{j=1}^{J}\log P(\boldsymbol{Y}_{j}\,|\,\boldsymbol{Z}_{ij}).
\]
|
$\widetilde{\boldsymbol{\theta}}$ in two stages. The first
|
$\widetilde{\boldsymbol{\theta}}$ in the following two steps:
|
$\widetilde{\boldsymbol{\theta}}_{1}$, maximizing the
|
$\widetilde{\boldsymbol{X}}\,|\,\widetilde{\boldsymbol{W}}$.
|
A
|
In the final results, 8 of the 117 total meta-gene covariates had non-zero fixed effect values in the best model selected by the BIC-ICQ criteria, implying these covariates were important for the prediction of the basal outcome. These 8 meta-gene covariates represented 37 genes in total. Table 4 includes the label for these 8 meta-genes, the sign of the associated fixed effect coefficient (i.e. the log odds ratio estimate), and the gene symbols of the genes that make up the meta-gene. Meta-genes with positive log odds ratios indicate that having greater relative expression of these meta-genes increases the odds of a subject being in the basal subtype, and vice versa for negative log odds ratios. The best model contained a random intercept (variance value 0.54) and no other random slopes.
|
In the final results, 8 of the 117 total meta-gene covariates had non-zero fixed effect values in the best model selected by the BIC-ICQ criteria, implying these covariates were important for the prediction of the basal outcome. These 8 meta-gene covariates represented 37 genes in total. Table 4 includes the label for these 8 meta-genes, the sign of the associated fixed effect coefficient (i.e. the log odds ratio estimate), and the gene symbols of the genes that make up the meta-gene. Meta-genes with positive log odds ratios indicate that having greater relative expression of these meta-genes increases the odds of a subject being in the basal subtype, and vice versa for negative log odds ratios. The best model contained a random intercept (variance value 0.54) and no other random slopes.
|
In the final results, 8 of the 117 total meta-gene covariates had non-zero fixed effect values in the best model selected by the BIC-ICQ criteria, implying these covariates were important for the prediction of the basal outcome. These 8 meta-gene covariates represented 37 genes in total. Table 4 includes the label for these 8 meta-genes, the sign of the associated fixed effect coefficient (i.e. the log odds ratio estimate), and the gene symbols of the genes that make up the meta-gene. Meta-genes with positive log odds ratios indicate that having greater relative expression of these meta-genes increases the odds of a subject being in the basal subtype, and vice versa for negative log odds ratios. The best model contained a random intercept (variance value 0.54) and no other random slopes.
|
Table 3: Covariate meta-gene label within the case study dataset of the meta-genes that had non-zero fixed effects in the final best model, the sign of the fixed effect coefficient (i.e. the sign of the log odds ratio) associated with the meta-gene, and the gene symbols of the genes within the meta-gene.
|
Table 3: Covariate meta-gene label within the case study dataset of the meta-genes that had non-zero fixed effects in the final best model, the sign of the fixed effect coefficient (i.e. the sign of the log odds ratio) associated with the meta-gene, and the gene symbols of the genes within the meta-gene.
|
C
|
The $Q_{1}(\boldsymbol{\theta}|\boldsymbol{\theta}^{(s)})$ function expresses the conditional model of the observed data given the latent (random) variables and integrates over the latent variables. Using the $Q_{1}(\boldsymbol{\theta}|\boldsymbol{\theta}^{(s)})$ function, we aim to derive the fixed and random effect coefficient estimates during the M-step of the algorithm. During the E-step, we aim to approximate the integral in the $Q_{1}(\boldsymbol{\theta}|\boldsymbol{\theta}^{(s)})$ function by incorporating samples from the posterior distribution of the latent variables.
|
The integrals in the Q-function do not have closed forms when $f(\boldsymbol{y}_{k}|\boldsymbol{X}_{k},\boldsymbol{\alpha}_{k}^{(s,m)};\boldsymbol{\theta})$
|
\[
\begin{aligned}
f(\boldsymbol{y}_{k}|\boldsymbol{X}_{k};\boldsymbol{\theta})&=\frac{1}{P(A_{k}|\boldsymbol{y}_{k},\boldsymbol{X}_{k};\boldsymbol{\theta})}\int_{\Theta}f(\boldsymbol{y}_{k}|\boldsymbol{X}_{k},\boldsymbol{\alpha}_{k};\boldsymbol{\theta})\,\phi(\boldsymbol{\alpha}_{k})\,I(\boldsymbol{\alpha}_{k}\in A_{k})\,d\boldsymbol{\alpha}_{k}\\
&=\frac{1}{P(A_{k}|\boldsymbol{y}_{k},\boldsymbol{X}_{k};\boldsymbol{\theta})}\int_{\Theta}\frac{f(\boldsymbol{y}_{k}|\boldsymbol{X}_{k},\boldsymbol{\alpha}_{k};\boldsymbol{\theta})\,\phi(\boldsymbol{\alpha}_{k})\,I(\boldsymbol{\alpha}_{k}\in A_{k})}{s(\boldsymbol{\alpha}_{k})}\,s(\boldsymbol{\alpha}_{k})\,d\boldsymbol{\alpha}_{k},
\end{aligned}
\]
|
\[
f(\boldsymbol{y}_{k}|\boldsymbol{X}_{k};\boldsymbol{\theta})=\int f(\boldsymbol{y}_{k}|\boldsymbol{X}_{k},\boldsymbol{\alpha}_{k};\boldsymbol{\theta})\,\phi(\boldsymbol{\alpha}_{k})\,d\boldsymbol{\alpha}_{k}
\]
|
\[
\begin{aligned}
P(A_{k}|\boldsymbol{y}_{k},\boldsymbol{X}_{k};\boldsymbol{\theta})&=\int_{A_{k}}\phi(\boldsymbol{\alpha}_{k}|\boldsymbol{y}_{k},\boldsymbol{X}_{k};\boldsymbol{\theta})\,d\boldsymbol{\alpha}_{k}\\
&=\int_{\Theta}\frac{1}{f(\boldsymbol{y}_{k}|\boldsymbol{X}_{k};\boldsymbol{\theta})}\,f(\boldsymbol{y}_{k}|\boldsymbol{X}_{k},\boldsymbol{\alpha}_{k};\boldsymbol{\theta})\,\phi(\boldsymbol{\alpha}_{k})\,I(\boldsymbol{\alpha}_{k}\in A_{k})\,d\boldsymbol{\alpha}_{k},
\end{aligned}
\]
|
A
|
The notion of coordinate invariance arises naturally from one of the primary building blocks of topology,
|
We will now discuss the Mapper graph, the primary tool through which we create a topologically consistent
|
The primary idea of the Mapper construction is to summarize a dataset by creating a neighbor graph of
|
Through our Mapper construction, we have a topologically faithful reconstruction of $\mathcal{H}$ and
|
probability measure. We will first define the pseudo-distance $\delta_{\mu,m}$ which is required for the
|
A
|
\[
\frac{1}{M_{n}h_{n}^{\beta}}<c.
\]
|
Concerning Condition 2, by construction and from the properties of the function $\tilde{\psi}$ it is
|
and a privatization kernel $\bm{Q}=(Q^{1},\dots,Q^{d})$ where $Q^{j}$ is the privatization channel from $\mathcal{X}^{j}$ to $\mathcal{Z}^{j}$. We denote by $M$ and $\tilde{M}$ the laws of the images of $P$ and $\tilde{P}$ under privatization. That is, we consider a pair of raw samples $\bm{X}$, $\bm{\tilde{X}}$ with distributions $P$, $\tilde{P}$, and the associated privatized samples $\bm{Z}$, $\bm{\tilde{Z}}$ have distributions denoted by $M$ and $\tilde{M}$. Consistently with the description in Section 2, each channel $Q^{j}$ acts on its associated component $X^{j}$ independently of the other channels. More formally, we can write the correspondence between $P$ and $M$ as
|
where we have used that the integrals of $\tilde{\psi}$ are $0$ by construction.
|
The constant $\eta$ can be chosen as small as we want, while $c_{\pi}$ is a normalization constant added in order to get $\int_{\mathbb{R}^{d}}\pi(\bm{x})\,d\bm{x}=1$. Regarding $\pi^{*}$, we define it as $\pi$ to which we add a bump. Let $\tilde{\psi}:\mathbb{R}\rightarrow\mathbb{R}$ be a $C^{\infty}$ function with support on $[-1,1]$ such that $\tilde{\psi}(0)=1$ and $\int_{-1}^{1}\tilde{\psi}(z)\,dz=0$.
|
A
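The component-wise privatization kernel $\bm{Q}=(Q^{1},\dots,Q^{d})$ described above can be made concrete with a standard per-coordinate Laplace channel; the choice of the Laplace mechanism and the clipping bound below are illustrative assumptions, not details taken from the excerpt.

import numpy as np

def privatize(X, eps, bound=1.0, seed=None):
    # Apply an independent channel Q^j to each coordinate X^j: clip to [-bound, bound]
    # and add Laplace noise with scale 2*bound/eps (eps-LDP per coordinate under this assumption).
    rng = np.random.default_rng(seed)
    X_clipped = np.clip(X, -bound, bound)
    noise = rng.laplace(loc=0.0, scale=2.0 * bound / eps, size=X_clipped.shape)
    return X_clipped + noise   # Z = (Q^1(X^1), ..., Q^d(X^d)); its law is the image M of P

# usage: Z = privatize(X, eps=1.0)   # X is an (n, d) array of raw samples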
|
DCA converges in objective values, and in iterates if $g$ or $h$ is strongly convex, to a critical point (Pham Dinh & Le Thi, 1997, Theorem 3). We can always make the DC components strongly convex by adding $\tfrac{\rho}{2}\|\cdot\|^{2}$ to both $g$ and $h$.
|
As discussed in Section 2, CDCA is a special instance of DCA which is guaranteed to converge to a strong critical point.
|
apply. In addition, CDCA is known to converge to a strong critical point (Pham Dinh & Souad, 1988, Theorem 3). We extend this to the variant with inexact iterates and approximate convergence.
|
DC programs are well studied problems for which a classical popular algorithm is the DC algorithm (DCA) (Pham Dinh & Le Thi, 1997; Pham Dinh & Souad, 1988). DCA has been successfully applied to a wide range of non-convex optimization problems, and several algorithms can be viewed as special cases of it, such as the convex-concave procedure, the expectation-maximization (Dempster et al., 1977), and the iterative shrinkage-thresholding algorithm (Chambolle et al., 1998); see (Le Thi & Pham Dinh, 2018) for an extensive survey on DCA.
|
A special instance of DCA, called complete DCA, converges to a strong critical point, but requires solving concave minimization subproblems (Pham Dinh & Souad, 1988, Theorem 3).
|
D
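The DCA iteration referred to in this excerpt is simple to state: at each step, linearize $h$ at the current iterate and minimize the resulting convex upper model of $f=g-h$. A minimal sketch for differentiable $g$ and $h$, using scipy's BFGS as a stand-in for the exact convex subproblem solver (an illustrative choice; the appropriate solver depends on the application).

import numpy as np
from scipy.optimize import minimize

def dca(g, grad_h, x0, max_iter=100, tol=1e-8):
    # DC algorithm for f(x) = g(x) - h(x), with g, h convex:
    #   x_{k+1} = argmin_x  g(x) - <grad_h(x_k), x>.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = grad_h(x)                                   # (sub)gradient of h at x_k
        x_new = minimize(lambda z: g(z) - y @ z, x, method="BFGS").x
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# usage: dca(g=lambda z: 0.5 * z @ z, grad_h=lambda z: np.tanh(z), x0=np.ones(5))
# here h(z) = sum(log cosh(z_i)) is convex with gradient tanh(z), so f is a DC function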
|
Rounding to the grid was previously considered (under the phrase e-value boosting) in Wang and Ramdas (2022), but they needed to know the true distribution for each e-value $X_{i}$ when the null hypothesis is true; that is often infeasible, particularly when we are testing composite null hypotheses. In what follows, we develop methods that use independent external randomness to stochastically round e-values and increase the number of discoveries made by e-BH. This idea can in turn be used to directly improve the power of the seminal BY procedure, which is based on p-values and may at first glance appear to have nothing to do with rounding e-values.
|
Figure 2: Heatmaps of the power difference between randomized e-BH methods and e-BH. The hypotheses are one-sided tests of the mean of a standard Gaussian with covariance parameterized by $\rho$ (larger $\rho$ means larger covariance) and non-null means of $\mu$. U-eBH is the most powerful in each setting, and randomized procedures uniformly improve on e-BH.
|
While the set of randomized multiple testing procedures contains all deterministic ones, it is far from clear when the most powerful randomized procedure is strictly more powerful (in at least some situations) than the most powerful deterministic one.
|
Our randomized procedures strictly improve over the corresponding deterministic procedure (e.g., reject a superset of the deterministic procedure’s discovery set, produce a smaller or equal p-value, etc.). Among randomization techniques in the statistics literature (such as the bootstrap or sample splitting), this type of uniform improvement is rare.
|
Note that each of the randomized procedures we discuss in this paper satisfies two main properties: (1) the procedure will never be worse (where “worse” is defined based on the problem, e.g., fewer discoveries in multiple testing) than the deterministic procedure which it is derived from and (2) under no conditions or some weak regularity conditions on the distribution, the randomized procedure is better with positive probability (e.g., more discoveries in multiple testing, while still having the same error rate guarantee). Thus, the power increase from randomization does not result in any kind of tradeoff or cost, and is a “strict” improvement in this sense.
|
B
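The base e-BH procedure and a generic mean-preserving stochastic-rounding step (using external uniform randomness, as described above) can be sketched as follows; the grid and the rounding rule are generic illustrations rather than the specific randomized procedures studied in this excerpt.

import numpy as np

def ebh(e, alpha):
    # Base e-BH: reject the k* hypotheses with the largest e-values, where
    # k* = max{k : e_(k) >= n / (alpha * k)} and e_(k) is the k-th largest e-value.
    e = np.asarray(e, float)
    n = len(e)
    order = np.argsort(-e)
    k_star = 0
    for k in range(1, n + 1):
        if e[order[k - 1]] >= n / (alpha * k):
            k_star = k
    return order[:k_star]          # indices of rejected hypotheses

def stochastic_round(e, grid, seed=None):
    # Round each e-value to an adjacent grid point so that its conditional expectation
    # is preserved, using independent external uniform randomness.
    rng = np.random.default_rng(seed)
    grid = np.sort(np.asarray(grid, float))
    e = np.asarray(e, float)
    out = e.copy()
    for i, x in enumerate(e):
        if x <= grid[0] or x >= grid[-1]:
            continue                               # outside the grid: leave unchanged
        j = np.searchsorted(grid, x, side="right")
        a, b = grid[j - 1], grid[j]                # a <= x < b
        out[i] = b if rng.uniform() < (x - a) / (b - a) else a   # E[out_i] = x
    return out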
|
Moreover, the mean difference is the value of $C_{m}$,
|
Kolmogorov’s and Smirnov’s is $0.09403/\sigma=7.456$.
|
Kolmogorov’s and Smirnov’s is $0.02328/\sigma=2.021$.
|
Kolmogorov’s and Smirnov’s is $0.01245/\sigma=2.312$.
|
Kolmogorov’s and Smirnov’s is $0.01813/\sigma=3.366$.
|
A
|
Table 1 summarizes the results for varying sample sizes. As the sample size increased, the ABias2, AVar, and AIMSE consistently decreased for all the estimators. The MP_MEM estimator had ABias2 consistently smaller than that of the UP_MEM, PACE, Average, and Naive estimators and closest to that of the Oracle estimator. The Oracle estimator had the smallest Avar, followed in order by the MP_MEM, UP_MEM, Naive, and Average methods. Similarly, the Average method had the highest values for AIMSE while the AIMSE associated with the Oracle estimator was the smallest.
|
We observed that ABias2 of the Average estimator is larger than that of the Naive estimator, which is different from what we usually observe in the classical additive measurement error model. However, the measurement error model proposed in this manuscript is different from the classical additive measurement error model, as we allow the observed measurement to have a nonlinear relationship with the true measurement and also relax the assumption on the distribution of measurement error.
|
We developed the multi-level generalized functional linear regression model with functional covariates prone to heteroscedastic errors. These models are suited to massive longitudinal functional data assumed to be error-prone, such as those collected by wearable devices at frequent intervals over multiple days. Additionally, the measurement error component of the model was developed under the multi-level generalized functional linear regression models framework. To date, most approaches to adjusting for biases due to measurement error in functional data analysis are based on the assumption that the observed measures, $W_{ij}(t)$, follow Gaussian distributions. We assumed an exponential family distribution for the observed measures and relaxed the assumption on the probability distribution of measurement error. We implemented functional mixed effects-based methods to adjust for measurement error biases and allow arbitrary heteroscedastic covariance functions for the measurement errors. To evaluate our proposed methods, we conducted simulations that involved Poisson assumptions for the observed measures and Gaussian error processes for the measurement errors. The functional mixed effects-based methods generally had lower $\text{Abias}^{2}$ than estimators based on the PACE, Average, and Naive methods that do not correct for measurement error explicitly. While the PACE estimator reduces bias more than the Average and Naive estimators, it did not provide any formal adjustment for the serial correlations associated with measurement error in the estimation. The MP_MEM estimator yielded lower $\text{Abias}^{2}$ and Avar than the UP_MEM estimator because MP_MEM focuses on multiple time points concurrently in the estimation while UP_MEM is a univariate approach. Overall, the UP_MEM and MP_MEM methods performed better than the PACE, Average and Naive methods with increasing sample sizes and varying levels of correlations in the serially observed functional covariate prone to errors.
|
A1 indicates that the scalar response may be discrete or continuous with a distribution belonging to the EF. A2 includes the usual assumptions for link functions in generalized linear regression models. A3 indicates that nonlinear functions of $\mathrm{h}[\mathrm{E}\{W_{ij}(t)\mid X_{i}(t)\}]$ are unbiased measures of $X_{i}(t)$. It specifies a non-linear association between the true and observed measures. The corresponding measurement error model is $W_{ij}(t)=E\{W_{ij}(t)|X_{i}(t)\}+U_{ij}(t)$, which is generalized from the classic additive measurement error model $W=X+U$, where the true measures $X$ and observed measures $W$ have a linear relationship. In other words, the classic additive measurement error model is a special case of the proposed measurement error model when $h(\cdot)$ is the identity function. A4 states that the observed measures and the true unobserved covariate are correlated, a classical assumption in measurement error models. A5 indicates that observed measures belong to the EF. A6 gives the non-differential measurement error assumption, specifying that the observed measure, $W_{ij}(t)$, does not provide any additional information about the response, $Y_{i}$, beyond the information given by $X_{i}(t)$. A7 indicates that the true measures follow a Gaussian process with a mean function, $\mu_{x}(t)$, and a covariance function, $\Sigma_{xx}(t,t^{\prime})$. This is a typical assumption for the measurement error model. In addition, the proposed model allows for correlated measurement errors for the observed measures and the true covariate for each subject $i$. The proposed model also relaxes the assumption on the distribution of measurement error.
|
The measurement error model we proposed in this manuscript allows the observed measurement prone to measurement error and the true measurement to have a non-linear relationship, which makes the proposed model different from the classical additive measurement error model. This difference leads to different findings than what we usually observe in the classical additive measurement error model. First, we did not observe a bias-variance trade-off in the regression coefficient estimations in our simulations. Second, we observed that the Naive estimator had smaller $\text{Abias}^{2}$ than the Average estimator.
|
A
|
For further information on forecast verification, we refer readers to (Gneiting and Katzfuss, 2014).
|
Table 3 presents the CRPS values for forecasts generated by both the proposed and benchmark models. In comparison to cases 1 and 2, all models exhibit improved CRPS performance, attributed to the exclusion of high wind power generation values that often result in substantial forecast errors. Nonetheless, the DeepAR model continues to display the poorest forecast quality among all models. The performance of UI and the proposed models surpasses that of models following the “impute, then predict” strategy. Notably, the proposed model attains the highest performance among all models, indicating its applicability for MNAR cases.
|
We present the reliability diagrams and prediction interval widths for 1-step ahead forecasts in Figure 6. As shown, the reliability diagrams of models employing the “impute, then predict” strategy deviate noticeably from the ideal case. The prediction interval widths of the proposed and benchmark models are comparable and consistently smaller than those of the reference model. DeepAR exhibits the poorest reliability among all models, although its prediction interval widths are the smallest. The reliability diagrams of UI and the proposed model closely align with the ideal case, indicating small biases in their forecasts. Surprisingly, the reliability of the reference model is inferior to that of the proposed and UI models, despite the reference model yielding larger prediction interval widths. This discrepancy may be attributed to overfitting in the reference model.
|
The CRPS values for forecasts from both the proposed and benchmark models are presented in Table 1. Note that the CRPS values evaluate the area between the predictive c.d.f. of wind power and the observed one, and are normalized by the wind power capacity here. In reality, the generated scenarios/quantiles may differ by several MW, which will have a considerable impact on energy trading and energy dispatch (Morales et al., 2013). Notably, climatology exhibits the poorest performance among all models, as it relies solely on an empirical distribution without incorporating contextual information. In contrast to the common situation where quantile regression models often outperform Gaussian distributional models, the performance of IM-Gaussian and IM-QR is closely matched. Although the impact of imputation on the training of downstream forecasting models can be intricate, we infer that the accumulation of errors from several independently trained quantile regression models may be a potential cause. Surprisingly, among the three models employing the “impute, then predict” strategy, DeepAR exhibits the poorest performance. Specifically, DeepAR imputes missing values by leveraging the intermediate results of the recurrent neural network during training, potentially leading to imputed values deviating further from real values compared to those derived from the MissForest model. The performance of UI and the proposed model is comparable to the reference model. Notably, the UI model slightly surpasses the reference model, suggesting its potential robustness against overfitting. This underscores the ancillary effects of missing values on forecasting model development, aligning with Breiman (2001), which advocates subsampling feature subsets for ensemble use to enhance robustness. As an illustration, Figure 5 showcases the 1-step ahead 90% prediction intervals generated by the proposed model for 144 consecutive observations.
|
Table 2 displays the CRPS values for forecasts generated by both the proposed and benchmark models. In this case, the differences in CRPS values among all models are smaller compared to those in case 1. Unlike case 1, missingness occurs in blocks, resulting in a greater number of samples with complete observations. Consequently, the impact of missing values on the quality of forecasts is reduced. Among models employing the “impute, then predict” strategy, DeepAR continues to exhibit the poorest performance, although the difference between DeepAR and IM-Gaussian/IM-QR is smaller than in case 1. In contrast, the performance of the proposed and UI models remains superior to that of “impute, then predict” strategy-based models and is comparable to the reference model. This implies the applicability of the proposed and UI models to cases with both sporadic and block-wise missingness.
|
C
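The CRPS used in these comparisons admits a standard sample-based estimator, CRPS(F, y) ≈ mean|X - y| - 0.5 mean|X - X'| for an ensemble drawn from the predictive distribution; normalization by capacity follows the convention stated above. A minimal sketch (the ensemble estimator is a common approximation, not necessarily the exact computation used in these tables).

import numpy as np

def crps_ensemble(samples, obs, capacity=1.0):
    # Sample-based CRPS for one forecast, normalized by the wind power capacity.
    x = np.asarray(samples, float)
    term1 = np.mean(np.abs(x - obs))
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return (term1 - term2) / capacity

# usage: crps_ensemble(samples=np.random.normal(10.0, 2.0, 500), obs=9.3, capacity=16.0)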
|
The subspace $\mathcal{E}$ represents the feasible set in which the constraints are satisfied at all slots $t\geq 1$. The Slater vector $\tilde{P}$ is an interior point for $\mathcal{E}$ and its existence is a classical sufficient condition for strong duality to hold for a convex optimization problem; see, e.g., [b12, Prop. 3.3.9].
|
To tackle the above-mentioned problem, we propose a randomized control policy that minimizes the cumulative reservation cost while maintaining the long-term average of the cumulative expected violation and transfer costs under the budget threshold. This transforms the online combinatorial optimization problem into an online continuous optimization problem on the space of probability distributions over the set of reservations. In particular, we propose an online saddle-point algorithm tailored to our model for which we derive an explicit upper bound for the incurred regret against the $K$-Benchmark, a concept introduced in [b18] which allows us to examine the trade-offs between regret minimization and long-term budget constraint violations.
|
The rest of the paper is organized as follows: in Section II we introduce the problem; in Section III, we formalize the problem as a constrained online optimization problem on the simplex of probability distributions over the space of reservations; Section IV contains our proposed online saddle-point algorithm; then, in Section V, we present an upper bound for the $K$-benchmark regret together with the cumulative constraint violations upper bound; finally, we present in Section LABEL:num-sec some numerical results where we compare the performance of our algorithm with some deterministic online reservation policies.
|
with $\alpha>0$ a positive scalar and $\|\cdot\|$ standing for the Euclidean distance, thus obtaining a proximal point approach. In particular, proximal methods are well known in the classical optimization literature, where their main advantage is to transform the original objective function into a strongly convex one. Therefore, the convergence of these methods does not require strict convexity; see, e.g., [b4, b10, b11] and the references therein for more details. For online optimization algorithms, the proximal term $\|P-P^{t-1}\|^{2}$ acts as a learning term that accumulates knowledge from the past, allowing to improve the performance of the algorithms in terms of regret. Thus many authors used a similar term in their algorithms; see, e.g., [b8, b17, b19]. Finally, the Lagrange multiplier $\lambda_{t}$ is in turn updated at each step $t$ using a projected gradient ascent step. Namely,
|
A classical approach for solving constrained convex optimization problems is the Lagrange multipliers method. This relies in particular on the Karush-Kuhn-Tucker theorem which states that finding an optimal point for the constrained optimization problem is akin to finding a saddle point of the Lagrangian function (cf. [b3, Section I.5]). Popular saddle-point methods are the Arrow-Hurwicz-Uzawa algorithms introduced in [b2] and widely used in the literature; see, e.g. [b5, b6, b7] for an overview. These methods alternate a minimization of the Lagrangian function with respect to the primal variable and a gradient ascent with respect to the dual variable given the primal variable. Online versions of these methods have also been proposed recently (see, e.g., [b8, b9, b16]). In the same spirit as the aforementioned references, we develop here an online saddle-point algorithm to solve the problem detailed in Section III. To this end, let us first introduce the sequence of per slot optimization problem
|
D
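The alternating updates described above, a proximal primal step on the Lagrangian followed by projected gradient ascent on the multiplier, can be sketched generically. The Euclidean projection onto the simplex, the single gradient step used in place of the exact proximal minimization, and the step sizes are illustrative assumptions, not the paper's exact choices.

import numpy as np

def project_simplex(v):
    # Euclidean projection of v onto the probability simplex.
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / (np.arange(len(v)) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def saddle_point_step(P_prev, lam, grad_cost, g_constraint, grad_g, alpha=1.0, eta=0.1):
    # Primal: gradient step on the Lagrangian, damped by the proximal weight alpha,
    # then projection onto the simplex of distributions over reservations.
    grad_L = grad_cost(P_prev) + lam * grad_g(P_prev)
    P_new = project_simplex(P_prev - grad_L / (2.0 * alpha))
    # Dual: projected gradient ascent on the multiplier (kept non-negative).
    lam_new = max(0.0, lam + eta * g_constraint(P_new))
    return P_new, lam_new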
|
Table 3: Evaluation of the variability in the proportion of the global RMSE between different models. Each row contains the name of the index associated with the pair of models compared.
|
In the first place, we will study the feedback of the geostatistical models, assuming that the fit of the preferential model is given. As indicated above with respect to the feedback schemes, the feedback will be performed either by replacing the prior distributions by the posterior ones or by updating the characteristic parameters of the prior distributions according to their estimates from the posteriors of the common parameters and hyperparameters.
|
In this section, we present an example with real data from the fishery sciences field, in order to show the variation in the predictive results under the implementation of the different feedback procedure presented throughout the paper. In particular, two sources of information on hake abundance are available. The first source comes from a random sampling from the EVHOE scientific survey performed from 2003 to 2021. The second source of information comes from commercial data collected through observers on board in the same time interval. The region where all the locations of the sampling were performed is the southern French coast of the Atlantic Ocean, as shown in Figure 12.
|
In this section, we present the results of the four fitting structures across the different scenarios for both geostatistical and sampling processes. In other words, we consider the proposed protocols as explained in the previous sections: using the updating by moments for all the parameters and hyperparameters with the exception of the precision of the gamma distribution, which is updated through the full updating protocol. In order to obtain these results we have used the PC priors in the base models for the spatial hyperparameters. Then we have updated these distributions in the feedback models using the normal distribution for the re-parameterisation of the spatial hyperparameters.
|
In order to validate the two procedures, we also present a set of simulated scenarios through which we compare the different directions of the two feedback procedures. Evaluation and assessment of the behaviour of both procedures is done by means of the analysis of residuals from predictive maps, encompassing metrics like root mean square error, bias, histograms of residuals, and residual plots against predicted values. These simulated environments also allow us to identify possible biases in parameter and hyperparameter estimation. We finally present an application of the proposed method in the context of a real fishery scenario. In particular, we study the distribution of the European hake (Merluccius merluccius) in the southern French coast of the Bay of Biscay. In the analysis, we combine information gathered from fishery-independent samples collected through the French EVHOE fishery trawl survey (FI samples), and fishery-dependent samples collected through onboard sampling of Basque pair trawlers.
|
B
|
Similar to the previous experiment, we observe that NLR and SDP exhibit similar behavior and achieve superior and more consistent performance compared to KM, SC, and NMF. NMF displays significant variance, while KM and SC produce many outliers.
|
Table 1: Mis-clustering error (SD) for clustering three datasets in UCI: Msplice, Heart and DNA. We randomly sample $n=1{,}000$ ($n=300$ for Heart) data points for 10 replicates. DNA1 (DNA2) stands for the perturbation with t-distribution (skewed normal distribution) random noise.
|
UCI datasets. To empirically illustrate the advantages and robustness of our method against the GMM assumption, we conduct further experiments on three datasets in UCI: Msplice, Heart and DNA.
|
We present numerical results to assess the effectiveness of the proposed NLR method. We first conduct two simulation experiments using GMM to evaluate the convergence and compare the performance of NLR with other methods. Then, we perform a comparison using two real datasets.
|
One of the competing methods in our comparison is clustering based on solving the NMF formulation (7). Specifically, we employ the projected gradient descent algorithm, which is a simplified version of Algorithm 1 discussed in Section 3, to implement this method. We adopt random initialization and set the same $r$ for both NMF and NLR to ensure a fair comparison. Finally, we conduct further experiments on three datasets in UCI.
|
B
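A projected-gradient sketch in the spirit of the NMF-based clustering baseline mentioned above; since formulation (7) is not reproduced in this excerpt, the symmetric objective ||K - H H^T||_F^2, the fixed step size, and the argmax cluster assignment are illustrative assumptions rather than the authors' exact algorithm.

import numpy as np

def nmf_cluster(K, r, n_iter=500, step=1e-3, seed=0):
    # Projected gradient descent for min_{H >= 0} ||K - H H^T||_F^2 (K symmetric),
    # then assign each point to its largest factor.
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    H = np.abs(rng.normal(size=(n, r)))           # random nonnegative initialization
    for _ in range(n_iter):
        grad = 4.0 * (H @ H.T - K) @ H            # gradient of the Frobenius objective
        H = np.maximum(H - step * grad, 0.0)      # projection onto the nonnegative orthant
    return H.argmax(axis=1)                       # cluster labels

# usage: labels = nmf_cluster(K=X @ X.T, r=3)     # K: a similarity/Gram matrix (assumption)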
|
If $a^{1}$ is preferred over $a^{0}$, the algorithm assigns $o^{n}=1$; otherwise, it assigns $o^{n}=0$ according to the model in Eq. 2.
|
Similar to Algorithm 1, we use MLE to learn the advantage function. More specifically, we learn it by maximizing the log-likelihood:
|
To learn the unknown reward function, it is necessary to make structural assumptions about the reward. We consider a setting where the true reward function possesses a linear structure:
|
We adopt the widely-used maximum likelihood estimation (MLE) approach to learn the reward function, which has also been employed in other works (Ouyang et al., 2022; Christiano et al., 2017; Brown et al., 2019; Shin et al., 2023; Zhu et al., 2023). Specifically, we learn the reward model by maximizing the log-likelihood $L(\theta,\mathcal{D}_{\mathrm{reward}},\{o^{n}\}_{n=1}^{N})$:
|
where $A^{*}_{h}$ is the advantage function of the optimal policy. Similar to trajectory-based comparisons with linear reward parametrization, we assume linearly parameterized advantage functions:
|
A
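The MLE step for a linear reward under pairwise preferences is the familiar Bradley-Terry/logistic likelihood. A minimal sketch consistent with the linear parametrization described above; the feature map, the sigmoid link, the ridge term, and the use of scipy's L-BFGS are assumptions for illustration.

import numpy as np
from scipy.optimize import minimize

def fit_linear_reward(phi_1, phi_0, o, reg=1e-4):
    # MLE of theta for r(x) = <theta, phi(x)> with
    # P(a^1 preferred over a^0) = sigmoid(<theta, phi_1 - phi_0>).
    diff = np.asarray(phi_1, float) - np.asarray(phi_0, float)   # (N, d) feature differences
    o = np.asarray(o, float)                                     # (N,) labels o^n in {0, 1}

    def neg_log_lik(theta):
        z = diff @ theta
        # -log sigmoid(z) = logaddexp(0, -z); -log(1 - sigmoid(z)) = logaddexp(0, z)
        return np.sum(o * np.logaddexp(0.0, -z) + (1 - o) * np.logaddexp(0.0, z)) + reg * theta @ theta

    return minimize(neg_log_lik, np.zeros(diff.shape[1]), method="L-BFGS-B").x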
|
Table 5: Reconciliation model size and computational time (in seconds) in the four experiments implemented in different sections.
|
Zambon et al. (2022) further extend this idea to accommodate both count time series and real-valued time series. The main difference between this work and the methods we propose is a distinction between ‘conditioning’ and ‘mapping’. Both Zambon et al. (2022), and our own work proposed here, take an incoherent multivariate discrete distribution as an input. Zambon et al. (2022) take the reconciled distribution to be a suitably normalised ‘slice’ of the incoherent distribution along the domain where coherence holds. In contrast, what we propose in this paper trains a mapping from the domain where forecasts are incoherent to the domain where they are coherent. We argue that by training this ‘mapping’, we are able to correct for model misspecification in the base forecast, including a proper accounting of dependence in the hierarchy. This is particularly appealing since for practical reasons, the input multivariate base models usually assume independence as multivariate discrete time series models remain challenging.
|
Corani et al. (2022) propose a novel reconciliation approach that conditions base probabilistic forecasts of the most disaggregated series on base forecasts of aggregated series.
|
then optimally adjusting or reconciling these to produce coherent forecasts. In most cases, this is achieved by taking a base forecast $\hat{\bm{y}}$ and premultiplying it by a projection matrix $\mathbf{P}$ to yield reconciled forecasts $\tilde{\bm{y}}=\mathbf{P}\hat{\bm{y}}$ that are coherent by construction. The precise form of the projection depends on assumptions about the covariance matrix of forecast errors; for instance, assuming homoskedastic uncorrelated forecasts leads to the OLS reconciliation of Hyndman et al. (2011), while plugging in a shrinkage estimator of the forecast error covariance leads to the MinT method of Wickramasuriya et al. (2019). Reconciliation can also be interpreted as a forecast combination method (Hollyman et al., 2021) that avoids the requirement for complex models that simultaneously capture hierarchical constraints, external information, and serial dependence. In addition to achieving coherence, state-of-the-art reconciliation approaches have been demonstrated to improve forecast accuracy.
|
Finally, we note that in our experiments, fairly simple models were used to obtain base forecasts. While it remains challenging to accurately forecast low count time series with an excessive number of zeros, state-of-the-art methods have been proposed by Berry and West (2020) and Weiß et al. (2022). While adopting such methods could lead to more accurate base and unreconciled forecasts, our emphasis here was to demonstrate that forecast reconciliation can improve base forecasts in the discrete case.
|
D
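The projection-based reconciliation discussed above has a well-known closed form: with summing matrix S and an assumed forecast error covariance W, the reconciled forecasts are y_tilde = S (S' W^{-1} S)^{-1} S' W^{-1} y_hat, where W = I gives OLS reconciliation and a shrinkage estimate of the error covariance gives MinT. A minimal sketch of this mapping (the toy hierarchy in the usage comment is an assumption).

import numpy as np

def reconcile(y_hat, S, W=None):
    # y_tilde = P y_hat with P = S (S' W^{-1} S)^{-1} S' W^{-1}, coherent by construction.
    S = np.asarray(S, float)
    n = S.shape[0]
    W = np.eye(n) if W is None else np.asarray(W, float)
    W_inv = np.linalg.inv(W)
    P = S @ np.linalg.solve(S.T @ W_inv @ S, S.T @ W_inv)
    return P @ np.asarray(y_hat, float)

# usage for a two-series hierarchy (total = series1 + series2):
# S = np.array([[1, 1], [1, 0], [0, 1]])
# y_tilde = reconcile(y_hat=np.array([10.0, 6.0, 5.0]), S=S)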
|
In the main paper, we considered the application of R-VGAL to models where the random effects are correlated within subjects (individuals), but independent between subjects. In practice, there are many cases where other random effect structures, such as crossed or nested random effects, are needed; see Pinheiro and Bates, (2006); Gelman and Hill, (2007); West et al., (2014) or Papaspiliopoulos et al., (2023) for examples. Here, we briefly discuss the application of R-VGAL to some classes of models with crossed or nested random effects. The implementation of R-VGAL to these models is left for future endeavours.
|
Models with crossed effects are often used to model data that can be organised in the form of contingency tables between categorical variables. For example, consider a study of annual income, in which a number of characteristics from participants are recorded using categorical variables, such as their age range, sex, ethnicity, and highest level of education. In this case, the data can be organised into a multi-dimensional contingency table between age range, sex, ethnicity, and level of education. Participants who have the same combination of characteristics may have similar income levels, and the correlation between people in the same set of categories may be modelled with the addition of category-specific random effects.
|
The notation we use in the following crossed effect model follows that in Chapter 11 of Gelman and Hill (2007) and Sect. 2 of Papaspiliopoulos et al. (2023). Suppose that there are $K$ categorical variables, and the $k$th variable has $L_{k}$ levels, for $k=1,\dots,K$. The logarithm of the income of the $i$th individual may then be modelled as
|
where, here, the notation $l_{k,g}$ denotes the level of the $k$th categorical variable associated with group $g$. The Hessian of the group log likelihood can be similarly expressed via Louis’ identity (22), which we do not restate here. The gradient and Hessian of the group log likelihood can then be approximated using the importance-sampling-based approach described in Sects. 2.3.1 and 2.3.2. Note that there are analytical formulae for the gradient (and Hessian) in this case, as the model is linear; but this approach is applicable to a wide class of GLMMs with crossed random effects.
|
for $i=1,\dots,N$, where $\mathbf{x}_{i}$ denotes a vector of covariates associated with the fixed effects $\bm{\beta}$, and $\alpha_{l}^{(k)}$ denotes the random effect associated with the $l$th level of the $k$th categorical variable. The notation $l_{k}[i]$ denotes the level of the $k$th category that the $i$th individual falls into; for example, if the first categorical variable in the model is age range, where the categories are 1 for 18–30 years old, 2 for 30–50 years old, and 3 for 50 years old and above, then $l_{1}[4]=2$ means that the 4th individual in the dataset is in level 2 of the “age” variable (between 30 and 50 years old). Here we have assumed that the random effects $\alpha_{l}^{(k)}$ have the same variance for all $l=1,\dots,L$ and $k=1,\dots,K$. Thus the parameters of interest in this model are $\bm{\theta}=(\bm{\beta}^{\top},\sigma_{\alpha}^{2},\sigma_{\epsilon}^{2})^{\top}$.
|
A
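The crossed-effects specification above is straightforward to simulate from, which is a quick way to check the notation $l_{k}[i]$; the covariate dimension, level counts, and variance values below are arbitrary illustrative choices, not quantities from the excerpt.

import numpy as np

def simulate_crossed(N=500, levels=(3, 2, 4), p=2, sigma_alpha=0.5, sigma_eps=0.3, seed=0):
    # Simulate log-income_i = x_i' beta + sum_k alpha^{(k)}_{l_k[i]} + eps_i
    # with one random effect per level of each of K = len(levels) categorical variables.
    rng = np.random.default_rng(seed)
    beta = rng.normal(size=p)
    X = rng.normal(size=(N, p))
    y = X @ beta + rng.normal(0.0, sigma_eps, size=N)
    memberships = []
    for L_k in levels:
        alpha_k = rng.normal(0.0, sigma_alpha, size=L_k)   # alpha^{(k)}_1, ..., alpha^{(k)}_{L_k}
        l_k = rng.integers(0, L_k, size=N)                 # l_k[i]: level of variable k for individual i
        y += alpha_k[l_k]
        memberships.append(l_k)
    return y, X, memberships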
|
TKSD: our method as described in Section 5, using a randomly sampled $\widetilde{\partial V}$.
|
In this implementation, we follow the advice of Liu et al. (2022), Section 7, where the distance function to the $\ell_{1}$ ball is calculated via
|
TruncSM/bd-KSD (approximate): the implementation by Liu et al. (2022)/Xu (2022), respectively, with distance function given by Equation 28, using the same $\widetilde{\partial V}$ as given to TKSD.
|
When the boundary’s functional form is unknown, the recommended distance functions by Xu (2022) and Liu et al. (2022) cannot be used, and instead TruncSM and bd-KSD must use approximate boundary points. This approximation to the distance function is given by
|
TruncSM/bd-KSD (exact): the implementation by Liu et al. (2022)/Xu (2022) respectively, where the distance function is computed exactly using the known boundaries.
|
D
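When the boundary is only available through a finite sample $\widetilde{\partial V}$, a natural choice for the approximate distance function is the distance to the nearest sampled boundary point, d(x) ≈ min over sampled b of ||x - b||; the excerpt's Equation 28 is not reproduced here, so this form, and the circular boundary in the usage comment, are assumptions for illustration.

import numpy as np

def approx_distance_to_boundary(X, boundary_points):
    # Approximate distance from each truncated sample to the domain boundary,
    # using the sampled boundary points in place of the exact boundary.
    X = np.atleast_2d(np.asarray(X, float))
    B = np.atleast_2d(np.asarray(boundary_points, float))
    diffs = X[:, None, :] - B[None, :, :]            # (n, m, d) pairwise differences
    return np.linalg.norm(diffs, axis=2).min(axis=1)

# usage with a sampled unit-circle boundary:
# theta = np.linspace(0, 2 * np.pi, 200)
# B = np.c_[np.cos(theta), np.sin(theta)]
# d = approx_distance_to_boundary(X, B)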
|
A related line of work is federated learning across distributed data sites. In this setting, information exchange between sites may be restricted due to privacy or feasibility considerations, prohibiting pooled analyses (Maro et al., 2009; McMahan et al., 2017). As such, study sites leverage models or parameters derived from other sites without sharing individual-level data. In the context of HTE estimation, Tan et al. (2022) proposed a tree-based ensemble approach that combines models across data sites. Vo et al. (2022) performed federated causal inference through adaptive kernel functions on observational studies. Similar to this line of work, the multi-study $R$-learner performs cross-site learning by computing study-specific nuisance functions and HTEs for all individuals. Currently, the multi-study $R$-learner loss function requires centralized access to individual-level data from all sites, and, as a result, is not directly applicable to distributed data. Thus, we leave this extension to future work.
|
We propose a statistical machine learning (ML) framework for estimating HTEs on multiple studies without assuming transportability. Adapting statistical ML methods for HTE estimation is attractive because of their flexibility and strong empirical performance (Brantner et al., 2023). Notable examples in the single-study setting include tree-based approaches (Wager and Athey, 2018), boosting (Powers et al., 2018), neural networks (Shalit et al., 2017), and Lasso (Imai and Ratkovic, 2013). Despite growing interest in these adaptations, developing them can be labor-intensive. Moreover, they generally do not have theoretical guarantees for improvement in isolating causal effects compared to simple nonparametric regressions. To this end, Nie and Wager (2021) proposed the $R$-learner, a framework that is not only algorithmically flexible, allowing any off-the-shelf ML method to be employed, but also quasi-oracle in the case of penalized kernel regression. We extend the $R$-learner to account for between-study heterogeneity in the multi-study setting.
|
Recently, there has been growing interest in combining data from multiple studies to estimate treatment effects (Degtiar and Rose, 2023; Colnet et al., 2024). A common assumption in this literature is ignorability of study label given covariates (Hotz et al., 2005; Stuart et al., 2011; Tipton, 2013; Hartman et al., 2015; Buchanan et al., 2018; Kallus et al., 2018; Egami and Hartman, 2021; Colnet et al., 2024). Mathematically, this assumption states that the potential outcome $Y(a)$ under treatment $a\in\mathcal{A}$ is independent of the study label $S\in\{1,\ldots,K\}$ given the covariates $X\in\mathcal{X}$, i.e., $Y(a)\perp\!\!\!\perp S\mid X$. An important implication of this assumption is transportability or mean exchangeability of the HTEs (Dahabreh et al., 2019; Dahabreh and Hernán, 2019; Wu and Yang, 2021; Colnet et al., 2024). That is, $E[Y(a)\mid X=x,S=k]=E[Y(a)\mid X=x,S=k^{\prime}]=E[Y(a)\mid X=x]$ for all studies $k\neq k^{\prime}$, treatment $a\in\mathcal{A}$, and covariates $x\in\mathcal{X}$. In practice, however, this may be untenable due to various sources of between-study heterogeneity. Transportability of the HTEs will be violated if a treatment effect modifier was not measured across all studies due to differences in study design or data collection protocols. Another example is when the HTEs differ due to heterogeneity in study populations (e.g., differences in the distribution of treatment effect modifiers across studies).
|
The proposed framework can be generalized to incorporate different multi-study learning strategies for estimating $m(\cdot)$. Generally, when studies are homogeneous, Patil and Parmigiani (2018) showed that merging all studies and training a single model can lead to improved accuracy due to increased sample size; as between-study heterogeneity increases, multi-study ensembling is preferred. The empirical and theoretical trade-offs between merging and multi-study ensembling have been explored in detail for ML techniques, including linear regression (Guan et al., 2019), random forest (Ramchandran et al., 2020), gradient boosting (Shyr et al., 2022), and multi-study stacking (Ren et al., 2020). Because estimation of $m(\cdot)$ directly impacts the downstream analysis of $\tau(\cdot)$, exploring the empirical and theoretical implications of these strategies in the context of the multi-study $R$-learner is an interesting avenue for future work.
|
Our paper makes several contributions. 1) The proposed framework, the multi-study $R$-learner, is robust to between-study heterogeneity in the nuisance functions and HTEs. It involves a data-adaptive objective function that links study-specific treatment effects with nuisance functions through membership probabilities. These probabilities enable cross-study learning, thereby allowing information to be borrowed across heterogeneous studies. The $R$-learner of Nie and Wager (2021) is a special case of the proposed approach in the absence of between-study heterogeneity. 2) Under homoscedasticity, we show analytically that the multi-study $R$-learner is asymptotically unbiased and normally distributed in the series estimation framework. In the two-study setting, the proposed method is more efficient than the $R$-learner when there is between-study heterogeneity in the propensity score models. 3) Results from extensive evaluations using cancer data showed that the multi-study $R$-learner performs favorably compared to other methods as between-study heterogeneity increases. 4) The multi-study $R$-learner is easy to implement and allows flexible estimation of nuisance functions, HTEs, and membership probabilities using modern ML techniques. 5) It can be used to combine RCTs, observational studies, or a combination of both.
|
C
|