Dataset schema: context (string, 100–12k characters), A (string, 100–5.1k), B (string, 100–6.02k), C (string, 100–4.6k), D (string, 100–4.68k), label (string, 4 classes).
They generally achieve coverage close to the nominal level. For large error metrics relative to sample size, the vertex and double-or-nothing bootstrap methods can be considered as good alternatives.
In the case of the $\mathtt{FAR}$ (false acceptance rate), the subsets and two-level bootstrap techniques fail to achieve nominal coverage at any level of the error metric, while the naive Wilson interval, which neglects to account for data dependence, shrinks with growing $\mathtt{FAR}$. The remaining three methods are more promising: the Wilson interval that accounts for data dependence (which we will refer to simply as Wilson hereafter) always achieves nominal coverage, while the vertex and double-or-nothing bootstraps cover at the right level when the true error metrics are large.
We strongly advise against using naive Wilson intervals, subsets, and two-level bootstrap techniques.
can lead to different conclusions due to miscoverage. Six methods for computing estimates and corresponding 95% confidence intervals on synthetic data for the false accept rate ($\mathtt{FAR}$) of two 1:1 matching algorithms (A and B) that have underlying equal accuracy ($\mathtt{FAR}=10^{-1}$). The data contain 50 groups, with 5 images each, and all pairwise comparisons are considered in the estimation of the error metric (details in Section 6). Dots and bars correspond to error estimates and corresponding confidence intervals. The naive Wilson, subsets bootstrap, and two-level bootstrap intervals may lead the practitioner to erroneously conclude that Algorithm A has inferior performance compared to Algorithm B, while in our simulation they are equivalent. In our analysis and experiments we find that only Wilson intervals achieve nominal coverage in the presence of low error rates (1) and sample dependence (2). Double-or-nothing and vertex bootstrap intervals also work well in settings characterized only by (2).
We provide a review of two classes of methods for constructing confidence intervals for matching tasks: one based on parametric assumptions and the other on nonparametric, resampling-based methods. The reviewed methods include the Wilson interval without (naive version) and with a variance adjustment for data dependence, as well as the subsets, two-level, vertex, and double-or-nothing bootstraps.
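Since the Wilson interval is the workhorse here, a brief sketch may help. The following Python snippet is a minimal illustration, not the reviewed implementation: it computes the standard Wilson interval for a proportion and, as a crude stand-in for a dependence-adjusted variance, lets the caller supply an effective sample size (the `n_eff` argument is our assumption, not part of the reviewed method).

```python
from statistics import NormalDist
import math

def wilson_interval(successes, n, n_eff=None, alpha=0.05):
    """Wilson score interval for a binomial proportion.

    n_eff: optional effective sample size; passing n_eff < n is a crude,
    illustrative way to widen the interval when comparisons are dependent.
    """
    m = n if n_eff is None else n_eff
    z = NormalDist().inv_cdf(1 - alpha / 2)
    p_hat = successes / n
    denom = 1.0 + z**2 / m
    centre = (p_hat + z**2 / (2 * m)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / m + z**2 / (4 * m**2))
    return max(0.0, centre - half), min(1.0, centre + half)

# Example: 12 false accepts out of 3000 comparisons, but only ~600 "effective"
# independent comparisons because images share groups (hypothetical numbers).
print(wilson_interval(12, 3000))
print(wilson_interval(12, 3000, n_eff=600))
```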
B
Is there a lightweight method that provably minimizes the original objective in (1), when the participation statistics of clients are unknown, uncontrollable, and heterogeneous?
We leverage the insight that we can apply different weights to different clients’ updates in the parameter aggregation stage of FedAvg. If this is done properly, the effect of heterogeneous participation can be canceled out so that we can minimize (1), as shown in existing works that assume known participation statistics (Chen et al., 2022, Fraboni et al., 2021a, Li et al., 2020b; c). However, in our setting, we do not know the participation statistics a priori, which makes it challenging to compute (estimate) the optimal aggregation weights. It is also non-trivial to quantify the impact of estimation error on convergence.
Most existing works on FL with partial client participation assume that the clients participate according to a known or controllable random process (Karimireddy et al., 2020, Yang et al., 2021, Chen et al., 2022, Fraboni et al., 2021a, Li et al., 2020b; c).
Earlier works on FedAvg considered the convergence analysis with full client participation (Gorbunov et al., 2021, Haddadpour et al., 2019, Lin et al., 2020, Stich, 2019, Wang & Joshi, 2019; 2021, Yu et al., 2019, Malinovsky et al., 2023), which do not capture the fact that only a subset of clients participates in each round in practical FL systems. Recently, partial client participation has come to attention. Some works analyzed the convergence of FedAvg where the statistics or patterns of client participation are known or controllable (Fraboni et al., 2021a; b, Li et al., 2020c, Yang et al., 2021, Wang & Ji, 2022, Cho et al., 2023, Karimireddy et al., 2020, Li et al., 2020b, Chen et al., 2022, Rizk et al., 2022). However, as pointed out by Wang et al. (2021), Bonawitz et al. (2019), the participation of clients in FL can have complex dependencies on the underlying system characteristics, which makes it difficult to know or control each client’s behavior a priori. A recent work analyzed the convergence for a re-weighted objective (Patel et al., 2022), where the re-weighting is essentially arbitrary for unknown participation distributions. Some recent works (Yang et al., 2022, Yan et al., 2020, Gu et al., 2021, Jhunjhunwala et al., 2022) aimed at addressing this problem using variance reduction, by including the most recent local update of each client in the global update, even if they do not participate in the current round. These methods require a substantial amount of additional memory to store the clients’ local updates.
Some previous works have noted this need to debias the skewness of client participation (Li et al., 2020c, Perazzone et al., 2022) or to design the client sampling scheme so that the updates are unbiased (Fraboni et al., 2021a, Li et al., 2020b). However, in our work, we consider the more realistic case where the participation statistics are unknown, uncontrollable, and heterogeneous. In this case, we are unable to directly find the optimal aggregation weights because we do not know the participation statistics a priori.
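To make the aggregation-weighting idea concrete, here is a minimal Python sketch (our illustration, not the paper's algorithm): each active client's update is scaled by the inverse of an estimate of its participation probability, with the estimate formed from the empirically observed participation frequency. The variable names, the simple frequency estimator, and the normalization are assumptions for illustration only.

```python
import numpy as np

def aggregate(global_model, client_updates, participation_counts, round_idx):
    """One FedAvg-style aggregation round with inverse-frequency weights.

    client_updates: dict {client_id: update vector} for clients active this round.
    participation_counts: dict {client_id: rounds participated so far}.
    round_idx: current round number (1-based), used for the frequency estimate.
    """
    new_model = np.asarray(global_model, dtype=float)
    weighted = np.zeros_like(new_model)
    total = 0.0
    for cid, update in client_updates.items():
        p_hat = max(participation_counts[cid] / round_idx, 1e-3)  # crude estimate of participation prob.
        w = 1.0 / p_hat  # debias: rarely participating clients get a larger weight
        weighted += w * np.asarray(update, dtype=float)
        total += w
    # normalise so the aggregated step has a comparable scale across rounds
    return new_model + weighted / max(total, 1e-12)
```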
A
Table 2: Proteins surpassing the Benjamini-Hochberg corrected p-value threshold 0.05. Associations denoted with an X are those that had a pQTL surpassing the genome-wide significance threshold $5\times 10^{-8}$ for association with the trait.
Recent studies suggest that protein expression prediction models do not generalize well across different ancestral groups. For example, Zhang et al., (2022) found that models trained on data collected on individuals of European ancestry (EA) did not perform well when predicting protein expression in individuals of African ancestry (AA). Bhattacharya et al., (2022) found a similar result in the context of gene expression prediction modeling, claiming that “expression models are not portable across ancestry groups”. The results of Patel et al., (2022) further suggest that gene expression effect sizes may differ across ancestral groups. Because of this, Zhang et al., (2022) built ancestry-specific proteomic prediction models using data on that ancestral population alone. However, such an approach does not exploit genetic commonalities between EA and AA populations, and thus may lead to less predictive models.
In this work, we focus on proteome-wide association studies (PWAS) specific to individuals of African ancestry. Proteomics are important because many diseases manifest through changes in protein expression, so proteome-wide association studies can identify novel biomarkers and drug targets (Kavallaris and Marshall,, 2005). Although previous studies have demonstrated that the proteome is under genetic control (e.g., see Sun et al.,, 2018; Robins et al.,, 2021), relative to the transcriptome, the genetic architecture of the proteome—especially in populations of non-European ancestry—is less well understood due to limited studies (Zhang et al.,, 2022). Our objective is to build a prediction model for protein expression as a function of local/cis SNP genotypes, specific to individuals of African ancestry. We will then use these models to impute protein expression into large-scale genome-wide association studies (GWAS) and assess their association with complex traits (Barbeira et al.,, 2018; Dong et al.,, 2020). We focus our attention on the potential association between genetically predicted protein expression and five lipid blood traits: low-density lipoprotein cholesterol (LDL), high-density lipoprotein cholesterol (HDL), triglycerides (TG), total cholesterol (TC), and non-high-density lipoprotein cholesterol (nonHDL). It is well known that blood lipid levels are associated with cardiovascular disease risk, and are heritable. Many recent studies have focused on GWAS for lipid levels (Graham et al.,, 2021; Ramdas et al.,, 2022), though PWAS are very limited (Zhang et al.,, 2022). According to Carnethon et al., (2017), as of 2017, cardiovascular disease was the primary cause of life expectancy differences between African American and White American individuals. Thus, identifying proteins associated with elevated lipid levels may suggest new avenues of research on treating or preventing cardiovascular disease in African American individuals.
Focusing only on proteins whose average testing set $R^{2}\geq 0.01$ using our method, we tested 286 proteins in total. To perform the MetaXcan analysis, we obtained LD matrices using the AA individual-level genotyping data from the WHI. Then, we tested the relationship between genetically predicted protein expression and five blood lipid traits: LDL, HDL, TG, TC, and nonHDL (Graham et al., 2021). The summary statistics were downloaded from the Global Lipids Genetics Consortium (GLGC, http://csg.sph.umich.edu/willer/public/glgc-lipids2021/). These summary statistics are from 99,432 individuals of African ancestry. Consistent with our ancestral group definition in Section 6.1, the GLGC also defines ancestry based on SIRE.
Based on our fitted models, we performed the group-specific association analysis of genetically predicted protein expression with blood lipid traits, using GWAS summary statistics and MetaXcan (Barbeira et al., 2018).
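As a rough illustration of how a summary-statistics association test of this kind can be assembled (this is our schematic of the S-PrediXcan/MetaXcan idea, not the package's code; the weight, LD, Z-score, and SD inputs are hypothetical), the predicted-expression Z-score combines SNP-trait Z-scores with the prediction weights and an LD matrix:

```python
import numpy as np

def predicted_expression_zscore(weights, snp_z, ld, snp_sd):
    """Schematic S-PrediXcan-style Z-score for one protein.

    weights: prediction-model weights for the cis-SNPs (length m)
    snp_z:   GWAS Z-scores for the same SNPs (length m)
    ld:      m x m SNP correlation (LD) matrix
    snp_sd:  per-SNP genotype standard deviations (length m)
    """
    w = np.asarray(weights, float)
    sd = np.asarray(snp_sd, float)
    # variance of the predicted expression implied by the weights and LD
    sigma_g = np.sqrt((w * sd) @ np.asarray(ld, float) @ (w * sd))
    # weighted combination of SNP-level Z-scores
    return float(np.sum(w * sd * np.asarray(snp_z, float)) / sigma_g)
```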
D
We can see in Figure 1 that the optimal variance is reached for the OSSGD (as for the MLE, AVSGD and ADSGD), which naturally outperforms the non-optimal variance of the slowly converging SGD. It is worth noting the relative bias, for samples of finite size, of the AVSGD when the initial value $\vartheta_0$ is fixed.
In order to improve the convergence rate of the gradient descent algorithm, we propose in the following the one-step procedure, starting from an initial guess estimator taken from the projected stochastic gradient algorithm. This procedure is shown to be faster than the classical computation of the MLE while remaining asymptotically efficient. It is an interesting alternative to the stochastic gradient algorithm with averaging or adaptive gradient descent, and it also shows good properties on samples of finite size.
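The one-step idea can be sketched in a few lines of Python (an illustration under our own assumptions, not the paper's code): starting from a cheap initial estimate, a single Fisher-scoring/Newton correction restores asymptotic efficiency. Here `score` and `fisher_info` are user-supplied functions for the model at hand.

```python
import numpy as np

def one_step_estimator(theta_init, score, fisher_info):
    """One-step (Le Cam-type) correction of an initial estimator.

    theta_init:  initial guess, e.g. a (projected) SGD estimate (1-D array)
    score:       function theta -> averaged score vector (1/n) * sum_i grad log-lik_i(theta)
    fisher_info: function theta -> Fisher information matrix estimate at theta
    """
    s = np.atleast_1d(score(theta_init))
    info = np.atleast_2d(fisher_info(theta_init))
    # single Newton/Fisher-scoring step: theta_os = theta_init + I(theta)^{-1} * score(theta)
    return np.asarray(theta_init, float) + np.linalg.solve(info, s)
```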
In the following, Section 2 is dedicated to notations and known results on convergence rates for stochastic gradient descent (SGD), stochastic gradient descent with averaging (AVSGD), adaptive gradient descent (ADSGD) and maximum likelihood estimation (MLE). The main result on (strong) consistency and asymptotic normality of the one-step procedure in the multidimensional parameter setting is given in Section 3. Monte Carlo simulations are presented in Section 4 to assess the performance of the proposed statistical procedure (OSSGD) in comparison with SGD, AVSGD, ADSGD and MLE in terms of computation time and asymptotic variance for samples of finite size.
In terms of computation time, the OSSGD (as the AVSGD) is more than 3 times faster than the MLE. In comparison, the ADSGD is more than two times faster.
We can see in Figure 1 that the optimal variance is reached for the OSSGD (as for the MLE, AVSGD and ADSGD), which naturally outperforms the non-optimal variance of the slowly converging SGD. It is worth noting the relative bias, for samples of finite size, of the AVSGD when the initial value $\vartheta_0$ is fixed.
C
$\widehat{\mathcal{L}}(\theta, X, Y, l) = \mathcal{L}(\theta, X, Y, l) - n\left(\ln(b) + \frac{1}{\kappa b} + 2\ln\kappa\right).$
MIXALIME can produce standard errors of MLEs upon user request. Standard errors are calculated with the help of the Cramér–Rao inequality, which provides a lower bound on the estimates' variance:
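In practice this usually amounts to inverting an estimate of the Fisher information, e.g. the Hessian of the negative log-likelihood at the MLE. The following sketch is our illustration of that generic recipe, not MIXALIME's internal code; the `neg_loglik` function and the finite-difference Hessian are assumptions.

```python
import numpy as np

def standard_errors(neg_loglik, theta_hat, eps=1e-5):
    """Approximate standard errors from the observed Fisher information.

    neg_loglik: function theta -> negative log-likelihood (scalar)
    theta_hat:  MLE (1-D numpy array)
    """
    p = len(theta_hat)
    hess = np.zeros((p, p))
    t = np.asarray(theta_hat, dtype=float)
    # central finite-difference Hessian of the negative log-likelihood
    for i in range(p):
        for j in range(p):
            def f(di, dj):
                t2 = t.copy()
                t2[i] += di
                t2[j] += dj
                return neg_loglik(t2)
            hess[i, j] = (f(eps, eps) - f(eps, -eps) - f(-eps, eps) + f(-eps, -eps)) / (4 * eps**2)
    cov = np.linalg.inv(hess)  # inverse observed information approximates the covariance
    return np.sqrt(np.diag(cov))
```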
A user engages with MIXALIME via a command-line interface. The package provides complete documentation of its features along with a small tutorial through the help command:
The BetaNB model tends to provide ultra-conservative P-value estimates; see Section 9 for details of the scoring procedure. This happens because the beta negative binomial distribution is significantly more heavy-tailed than a negative binomial distribution for small values of $\kappa$. Therefore, it might be useful to compromise the goodness of fit for greater sensitivity of the model by encouraging higher values of the $\kappa$ parameter. On the other hand, we also observed that high-coverage data has lower variance, i.e. higher values of $\kappa$ are expected for high values of $y$. We introduce a regularization that accommodates this observation by assuming that
Nota bene: Although MIXALIME will output standard errors when requested for MAP estimates of the BetaNB regularized model (see Section 7.1), they should be ignored.
A
While this is a relatively simple setting for trial design simulation, considering $U$ values for $u$, $N$ values for sample size, $T$ values for $\eta$ and $\gamma$ (keeping other model parameters fixed), and $M$ simulation iterations (noting that $M>10{,}000$ is currently recommended by the FDA (2019)) results in $\mathcal{O}(UNTM\mathcal{C})$ complexity, where $\mathcal{C}$ is the computational cost associated with posterior estimation. The proposed approach can reduce the computational complexity to $\mathcal{O}(N_0 T_0 M \mathcal{C})$, with $N_0 \ll N$ and $T_0 \ll T$.
Figure 2 shows the estimated curves and 95% credible intervals for power as a function of sample size and a range of effect sizes which are different from those included in the simulation scenarios. The black solid circles are simulation-based estimates of power for the 25 scenarios listed in rows 2-6 of Table 1. This is to demonstrate that power curves and their associated uncertainty may be estimated for any given effect assumption of interest by modelling the sampling distribution using only 25 simulation scenarios. These estimates provide an overall understanding of power corresponding to each analysis and can be used to specify a design with the preferred analysis model.
Figure 1 shows the estimated curves (posterior median) and 95% credible intervals (equal-tailed posterior quantiles) for the type I error rate as a function of sample size for the adjusted and unadjusted models. These results are obtained by fitting the models in Section 2 to the simulated sampling distribution (with $M=100{,}000$ iterations) at the 14 sample sizes. The black solid circles are simulation-based estimates of the type I error rate. The goal of this exercise is to demonstrate that although these curves are generated by fitting a model to the sampling distribution and not directly to the simulated tail probabilities, they are able to capture the type I error rates obtained from the simulations and produce corresponding uncertainty estimates. The full curves exhibit the slower rate of convergence to the nominal type I error rate in the highly parameterized adjusted model, a useful detail for specifying sample sizes and scheduling interim analyses when covariate adjustment is considered.
The selection of simulation scenarios at which the sampling distribution is simulated is important, as these simulations play the role of data in the proposed approach. The selection of these scenarios is therefore a design problem. For the type I error rate, this boils down to selecting a sequence of sample size values. We recommend concentrating the simulations at small $n$, where the type I error rate changes (decreases) quickly with sample size. At large $n$, the probability of a type I error converges to $(1-u)\%$, where $u$ is the decision threshold, and the additional refinement from simulations is negligible. A similar rationale applies in determining the simulation scenarios for estimating power curves. An equally-spaced sequence over the support of the design prior paired with a sequence of reasonable sample sizes achieves acceptable performance.
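For orientation, the brute-force simulation that the proposed approach seeks to economize on looks roughly like the following (a generic Monte Carlo sketch under our own assumptions; `simulate_trial` and the scenario grid are hypothetical placeholders, not the paper's design or code):

```python
import numpy as np

rng = np.random.default_rng(2024)

def simulate_trial(n, effect, rng):
    """Hypothetical one-trial simulation: returns True if the trial 'rejects'.
    Stand-in for a full Bayesian analysis; here a two-arm z-test on Gaussian outcomes."""
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(effect, 1.0, n)
    z = (treated.mean() - control.mean()) / np.sqrt(2.0 / n)
    return z > 1.96

def operating_characteristic(n, effect, m=10_000):
    """Monte Carlo estimate of power (effect > 0) or type I error rate (effect = 0)."""
    rejections = sum(simulate_trial(n, effect, rng) for _ in range(m))
    return rejections / m

# A small grid of scenarios; the proposed approach would model these estimates
# jointly instead of simulating every (n, effect) pair of interest.
for n in (20, 50, 100):
    print(n, operating_characteristic(n, effect=0.0), operating_characteristic(n, effect=0.4))
```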
Figure 3 shows these results for the type I error rate, where the training set includes sample sizes $n=20, 40, 60, 80, 100, 200, 1000$; the test set is defined as the remaining sample sizes listed in the first row of Table 1. The grey dots and error bars show estimates and 95% credible intervals obtained from modelling the sampling distribution, while the triangles and squares show the simulation-based type I error rates in the training and test sets, respectively. For both models, the type I error rate is estimated with less than 0.002 bias and, in most cases, the 95% credible intervals include the simulation-based estimates.
B
Second, while we focus on situations where the value of discoveries is described by weights $w(A)$ decreasing in $|A|$, in different contexts it might be useful to consider other evaluations: the procedures we will describe adapt to any, as long as the weights are fixed.
The constraint (5) expresses our interest in obtaining non-redundant rejections. In section 5.1 and appendix LABEL:appendix:emlkf_description we will discuss, instead, procedures that lead to discoveries at multiple resolutions, in a coordinated fashion.
Using knockoff e-values we can overcome this limitation, as we describe in the appendix LABEL:appendix:global_partial_description. Testing these partial conjunction hypotheses can also be combined with testing across multiple levels of resolution. Indeed, in our application to the UK Biobank data in section 7, we test a global null hypothesis for platelet-related outcomes.
Self-consistency can also be used to define other multiple comparisons procedures based on e-values. For example, as mentioned in Wang and Ramdas (2022), one can construct an e-value analogue of Focused BH (Katsevich et al., 2021). We do so precisely in appendix LABEL:appendix:focusedeBH_vs_kelp, obtaining a procedure we call Focused e-BH, which controls FDR for any filter and under any dependence structure.
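For readers unfamiliar with e-value-based multiple testing, the base e-BH procedure (Wang and Ramdas, 2022) that these variants build on can be sketched as follows; this is a generic illustration, not the code used in the paper.

```python
def e_bh(e_values, alpha=0.05):
    """Base e-BH: reject the hypotheses with the k largest e-values,
    where k is the largest index such that e_(k) >= m / (k * alpha)."""
    m = len(e_values)
    order = sorted(range(m), key=lambda i: e_values[i], reverse=True)
    k_hat = 0
    for k in range(1, m + 1):
        if e_values[order[k - 1]] >= m / (k * alpha):
            k_hat = k
    return sorted(order[:k_hat])  # indices of rejected hypotheses

# Example with hypothetical e-values for 6 hypotheses
print(e_bh([40.0, 1.2, 130.0, 0.5, 25.0, 3.0], alpha=0.1))  # -> [0, 2, 4]
```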
E-values can be used to develop an analog of the p-filter, as mentioned by Wang and Ramdas (2022). We include a description of the e-filter in appendix LABEL:appendix:emlkf_description.
A
Related methods. Contrastive-based SSL methods are the most suitable choice for these two tasks since the core of contrastive learning is identifying positive and negative samples. Specifically, TS-TCC [116] introduces temporal contrast and contextual contrast in order to obtain more robust representations. TS2Vec [118] and MHCCL [139] perform a hierarchical contrastive learning strategy over augmented views, which enables robust representations. Similar to anomaly detection and prediction tasks, an adversarial-based SSL strategy can also be introduced into classification and clustering tasks. DTCR [65] proposes a fake-sample generation strategy to assist the encoder in obtaining more expressive representations.
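As a reminder of the mechanics behind such contrastive objectives, the following is a minimal NumPy sketch of an InfoNCE-style loss over a batch of paired (augmented) views; it is a generic illustration of contrastive learning, not the loss used by any specific method cited above.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE loss for a batch of paired embeddings.

    z1, z2: (batch, dim) arrays of embeddings of two augmented views;
            row i of z1 and row i of z2 form the positive pair,
            all other rows of z2 act as negatives.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # scaled cosine similarities
    # cross-entropy with the diagonal (true pair) as the target class
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```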
The rest of the article is organized as follows. Section 2 provides some review literature on SSL and time series data. Section 3 to Section 5 describe the generation-based, contrastive-based, and adversarial-based methods, respectively. Section 6 lists some commonly used time series data sets from the application perspective. The quantitative performance comparisons and discussions are also provided. Section 7 discusses promising directions of time series SSL, and Section 8 concludes the article.
In this section, we point out some critical problems in current studies and outline several research directions worthy of further investigation.
Abundant future directions. We point out key problems in this field from both applicative and methodology perspectives, analyze their causes and possible solutions, and discuss future research directions for time series SSL. We strongly believe that our efforts will ignite further research interests in time series SSL.
In this section, the definition of time series data is first introduced, and then several recent reviews on SSL and time series analysis are scrutinized.
B
The kernel $\mathcal{K}_1$ is a tensor product kernel
The last row shows the contribution of $\mathcal{K}_2$,
The kernel $\mathcal{K}_1$ is a tensor product kernel
The kernel $\mathcal{K}_1$ indeed perfectly captures the
The second kernel $\mathcal{K}_2$ is a very rough exponential
D
5.3.2 Modelling $[Z(\cdot)\mid Y(\cdot),\omega(\cdot),u(\cdot)]$
Table 2 presents the posterior estimates of the finite population PIFV (first row) from the four models described above. The performance of these models was evaluated using $D$ and $GRS$ defined in (29) and (30), respectively. Based upon these metrics, model (iii) is marginally preferred to model (iv), which suggests a lack of spatial dependence in the sampling inclusion model. Both spatial models, (iii) and (iv), considerably outperform the non-spatial models, including (ii), which still accommodates the nonignorable response. Model (i), which assumes ignorability, performs the worst. The credible interval from the ignorable model seems very tight, while those from the other models are wider because of the propagation of uncertainty from the additional parameters being estimated. The PIFV from model (iii) is recommended for reporting given the model's superior performance.
While the sampling indicator process $Z(\cdot)$ “disappears” from the likelihood or the posterior for ignorable designs, we can model non-ignorable designs using such a process. For example, we consider the model
A crucial observation is that the conditional distribution $p(Z\mid Y,\phi)$ models the sampling design and is accounted for in the finite population inference. For example, in the models considered in Sections 2–4, $p(Z\mid y,\phi)=p(Z)$ does not depend on the sampled values and there are no unknown design parameters $\phi$. Only the sizes of the population and sample are used, which means that information about the design does not transmit to inference on the unknown population values; hence, these designs are referred to as ignorable. As seen in the aforementioned sections, purely model-based approaches can reproduce the Horvitz-Thompson estimator without requiring any information on the sampling design.
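For reference, the Horvitz-Thompson estimator mentioned here is the design-based benchmark that weights each sampled value by its inverse inclusion probability; a minimal sketch (ours, purely illustrative) is:

```python
import numpy as np

def horvitz_thompson_total(y_sampled, inclusion_probs):
    """Horvitz-Thompson estimator of the finite population total:
    sum over sampled units of y_i / pi_i."""
    y = np.asarray(y_sampled, float)
    pi = np.asarray(inclusion_probs, float)
    return float(np.sum(y / pi))

# Example: simple random sample of n=3 from a population of N=10 (pi_i = n/N = 0.3)
print(horvitz_thompson_total([4.0, 7.0, 5.0], [0.3, 0.3, 0.3]))  # estimates the population total
```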
I will share some perspectives on Bayesian inference for finite population quantities with an emphasis on dependent finite populations. [42] comments that advocating Bayesian inference for survey sampling is akin to “swimming upstream”, given the aversion that many survey statisticians have to modelling assumptions, but notes the richness and flexibility available in models for improving design-based inference and for addressing survey weights. Advocates of design-based survey sampling, where inference is based on the randomisation distribution, have often criticised modelling assumptions such as the population units being “independent and identically distributed”, which fails to account for the sampling design [see, e.g., 39, Section 9]. However, with the computational resources available to statisticians today, building models for dependent finite population units yields rich and flexible classes of Bayesian models. A key point is that the usual model-based approaches (likelihood or Bayesian) often proceed from an assumption of “ignorability”: that the analysis includes all variables that affect the probability of a unit being included in the sample. A different way to make this point is to say that the probability of obtaining a given sample does not depend upon the values of the population units. Under such assumptions, the likelihood, or the model of how the data were realised, drives the inference and the sampling design can be ignored [see, e.g., 50, 25, 26, for further discussions].
B
$\lim_{a\to L(\boldsymbol{\theta})-m_{\boldsymbol{\theta}}} \mathbb{P}_{D\sim\nu^{n}}\!\left(L(\boldsymbol{\theta})-\hat{L}(D,\boldsymbol{\theta})\geq a\right) = \lim_{a\to L(\boldsymbol{\theta})-m_{\boldsymbol{\theta}}} e^{-n\,\mathcal{I}_{\boldsymbol{\theta}}(a)}\,.$
As we show in this work, these bounds are directly connected to Large Deviation Theory (LDT) (Ellis, 2012) because their complexity measure $\mathcal{C}(\mathcal{A}(D),n,\nu,\delta)$ directly depends on the so-called rate function (also known as the Cramér-Chernoff function), which is the central element of LDT. We employ the rate function to present a new characterization of the smoothness of a model using distribution-dependent measures. According to Theorem 4.5, this approach enables a precise characterization of which interpolators generalize better, addressing an outstanding open question in machine learning.
On the other hand, Cramér’s Theorem (Cramér, 1938) states that Chernoff’s bound is exponentially tight for large $n$. Formally, this statement is written as follows,
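To make the rate function concrete, recall that it is the Legendre transform of the cumulant generating function, $\mathcal{I}(a)=\sup_{\lambda}\{\lambda a-\log \mathbb{E}[e^{\lambda X}]\}$. The snippet below is a small numerical illustration of this definition for an empirical sample (our example, not the paper's code):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import logsumexp

def rate_function(samples, a, lam_bound=50.0):
    """Empirical Cramer-Chernoff rate function I(a) = sup_lambda {lambda*a - log E[exp(lambda*X)]},
    with the expectation replaced by a sample average."""
    x = np.asarray(samples, float)

    def neg_objective(lam):
        log_mgf = logsumexp(lam * x) - np.log(x.size)  # log of the empirical MGF
        return -(lam * a - log_mgf)

    res = minimize_scalar(neg_objective, bounds=(-lam_bound, lam_bound), method="bounded")
    return -res.fun

rng = np.random.default_rng(0)
print(rate_function(rng.normal(size=100_000), a=1.0))  # for a standard normal, I(1) = 1/2
```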
Theorem 4.5 states that an interpolator generalizes better than another (with h.p.) if it is sufficiently smoother in terms of its rate function. Figure 4 illustrates the premise of this theorem. We should note that the above result holds even for the log-loss, which is the default loss used for training, and for over-parameterized model classes. This result is especially useful when $\epsilon$ is very small or null, as it states, with h.p., that smooth interpolators generalize better, up to an $\epsilon$, than other less smooth models, independently of whether these interpolate the data or not. Furthermore, the above result verifies that the higher the probability $1-\delta$ or the number of parameters $p$, the stronger the smoothness condition needs to be, and the opposite for larger $n$. In that sense, we could have interpolators which are $\beta$-smoother than others but have worse generalization performance, because they are not smooth enough in order to apply Theorem 4.5.
The following result is an adaptation of the PAC-Chernoff bound given in Theorem 4.1 for this setup, which describes the effect of using the data-augmented loss on the generalization error of interpolators.
B
In fact, for any two invertible functions $h_1$ and $h_2$ that satisfy the implicit autoregressive constraint, i.e., $h_1\circ f_d\circ h_2\in\mathcal{F}_A$ for all $d$, we can construct a counterfactually equivalent model, which can have arbitrarily different latent representations defined by $g'=g\circ h_1^{-1}$ since $h_1$ can be an arbitrary invertible function.
In this paper, we strive to balance practicality and theoretical guarantees by answering the question: “Can we theoretically and practically estimate domain counterfactuals without the need to recover the ground-truth causal structure?”
With weak assumptions about the true causal model and available data, we analyze invertible latent causal models and show that it is possible to estimate domain counterfactuals both theoretically and practically, where the estimation error depends on the intervention sparsity.
Ultimately, this result implies that to estimate domain counterfactuals, we indeed do not require the recovery of the latent representations or the full causal model.
In contrast, we show that estimating DCFs is easier than estimating the latent causal representations and may require fewer assumptions in Section 2.2.
C
RMT is a powerful tool for describing the spectral statistics of complex systems. It is particularly useful for systems that are chaotic but also have certain coherent structures. The theory predicts universal statistical properties, provided that the underlying matrix ensemble is large enough to sufficiently fill the space of all matrices with a given symmetry, a property known as ergodicity (Guhr et al., 1998). Ergodicity has been observed in a variety of systems, including chaotic quantum systems (Bohigas et al., 1984; Mehta, 1991; Pandey, 1983), financial markets, nuclear physics and many others (Plerou et al., 1999; Brody, 1981; Efetov, 1997).
In this section, we analyze the feature-feature covariance matrix for datasets of varying size, complexity, and origin. We consider real-world as well as correlated and uncorrelated Gaussian datasets, establish a power-law scaling of their eigenvalues, and relate it to a correlation length.
We interpret $\Sigma_M$ for real-world data as a single realization, drawn from the space of all possible Gram matrices which could be constructed from sampling the underlying population distribution. In that sense, $\Sigma_M$ itself is a random matrix with an unknown distribution. For such a random matrix, there are several universality classes, which depend on the strength of correlations in the underlying distribution. These range from extremely strong correlations, which over-constrain the system and lead to the so-called Poisson ensemble (Atas et al., 2013), to the case of no correlations, which is equivalent to sampling independent elements from a normal distribution, represented by the Gaussian Orthogonal Ensemble (GOE) (Mehta, 2004). These classes are the only ones allowed by the symmetry of the matrix $XX^{T}$, provided that the number of samples and the number of features are both large.
To demonstrate that a similar universal structure is also observed for correlation matrices resulting from datasets, we will employ several diagnostic tools widely used in the field of quantum chaos. We will analyze the global and local spectral statistics of empirical covariance matrices generated from three classes of datasets: (i) Data generated by sampling from a normal distribution with a specific correlation structure for its features, (ii) Uncorrelated Gaussian Data (UGD), (iii) Real-world datasets composed of images, at varying levels of complexity and resolution.
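One standard local diagnostic in this literature is the distribution of consecutive level-spacing ratios of the covariance spectrum, which distinguishes GOE-like from Poisson-like statistics without requiring spectral unfolding. A small NumPy sketch of this diagnostic (our illustration, with a hypothetical data matrix) is:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(512, 2000))   # hypothetical data: 512 features x 2000 samples
sigma = X @ X.T / X.shape[1]       # empirical feature-feature covariance matrix

eigvals = np.sort(np.linalg.eigvalsh(sigma))
spacings = np.diff(eigvals)

# consecutive spacing ratios r_n = min(s_n, s_{n+1}) / max(s_n, s_{n+1})
r = np.minimum(spacings[:-1], spacings[1:]) / np.maximum(spacings[:-1], spacings[1:])
print("mean r:", r.mean())
# Reference values: ~0.53 for GOE-like (level repulsion), ~0.39 for Poisson-like statistics
```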
What are the universal properties of datasets that can be gleaned from the empirical covariance matrix and how are they related to local and global statistical properties of RMT?
C
$\mathcal{R}_{ZW,2}(\lambda_j) = \left|\widehat{\lambda}_{j,WPK}-\lambda_{j}\right| \Big/ \left|\widehat{\lambda}_{j,ZW,2}-\lambda_{j}\right|.$
Figure 7: Our method compared to the one in Zhou et al., (2022): the ratio $\mathcal{R}_{ZW,2}(\psi_j)$ of the $L^2$-norm errors of the eigenfunction estimates. The simulation setup is as in Figure 5, with $\sigma_0=1$.
For the eigenfunctions, we use the ratio of the $L^2$-norm errors, given by the square root of
respectively. Here, $\|\cdot\|_2$ denotes the $L^2$-norm. The risks depend on the bandwidth and the challenge is to find a way to select the bandwidths which minimize them.
Figure 5: Our method compared to the one in Zhou et al., (2022): the ratio $\mathcal{R}_{ZW,2}(\psi_j)$ of the $L^2$-norm errors of the eigenfunction estimates. The same simulation setup as in Figure 4.
B
The objective of our work was, therefore, to propose a new location-scale joint model accounting for both time-dependent individual variability of a marker and competing events. To do this, we extended the models proposed by Gao et al. (Gao et al., 2011) and Barrett et al. (Barrett et al., 2019) to include time-dependent variability, competing events, a more flexible dependence structure between the events and the marker trajectory, and more flexible baseline risk functions. In contrast to the previous works, we propose a frequentist estimation approach, which is implemented in the R package FlexVarJM.
The analysis of the PROGRESS trial has shown that a high variability of blood pressure is associated with a high risk of CVD and death from other causes. Moreover, the individual residual variability depends on the treatment group. These results are difficult to generalise to the entire population, as the study population considered in this clinical trial is for the secondary prevention of stroke.
This paper is organized as follows. Section 2 describes the model and the estimation procedure using a robust algorithm for maximizing the likelihood. Section 3 presents a simulation study to assess the estimation procedure performance. In section 4, the model is applied to the data from the Perindopril Protection Against Stroke Study (PROGRESS) clinical trial, a blood-pressure lowering trial for the secondary prevention of stroke (Mac Mahon et al., 2001). Finally, Section 5 concludes this work with some elements of discussion.
In order to evaluate the performance of the estimation procedure, we performed a simulation study using a design similar to the application data.
In this work, we have proposed a new joint model with a subject-specific time-dependent variance that extends the models proposed by Gao et al. (Gao et al., 2011) and Barrett et al. (Barrett et al., 2019). Indeed, this new model allows a time- and covariate-dependent individual variance and a flexible dependence structure between the competing events and the longitudinal marker. In particular, the risk of events may depend on both the current value and the current slope of the marker, in addition to the subject-specific time-dependent standard deviation of the residual error. This is an important asset of the model given that, in most health research contexts, it is more sensible to assume that the event risk depends on the time-dependent current value or slope of the marker instead of only time-independent random effects. Moreover, accounting for competing events may be important in many clinical applications. The simulation study allowed us to demonstrate the good performance of the estimation procedure and to study the impact of the choice of S1 and S2. The model converged without bias and with good coverage rates, whatever the number of individuals and the number of visits. Moreover, the estimates of the time-to-event sub-model are quite robust to a misspecification of the marker trajectory. In addition, we provide an R package that allows frequentist estimation with a robust estimation algorithm, which has shown very good behaviour in our simulations and in a previous work with different models (Philipps et al., 2021).
B
$= \frac{1}{\ln 2}\left[\frac{1}{2}\ln\!\left((2\pi e)^{N}\det(\Sigma)\right) + \sum_{i=1}^{N}\langle\ln(x_{i})\rangle\right].$
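This appears to be the differential entropy, in bits, of a lognormally distributed vector: the Gaussian entropy of the log-variables plus the mean of the logs, divided by $\ln 2$. A direct NumPy transcription of the formula (our illustration; `Sigma` and `mean_log_x` are hypothetical inputs) is:

```python
import numpy as np

def lognormal_entropy_bits(Sigma, mean_log_x):
    """Differential entropy (bits) from the covariance Sigma of the log-variables
    and the vector of means <ln(x_i)>, following the formula above."""
    Sigma = np.asarray(Sigma, float)
    N = Sigma.shape[0]
    sign, logdet = np.linalg.slogdet(Sigma)  # numerically stable log-determinant
    gaussian_part = 0.5 * (N * np.log(2 * np.pi * np.e) + logdet)
    return (gaussian_part + np.sum(mean_log_x)) / np.log(2)

# Example with a hypothetical 3-dimensional covariance of the log-variables
Sigma = np.array([[1.0, 0.2, 0.0], [0.2, 1.0, 0.1], [0.0, 0.1, 1.0]])
print(lognormal_entropy_bits(Sigma, mean_log_x=[0.5, 0.3, 0.1]))
```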
Our major assumption in this section is that both firing rates $\vec{f}$ and
where $Cov_{XY}$ is the $n\times k$ covariance matrix between variables
Now, we partition our dimensionality $N$ into two parts, $N=n+k$,
The covariance $Cov_{XY}$ is an $n\times k$ sparse matrix taken with one nonzero element
C
We have presented a data-driven model for RANS simulations that quantifies and propagates in its predictions an often neglected source of uncertainty, namely the aleatoric, model uncertainty in the closure equations. We have combined this with a parametric closure model which employs a set of tensor basis functions that depend on the invariants of the rate of strain and rotation tensors. A fully Bayesian formulation is advocated which makes use of a sparsity-inducing prior in order to identify the regions in the problem domain where the parametric closure is insufficient and in order to quantify the stochastic correction to the Reynolds stress tensor.
In order to address these limitations, we advocate incorporating the RANS model in the training process. This enables one to use indirect data (e.g., mean velocities and pressure) obtained from higher-fidelity simulations or experiments as well as direct data (i.e. RS tensor observables) if this is available. In the subsequent discussions, we will refer to such a training strategy as "model-consistent learning" [7]. It necessitates the solution of a high-dimensional inverse problem that minimizes a discrepancy measure between the RANS solver’s output (mean velocities and pressure) and the observables (e.g. mean fields from LES/DNS). As pointed out in [7], model-consistent training or simulation-based Inference [26] benefits from the differentiability of the solver, as it provides derivatives of the outputs with respect to the tunable parameters that can significantly expedite the learning/inference process.
This is in contrast to the majority of efforts in data-driven RANS closure modeling [22, 15, 65, 17], which employ direct RS data. In the ensuing numerical illustrations, the data is obtained from higher-fidelity computational simulations, but one could readily make use of actual, experimental observations.
The indirect data, i.e. velocities/pressures as in Equation (22), could be complemented with direct RS data at certain locations of the problem domain. This could be beneficial in improving the model's predictive accuracy and generalization capabilities.
We have demonstrated how the model can be trained using sparse, indirect data, namely mean velocities/pressures in contrast to the majority of pertinent efforts that require direct, RS data. While the training data in our illustrations arose from a higher-fidelity model, one can readily envision using experimental observations as well.
D
Table 4: Average of the number of intervals which contain at least one change point location (no. genuine), the proportion of intervals returned which contain at least one change point location (prop. genuine), the average length of intervals returned (length), and whether all intervals returned contain at least one change point location (coverage), on the piecewise linear waves signal contaminated with noise types N1-N4 over 100 replications. The noise level was set to $\sigma=5$ for all noise types. We also report whether each method is theoretically guaranteed to provide correct coverage.
Average of the number of intervals which contain at least one change point location (no. genuine), the proportion of intervals returned which contain at least one change point location (prop. genuine), the average length of intervals returned (length), and whether all intervals returned contain at least one change point location (coverage), on the piecewise constant blocks signal contaminated with noise types N1-N4 over 500 replications. The noise level was set to $\sigma=10$ for noise types N1-2 and to $\sigma=5$ for noise types N3-4. We also report whether each method is theoretically guaranteed to provide correct coverage.
Table 4: Average of the number of intervals which contain at least one change point location (no. genuine), the proportion of intervals returned which contain at least one change point location (prop. genuine), the average length of intervals returned (length), and whether all intervals returned contain at least one change point location (coverage), on the piecewise linear waves signal contaminated with noise types N1-N4 over 100 replications. The noise level was set to $\sigma=5$ for all noise types. We also report whether each method is theoretically guaranteed to provide correct coverage.
Next we investigate the performance of our method and its competitors on test signals containing change points. To investigate performance we apply each method to 500 sample paths from the change point models M1, M2, and M3 listed below, contaminated with each of the four noise types introduced in Section 4.2 above. On each iteration we record for each method: the number of intervals which contain at least one change point location (no. genuine), the proportion of intervals returned which contain at least one change point location (prop. genuine), the average length of intervals returned (length), and whether all intervals returned contain at least one change point location (coverage). We report the average of these quantities, and again highlight whether each method comes with theoretical coverage guarantees for each noise type (guarantee).
Table 5: Average of the number of intervals which contain at least one change point location (no. genuine), the proportion of intervals returned which contain at least one change point location (prop. genuine), the average length of intervals returned (length), and whether all intervals returned contain at least one change point location (coverage), on the piecewise quadratic hills signal contaminated with noise types N1-N4 over 100 replications. The noise level was set to $\sigma=1$ for all noise types. We also report whether each method is theoretically guaranteed to provide correct coverage.
D
We first re-examine classical reinforcement learning problems, formulated with the bottleneck objectives as introduced in Section III-D1. In many classical optimal control and reinforcement learning applications, the agent’s success is largely based on its ability to avoid failure or defeat. This is particularly the case when the MDPs lack significant intermediate milestones or checkpoints, such as the CartPole problem and the Atari game Breakout. Instead of regarding such tasks as collecting as many rewards as possible, the agent can interpret the tasks with the equally valid strategy of avoiding the worst outcome (corresponding to the lowest reward) as much as possible.
To solve the CartPole task with the Q-Min algorithm, when the pole falls outside of the pre-defined angle range ($\pm 12^{\circ}$ from the upright position), we assign a negative reward of -1 to the agent. To encourage the agent to postpone negative reward occurrence, we use a discount factor $\gamma=0.95$ in Eq. 49. For learning with the Q-Sum algorithm, we follow the conventional incremental rewarding scheme that has long been used in this task.
To solve Atari with the proposed Q-Min algorithm, we utilize a simple reward scheme under Q-Min: we assign a negative reward of -1 to the agent each time it fails to catch the ball with the paddle, and set $\gamma=0.98$ in Eq. 49 to encourage the agent to postpone such failure events. For learning with the Q-Sum algorithm, we follow the conventional incremental rewarding scheme originally built into the Atari game engine.
To formulate the task with the bottleneck objective for such classical tasks, we assign a negative reward to the agent when an undesired or failure event occurs after executing a certain action. For the other actions that do not directly lead to the failure events, we simply assign a zero intermediate reward. In the CartPole task, the agent aims to control the cart to vertically balance the pole. When the pole falls outside a pre-defined angle range, a negative reward is assigned to the agent. Similarly, for the Atari game Breakout, the agent controls the movement of a paddle to catch and reflect a bouncing ball upwards to destroy layers of bricks located above. Each time the agent fails to catch the falling ball with the paddle, it is assigned a negative reward. With the discount factor $\gamma$ applied on rewards over time steps, the later the negative rewards occur, the higher the bottleneck objective is.
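We do not reproduce Eq. 49 here, but a plausible tabular sketch of a bottleneck-style ("min" instead of "sum") Q-update consistent with this description is the following; the exact update rule, reward values, and hyperparameters are our assumptions for illustration only.

```python
import numpy as np

def q_min_update(Q, s, a, r, s_next, gamma=0.95, lr=0.1):
    """One tabular update for a bottleneck ('Q-Min') objective.

    Instead of the usual sum r + gamma * max_a' Q(s', a'), the target is the
    worst (minimum) of the immediate reward and the discounted best future
    bottleneck value, so later failures (r = -1) hurt less than earlier ones.
    """
    target = min(r, gamma * np.max(Q[s_next]))  # bottleneck backup (assumed form)
    Q[s, a] += lr * (target - Q[s, a])
    return Q

# Tiny usage example with a hypothetical 5-state, 2-action table
Q = np.zeros((5, 2))
Q = q_min_update(Q, s=0, a=1, r=0.0, s_next=1)
```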
Conventionally, both tasks are formulated with the cumulative objective, each with an incremental rewarding scheme. In the CartPole task, a positive reward is assigned to the agent for every timestep it maintains the pole in the upright position; while in Atari, a positive reward is assigned each time the agent breaks a brick with the bouncing ball.
D
Furthermore, one could also extend the proposed method to a continuous $y$, for instance between 0 and 1, describing the severity of the disease. Indeed, practitioners could define a function $\sigma_p(y)$ that would map the severity score $y$ to a salient prior standard deviation (e.g., $\sigma_p(y)=y$). In this way, we could extend our framework to the case where pathological variations would follow a continuum from no (or mild) to severe patterns.
For background samples ($y=0$), the salient space is set to an informationless value $s'=0$.
In the salient prior regularization, as in previous works, we encourage background and target salient factors to match two different Gaussian distributions, both centered at 0 (we assume $s'=0$) but with different covariance.
, as in (Zou et al., 2022), that $p(s\mid x, y=0)\sim\mathcal{N}(s', \sqrt{\sigma_p}\, I)$, with $s'=0$ and $\sqrt{\sigma_p}<1$, namely a Gaussian distribution centered on an informationless reference $s'$ with a small constant variance $\sigma_p$.
$\mathcal{L}_{\mathrm{rec}} = \sum_{i=1}^{N} \left\|x - d_{\theta}\!\left([c,\, y s + (1-y) s']\right)\right\|_{2}^{2}$. Importantly, for background samples, we set the salient latent vectors to $s'=0$. This choice enables isolating the background factors of variability in the common space only.
B
This expression of the ML estimator is relatively well known; see e.g. Section 4.2.2 in Xu and Stein, (2017) or Proposition 7.5 in Karvonen and Oates, (2023).
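For concreteness, for a zero-mean Gaussian process with covariance $\sigma^2 k$ observed at $N$ points, the ML estimator referred to here takes the closed form $\hat{\sigma}^2_{\mathrm{ML}} = \mathbf{y}^{\top} K^{-1}\mathbf{y}/N$ with $K_{ij}=k(x_i,x_j)$. A minimal sketch (ours, with an assumed squared-exponential kernel and synthetic data) is:

```python
import numpy as np

def sigma2_ml(x, y, kernel):
    """Closed-form ML estimate of the scale sigma^2 for a zero-mean GP
    with covariance sigma^2 * kernel(x_i, x_j)."""
    K = kernel(x[:, None], x[None, :])
    alpha = np.linalg.solve(K + 1e-10 * np.eye(len(x)), y)  # small jitter for numerical stability
    return float(y @ alpha) / len(x)

# Example with an assumed squared-exponential kernel
rng = np.random.default_rng(0)
se_kernel = lambda a, b: np.exp(-0.5 * (a - b) ** 2)
x = np.linspace(0, 1, 50)
K = se_kernel(x[:, None], x[None, :])
y = rng.multivariate_normal(np.zeros(50), 2.5 * K)  # true sigma^2 = 2.5
print(sigma2_ml(x, y, se_kernel))
```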
On the other hand, the CV estimator $\hat{\sigma}_{\mathrm{CV}}^{2}$
$\sigma^{2}>0$, where $k$ is a fixed kernel, and study the estimation of $\sigma^{2}$ using the CV and ML estimators, denoted as $\hat{\sigma}_{\mathrm{CV}}^{2}$ and $\hat{\sigma}_{\mathrm{ML}}^{2}$, respectively. In this case, both $\hat{\sigma}_{\mathrm{ML}}^{2}$
$=\mathcal{O}(N^{1-2\alpha})\to 0$
$\sum_{n=0}^{N-1}\left[f(x_{N,n+1})-f(x_{N,n})\right]^{2}\leq N L^{2}\max_{n}\left(\Delta x_{N,n}\right)^{2\alpha}$
A
When the true incident count $n$ is large, the terms $n^{-1/2+\epsilon/p}$ and $e^{-n^{2\epsilon}/3p}$ both go to zero. The corollary shows that the random variable $\log y$ will concentrate tightly around $\log n + \log p$. With this property established, we are able to derive a recovery guarantee for our proposed optimization problem.
The theorem, denoted as (3), offers a recovery bound for the optimization problem expressed in (7). This recovery bound is a key measure, as it reflects the potential effectiveness and accuracy of our proposed solution. It serves as a performance metric for how closely the solution we obtain aligns with the true optimal solution.
In the following section, we provide more insights and theoretical guarantees on the optimization problem we formulate and on GRAUD.
In this section, we present theoretical results concerning the uniqueness and the accuracy of the solution to our proposed optimization problem (7).
In this paper, we proposed a novel graph prediction method for debiasing under-count data. The idea is to utilize the intrinsic graph structure of the problem and thus overcome the identifiability issue. We reformulate the problem as a constrained convex optimization problem and establish the connection between the binomial n problem and the graph signal separation problem. We provide an alternating minimization optimization algorithm for efficiently recovering the true count. Recovery bounds and convergence results are also established for our proposed method. We approach the binomial n problem from a novel perspective, contrasting with traditional methods, and view it through the lens of signal processing. Both the synthetic data experiment and the real data experiment demonstrate the accuracy and efficiency of our proposed method.
C
$\mathcal{D}_{\psi_t}(\bm{p},\bm{q}) = \psi_t(\bm{p}) - \psi_t(\bm{q}) - \langle\nabla\psi_t(\bm{q}),\, \bm{p}-\bm{q}\rangle$ is the induced Bregman divergence for any $\bm{p},\bm{q}\in\Delta_d$, $\bm{m}_t$ is the optimism, $\bm{\ell}_t$ is the loss vector and $\bm{a}_t$ is a bias term.
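As a quick illustration of this definition (ours, not the paper's code): with the negative-entropy regularizer $\psi(\bm{p})=\sum_i p_i\ln p_i$ on the simplex, the induced Bregman divergence reduces to the KL divergence, which the snippet below verifies numerically.

```python
import numpy as np

def bregman_divergence(psi, grad_psi, p, q):
    """D_psi(p, q) = psi(p) - psi(q) - <grad psi(q), p - q>."""
    return psi(p) - psi(q) - np.dot(grad_psi(q), p - q)

neg_entropy = lambda p: np.sum(p * np.log(p))
grad_neg_entropy = lambda p: np.log(p) + 1.0

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])
print(bregman_divergence(neg_entropy, grad_neg_entropy, p, q))  # equals KL(p || q)
print(np.sum(p * np.log(p / q)))                                # KL divergence for comparison
```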
Technical Contributions.   Our first contribution is proposing a multi-layer online ensemble approach with effective collaboration among layers, which is achieved by a carefully designed optimism to unify different kinds of functions and cascaded correction terms to improve the algorithmic stability within the multi-layer structure. The second contribution arises from efficiency. Although there are multiple layers, our algorithm only requires one gradient query per round, which is achieved by a novel regret decomposition equipped with carefully designed surrogate losses. Two interesting byproducts arise in our approach. The first is the negative stability term in the analysis of MsMwC (Chen et al., 2021), which serves as an important building block of our algorithm. The second is a simple approach and analysis for the optimal worst-case universal guarantees, using one gradient query within each round.
At the end of this part, we explain why we choose MsMwC as the meta algorithm. A natural first attempt is to keep using Adapt-ML-Prod, following Zhang et al. (2022a). However, it is still an open problem to determine whether Adapt-ML-Prod enjoys negative stability terms in the analysis, which is essential to realize effective cancellation in our problem. Another option is to explore the tilted exponentially weighted average (TEWA) as the meta algorithm, following another line of research (van Erven and Koolen, 2016), as introduced in Section 1.1. Unfortunately, its stability property is also unclear. Investigating the negative stability terms in these algorithms is an important open problem, but beyond the scope of this work.
It is worth noting that MsMwC is based on OMD, which is well studied and known to enjoy negative stability terms in its analysis. However, the original analysis omitted these terms, which turn out to be crucial for our purpose. In Lemma 2 below, we extend Lemma 1 of Chen et al. (2021) by explicitly exhibiting the negative terms in MsMwC. The proof is deferred to Appendix B.2.
In this paper, we obtain universal gradient-variation guarantees via a multi-layer online ensemble approach. We first propose a novel optimism design to unify various kinds of functions. We then analyze the negative terms of the meta algorithm MsMwC and inject cascaded correction terms to improve algorithmic stability, realizing effective cancellations in the multi-layer structure. Furthermore, we provide a novel regret decomposition combined with carefully designed surrogate functions to achieve one gradient query per round. Finally, we deploy our approach in two applications, the stochastically extended adversarial (SEA) model and two-player zero-sum games, to validate its effectiveness, and obtain the best known universal guarantees therein. Due to page limits, the applications are deferred to Appendix A. Two byproducts arise from our work: negative stability terms in the analysis of MsMwC, and a simple approach and analysis for the optimal worst-case universal guarantees using one gradient per round.
C
Publicly available datasets such as those from NASA [27, 28], CALCE [29, 30], and Sandia National Lab [31] contain cells of different chemistries cycled under a range of charge rates, discharge rates, and temperatures. These datasets are frequently used in research studies since they comprehensively report capacity, internal resistance (NASA and CALCE), voltage, current, and temperature. However, the relatively small size of these datasets (roughly 30 cells per group) makes investigating machine learning-based approaches to early life prediction challenging. On the other hand, datasets such as those from the Toyota Research Institute [5, 6] and Argonne National Lab [22] contain many more cells (> 150 cells). However, they focus on a limited range of operating conditions—fast charging and symmetric C/2 cycling, respectively—making it difficult to build machine learning models that generalize across cycling conditions.
Despite this growing body of research, many fundamental questions about battery life modeling remain unanswered. One fundamental issue is that, in order to train machine learning models to predict lifetime from early-life cycles, data from the entire lifetime is required. Therefore these approaches are best suited to applications such as screening cells after manufacturing, or relative comparisons, rather than quantitatively absolute predictions. A second issue is a lack of publicly available battery-lifetime data that covers a wide range of conditions. The dataset published in [5, 6] was specifically generated to study high-rate fast charging protocols for LFP cells, leaving the discharge rate and depth of discharge fixed. Even though the dataset is relatively large compared to existing publicly available datasets ($N=169$ cells), the limited range of operating conditions, in this case, induced a single dominant degradation mode (loss of active material at the anode or negative electrode, “$\mathrm{LAM}_{\mathrm{NE}}$”), causing all of the capacity degradation trajectories to have very similar shapes, and perhaps making lifetime prediction easier [23]. While the relationships between cell operating conditions and the corresponding degradation modes are well understood [1, 3, 24, 25], it remains unclear how the $\Delta Q(V)$ feature transfers to cells of different chemistries and to situations where multiple interacting degradation modes are present. This is especially the case for cells that experience milder degradation resulting in less obvious changes in the $Q(V)$ curve. Furthermore, all cells in the dataset from [5, 6] were cycled under a fixed depth of discharge, making it easy to extract features from any cycle along the cell’s degradation trajectory. However, in practice, cells are rarely subjected to full depth-of-discharge cycles, so there is a need to explore alternative methods of collecting early-life feature data and validating results using periodic reference performance tests or other means.
In this work, we investigate new early-life features derived from capacity-voltage data that can be used to predict the lifetimes of cells cycled under a wide range of charge rates, discharge rates, and depths of discharge. To study this, we generated a new battery aging dataset from 225 nickel-manganese-cobalt/graphite cells, cycled in groups of four per condition, under a much wider range of operating conditions than existing publicly available datasets [26]. The cells in our dataset exhibit larger variations in their capacity degradation trajectories than previous open datasets, driven by the interactions and accumulations of various component-level degradation mechanisms [1, 23]. To predict the lifetimes of cells experiencing different degradation pathways accurately, we introduce new early-life features extracted from the differential voltage ($dV/dQ$ vs. $Q$) and incremental capacity ($dQ/dV$ vs. $V$) data gathered during regular weekly reference performance tests (RPTs). The RPTs, two complete cycles at full depth of discharge, enable consistent feature extraction and lifetime prediction for cells that normally cycle at fractional depths of discharge, some as low as 4.0%. Using as little as the first 5% of the aging data, we achieve a prediction error of 22% $\mathrm{MAPE}$ on the lifetime. Including up to 15% of the entire cell lifetime data, we achieve an average prediction error of 2.8 weeks $\mathrm{RMSE}$ and 15.1% $\mathrm{MAPE}$ on in-distribution test sets when testing the new features in traditional machine learning models built with regularized linear regression. Given that our dataset has a hierarchical structure (i.e., the ‘group’ level and the ‘cell’ level) in nature, we also explore the possibility of applying hierarchical Bayesian linear modeling to predict lifetime, which achieves better extrapolation performance on out-of-distribution samples, viz. 7.3 weeks $\mathrm{RMSE}$ and 21.8% $\mathrm{MAPE}$ lifetime prediction error.
Dataset partitioning was done at the group rather than the cell level, for three reasons. First, practical battery aging tests for product validation typically cycle multiple cells under the same conditions to capture the aging variability due to manufacturing. Second, it is desirable to build an early prediction model to predict the lifetimes of cells cycled under previously untested conditions. Finally, although building an early prediction model with cells tested under rapidly accelerated aging conditions is useful in minimizing the time and costs of collecting aging data, one cannot preemptively know the lifetime (before tests), so grouping must be done using an alternative indicator of cell lifetime. Since the depth of discharge is the dominant cycling stress factor impacting the battery lifetimes in our aging dataset (Fig. 4a), this was used to determine the dataset partitioning.
In light of this, we designed our battery aging dataset to study more cells under a broader range of operating conditions than current publicly available datasets [26]. Our dataset comprises 225 cells cycled in groups of four to capture some of the intrinsic cell-to-cell aging variability [32]. A unique feature of our dataset is the many capacity degradation trajectories that reflect different accumulated degradation modes induced by the various operating conditions. These trajectories, shown in Fig. 2, exhibit different one-, two-, and three-stage degradation trends driven by the interaction and accumulation of hidden, threshold, and snowballing degradation modes [23]. These varying trends produce cell lifetimes from 1.5 to 60.9 weeks. Experimental details and testing procedures used to generate the dataset can be found in Sec. 3.3 and Supplementary Information.
D
The data used to compare methods are available from the Zenodo repository (https://doi.org/10.5281/zenodo.5048449) as compiled by Squair and coauthors [58]. Reversion scRT-qPCR data are available in the SRA repository number SRP076011, and fully described in the original publication [65]. Single-cell chIP-Seq data can be found on GEO with the accession number GSE164385 [38].
Details on the experiment and on the data can be found in the original paper [65]. The kernel-based testing framework was performed on the $\log(x+1)$ normalized RT-qPCR data and on the Pearson residuals of the 2000 most variable genes of the scRNA-Seq data obtained through the R package sctransform [21]. For both datasets, we corrected for the batch effect in the feature space. The gene clusters were computed on the data after correcting for the batch effect in the input space. The truncation parameter of the global comparisons ($T=10$ for both technologies) was chosen to be large enough for the discriminant analysis to capture enough of the multivariate information and to maximize the discriminant ratio. The truncation parameter retained for univariate testing ($T=4$) was chosen according to the simulation study.
Simulations are required to compare the empirical performance of DE methods on controlled designs, to check their type-I error control and compare their power on targeted alternatives. We challenged our kernel-based test with six standard DEA methods (Table S.1) on mixtures of zero-inflated negative binomial data reproducing the DE, DM, DP and DB alternatives [15] (as detailed in Material and Methods). Kernel testing was performed on the raw data using the Gauss and ZI-Gauss kernels, but we also considered the linear kernel (scalar product) to illustrate the interest of a non-linear method. The type-I errors of the kernel test are controlled at the nominal level $\alpha=5\%$ and the performance increases with $n$ (the asymptotic regime of the test is reached for $n\geq 100$). The Gauss-kernel test is the best method for detecting the DB alternative, considered the most difficult to detect, and it outperforms every other method in terms of global power except SigEMD. This gain in power can be explained by the non-linear nature of our method: despite the equality of means, the kernel-based transform of the data onto the discriminant axis allows a clear separation between distributions (Fig. 1). This is well illustrated by the global lack of power of the test based on the linear kernel (especially on the DB alternative). The Gaussian kernel shows its worst performance on the DP alternative, which is the only alternative for which all the values are covered by both conditions with different proportions. It shows that our method is particularly sensitive to alternatives where some values are occupied by one condition only (Fig. 2). Note that the ZI-Gauss kernel did not improve the global performance, which indicates that the Gaussian kernel-based test is robust to zero inflation. This could also be due to the equality of the zero-inflation proportions between conditions. Finally, results on log-normalized data are similar. We also checked that the median heuristic was a reasonable choice for the bandwidth parameter (Fig S.7), as it established a good type-I/power trade-off. Note that when the bandwidth of the Gaussian kernel increases, the truncation parameter should be calibrated accordingly to reach the same type-I/power performance.
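For readers unfamiliar with the ingredients of such kernel tests, the sketch below computes a Gaussian kernel matrix with a median-heuristic bandwidth and a simple (biased) MMD-type two-sample statistic. It is an illustrative stand-in rather than the truncated kernel discriminant test used here, and the function names and the particular median-heuristic convention are our own assumptions.

```python
import numpy as np

def gaussian_kernel(X, Y=None, bandwidth=None):
    """Gaussian kernel matrix; bandwidth defaults to a median heuristic."""
    Y = X if Y is None else Y
    sq = np.maximum(
        np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T, 0.0
    )
    if bandwidth is None:
        bandwidth = np.sqrt(np.median(sq[sq > 0]))  # median pairwise distance
    return np.exp(-sq / (2 * bandwidth**2))

def mmd2(X, Y):
    """Biased squared MMD between two samples (a basic kernel two-sample statistic)."""
    Z = np.vstack([X, Y])
    K = gaussian_kernel(Z)
    n = len(X)
    Kxx, Kyy, Kxy = K[:n, :n], K[n:, n:], K[:n, n:]
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()
```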
In the simulations, the ZI-Gauss kernel was computed using the parameters of the Binomial distributions used to determine the drop-out rates of the simulated data (drawn uniformly in $[0.7,0.9]$), the variance parameter $\sigma$ was set as the median distance between the non-zero observations and the Gaussian means $\mu$ were set as the observed values.
The research was supported by a grant from the Agence Nationale de la Recherche ANR-18-CE45-0023 SingleStatOmics, by the projects AI4scMed, France 2030 ANR-22-PESN-0002, and SIRIC ILIAD (INCA-DGOS-INSERM-12558).
D
We hypothesise that the first issue is due to the use of the max aggregator function, which backpropagates gradients only along the largest of the similar values, making it harder for the learning process to identify whether it made a suboptimal choice. We propose to use softmax instead of max as aggregator, allowing gradients to flow through all paths proportional to their magnitude. We empirically validate this choice at scale on the CLRS-30 benchmark.
We hypothesise that the first issue is due to the use of the max aggregator function, which backpropagates gradients only along the largest of the similar values, making it harder for the learning process to identify whether it made a suboptimal choice. We propose to use softmax instead of max as aggregator, allowing gradients to flow through all paths proportional to their magnitude. We empirically validate this choice at scale on the CLRS-30 benchmark.
The second issue for the Bellman-Ford algorithm happens when accumulating distances between nodes: depending on the graph connectivity, the distribution of distances and the embeddings in latent space can change drastically. We propose a simple fix – decaying the magnitude of the embedding by a fixed rate at every execution step. This allows the embeddings to consistently stay in a similar range. Decay, coupled with the choice of softmax aggregator, provides improvements across many algorithms in the CLRS-30 benchmark.
The second weakness is that the model tends to struggle when encountering out-of-distribution values during algorithm execution. We propose that the GNN should decay the magnitude of its representations at each step, allowing slightly out-of-range values to fall back within range during the execution of the algorithm. We show that these changes bring improvements to state-of-the-art models on the majority of algorithms from the commonly used CLRS-30 benchmark [velickovic_clrs_2022, ibarz_generalist_2022].
Since the model struggles with large out-of-distribution values, we propose using a decay-like regularisation where, at every message passing step, we scale the embeddings by a constant $c<1$. We show in the following section that this provides improvements not only on the Bellman-Ford algorithm, but on other algorithms in the CLRS-30 benchmark [velickovic_clrs_2022] as well.
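A minimal numpy sketch of the two modifications, softmax aggregation instead of max and a per-step decay of the embeddings, is given below. The helper names, the temperature, the decay constant, and the toy neighbour-list representation are our own illustrative assumptions rather than the benchmark implementation.

```python
import numpy as np

def softmax_aggregate(messages, tau=1.0):
    """Softmax-weighted aggregation over incoming messages (axis 0): a smooth
    replacement for max that lets gradients flow through all paths."""
    z = messages / tau
    w = np.exp(z - z.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)
    return (w * messages).sum(axis=0)

def message_passing_step(h, neighbors, c=0.9):
    """One processor step: softmax-aggregate neighbour embeddings, then decay
    the result by a constant c < 1 so magnitudes stay in a similar range."""
    h_new = np.stack([
        softmax_aggregate(h[list(nbrs)]) if nbrs else h[i]
        for i, nbrs in enumerate(neighbors)
    ])
    return c * h_new

# toy usage: 4 nodes, 8-dimensional embeddings, a small neighbour list
h = np.random.default_rng(0).normal(size=(4, 8))
neighbors = [[1, 2], [0], [0, 3], [2]]
h = message_passing_step(h, neighbors)
```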
B
This could be due to the Chinese New Year holidays and the imposition of a restriction on large trucks in that particular area of the southern region.
Figure 3 provides a zoomed-in view on the last seven days for the same three time series (hence, from 2021-04-25 23:00:00 to 2021-04-30 22:00:00).
Figure 2 displays three examples of Taiwan highway hourly traffic time series corresponding to different vehicle types in different regions, stations, and traffic directions (for four months, from 2021-01-10 23:00:00 to 2021-04-30 22:00:00).
Examples of Taiwanese highway hourly time series, zoomed in view on the last seven days of the time series shown in Figure 2 (from 2021-04-23 23:00:00 to 2021-04-30 22:00:00).
Examples of Taiwanese highway hourly time series (from 2021-01-10 23:00:00 to 2021-04-30 22:00:00) in three regions for different stations, traffic directions, and vehicle types.
D
Output: Estimator $\hat{\beta}$ for $\beta$, an estimator $\hat{V}$ of its asymptotic variance and a $1-\alpha$ level confidence interval $\hat{C}(\alpha)$ for $\beta$.
Algorithm 1 introduced a generic approach for incorporating weight functions learnt from the data into an estimator for $\beta$ via approximately minimising the sandwich loss over some class of functions $\mathcal{W}$. We now introduce an approach for performing this approximate minimisation over a class $\mathcal{W}$ defined implicitly through a user-chosen regression method.
Given a class $\mathcal{W}$ of functions $W$, we then propose to find an (approximate) minimiser $\hat{W}$ of $\hat{L}_{\mathrm{SL}}$ over $\mathcal{W}$ (see Section 3.2 for our sandwich boosting approach for carrying this out).
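To fix ideas, the sketch below evaluates a sandwich estimate of the variance of a scalar weighted least-squares slope as a function of the observation weights, which is the type of objective the sandwich loss tracks. The residualisation, the function names, and the toy heteroscedastic data are our own assumptions; the sandwich boosting procedure of Section 3.2 minimises such an objective over a flexible class.

```python
import numpy as np

def sandwich_loss(w, x, resid):
    """Sandwich variance estimate for a weighted least-squares slope.

    w     : observation weights
    x     : residualised covariate (X minus its regression on Z)
    resid : residualised outcome minus x * beta_hat (estimated errors)
    """
    bread = np.sum(w * x**2)
    meat = np.sum((w * x * resid) ** 2)
    return meat / bread**2

# toy comparison: unit weights versus (oracle) inverse-variance weights
rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
sigma2 = 0.5 + x**2                          # heteroscedastic error variance
resid = rng.normal(size=n) * np.sqrt(sigma2)
print(sandwich_loss(np.ones(n), x, resid))   # larger estimated variance
print(sandwich_loss(1.0 / sigma2, x, resid)) # smaller estimated variance
```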
In this work we have highlighted and clarified the shortcomings of some popular classical methods in the estimation of weights for weighted least squares-type estimators in partially linear models when the conditional covariance is misspecified. We instead advocate for choosing weights to minimise a sandwich estimate of the variance, what we call the sandwich loss in this context. A main contribution of ours, in the spirit of the trend towards using machine learning methods for the purposes of statistical inference, is a practical gradient boosting scheme for approximately minimising this loss over a potentially flexible family of functions defined implicitly through a user-chosen base-learner. Despite the unusual form of our loss that does not decompose as a sum over data points as with the standard case of the empirical risk, we show that for certain versions of our algorithm, the boosting updates can be performed in linear time.
over some class of functions $\mathcal{W}$ corresponding to a working covariance structure (e.g. using sandwich boosting, see Section 3.2).
A
This work highlights the importance of using a properly constrained null model when extracting the backbone of bipartite projections, and identifies several avenues for future research. First, while $\mathbf{Q}$ under the SDSM can be estimated quickly and precisely using the BiCM [14, 15], $\mathbf{Q}$ under the SDSM-EC must be estimated using logistic regression, which is slower and less precise [11]. Future work should investigate improved methods for estimating $\mathbf{Q}$, which has the potential to benefit not only the SDSM-EC, but all variants of the SDSM. Second, while a broad class of bipartite null models exists [16] and now includes edge constraints, future work should investigate the importance and feasibility of incorporating other types of constraints.
Data availability statement. The data and code necessary to reproduce the results reported above are available at https://osf.io/7z4gu.
Many null models exist for extracting the backbone of bipartite networks, with each model specifying different constraints on the random networks against which an observed network is compared. However, none of the existing models permit constraints on specific edges. In this paper, we extend the fastest and most robust existing backbone model – the stochastic degree sequence model (SDSM) [11] – to accommodate one type of edge constraint: prohibited edges. Prohibited edges are edges that in principle cannot occur in the network, and can arise in many contexts. For example, in a bipartite author-paper network, an author cannot write a paper before their birth, and in a bipartite executive-board network, anti-trust laws prevent executives from serving on the boards of competitors. We illustrate the new stochastic degree sequence model with edge constraints (SDSM-EC) first in toy data, then in empirical data recording young children's membership in play groups.
Figure 3 illustrates two backbones extracted from these data, using shape to represent classroom (circles = 3-year-olds, squares = 4-year-olds) and color to represent attendance status (black = full day, gray = AM only, white = PM only). Figure 3A was extracted using the SDSM and therefore does not consider these edge constraints, while Figure 3B was extracted using the SDSM-EC and does consider these edge constraints. There are some similarities between the SDSM and SDSM-EC backbones that reflect characteristics of the setting: 3-year-olds (circles) are never connected to 4-year-olds (squares), and AM children (gray) are never connected to PM children (white), because it was not possible to observe such children together. However, there are also differences that highlight the impact of incorporating edge constraints using the SDSM-EC. The SDSM-EC backbone contains many fewer edges ($E=85$) than the SDSM backbone ($E=153$). This occurs for similar reasons to the loss of edges in the toy example above, although it is less extreme.
These data were collected in Spring 2013 by observing the behaviors of 53 children in a preschool in the Midwestern United States [3, 6, 7, 8]. A scan observation method was employed whereby a randomly selected child was observed for a period of 10 seconds. After the 10 second period had elapsed, the trained observer coded the child's predominant behavior and, if applicable, the peers with whom they were interacting [4]. Here, we focus only on social play behaviors because they were the most common form of social behavior, and the most likely to involve direct interaction with peers. A total of 1829 social play events were observed during data collection. These data are organized as a bipartite network $\mathbf{B}$ where $B_{ik}=1$ if child $i$ was observed participating in a play group during observation $k$. A projection $\mathbf{P}=\mathbf{B}\mathbf{B}^{T}$, where $P_{ij}$ indicates the number of times children $i$ and $j$ were observed playing together, provides an indirect indicator of the children's social network, particularly when refined using backbone extraction [8].
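As a concrete illustration of the objects involved, the short numpy sketch below builds the weighted projection $\mathbf{P}=\mathbf{B}\mathbf{B}^{T}$ from a toy incidence matrix and shows how prohibited edges can be encoded by zeroing the corresponding entries of an edge-probability matrix $\mathbf{Q}$. The toy data and the placeholder estimate of $\mathbf{Q}$ are our own assumptions, not the BiCM or logistic-regression estimates used in the paper.

```python
import numpy as np

# Toy bipartite incidence matrix B: rows = children, columns = observations
B = np.array([
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 1],
])

# Weighted projection: P[i, j] counts co-occurrences of children i and j
P = B @ B.T
np.fill_diagonal(P, 0)  # self-co-occurrence is not of interest

# Edge constraints for the null model: a boolean mask of prohibited edges in
# the bipartite network (True = the edge cannot occur), e.g. an AM-only child
# cannot appear in a PM observation.
prohibited = np.zeros_like(B, dtype=bool)
prohibited[0, 3] = True  # illustrative constraint

# Under an edge-constrained null model the corresponding entries of Q (the
# edge probabilities) are fixed to zero; here Q is only a crude placeholder.
Q = np.full(B.shape, B.mean())
Q[prohibited] = 0.0
```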
A
$\mathrm{D}(\gamma,\beta)=E[(X^{T}\bm{\gamma}+(1,Z)\beta)\exp(\mathbf{i}W^{T}\cdot)]\,.$
(a) $E[Y^{2}]<\infty$; (b)
Assumption 3.1 holds and $E\|X\|^{2}<\infty$.
The following assumption ensures that $E[Y\exp(\mathbf{i}W^{T}\cdot)]\in L^{2}_{\mu}$, and that $\operatorname{A}$ and $\operatorname{B}$ are onto
$(\gamma_{0},g_{0})\in\mathbb{R}^{q}\times\mathcal{G}\mapsto E[X^{T}\gamma+g(Z)\,|\,W]$ is injective.
B
Compute the low-dimensional embedding $Y=XV_{1:k}\in\mathbb{R}^{n\times k}$.
A primary drawback of Scheme 1 is its infeasibility when the dimensionality is high - that is, when $p$ is large. Computing the empirical covariance matrix becomes impractical. In the R programming environment (R Core
When the data is centered – $\|\bar{x}\|=0$ – or centering is applied directly to the data, Scheme 2 avoids the need to create and store an empirical covariance matrix $\hat{\Sigma}$ in memory.
Suppose we are given a data matrix $X\in\mathbb{R}^{n\times p}$, consisting of $n$ observations, each represented as a $p$-dimensional vector. The derivation of PCA directly leads to a sequential algorithm that applies eigendecomposition to an (empirical) covariance matrix derived from the data. While straightforward, this method has an inherent drawback: when the data dimensionality $p$ is high, it requires an excessive amount of computational resources. A popular alternative is to apply the Singular Value Decomposition (SVD) to the data matrix after subtracting the mean location.
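For concreteness, here is a minimal numpy sketch of this SVD-based scheme; the centering step is the critical detail discussed below, and the function name and toy data are our own illustrative choices rather than the authors' implementation.

```python
import numpy as np

def pca_svd(X, k):
    """PCA via SVD of the centered data matrix, avoiding the p x p covariance.

    Returns the n x k embedding and the p x k matrix of top-k loadings.
    """
    Xc = X - X.mean(axis=0)                 # centering is essential here
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    V_k = Vt[:k].T                          # top-k right singular vectors
    return Xc @ V_k, V_k

# toy usage with n = 100 observations in p = 2000 dimensions
X = np.random.default_rng(0).normal(size=(100, 2000))
Y, V = pca_svd(X, k=5)
```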
In this paper, we revisited the classical problem of PCA from an algorithmic perspective. A common choice of implementation is to apply SVD onto the data matrix for efficient computation, which is a coveted property in an era where data is increasingly large and high-dimensional. While the method is straightforward, a critical point in the SVD-based approach is often overlooked: acquiring a basis for projection via SVD is only valid if the data matrix is centered. This detail is often bypassed by employing a folklore strategy to compute an extra base.
A
In this paper, we present a base-to-global framework to quantify the uncertainty of global FI values. We define a two-level hierarchy of importance values, namely the base and global FI values, where the global FI values are the average of independent base FI values.
In this section, we evaluate our base-to-global framework and ranking method. We use synthetic data to assess our method’s validity (simultaneous coverage) and efficiency. We analyze our ranking method by generating base FI values directly (Section 5.1). We note that feature ranking is an interpretability step at the end of an ML task, as shown in Figure 3; therefore, we simulate the entire process of training and explaining a model with simulated data (Section 5.2) and real data (Section 5.3).
Based on this framework, we propose a novel method for confidently ranking features. We define the true rank as a feature’s rank, obtained based on an infinite sample, for both a trained prediction model and an FI method. Our ranking method reports simultaneous CIs, ensuring, with high probability, that each feature’s true rank is covered by the appropriate interval. We construct the intervals by examining all pairs of features, testing hypotheses regarding differences in means, and counting the number of rejections for each feature. The examination process tackles the multiple tests problem, which might result in the false discovery of a feature as relevant. The validity of our proposed method is demonstrated in a comprehensive evaluation on both synthetic and real-world datasets. Our findings confirm our method’s effectiveness and highlight its potential in quantifying and enhancing ranking stability. Our base-to-global framework can be viewed as a generalization of the formulate, approximate, explain (FAE) [10] framework for generating and interpreting Shapley-value-based FI methods. We extend the FAE concept in two respects: first, we generalize it to other post-hoc FI methods by defining the base values in a general way; and second, we address the uncertainty in the ranking of the global FI values.
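The following sketch illustrates the pairwise-testing idea behind the rank intervals on a matrix of base FI values (rows are independent repetitions, columns are features). It is a simplified stand-in: we use Bonferroni-corrected Welch tests purely for illustration, and the function names and the rank convention (rank 1 = most important) are our own assumptions rather than the paper's exact procedure.

```python
import numpy as np
from itertools import combinations
from scipy import stats

def rank_intervals(base_fi, alpha=0.05):
    """Simultaneous rank intervals from base FI values (n_repeats x n_features).

    For each pair of features, test whether their mean importances differ;
    a feature that significantly loses to L others has rank at least L + 1,
    and one that significantly beats W others has rank at most d - W.
    """
    n, d = base_fi.shape
    n_tests = d * (d - 1) // 2
    wins = np.zeros(d, dtype=int)
    losses = np.zeros(d, dtype=int)
    for i, j in combinations(range(d), 2):
        t, p = stats.ttest_ind(base_fi[:, i], base_fi[:, j], equal_var=False)
        if p * n_tests < alpha:              # Bonferroni correction
            if t > 0:
                wins[i] += 1; losses[j] += 1
            else:
                wins[j] += 1; losses[i] += 1
    return 1 + losses, d - wins              # lower and upper rank bounds
```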
In this section, we introduce our ranking method which is designed to rank FI values while taking into account the uncertainty associated with the post-hoc FI method and the sampling process. Using our base-to-global framework, we are able to quantify the uncertainty by calculating simultaneous CIs for the true ranks.
Existing uncertainty measures are insufficient, because stakeholders often rely on the rank of the FI value, rather than the value itself, in their decisions. Feature rankings are unit-independent and are therefore easy to interpret and compare across FI methods [21, 22]. Instability in the global FI values can lead to instability in their ranking [23] (an example is provided in Figure 1). A simple ranking of the features based on the FI values cannot reflect this uncertainty. Moreover, due to the ranking’s discrete nature, existing methods for quantifying uncertainty in FI values cannot easily be modified to work for ranking uncertainty. For example, we show that confidence intervals (CIs) produced by a naive bootstrapping method based on the estimation of the ranking distribution do not cover the true ranks. The previously mentioned challenges point to the need for a framework for defining, estimating, and reporting ranking uncertainty. To properly model ranking uncertainty, we first model the uncertainty of the global FI values and then infer the effect of this uncertainty on the rankings.
B
$\propto\frac{h_{\psi}(y_{t}\mid\bm{\theta}_{t}^{(i)},x_{t}^{(i)})\,g_{\psi}(\bm{\theta}_{t}^{(i)}\mid\bm{\theta}_{t-1}^{(i)},x_{t}^{(i)})\,p_{\psi}(x_{t}^{(i)}\mid x_{t-1}^{(i)})}{q_{\psi}(\bm{\theta}_{t}^{(i)},x_{t}^{(i)}\mid\bm{\theta}_{t-1}^{(i)},x_{t-1}^{(i)},y_{t})}.$
Table 5 reports the posterior mean, median, standard deviation, and 95 % credible intervals of the parameter estimates. The posterior distributions of estimated model parameters are provided in Figure 10. For COVID-19 in BC, the estimated incubation period is 0.562 (0.384, 0.997) weeks, and the recovery period is estimated to be 2.674 (1.588, 5.948) weeks. The baseline transmission rate is estimated to be 0.649 (0.358, 1.081). Given the posterior estimates of the transmission rate and the recovery rate, the estimated mean of the basic reproductive number is 1.735, which lies in the range of 1.4 to 6.49 reported in published studies (Liu et al., 2020). After the approval of the antigen test in Canada (Tasker, 2020), the estimated identification rate increased from 0.196 (0.116, 0.287) to 0.376 (0.286, 0.470). The transmission rate modifier in the second regime reduces the transmission rate by 76.6 % (50.7 %, 98.6 %). Conditional on the posterior summary statistics of $\pi_{11}$ and $\pi_{22}$, the probability of transitioning from one regime to another can be obtained. At time $t$, the probability of transitioning from regime 1 to regime 2 is 0.119 (0.030, 0.257), while the probability of transitioning from regime 2 to regime 1 is estimated to be 0.193 (0.065, 0.393). This suggests that staying in the same regime is more likely than changing abruptly at any time point. The estimated latent SEIR trajectories are shown in Figure 11. A gradual decrease is observed in the susceptible proportion, reaching 50 % by February 8, 2022, indicating that approximately half of the population in British Columbia has been infected.
The prior distributions of the model parameters $\psi$ are specified in Table 2. We assume the hyperparameters of $\alpha,\beta$ and $\gamma$ were derived from historical information on similar epidemics. The transition probability matrix $\boldsymbol{P}_{X}$ indicates a higher likelihood of remaining in the same regime rather than switching between regimes frequently. The precision parameters $\lambda$ and $\kappa$ are assigned a Gamma distribution with $E(\lambda)=2\times 10^{3}$, $Var(\lambda)=2\times 10^{6}$ and $E(\kappa)=2\times 10^{4}$, $Var(\kappa)=2\times 10^{6}$, as suggested by Osthus et al. (2017) and Kobayashi et al. (2020). We ran 2 chains with 30000 MCMC iterations each, discarding the first 1000 iterations of each chain. The number of particles used in each cycle of SMC or CSMC is $N=MK=100$. The total running time for this two-regime setting on a single 3 GHz Intel i5 Core is around 12 hours. Metropolis-Hastings steps within the particle Gibbs sampler are implemented to draw model parameters. We applied five Metropolis-Hastings updates of the parameters in-between each CSMC update of the latent states. Step sizes were tuned to guarantee an acceptance rate greater than 30 %. The trace plots and kernel density plots for parameters are shown in Figure S1 to monitor the convergence of the particle Gibbs sampler, suggesting convergence to a stationary distribution. The Gelman-Rubin diagnostic being less than 1.2 (Brooks and Gelman, 1998) implies that no non-convergence issues are detected. See Table S1 for more details. The marginal posterior distributions of model parameters $\psi$ are explicitly shown in Figure 3. We propose $\pi_{11}$ and $\pi_{22}$ from a Normal distribution truncated between 0 and 1 in the MH step. The remaining transition probabilities are computed by subtracting the proposed element from one. All of the true parameter values fall within their 95 % Bayesian credible intervals (CI). However, we found the precision parameter $\kappa$ sometimes could be difficult to estimate. A larger prior variance of $\kappa$ can result in a wider range of possible parameter values, which can lead to a more diffuse posterior distribution, and can affect the convergence of the MCMC algorithm. The estimation of $\kappa$ also heavily relies on the reference trajectory in each MCMC iteration. We suggest that users tune the priors of $\kappa$ or choose priors on $\alpha,\beta$ and $\gamma$ as informative as possible for a more accurate estimate of $\kappa$.
According to the posterior distribution in (6), the prior distribution $\pi(\psi)$ plays a critical role in Bayesian inference as it allows us to incorporate prior knowledge and beliefs about the unknown parameters into our analysis, and it can heavily influence the posterior distribution. In the BDSSS-SEIR model, it is crucial to choose informative priors to ensure accurate inference on parameters and latent trajectories. A general framework of prior distribution on these unknown parameters is specified in Table 1. See Section A.2 for a more detailed form of the posterior distribution incorporating these priors.
The importance weight represents the likelihood of obtaining a specific sample from the true posterior distribution given the proposal distribution. It is used to adjust for the discrepancy between the proposal distribution and the true posterior distribution. A detailed derivation of the importance weight is described in A.1. In practice, it is preferable to choose a proposal distribution that is similar to the target so that a finite number of weighted particles can estimate the target distribution closely. We propagate particles by directly simulating from the state transition density in (4) and regime transition density in (2). The proposal density becomes
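Schematically, this choice of proposal gives a bootstrap-filter style step in which the importance weight reduces to the observation likelihood $h_{\psi}$. The sketch below spells this out; the function names and signatures are placeholders for the model-specific densities, not the authors' code.

```python
import numpy as np

def propagate_and_weight(theta_prev, x_prev, y_t, rng,
                         sample_regime, sample_state, log_obs_likelihood):
    """One particle-filter step when particles are propagated directly from the
    regime transition density (2) and the state transition density (4), so the
    importance weight is proportional to the observation likelihood."""
    x_t = [sample_regime(x, rng) for x in x_prev]                  # regimes
    theta_t = [sample_state(th, x, rng)                            # latent states
               for th, x in zip(theta_prev, x_t)]
    log_w = np.array([log_obs_likelihood(y_t, th, x)
                      for th, x in zip(theta_t, x_t)])
    w = np.exp(log_w - log_w.max())                                # stabilised
    return theta_t, x_t, w / w.sum()
```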
D
The need for such a method extends beyond intercropping applications. Any phenomenon in which asymmetric spatial effects between multiple categories can be postulated provides a potential application. The categories are features inherent to each location in the data, such that, based on the label of that feature, asymmetric spatial relations can occur between neighbouring locations with different feature labels. One potential area of application is the field of epidemiology, where researchers might want to evaluate the existence and extent of asymmetric effects in rural-urban disease transmissions and driving variables (Ferrari et al., 2010). Alternatively, for a more timely example, one might ask whether there is asymmetry among the inter-country transmission rates of COVID-19 between neighbouring countries, where the countries can be categorised into strict anti-COVID policy (assuming that the movement of people between countries remains possible) versus lax anti-COVID policy (Keita, 2020). A last example is within economics or sociology, where observed values on variables such as crime, unemployment rate, income per capita, population size and congestion within a city are not only a result of within-city processes, but are also affected by what happens in neighbouring cities (Goetzke, 2008). The asymmetry arises, for example, due to different spatial relations between cities in the USA with a Democratic versus a Republican mayor, which in turn might imply different policies being put into practice that affect the aforementioned variables.
In this contribution, we propose a new statistical methodology that is able to infer multivariate symmetric within-location and asymmetric between-location spatial effects. In essence, the proposed model can be seen as a fusion, and extension, of two different models: the multivariate spatial autoregressive model and the Gaussian graphical model. By exploiting the flexibility offered by the spatial weight matrices, asymmetry in spatial effects is accounted for. As this results in a highly parameterised model, identifiability restrictions are imposed on the spatial effect matrices. The inferred local and spatial effects are represented by means of a graphical framework, facilitating interpretability. Extensive simulations for a variety of network structures demonstrate that the proposed method is able to effectively reconstruct both the within- and between-location effects.
Despite a growing interest in the design of productive intercropping systems (Federer 2012), there has been little methodological development around the identification of the kind of multi-trait and multi-species interactions that would determine which crops should ideally be combined (Brooker et al., 2015). Given that intercropping data typically consist of observations on locations in a finite domain, the set of spatial autoregressive models (Ord, 1975) makes for a logical starting point in this methodological pursuit. An application of this model on yield data in a monocropping system can be found in Long (1998). Whilst being suitable for data arising from monocropping systems, this approach is unsuitable for modern intercropping type data that consist of multiple traits measured across multiple crops, as the spatial autoregressive model is univariate in its response and has no way of isolating asymmetric spatial effects or within-plot effects. Another example of an existing approach, by Dobra (2011; 2016), explicitly accounts for spatial autocorrelation by developing Bayesian models that construct two graphs: a neighbourhood graph where the vertices indicate different regions and the presence of an edge indicates whether the regions share a border (are neighbours), and a conditional dependence graph that shows which variables are independent given all other variables. However, this approach has no way of inferring the spatial effects of one variable of crop $c_{1}$ on another of crop $c_{2}$, which is of interest to researchers wanting to evaluate, for example, the impact of applying fertiliser on one plot on the growth of plants in surrounding plots. Whilst other spatial (autoregressive) models exist (Ord, 1975; Yang & Lee, 2017), none can model heterogeneous multivariate spatial effects across multiple crops, where the heterogeneity arises from differing spatial effects per crop, whilst simultaneously capturing the complex dependence relations that occur within plots.
In line with Dahlhaus and Eichler (2003), who proposed a time series (vector autoregressive) chain graph, showcasing contemporaneous conditional dependencies and dynamic effects, we propose a spatial autoregressive graphical model that fills the methodological gap left by the lack of methods that can capture asymmetric between-location spatial effects together with within-location effects. Our proposed spatial autoregressive graphical model builds on the recently proposed multivariate spatial autoregressive model (MSAR) by Yang and Lee (2017), who extended the univariate spatial autoregressive model (SAR) to the multivariate setting. The asymmetry of spatial effects between different categories, i.e. crops in intercropping, is captured through a straightforward manipulation of the spatial weight matrices of the MSAR model, whilst the within-location dependencies share some of the properties of the Gaussian graphical model; namely that the within-location effects are derived from an underlying network consisting of conditional dependencies, which, in turn, offer a parsimonious representation of the complex within-location dependencies as well as being easily interpretable for researchers. The proposed method has attractive asymmetric between-location independence relationships, that are rarely present in spatial autoregressive models. Using this approach, we can identify positive and negative interaction effects between crops that optimise collective performance, thereby selecting combinations of genotypes or crops that are promising in intercropping situations.
This article proposes a new statistical methodology: the spatial autoregressive graphical model. The methodological novelty arises from the method's capacity to learn multivariate asymmetric between-location effects, combined with the capacity of illustrating complex within-location effects through a conditional independence structure, whereby the between- and within-location effects are illustrated by means of a graph, thereby facilitating interpretability. Section 2 introduces and formalises the methodological framework. Bayesian inference is discussed in Section 3. Section 4 presents an elaborate simulation study, where the performance of the newly proposed method is evaluated on simulated spatial data. An application of the new method on real intercropping data, illustrating the usage of the proposed methodology, is given in Section 5. Finally, the conclusion and discussion can be found in Section 6.
C
Here we derive the RR $\alpha\in\mathcal{R}$ that optimises the efficiency bound of a sample analogue of $\theta_{w}$. By Theorem 1, the optimal RR implies a corresponding optimally weighted WADE. Our chosen optimality criterion is exactly that of Crump et al., (2006, 2009), who derive optimal WATE weights when the exposure is binary and the weight is known (see Remark 3 at the end of this Section for details).
Furthermore, since our class of estimands represents a unified view of WADEs and WATEs, it enables us to extend WATE results from the binary exposure setting to new WADE results in the continuous exposure setting. In particular, we derive the estimand in our class which is optimally efficient, in the sense of minimising the efficiency bound of a sample analogue of $\theta$. This is exactly the definition of efficiency considered by Crump et al., (2006, 2009), and our optimal estimand reduces to their optimal WATE estimand when the exposure is binary. When the exposure is continuous, however, our estimand is a new optimally efficient WADE.
Next, we show that least squares estimands, which are estimands connected to partially linear model projections, are in fact WADEs for a particular choice of weight. We further motivate least squares estimands by considering the RR that minimises the nonparametric efficiency bound of the WADE, when the weight is known and the outcome is homoscedastic. Our efficiency analysis generalises the results of Crump et al., (2006) to the setting of continuous exposures.
Thus, our contribution is to extend their method to continuous exposures with the extra subtlety being that the WADE weight depends on the exposure as well as covariates.
We compare estimators of $\psi$ and $\Psi$, the former being a contribution of our work, and the latter following from existing results. These estimators do not require estimation of the exposure density, thus alleviating the aforementioned concerns regarding kernel estimation in other WADEs ($\psi$ and $\Psi$ are WADEs according to Section 2.2). Inspired by the binary exposure setting, our preferred estimator of $\psi$ is based on the R-learner of the conditional ATE (Nie and Wager, 2021; Robinson, 1988), and an analogous learner of the function $1/\mathrm{var}(A|\bm{Z}=\bm{z})$, which we have not seen used elsewhere. Generally our estimators are amenable to data-adaptive/machine learning of requisite statistical functionals, as we demonstrate on simulated data in Section 4, and on clinical data to determine the effect of Warfarin dose on blood clotting function in Section 5.
C
Assuming a normal distribution for random effects can be problematic when the true distribution is far from normal. For instance, McCulloch and Neuhaus, (2011) discovered that when the true distribution is multi-modal or long-tailed, the distribution of the EB predictions may reflect the assumed Gaussian distribution rather than the true distribution of effects. To address this issue and safeguard against model misspecification, researchers have proposed more flexible distributional assumptions for random effects. These alternatives encompass continuous parametric non-Gaussian distributions (Liu and Dey,, 2008); arbitrary discrete distributions obtained using nonparametric maximum likelihood estimation (Rabe-Hesketh et al.,, 2003); and mixture distributions (Ghidey et al.,, 2004; Paddock et al.,, 2006; Verbeke and Lesaffre,, 1996).
To address the threats outlined in the previous section, two strategies can be employed. The first strategy entails adopting flexible semiparametric or nonparametric specifications for the prior distribution $G$ to protect against model misspecification (Paddock et al., 2006). One prominent Bayesian nonparametric specification is the Dirichlet process (DP) prior, which has gained widespread use in the literature (Paganin et al., 2022). The second strategy focuses on employing posterior summary methods, such as the constrained Bayes or the triple-goal estimators. These estimators are designed to directly target the loss functions associated with specific inferential goals. The following sections discuss these two strategies and their implications for addressing the challenges in finite-population estimation.
In practice, the joint application of these two strategies has been relatively rare, with only a few notable exceptions (e.g., Paddock et al.,, 2006; Lockwood et al.,, 2018). The costs and benefits of these strategies have not been systematically compared in previous simulation studies exploring similar topics (e.g., Kontopantelis and Reeves,, 2012; Paddock et al.,, 2006; Rubio-Aparicio et al.,, 2018). For instance, if the true distribution is not Gaussian and the inferential goal is to estimate the empirical distribution of site-specific effects, a question arises about which approach performs better: (a) combining the misspecified Gaussian model for random effects with a targeted posterior summary method (CB or GR), or (b) utilizing the flexible semiparametric model for the prior in conjunction with the misselected posterior mean estimator that solely aims at the optimal estimation of individual site-specific effects. To our knowledge, no prior studies have compared these two strategies within the context of multisite trials, thereby revealing an unexplored area in the literature that warrants further investigation.
Instead of relaxing the normality assumption, some approaches replace EB or PM estimators with alternative posterior summaries, such as constrained Bayes (CB; Ghosh, 1992) and triple-goal (GR; Shen and Louis, 1998) estimators. (The abbreviation “GR” denotes the dual inferential objectives: the EDF ($G$) and the rank ($R$) of site-specific parameters, commonly referenced as the triple-goal estimator in studies such as Paddock et al., 2006.) These alternatives, designed to correct shrinkage-induced underdispersion in PM estimates, directly adjust the loss function minimized by the estimator in order to target specific inferential goals. Such strategies have received less attention compared to flexible modeling of the random-effects distribution.
The three inferential goals, their associated loss functions, and their optimal estimators reveal two primary challenges in achieving valid finite-population estimation. The first challenge is model misspecification, which arises when we assume an incorrect parametric form for the super population distribution $G$. If the true distribution $G$ is not Gaussian, assuming normality may result in insensitivity to skewness, long tails, multimodality, and other complexities in the EDF of the $\tau_{j}$'s (McCulloch and Neuhaus, 2011). Consequently, estimators based on the standard Rubin model become unreliable, especially for the third inferential goal. The second challenge emerges when, for a given goal, an unsuitable estimator is chosen, even when the prior distribution $G$ is accurately specified in a model. The optimal estimator is contingent upon the chosen loss function or inferential goal. However, practitioners often use the same set of posterior mean effect estimates for all three goals, leading to suboptimal outcomes for at least some of them. For instance, the EDF of posterior mean effect estimates will tend to be underdispersed compared to the EDF of $\tau_{j}$ due to shrinkage towards the prior mean effect $\tau$. Conversely, the EDF of raw observed ML effect estimates $\hat{\tau}_{j}$ is overdispersed because of sampling error (Mislevy et al., 1992).
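To see this dispersion ordering concretely, here is a small self-contained simulation under a deliberately non-Gaussian truth; the toy numbers and the simple normal-normal shrinkage formula are our own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
J, sigma2 = 2000, 1.0
# True site effects from a bimodal (non-Gaussian) distribution
tau = np.concatenate([rng.normal(-2, 0.5, J // 2), rng.normal(2, 0.5, J // 2)])
tau_hat = tau + rng.normal(0, np.sqrt(sigma2), J)   # raw ML estimates (overdispersed)

# Posterior means under a (misspecified) normal-normal working model
prior_var = tau_hat.var() - sigma2                  # moment estimate of Var(tau)
shrink = prior_var / (prior_var + sigma2)
post_mean = tau_hat.mean() + shrink * (tau_hat - tau_hat.mean())

# Variances in decreasing order: ML estimates > true effects > posterior means
print(tau_hat.var(), tau.var(), post_mean.var())
```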
C
In order to provide a comprehensive comparison between the simulation and surrogate model runtimes, it is important to include information about the computational environment. The simulations are performed on a machine running the Ubuntu 22.04 operating system, equipped with an AMD Ryzen9 3900X CPU (12 Cores/24 Threads) and 64GB DRAM (3,200 MHz). PHITS version 3.24, compiled by our group with the Intel Fortran Compiler (Intel(R) Parallel Studio XE 2020 Update 1 for Linux), is used for hybrid MPI and OpenMP parallel computing. The number of threads used for OpenMP is set to 2 threads per core, and the number of cores used for MPI is 11. Based on the log data, the time required per simulation was $30.54\pm 3.76$ seconds.
This section compares the model’s performance trained with Set1 on the test data against FCN and CNN. As mentioned earlier, DeepONet takes functions as inputs, and the model test evaluates its response to unseen input functions. In this study, 380 test input functions were provided to the model, and the obtained model outputs were compared to the true values. The same metrics used in the previous section were calculated for each input function.
In order to showcase the capabilities of DeepONet, a surrogate model is constructed for calculating the 2-dimensional spatial distribution of neutron flux in a maze. The training and test datasets used for training the DeepONet model are prepared using Particle and Heavy Ion Transport code System (PHITS) version 3.24 Sato et al. (2018). This section elaborates on the methodology employed for data generation and the simulation setup utilized in this study.
Furthermore, as demonstrated in Fig. 3 (c), while the entire set of 6,400 $(x,y)$ coordinate pairs with corresponding simulation results $\psi(x,y)$ is available for surrogate model construction, we have methodically created multiple sub-datasets, labeled as Set1 through Set5, to evaluate the impact of data volume on DeepONet's performance. Each dataset is generated by randomly sampling a specific proportion of the total data in increments of 10%, starting from 50% and extending up to 90% of the 6,400 pairs. This stratified sampling strategy facilitates a comprehensive evaluation, allowing us to methodically analyze how the variation in the volume of training data affects the model's predictive accuracy. Creating these subsets, Set1 to Set5, enables a systematic investigation into the relationship between the quantity of training data and the fidelity of the DeepONet model.
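A sketch of this subsampling step is given below; the function name, the seed, and the mapping of Set1-Set5 to the 50%-90% fractions are our own assumptions for illustration.

```python
import numpy as np

def make_subsets(coords, values, fractions=(0.5, 0.6, 0.7, 0.8, 0.9), seed=0):
    """Create sub-datasets by randomly sampling a growing fraction of the
    (x, y) coordinate pairs and their simulated fluxes psi(x, y)."""
    rng = np.random.default_rng(seed)
    n = len(coords)
    subsets = {}
    for k, frac in enumerate(fractions, start=1):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        subsets[f"Set{k}"] = (coords[idx], values[idx])
    return subsets

# toy usage: 6,400 coordinate pairs with dummy flux values
coords = np.random.default_rng(1).uniform(size=(6400, 2))
values = np.random.default_rng(2).uniform(size=6400)
subsets = make_subsets(coords, values)
```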
A systematic multi-stage protocol is implemented for preprocessing the data used to train and test the DeepONet model.
D
We are given a signal in $\mathbb{R}^{p}$ that is expressed as a linear combination of some unknown source signals and the goal is to estimate these sources. The poset here is the collection of linearly independent subsets of unit-norm vectors in $\mathbb{R}^{p}$ ordered by inclusion, the least element is the empty set, and the rank of a linearly independent subset is equal to the cardinality of the subset.
In this section, we turn our attention to the task of identifying models of large rank that provide false discovery control. We begin in Section 3.1 with a general greedy strategy for poset search that facilitates the design of model selection procedures, and we specialize this framework to specific approaches in Sections 3.2 and 3.3. Some of the discussion in Section 3.1 is relevant for all of the posets in Examples 1-9, while the methodology presented in Sections 3.2-3.3 is applicable to general discrete posets with integer-valued similarity valuations such as in Examples 1-7. Along the way, we remark on some of the challenges that arise in the two continuous cases of Examples 8-9.
Classic approaches to model selection such as the AIC and BIC assess and penalize model complexity by counting the number of attributes included in a model [1, 22]. More generally, such complexity measures facilitate a hierarchical organization of model classes, and this perspective is prevalent throughout much of the model selection literature [9, 13, 29, 19, 8, 2]. However, these complexity measures rely on a Boolean logical structure underlying a collection of models, and are therefore not well-suited to model classes that are not characterized in this manner. The poset formalism presented in this paper is sufficiently flexible to facilitate model selection over model classes that are more complex than those characterized by Boolean logical structure (such as the illustration presented previously with clustering, see also Example 2), while being sufficiently structured to permit precise definitions of model complexity as well as false positive and false negative errors.
With respect to formalizing the notion of false positive and false negative errors, Example 1 is prominently considered in the literature, while Examples 3 and 5 are multivariate generalizations of previously studied cases [10, 12]. Finally, Example 8 was studied in [25], although that treatment proceeded from a geometric perspective rather than the order-theoretic approach presented in this paper. With the exception of Example 1, none of the other examples permit a natural formulation within the traditional multiple testing paradigm due to the lack of a Boolean logical structure underlying the associated model classes. Moreover, Examples 8-9 are model classes consisting of infinitely many elements. Nonetheless, we describe in the sequel how the poset formalism enables a systematic and unified framework for formulating model selection in all of the examples above.
In these preceding examples, we lack a systematic definition of model complexity, false positive error, and false negative error due to the absence of Boolean logical structure in each collection of models. In particular, in the first three examples, valid models are characterized by structural properties such as transitivity, set partitioning, and graph acyclicity, respectively; these properties are global in nature and are not concisely modeled via separable and local characteristics such as an attribute (a variable or edge) being included in a model independently of other attributes. In the fourth example of blind source separation, false positive and false negative errors should not be defined merely via the inclusion or exclusion of true source vectors in an estimated set but should instead consider the degree of alignment between the estimated and true sources, which again speaks to the lack of a natural Boolean logical structure underlying the associated model class.
C
$=\operatorname{prox}_{\eta\|\cdot\|_{1}}(\theta^{k}).$
Motivated by the promising results in the deterministic setting, we now study the MNIST dataset in a stochastic setting, i.e., using mini-batches for the loss function and a neural network with 16,330 parameters as described previously in IV-A. First, we obtain the initial point on the Pareto front after performing 500 iterations, using Algorithm 1. The remaining points on the Pareto front are computed by using small consecutive predictor-corrector steps. For the subsequent 43 points of the Pareto front found after the initial point, we used 7 iterations for the predictor steps and 20 iterations for the corrector steps. In total, we computed 44 points, i.e., $N_{cont}=44$. Figures 5(a) and 5(b) show the Pareto front and accuracy versus the $\ell^{1}$ norm, respectively. In this setting, unlike for the Iris dataset, we did not set our neural network weights to zero; rather, we started by finding a point in the middle of the front (in blue) and then applied the continuation method twice (once in each direction, i.e., loss and $\ell^{1}$ norm). As indicated in the plots, we observe overfitting for the non-sparse architectures, which indicates that we do not necessarily have to pursue the regularization path until the end, but can stop once the slope of the Pareto front becomes too steep. This provides an alternative training procedure for DNNs where, in contrast to pruning, we start sparse and become less sparse only as long as we do not run into overfitting.
The joint consideration of loss and $\ell^{1}$ regularization is well-studied for linear systems. However, it is much less understood for the nonlinear problems that we face in deep learning. In DNN training, the regularization path is usually not of interest. Instead, methods aim to find a single, suitable trade-off between loss and $\ell^{1}$ norm [8, 9, 10, 11, 12]. When interpreting the $\ell^{1}$ regularization problem as a multiobjective optimization problem (MOP), a popular approach to obtain the entire solution set (the Pareto set) is via continuation methods [13, 14]. They usually consist of a predictor step (along the tangent direction of the Pareto set) and a corrector step that converges to a new point on the Pareto set close by. However, as the $\ell^{1}$ norm is non-smooth, classical manifold continuation techniques fail. Due to this fact, a first extension of regularization paths from linear to nonlinear models was recently presented in [15], where continuation methods were extended to non-smooth objective functions. Although this extension provides a rigorous treatment of the problem, it results in a computationally expensive algorithm, which renders it impractical for DNNs of realistic dimensions.
In this work, we consider two objective functions, namely the empirical loss and the $\ell^{1}$ norm of the neural network weights. The Pareto set connecting the individual minima (at least locally) is also known as the regularization path. In the context of MOPs, we are looking for the Pareto set of
Equation (8) is the gradient step for the loss objective function, i.e., “move left” in Fig. 2, and equation (9) represents the shrinkage performed on the $\ell^{1}$ norm, i.e., “move down” in Fig. 2.
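A minimal NumPy sketch of the two steps just described, a gradient step on the loss followed by the shrinkage (soft-thresholding) step that is the proximal operator of $\eta\|\cdot\|_{1}$; the quadratic toy loss and the step sizes are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def soft_threshold(theta, tau):
    """Proximal operator of tau * ||.||_1 (componentwise shrinkage, 'move down')."""
    return np.sign(theta) * np.maximum(np.abs(theta) - tau, 0.0)

def proximal_gradient_step(theta, grad_loss, step, eta):
    """One iteration: gradient step on the loss ('move left'), then shrinkage."""
    theta_half = theta - step * grad_loss(theta)
    return soft_threshold(theta_half, step * eta)

# toy example: least-squares loss ||A theta - b||^2 / 2
rng = np.random.default_rng(0)
A, b = rng.normal(size=(20, 10)), rng.normal(size=20)
grad = lambda th: A.T @ (A @ th - b)
theta = np.zeros(10)
for _ in range(200):
    theta = proximal_gradient_step(theta, grad, step=1e-2, eta=0.5)
print(np.round(theta, 3))  # sparsity depends on eta: larger eta zeroes out more entries
```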
D
The adjective “visible” refers to the martingale $(S_{n})$
$P(\{\omega\})>0$ for all $\omega\in\Omega$.
(and not depending on the hidden aspects of the realized sample point $\omega\in\Omega$).
We say that a sequence $(Y_{n})$ of random variables in $(\Omega,P)$ is adapted
(i.e., not on the $\theta$s, parameter values, but on the $x$s and $y$s, observables)
B
The deep learning (DL) revolution impacts almost every branch of life and sciences [1]. Deep learning models are also adapted and developed for solving problems in physics [2]. The statistical analysis of data is one of the popular and straightforward domains of application of neural networks in physics [3, 4]. Another one is using DL-like models to optimize and modernize computational systems in physics, ranging from solvers for fluid flow [5, 6] to Monte Carlo generators in particle physics [7]. Deep neural networks are examples of artificial neural networks (NNs) studied and utilized in various branches of life and science for years [8]. In this paper, we consider feed-forward shallow neural networks, with one or at most two layers of hidden units, that are utilized to solve partial differential equations (PDEs).
Differential equations are the essence of physics. Indeed, they describe the system and its evolution in time in classical mechanics, electrodynamics, fluid physics, quantum mechanics, etc. Some can be solved analytically, but a vast class of problems can only be solved numerically.
The deep learning (DL) revolution impacts almost every branch of life and sciences [1]. Deep learning models are also adapted and developed for solving problems in physics [2]. The statistical analysis of data is one of the popular and straightforward domains of application of neural networks in physics [3, 4]. Another one is using DL-like models to optimize and modernize computational systems in physics, ranging from solvers for fluid flow [5, 6] to Monte Carlo generators in particle physics [7]. Deep neural networks are examples of artificial neural networks (NNs) studied and utilized in various branches of life and science for years [8]. In this paper, we consider feed-forward shallow neural networks, with one or at most two layers of hidden units, that are utilized to solve partial differential equations (PDEs).
Karniadakis et al. [25] provides a comprehensive review of the PINN approach. In particular, they point out the major difficulties of the approach, namely, the problem of tuning the hyperparameters, fixing relative weights between various terms in the loss function, and the convergence to the global minimum. The PINN idea can be naturally extended to a broader class of problems than PDEs, in which the neural networks or machine learning systems are adapted to solve the problem or to optimize the system that solves the problem, see reviews by Faroughi et al. [26] (the physics guided, informed, and encoded neural network approaches), Meng et al. [27] and Hao et al. [28] (physics informed machine learning including PINNs).
One of the exciting ideas is to adapt a neural network framework to solve PDEs numerically. The problem of numerical integration comes down to the optimization problem. Indeed, the approximate solution of a differential equation is parametrized by a feed-forward neural network that depends on the parameters (weights). The optimal solution minimizes a cost function defined for a given equation. One of the first successful formulations of this idea was provided by Lagaris et al. [9]. They applied the method to ordinary and partial differential equations. It was assumed that the feed-forward neural network, modified to satisfy initial or boundary conditions, was a solution of an equation. A similar idea was exploited by Lagaris et al. for solving eigenvalue problems in quantum mechanics [10].
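A minimal PyTorch sketch in the spirit of the Lagaris et al. construction: a trial solution is built so that it satisfies the initial condition by design, and the network weights are optimized to minimize the squared residual of the equation. The concrete ODE ($\psi'=-\psi$, $\psi(0)=1$), the network size, and the optimizer settings are illustrative assumptions.

```python
import torch

# ODE: d psi / dx = -psi,  psi(0) = 1, on x in [0, 2]; exact solution is exp(-x)
net = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

def trial(x):
    # psi_t(x) = 1 + x * N(x): satisfies psi_t(0) = 1 for any network weights
    return 1.0 + x * net(x)

for step in range(2000):
    x = torch.rand(64, 1) * 2.0          # collocation points
    x.requires_grad_(True)
    psi = trial(x)
    dpsi = torch.autograd.grad(psi, x, torch.ones_like(psi), create_graph=True)[0]
    loss = ((dpsi + psi) ** 2).mean()    # squared residual of the ODE
    opt.zero_grad()
    loss.backward()
    opt.step()

x_test = torch.tensor([[1.0]])
print(trial(x_test).item(), torch.exp(-x_test).item())  # should be close after training
```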
A
We have so far assumed that the spectral density is available in closed form. However, we only need regularly spaced point evaluations of the spectral density, for which it suffices to evaluate the discrete Fourier transform of regularly spaced evaluations of the covariance function. This adds, at worst, $O(M^{2})$ computation to each step.
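A minimal NumPy sketch of the fallback described here: evaluate the covariance function on a regular grid and take a discrete Fourier transform to obtain regularly spaced point evaluations of the spectral density. The squared-exponential covariance and the grid sizes are illustrative assumptions.

```python
import numpy as np

lengthscale, variance = 0.5, 1.0
M = 256        # number of grid points / frequencies
dr = 0.05      # grid spacing

# evaluate the stationary covariance k(r) on a symmetric grid
r = (np.arange(M) - M // 2) * dr
k = variance * np.exp(-0.5 * (r / lengthscale) ** 2)

# DFT of the covariance evaluations approximates the spectral density on a regular grid
S = np.abs(np.fft.fft(k)) * dr          # |.| removes the phase from the grid shift
freqs = np.fft.fftfreq(M, d=dr)
print(freqs[:3], S[:3])                  # S[0] ~ variance * sqrt(2*pi) * lengthscale
```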
IFF can be used for faster learning for large datasets in low dimensions, which matches our target applications. Typically, it will perform poorly for $D \gtrapprox 4$, and both in this case and for low $N$, we expect SGPR to outperform all alternatives, including IFF. Our analysis and evaluation are limited to the conjugate setting.
We seek to show that IFF gives a significant speedup for large datasets in low dimensions, with a particular focus on spatial modelling. Amongst other fast sparse methods, we compare against VFF and B-Spline features. For spherical harmonics, learning independent lengthscales for each dimension is incompatible with precomputation. In any case, we found that we were unable to successfully learn reasonable hyperparameters with that method in our setting, except when the number of features was very small. For a conventional (no pre-compute) sparse baseline, we use inducing points sampled according to the scheme of Burt et al. (2020a). For our synthetic experiments, we also used inducing points initialised using $k$-means and kept fixed. For the real-world spatial datasets, we also tested SKI, due to its reputation for fast performance and its fairly robust implementation.
In Section 2 we review variational GP regression in the conjugate setting, and we review related work in Section 3. In Section 4 we present our IFF method, and the complexity analysis; the main convergence results and guidance for tunable parameter selection follows in Section 4.1. Finally in Section 5 we evaluate our method experimentally, showing significant speedup relative to SGPR in low dimensions, and competitive performance compared to other fast methods, with broader applicability. A summary of our contributions is as follows.
We exclude SKI in Figure 5 in order to zoom in on the curves for the variational methods. We are interested in the regime where $M \ll N$; as we move to the right and $M$ becomes similar to $N$, inducing points become competitive with the faster methods, since the $O(M^{3})$ cost dominates. Comparing IFF to VFF, we see that IFF always performs at least as well, and produces substantially better performance for a given time on the temperature and precipitation datasets, due to a more flexible choice of covariance function – in particular, note that the training objective of IFF is substantially lower than VFF on these datasets, but comparable to that of inducing points, which uses the same covariance function.
A
In terms of variable selection, the Adaptive Lasso and the Adaptive Transfer Lasso outperformed the others, and the Adaptive Transfer Lasso was slightly superior to the Adaptive Lasso.
We provide the property of the Adaptive Lasso for an initial estimator with source data of size $m$.
We mainly considered two cases: one with a large amount of source data and the other with the same amount of source data as the target data.
The Transfer Lasso [16], in contrast, is performed on target data using the initial estimator without the need for source data.
These results imply the superiority of the Adaptive Transfer Lasso with initial estimators using large amounts of source data.
D
Every $\epsilon$-DP algorithm is $\rho$-zCDP with $\rho=\frac{1}{2}\epsilon^{2}$ (Proposition 1.4, [23]). Due to this observation, it is possible to provide zCDP regret upper bounds from the $\epsilon$-DP bandit literature, by replacing $\epsilon$ with $\sqrt{2\rho}$ in those results. Our zCDP upper bounds improve on these “converted” upper bounds on logarithmic terms in $T$, $K$, and $d$. This improvement is due to the use of the Gaussian mechanism rather than the Laplace mechanism. Table II summarises the comparison.
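A minimal sketch of the two ingredients used in this comparison: the conversion $\rho=\epsilon^{2}/2$ from pure DP to zCDP, and the Gaussian mechanism, which satisfies $\rho$-zCDP when its noise scale is calibrated as $\sigma=\Delta/\sqrt{2\rho}$ for a query of sensitivity $\Delta$ (Bun and Steinke). The reward-sum query below is an illustrative assumption.

```python
import numpy as np

def eps_dp_to_zcdp(eps):
    """Any eps-DP mechanism is rho-zCDP with rho = eps^2 / 2."""
    return 0.5 * eps ** 2

def gaussian_mechanism(value, sensitivity, rho, rng):
    """Release value + N(0, sigma^2) with sigma = sensitivity / sqrt(2 rho): rho-zCDP."""
    sigma = sensitivity / np.sqrt(2.0 * rho)
    return value + rng.normal(scale=sigma)

rng = np.random.default_rng(0)
rewards = rng.uniform(size=100)     # rewards in [0, 1], so their sum has sensitivity 1
rho = eps_dp_to_zcdp(1.0)           # budget equivalent to eps = 1 pure DP
print(rewards.sum(), gaussian_mechanism(rewards.sum(), sensitivity=1.0, rho=rho, rng=rng))
```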
To prove regret lower bounds in bandits, we leverage the generic proof ideas in [2]. The main technical challenge in these proofs is to quantify the extra cost of “indistinguishability” due to DP. This cost is expressed in terms of an upper bound on KL-divergence of observations induced by two ‘confusing’ bandit environments. For pure DP [8], the upper bound on the KL-divergence (Theorem 10 in [8]) is proved by adapting the Karwa-Vadhan lemma [24] to the bandit sequential setting. To our knowledge, there is no zCDP version of the Karwa-Vadhan lemma. Thus, we first provide a general result in Theorem 6, which could be seen as a generalisation of the Karwa-Vadhan lemma to zCDP. To prove this result, we derive a new maximal coupling argument relating the KL upper bound to an optimal transport problem, which can be of parallel interest. Then, we adapt it to the bandit setting in Theorem 7. The regret lower bounds are retrieved by plugging in these upper bounds on the KL-divergence in the generic lower bound proof of bandits.
In order to prove the lower bounds, we deploy the KL upper bound of Theorem 7 in the classic proof scheme of regret lower bounds [2]. The high-level idea of proving bandit lower bounds is selecting two hard environments, which are hard to distinguish statistically but are conflicting, i.e. actions that may be optimal in one are sub-optimal in the other.
Hardness of Preserving Privacy in Bandits as Lower Bounds. Addressing the open problem of [11, 8], we prove minimax lower bounds for finite-armed bandits and linear bandits with $\rho$-Interactive zCDP, that quantify the cost to ensure $\rho$-Interactive zCDP in these settings. To prove the lower bound, we develop a new proof technique that relates minimax lower bounds to a transport problem. The minimax lower bounds show the existence of two privacy regimes depending on the privacy budget $\rho$ and the horizon $T$. Specifically, for $\rho=\Omega(T^{-1})$, an optimal algorithm does not have to pay any cost to ensure privacy in both settings. The regret lower bounds show that $\mathsf{AdaC\text{-}UCB}$, $\mathsf{AdaC\text{-}GOPE}$, and $\mathsf{AdaC\text{-}OFUL}$ are optimal, up to poly-logarithmic factors. In Table I, we summarise the corresponding regret lower bounds.
In this section, we quantify the cost of $\rho$-Interactive zCDP for bandits by providing regret lower bounds for any $\rho$-Interactive zCDP policy. These lower bounds on regret provide valuable insight into the inherent hardness of the problem and establish a target for optimal algorithm design. We first derive a $\rho$-Interactive zCDP version of the KL decomposition Lemma using a sequential coupling argument. The regret lower bounds are then retrieved by plugging the KL upper bound in classic regret lower bound proofs. A summary of the lower bounds is in Table I, while the proof details are deferred to Appendix G.
A
Corollary 8 generalizes the results in Chaudhuri and Tewari (2017) that showed local observability fails only for $k=1$, and rules out the possibility of better regret for values of $k$ that are practically interesting. Also, there are efficient algorithms for $k=1,2,\ldots,m-2$ (Chaudhuri and Tewari, 2017) and for $k=m$ (Suehiro et al., 2012; Ailon, 2014). Again, we are not interested in designing an efficient algorithm for $k=m-1$.
We are interested in ranking measures that can be expressed in the form $f(\sigma)\cdot R$, where $f:\mathbb{R}^{m}\to\mathbb{R}^{m}$ is composed of $m$ copies of a univariate monotonically non-decreasing scalar-valued function $f^{s}:\mathbb{R}\to\mathbb{R}$. We say that $f^{s}$ is monotonically non-decreasing if and only if $\sigma^{-1}(i)>\sigma^{-1}(j)$ implies $f^{s}(\sigma^{-1}(i))\geq f^{s}(\sigma^{-1}(j))$. Monotonic non-increasing is defined analogously. Then, $f(\sigma)$ can be written as
The ranking loss measure $RL(\sigma,R)$ can be expressed in the form $f(\sigma)\cdot R$, where $f:\mathbb{R}^{m}\to\mathbb{R}^{m}$ is composed of $m$ copies of a univariate strictly increasing scalar-valued function $f^{s}:\mathbb{R}\to\mathbb{R}$, that is, $\sigma^{-1}(i)>\sigma^{-1}(j)$ implies $f^{s}(\sigma^{-1}(i))>f^{s}(\sigma^{-1}(j))$.
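A minimal NumPy sketch of the dot-product form $f(\sigma)\cdot R$: apply a single monotone scalar function $f^{s}$ to each item's position $\sigma^{-1}(i)$ and take the inner product with the relevance vector $R$. The particular $f^{s}$ used below is only an illustration, not the paper's definition of the ranking loss.

```python
import numpy as np

def ranking_measure(sigma, R, fs):
    """sigma[i] = item placed at position i (0-indexed); returns f(sigma) . R."""
    m = len(sigma)
    positions = np.empty(m, dtype=int)
    positions[sigma] = np.arange(m)      # positions[i] = sigma^{-1}(i), the rank of item i
    return fs(positions) @ R

sigma = np.array([2, 0, 1, 3])           # item 2 ranked first, then item 0, ...
R = np.array([1.0, 0.0, 1.0, 0.0])       # binary relevance
fs = lambda pos: pos.astype(float)       # a strictly increasing f^s (identity on positions)
print(ranking_measure(sigma, R, fs))     # positions of the relevant items: 1 + 0 = 1.0
```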
The negated P@n does not satisfy Assumption 1 because $f^{s}$ is not strictly increasing (see Eq. (7)), so Theorem 6 does not apply to negated P@n.
Similarly, since negated DCG also satisfies Assumption 1, we have the following corollary from Theorem 6.
C
Electroencephalogram (EEG) captures signals from electrodes, thereby recording the electrical activity during epileptic seizures. The EEG dataset encompasses both pre-ictal and ictal data, organized in a matrix format of channels over time, with a sampling rate of 256 points per second. Here, each electrode signal undergoes rescaling to an interval of $[-0.5,0.5]$ and is subsequently subsampled by averaging every sixteen points, resulting in the dataset with the time step $\Delta t=0.0625$. In Fig. 2, there is a sample image in which the doctor labels $Time=500$ as the separation time for the pre-ictal and ictal periods.
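A minimal NumPy sketch of the preprocessing described here, per-electrode rescaling to $[-0.5,0.5]$ followed by block-averaging of every sixteen samples; the synthetic input shape is an illustrative assumption.

```python
import numpy as np

def preprocess_eeg(signals):
    """signals: (n_channels, n_timepoints) raw EEG sampled at 256 Hz."""
    # rescale each electrode to [-0.5, 0.5]
    lo = signals.min(axis=1, keepdims=True)
    hi = signals.max(axis=1, keepdims=True)
    scaled = (signals - lo) / (hi - lo) - 0.5
    # subsample by averaging every 16 points -> time step 16 / 256 = 0.0625
    n_blocks = scaled.shape[1] // 16
    blocked = scaled[:, : n_blocks * 16].reshape(scaled.shape[0], n_blocks, 16)
    return blocked.mean(axis=2)

raw = np.random.default_rng(0).normal(size=(23, 256 * 60))   # 23 channels, 60 s of data
print(preprocess_eeg(raw).shape)                              # (23, 960)
```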
To construct the diffusion matrix, the data from all electrodes at each time point is viewed as a high-dimensional node in the diffusive graph. Notably, electrode signals display abnormal fluctuations preceding the onset of an epileptic seizure. Timely detection of abnormalities in electrical signals holds the potential to provide early warnings for epileptic seizures. We suggest identifying early warnings within a lower-dimensional space through the application of directed anisotropic diffusion maps.
In summary, the goal of the present study is to model and analyze the brain activity data from epileptic patients to identify early warnings of epileptic seizures automatically, with stochastic dynamical systems tools. Our main contributions are
Early warning of epileptic seizures is of paramount importance for epileptic patients. The abrupt change is detected for early warning in the latent space, where the normal state and the ictal state can be viewed as two meta-stable states. The ability to identify transitions between meta-stable states plays a pivotal role in predicting and controlling brain behavior. We derive three effective warning signals—namely, the Onsager-Machlup indicator, the sample entropy indicator, and the transition probability indicator—utilizing information from the latent coordinates and the latent stochastic dynamical systems. These indicators enhance the robustness and accuracy of early warning systems for epileptic seizures. Furthermore, the computational cost of calculating these indicators from low-dimensional data is much lower than that of the original high-dimensional data. This framework of learning latent stochastic systems and detecting abnormal dynamics has the potential to extend to general scenarios for other complex, high-dimensional, time-evolutionary data.
The diffusion matrix is constructed from the diffusion kernel and encodes the transitions among all high-dimensional nodes in the diffusive graph, i.e., the transition probability from one point to another.
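A minimal NumPy sketch of one standard way to build such a diffusion (Markov) matrix: a Gaussian kernel on pairwise distances between the high-dimensional nodes, row-normalized so that each row is a transition distribution. The bandwidth heuristic is an illustrative assumption, and the sketch omits the directed anisotropic modifications mentioned above.

```python
import numpy as np

def diffusion_matrix(X, epsilon=None):
    """X: (n_points, dim) nodes; returns a row-stochastic transition matrix."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    if epsilon is None:
        epsilon = np.median(sq_dists)                           # simple bandwidth heuristic
    K = np.exp(-sq_dists / epsilon)                             # diffusion kernel
    return K / K.sum(axis=1, keepdims=True)                     # normalize rows to probabilities

X = np.random.default_rng(0).normal(size=(200, 23))             # 200 time points, 23 electrodes
P = diffusion_matrix(X)
print(P.shape, np.allclose(P.sum(axis=1), 1.0))
```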
A
There have been several recent results on computationally efficient learning of unbounded Gaussians \citep{kamath2022private, kothari2022private, ashtiani2022private}, with the method of \citet{ashtiani2022private} achieving a near-optimal sample complexity using a sample-and-aggregate-based technique. Another sample-and-aggregate framework that can be used for this task is FriendlyCore \citep{tsfadia2022friendlycore}.
In density estimation, which is the main focus of this work, the goal is to find a distribution which is close to the underlying distribution w.r.t. $\operatorname{d_{\textsc{TV}}}$. Unlike parameter estimation, the sample complexity of density estimation can be polynomial in both the dimension and the number of components. In the non-private setting, there have been several results about the sample complexity of learning GMMs \citep{devroye2001combinatorial, ashtiani2018sample}, culminating in the work of \citep{ashtiani2018nearly, ashtiani2020near} which gives the near-optimal bound of $\tilde{\Theta}(kd^{2}/\alpha^{2})$.
There have been several recent results on computationally efficient learning of unbounded Gaussians \citep{kamath2022private, kothari2022private, ashtiani2022private}, with the method of \citet{ashtiani2022private} achieving a near-optimal sample complexity using a sample-and-aggregate-based technique. Another sample-and-aggregate framework that can be used for this task is FriendlyCore \citep{tsfadia2022friendlycore}.
The methods of \citet{ashtiani2022private, kothari2022private} also work in the robust setting, achieving sub-optimal sample complexities.
Recently, \citet{alabi2023privately} improved this result in terms of dependence on the dimension. Finally, \citet{hopkins2023robustness} achieved a robust and efficient learner with near-optimal sample complexity for unbounded Gaussians.
C
Cooperative Diffusion Recovery Likelihood (CDRL), that jointly estimates a sequence of EBMs and MCMC initializers defined on data perturbed by a diffusion process. At each noise level, the initializer and EBM are updated by a cooperative training scheme (Xie et al., 2018a): The initializer model proposes initial samples by predicting the samples at the current noise level given their noisy versions at a higher noise level. The initial samples are then refined by a few MCMC sampling steps from the conditional distribution defined by the EBM. Given the refined samples, the EBM is updated by maximizing recovery likelihood, and the initializer is updated to absorb the difference between the initial samples and the refined samples. The introduced initializer models learn to accumulate the MCMC transitions of the EBMs, and reproduce them by direct ancestral sampling. Combining with a new noise schedule and a variance reduction technique, we achieve significantly better performance than the existing methods of estimating EBMs. We further incorporate classifier-free guidance (CFG) (Ho & Salimans, 2022) to enhance the performance of conditional generation, and we observe similar trade-offs between sample quality and sample diversity as CFG for diffusion models when adjusting the guidance strength. In addition, we showcase that our approach can be applied to perform several useful downstream tasks, including compositional generation, image inpainting and out-of-distribution detection.
We first showcase our model’s capabilities in unconditional image generation on CIFAR-10 and ImageNet datasets. The resolution of each image is $32\times 32$ pixels. FID scores (Heusel et al., 2017) on these two datasets are reported in Tables 1 and 4.3, respectively, with generated examples displayed in Figure 1. We adopt the EBM architecture proposed in Gao et al. (2021). Additionally, we utilize a larger version called “CDRL-large”, which incorporates twice as many channels in each layer. For the initializer network, we follow the structure of (Nichol & Dhariwal, 2021), utilizing a U-Net (Ronneberger et al., 2015) but halving the number of channels. Compared to Gao et al. (2021), CDRL achieves significant improvements in FID scores. Furthermore, CDRL uses the same number of noise levels (6 in total) as DRL but requires only half the MCMC steps at each noise level, reducing it from 30 to 15. This substantial reduction in computational costs is noteworthy. With the large architecture, CDRL achieves a FID score of 3.68 on CIFAR-10 and 9.35 on ImageNet $(32\times 32)$. These results, to the best of our knowledge, are the state-of-the-art among existing EBM frameworks and are competitive with other strong generative model classes such as GANs and diffusion models.
Our main contributions are as follows: (1) We propose cooperative diffusion recovery likelihood (CDRL) that tractably and efficiently learns and samples from a sequence of EBMs and MCMC initializers; (2) We make several practical design choices related to noise scheduling, MCMC sampling, and noise variance reduction for EBM training; (3) Empirically we demonstrate that CDRL achieves significant improvements in sample quality compared to existing EBM approaches on the CIFAR-10 and ImageNet $32\times 32$ datasets; (4) We show that CDRL has great potential to enable more efficient sampling with sampling adjustment techniques; (5) We demonstrate CDRL’s ability in compositional generation, image inpainting and out-of-distribution (OOD) detection, as well as its compatibility with classifier-free guidance for conditional generation.
Cooperative Diffusion Recovery Likelihood (CDRL), that jointly estimates a sequence of EBMs and MCMC initializers defined on data perturbed by a diffusion process. At each noise level, the initializer and EBM are updated by a cooperative training scheme (Xie et al., 2018a): The initializer model proposes initial samples by predicting the samples at the current noise level given their noisy versions at a higher noise level. The initial samples are then refined by a few MCMC sampling steps from the conditional distribution defined by the EBM. Given the refined samples, the EBM is updated by maximizing recovery likelihood, and the initializer is updated to absorb the difference between the initial samples and the refined samples. The introduced initializer models learn to accumulate the MCMC transitions of the EBMs, and reproduce them by direct ancestral sampling. Combining with a new noise schedule and a variance reduction technique, we achieve significantly better performance than the existing methods of estimating EBMs. We further incorporate classifier-free guidance (CFG) (Ho & Salimans, 2022) to enhance the performance of conditional generation, and we observe similar trade-offs between sample quality and sample diversity as CFG for diffusion models when adjusting the guidance strength. In addition, we showcase that our approach can be applied to perform several useful downstream tasks, including compositional generation, image inpainting and out-of-distribution detection.
We propose CDRL, a novel energy-based generative learning framework employing cooperative diffusion recovery likelihood, which significantly enhances the generation performance of EBMs. We demonstrate that the CDRL excels in compositional generation, out-of-distribution detection, image inpainting, and compatibility with classifier-free guidance for conditional generation. One limitation is that a certain number of MCMC steps are still needed during generation. Additionally, we aim to scale our model for high-resolution image generation in the future. Our work aims to stimulate further research on developing EBMs as generative models. However, the prevalence of powerful generative models may give rise to negative social consequences, such as deepfakes, misinformation, privacy breaches, and erosion of public trust, highlighting the need for effective preventive measures.
B
$\hat{L}^{R}_{T}(M)-\hat{L}_{T}(M_{0})\leq\max\left\{\rho\left(1+\sqrt{\frac{s(\mathcal{X})}{k}}+\sqrt{\frac{2\ln\frac{1}{\epsilon}}{k}}\right)^{2},\;\rho\right\}.$
Since the first term inside the maximum is always greater than $\rho$, this simplifies to our desired result.
To upper bound the absolute value in (51), we need to both lower and upper bound the quantity inside, with respect to $R$, and take the maximum of the two. There are two terms inside the maximum, which must be lower and upper bounded separately.
Examining the bound in Theorem 9, we can see it does not depend on the ambient dimension, but on the stable dimension of the data support, just like the bound in Theorem 6. This means that if the empirical error in the ambient space is small, the empirical error in the compressed space scales with the stable dimension, instead of the ambient dimension. It is also decreasing in $k$ as expected. Finally, the sample size, $n$, does not appear at all, as it is assumed the same for training both $M$ and $M_{0}$, and simplifies out in the derivation.
As an important ingredient of our analysis, we revisit a well-known result due to Gordon [11] that uniformly bounds the maximum norm of vectors in the compressed unit sphere under a Gaussian RP. We extend this result into a dimension-free version, for arbitrary domains, in Lemma 4, which may be of independent interest.
A
We apply our framework to yield novel results for three applications. In our first application, we study bounds on the APO with a treatment that may be selected on an unobserved confounder.
The result in Proposition 1 extends the analysis of Tan (2022); Frauen et al. (2023) to allow $\ell(X)$ to be arbitrarily small or equal to zero. As a result, it includes Masten and Poirier (2018)’s conditional c-dependence assumption so long as $\lambda(R)Y$ is integrable under the target distribution. The characterization of bounds under conditional c-dependence is formally simpler than Masten and Poirier (2018)’s characterization: their characterization involves an integral over a transformation of the full quantile regression function. Our approach also yields estimates of valid bounds under only the IPW estimation assumptions. An interesting question for future work is whether Masten et al. (2024)’s proposed estimator for conditional c-dependence possesses similar validity guarantees.
Our framework nests unconfoundedness, Manski-type bounds that restrict only the support of the unobserved potential outcomes, and Tan (2006)’s Marginal Sensitivity Model as special cases. As a corollary, we obtain a simpler characterization of bounds under Masten and Poirier (2018)’s conditional c-dependence model.
Our work is related to the recent literature on sensitivity analysis for IPW estimators, which relates to our first application. A sensitivity analysis is an approach to partial identification that begins from assumptions that point-identify the causal estimand of interest and then considers increasing relaxations of those assumptions (Molinari, 2020). Our analysis is an extension of Dorn and Guo (2023)’s sharp characterization of bounds under Tan (2006)’s marginal sensitivity model. Tan (2022) and Frauen et al. (2023) previously extended this characterization to families that bound the Radon-Nikodym derivative of interest. We generalize these results to also include unbounded Radon-Nikodym derivatives, so that we can include a compact characterization of bounds under Masten and Poirier (2018)’s conditional c-dependence model as a special case. There is rich work in this literature under other sensitivity assumptions like $f$-divergences and Total Variation distance. These other assumptions also fit within our framework, because our target distribution constructions are independent of the $L_{\infty}$ sensitivity assumptions that we analyze.
This family has several advantages. The restrictions on $\frac{d\mathbb{Q}}{d\mathbb{P}^{\textup{Obs}}}$ decouple across values of $R$, enabling tractable characterizations of the identified set. The family nests both strong observational assumptions and Manski-type bounds as limits. Point identification corresponds to the case $\underline{w}(R)=\bar{w}(R)=1$ almost surely. Manski-type bounds that only restrict the support of $Y$ correspond to $\bar{w}(R)=\infty$ with domain-appropriate $\underline{w}(R)$. In between, the causal estimand is only partially identified. When the outcome $Y$ is binary, the restriction can equivalently be viewed as a restriction on the conditional mean of $Y\mid R$. We show below that, as in the Tan (2006) model that inspired this generalization, the resulting bounds are highly tractable for estimating sharp and valid bounds.
B
We build a comprehensive ECG dataset to evaluate various deep learning algorithms. The dataset consists of 220,251 recordings with 28 common ECG diagnoses annotated by medical experts and significantly surpasses the sample size of publicly available ECG datasets.
After pre-training, we fine-tune the pre-trained encoder on the same dataset. For fine-tuning, we also use the AdamW optimizer and the cosine learning rate schedule. The default hyperparameters include
3. Strong Pre-training and Fine-tuning Recipe: We conduct comprehensive experiments to explore the training strategies on the proposed ECG dataset. The key components contributing to the proposed method are presented, including the masking ratio,
In the ablation study, we explore the properties of important components of the proposed method on the Fuwai dataset, and report the macro F1 score on the validation set.
We conduct experiments across three different settings, indicated as Fuwai, PTB-XL, and PCinC in Table 2. For the two-stage methods, including CLECG, MaeFE, CRT and MTECG-T, we develop algorithms as follows. In the first setting, we pre-train and fine-tune the models on the training set of the Fuwai dataset. In the second setting, we pre-train the models on the PCinC dataset, excluding PTB-XL, and then fine-tune them on the training set of PTB-XL. In the third setting, both pre-training and fine-tuning are performed on the training set of the PCinC dataset. For the single-stage method, i.e., BaT, we train the model from scratch on the training set of Fuwai, PTB-XL and PCinC, respectively.
B
$\Pi_{k}^{m-1}$, see the considerations in Subsection 5.3.
We now estimate the conditional expectation with respect to the given $u^{(m)}$, separately for the three terms in (22). Here and in the following we denote this conditional expectation by $\mathbb{E}^{\prime}$.
For the third term, by (10) and the definition (16) of the noise variance $\sigma_{H}^{2}$, we have
Michael Griebel and Peter Oswald were supported by the Hausdorff Center for Mathematics in Bonn, funded by the Deutsche Forschungsgemeinschaft (DFG, German Research
Foundation) under Germany’s Excellence Strategy - EXC-2047/1 - 390685813 and the CRC 1060 The Mathematics of Emergent Effects of the Deutsche Forschungsgemeinschaft.
C
Sensitivity Analysis. We perform sensitivity analysis on Heckman-FA by testing the approach over different values for the number of epochs $T$, fixed initial value $c$, and number of Gumbel-Softmax samples $B$ drawn during assignment extraction. Table II gives the testing MSE of Heckman-FA across different values of $T$ and $c$ while fixing $B=1{,}000$. For most combinations of $T$ and $c$ listed in Table II, the testing MSEs of Heckman-FA are almost equal to each other for both datasets. We have a similar observation for each combination of $T$ and $B$ as shown in the right three columns of Table VII in the Appendix. These results show that Heckman-FA is not sensitive to changes in how $\bm{\pi}$ is initialized and the number of Gumbel-Softmax samples examined during extraction.
Execution Time. We report the execution time after running Heckman-FA across different values of $T$ and $B$ in the left three columns of Table VII in the Appendix. For both datasets, Heckman-FA runs fast for each combination of $T$ and $B$.
Sensitivity Analysis. We perform sensitivity analysis on Heckman-FA by testing the approach over different values for the number of epochs $T$, fixed initial value $c$, and number of Gumbel-Softmax samples $B$ drawn during assignment extraction. Table II gives the testing MSE of Heckman-FA across different values of $T$ and $c$ while fixing $B=1{,}000$. For most combinations of $T$ and $c$ listed in Table II, the testing MSEs of Heckman-FA are almost equal to each other for both datasets. We have a similar observation for each combination of $T$ and $B$ as shown in the right three columns of Table VII in the Appendix. These results show that Heckman-FA is not sensitive to changes in how $\bm{\pi}$ is initialized and the number of Gumbel-Softmax samples examined during extraction.
We also run a paired $t$-test on 10 different prediction feature assignments to analyze the significance of comparing Heckman-FA to the other baselines. Table VI in the Appendix shows results of the test. We find that the p-value is very small after running the hypothesis test on both datasets. Given that Heckman-FA significantly outperforms Naive and RU, the results in the two tables show that Heckman-FA outputs a robust regression model under MNAR sample selection bias.
We also consider the complexity of Heckman-FA*. Similar to Heckman-FA, we first see that $\psi$ is trained in $O(nKT)$ time when running Heckman-FA*. However, the complexity of extraction is different for Heckman-FA* than for Heckman-FA. Since the Heckman model is called for $K-1$ sets of selection features, the extraction process runs in $O(m(K-1))$ time. Thus Heckman-FA* runs in $O(nKT+m(K-1))$ time.
A
Originally motivated for solving the variable selection problem in linear regression, spike-and-slab priors (or, “discrete spike-and-slab”, Tadesse and Vannucci 2021) have the marginal form of a two-component mixture for each parameter element: one component (spike) from a point mass at zero, and the other (slab) from a continuous distribution (Mitchell and Beauchamp, 1988); the continuous elements can be independent or dependent a priori.
Since one could reparameterize a discrete spike-and-slab prior as a special case of the L1-ball prior by setting $\kappa$ according to a quantile of $\pi_{0}(\beta)$, we expect our geometric ergodicity result could be extended to analyzing the ODA algorithm. In comparison, our algorithm has the advantages of not having to create an augmented design matrix and of easy application in non-linear models. We expect that some nice properties further exploited in the ODA algorithm, such as the collapsed sampling step based on marginalizing out $\theta_{j}$ given $b_{j}=1$ or $0$, could be obtained in some specific forms of latent Gaussian models after we augment the anti-correlation Gaussian.
With the rich literature, there is a recent interest in structured sparsity (Hoff, 2017; Griffin and Hoff, 2023) that has inspired new extensions of sparsity priors. Specifically, the sparsity is “structured” in the sense that: (i) the occurrences of zeros could be dependent, according to some temporal, spatial, or group structure; (ii) the non-zeros could have some correlation structure, such as smoothness over some spatial domain. We now provide a few examples. In the task of change-point detection, one may model a time series as having the mean increments to be sparse over a continuous period of time so that the mean function would become a step function (Tibshirani et al., 2005; Betancourt et al., 2017). In the scalar-on-image regression, one may model the regression coefficients to be spatially smooth for those non-zeros, while being zero over several continuous regions (Kang et al., 2018). For these models, a critical computational challenge arises that the above existing algorithms for spike-and-slab priors cannot be applied directly here, due to the lack of conjugacy or fixed variance for latent Gaussian. Because of the correlations among the elements of the parameter, updating the full conditional of one element at a time suffers from slow mixing of Markov chains.
For linear regression with Gaussian errors, very efficient Markov chain Monte Carlo (MCMC) algorithms have been developed. When the slab prior distribution follows a Gaussian, the Stochastic Search Variable Selection (SSVS) algorithm (George and McCulloch, 1995) exploits the posterior conjugacy and samples from the marginal posterior of the binary inclusion variables. SSVS is a Gibbs sampler that tries to flip each binary inclusion variable one at a time from a Bernoulli full conditional distribution; at the end of each iteration, it samples the regression coefficients given the vector of binary inclusion variables. As an alternative to using the marginal posterior, the Orthogonal Data Augmentation (ODA) algorithm (Ghosh and Clyde, 2011) introduces an augmented design matrix (along with augmented responses) to append the observed design matrix, such that the Gram matrix becomes diagonal hence easily invertible, enabling the use of a two-block Gibbs sampler. The ODA algorithm can be extended to some generalized linear models if the latent Gaussian has a fixed variance, such as probit regression (Albert and Chib, 1993). Further, for design matrices that are high-dimensional or contain highly correlated predictors, various algorithms have been proposed, such as the shotgun algorithm (Hans et al., 2007), parallel tempering (Bottolo and Richardson, 2010), correlation-based search (Kwon et al., 2011), and the two-parameter flipping Metropolis-Hastings algorithm under a $g$-prior slab (Yang et al., 2016). In the meantime, there is a comparably vast literature on continuous shrinkage priors with excellent performance for the task of variable selection (George and McCulloch, 1995; Rocková and George, 2018; Polson and Scott, 2010; Carvalho et al., 2010; Piironen and Vehtari, 2017; Armagan et al., 2013; Bhattacharya et al., 2015; Bai and Ghosh, 2019). Since our focus is on the posterior with exact zeros, for brevity, we will skip the details of continuous shrinkage.
Focusing on the computational aspect, the soft-thresholding transform is differentiable almost everywhere with respect to $\pi_{0}^{\beta}$. This means we can use off-the-shelf gradient-based MCMC algorithms (Duane et al., 1987; Girolami and Calderhead, 2011; Hoffman and Gelman, 2014; Livingstone and Zanella, 2022) for its posterior estimation. The strength of these algorithms, besides low implementation cost due to the good accessibility of software (Carpenter et al., 2017; Bingham et al., 2019), is in the rapid convergence to the region near the posterior mode and a high acceptance rate for changing the zero/non-zero status of multiple elements at the same time. On the other hand, notice that if the likelihood is parameterized via $\theta$ and $\kappa$ but not dependent on $\beta$ a priori, then at a state with some $\theta_{j}=0$, the partial derivative of the log-posterior density with respect to $\beta_{j}$ is zero. As a consequence, algorithms relying on one-step diffusion [such as the Metropolis-adjusted Langevin algorithm (MALA) (Rossky et al., 1978)] would not be efficient at exploring changes of $\theta_{j}$ to a non-zero state. That is why multiple-step diffusion algorithms such as Hamiltonian Monte Carlo (Neal, 2011) and the No-U-Turn sampler (Hoffman and Gelman, 2014) are preferable.
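A minimal NumPy sketch of a componentwise soft-thresholding map of the kind discussed here, which produces exact zeros while remaining differentiable almost everywhere; the scalar threshold is an illustrative simplification of the L1-ball prior's vector-level transform.

```python
import numpy as np

def soft_threshold(beta, kappa):
    """theta_j = sign(beta_j) * max(|beta_j| - kappa, 0): exact zeros, a.e. differentiable."""
    return np.sign(beta) * np.maximum(np.abs(beta) - kappa, 0.0)

def soft_threshold_jacobian_diag(beta, kappa):
    """Almost-everywhere derivative d theta_j / d beta_j: 0 on the shrunk entries, 1 elsewhere."""
    return (np.abs(beta) > kappa).astype(float)

beta = np.array([-1.3, -0.2, 0.05, 0.7, 2.1])
print(soft_threshold(beta, kappa=0.5))           # [-0.8  0.   0.   0.2  1.6]
print(soft_threshold_jacobian_diag(beta, 0.5))   # [1. 0. 0. 1. 1.]
```

Note how the zero partial derivatives on the shrunk entries illustrate why a one-step diffusion proposal would struggle to move those coordinates away from zero.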
C
McCulloch (1997), and Hoeting et al. (1999) for more details and references therein. From a frequentist perspective, several attractive strategies have been proposed to combine models, including boosting (Freund, 1995), bagging (Breiman, 1996), random forest (Amit and
Claeskens, 2003), adaptive regression by mixing (Yang, 2001; Yuan and Yang, 2005), exponentially weighted aggregation (Leung and
However, it has been increasingly recognized that choosing just one model inherently ignores possibly high uncertainty in the selection process (Chatfield, 1995; Draper, 1995; Yuan and Yang, 2005). Model averaging (MA), on the other hand, provides an alternative to reduce the variability in MS while offering a possibility of reducing modeling bias by averaging over the candidate models properly.
In statistical modeling, multiple candidate models are usually considered to explore the data. Model selection (MS) guides us in the search for the best model among candidates based on a traditional selection criterion, such as AIC (Akaike, 1973), $C_{p}$ (Mallows, 1973), and BIC (Schwarz, 1978), the use of cross-validation (Allen, 1974; Stone, 1974), and solving a penalized regression problem, such as Lasso (Tibshirani, 1996), adaptive Lasso (Zou, 2006), SCAD (Fan and Li, 2001), and MCP (Zhang, 2010). The key theoretical properties of these methods, namely consistency in selection, asymptotic efficiency, and minimax-rate optimality, have been well established in the literature (see Claeskens and
Condition 1 includes the case $\theta_{j}=j^{-\alpha_{1}}$ for $\alpha_{1}>1/2$, which serves as the principal example in the MA literature since in this case, the optimal MA risk can significantly reduce the optimal MS risk (Peng and Yang, 2022). In contrast, the coefficients satisfying Condition 2 decay much faster. An example is the exponentially decaying coefficients $\theta_{j}=\exp(-j^{\alpha_{2}})$ for some $\alpha_{2}>0$. In this scenario, the asymptotic improvement of MA over MS is negligible (Peng and Yang, 2022).
A
$Y=\max\left(\mathbb{1}[X_{5}=3\text{ and }X_{4}=1],\ \mathbb{1}[X_{5}\neq 4\text{ and }X_{2}\neq 3]\right)$ for the same covariates in Monk 1. Also, 5% label noise is added. Here, $X_{2}$, $X_{4}$, and $X_{5}$ are relevant.
To identify which genes are stably important across good models, we evaluated this dataset using RID over the model class of sparse decision trees using subtractive model reliance. We selected 14,614 samples (all 7,307 high HIV load samples and 7,307 random low HIV load samples) from the overall dataset in order to balance labels, and filtered the complete profiles down to the top 100 variables by individual AUC. We consider the binary classification problem of predicting high versus low HIV load. For full experimental details, see Section D of the supplement. Section E.5 of the supplement contains timing experiments for RID using this dataset.
We compare the ability of RID to identify extraneous variables with that of the following baseline methods, whose details are provided in Section D of the supplement: subtractive model reliance $\phi^{\text{sub}}$ of a random forest (RF) [6], LASSO [20], boosted decision trees [16], and generalized optimal sparse decision trees (GOSDT) [26];
Several methods for measuring the MR of a model from a specific model class exist, including the variable importance measure from random forest which uses out-of-bag samples [7] and Lasso regression coefficients [20]. Lundberg et al. [28] introduce a way of measuring MR in tree ensembles using SHAP [27]. Williamson et al. [48] develop MR based on the change in performance between the optimal model and the optimal model using a subset of features.
To create the uncertainty interval on the training dataset and for each method, we first find the subtractive model reliance $\phi^{(sub)}$ across 500 bootstrap iterations of a given dataset for the four algorithms shown in Figure 3 (bottom) (baseline results without bootstrapping are in Section E of the supplementary material). Additionally, we find the VIC for the Rashomon set of GOSDTs on the original dataset. We summarize these model reliances (500 bootstraps $\times$ 28 variables across datasets $\times$ 4 algorithms + 8,247 models in VICs + 10,840,535 total models across Rsets $\times$ 28 variables from RID) by computing their box-and-whisker ranges (1.5 $\times$ interquartile range [46]). To compare with “ground truth,” we sample 500 test datasets from the DGP and calculate $\phi^{(sub)}$ for the DGP for that dataset. For example, assume the DGP is $Y=X^{2}+\varepsilon$. We would then use $f(X)=X^{2}$ as our predictive model and evaluate $\phi^{(sub)}(f,\mathcal{D}^{(n)})$ on $f$ for each of the 500 test sets. We then check if the box-and-whisker range of each method’s interval constructed on the training set contains the computed $\phi^{(sub)}$ for the DGP for each test dataset. Doing this allows us to understand whether our interval contains the true $\phi^{(sub)}$ for each test set.
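A minimal scikit-learn sketch of the bootstrap portion of this procedure for one baseline: refit a random forest on each bootstrap resample and record a permutation-based subtractive model reliance for every variable. The loss, the number of bootstraps, and the permutation-based definition of $\phi^{(sub)}$ are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def subtractive_reliance(model, X, y, j, rng):
    """Loss with variable j permuted minus the original loss (misclassification error)."""
    base = 1.0 - model.score(X, y)
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    return (1.0 - model.score(X_perm, y)) - base

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # only the first two variables matter

reliances = np.zeros((100, X.shape[1]))            # 100 bootstrap iterations
for b in range(100):
    idx = rng.integers(0, len(X), size=len(X))     # bootstrap resample of the dataset
    rf = RandomForestClassifier(n_estimators=50, random_state=b).fit(X[idx], y[idx])
    for j in range(X.shape[1]):
        reliances[b, j] = subtractive_reliance(rf, X[idx], y[idx], j, rng)

print(np.percentile(reliances, [25, 50, 75], axis=0))  # per-variable box ranges
```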
B
Since the inception of the field, a strong parallelism has been drawn between PQC-based QML models and kernel methods [15, 14, 21].
Yet, unlike neural networks, kernel methods reach the solution by solving a linear optimization task on a larger feature space, onto which input data is mapped.
The data input is mapped onto the “quantum feature space” of quantum density operators via a quantum embedding.
Kernel methods solve ML tasks as linear optimization problems on large feature spaces, sometimes implicitly.
Given the data distribution, the ultimate goal is to find a map onto a feature space where the problem becomes solvable by a linear model.
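As a concrete classical stand-in for this picture, the sketch below fits a kernel ridge model: the nonlinear problem in input space is solved as a linear (regularized least-squares) problem in the feature space implicitly defined by a hand-coded kernel. The kernel, data, and regularization strength are illustrative only, and no quantum embedding is involved.

```python
import numpy as np

def poly_kernel(X1, X2, eps=0.5):
    # K(x, x') = x.x' + eps^2 (x.x')^2 -- a simple kernel with an explicit feature space
    G = X1 @ X2.T
    return G + eps**2 * G**2

def kernel_ridge_fit(X, y, lam=1e-3, eps=0.5):
    K = poly_kernel(X, X, eps)
    # the learning problem is linear in feature space: solve (K + lam I) alpha = y
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

def kernel_ridge_predict(alpha, X_train, X_new, eps=0.5):
    return poly_kernel(X_new, X_train, eps) @ alpha

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] * X[:, 1] > 0).astype(float)   # target nonlinear in the raw inputs
alpha = kernel_ridge_fit(X, y)
pred = kernel_ridge_predict(alpha, X, X)
print("train MSE:", np.mean((pred - y) ** 2))
```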
A
This paper establishes an explicit link between MA and shrinkage in a multiple model setting, which significantly enhances the previous understanding of the relationship between MA and shrinkage in the two-model settings. It is revealed that the MMA estimator can be viewed as a variant of the positive-part Stein estimator, as both are derived from the principle of URE. The key distinction lies in the optimization approach: MMA minimizes the principle of URE within a unit simplex, whereas the latter operates within a more relaxed weight set. Building upon the established connections, we extend the penalized blockwise Stein rule to the linear regression setting to develop the asymptotically optimal MA estimators. We provide some specific candidate model sets on which the proposed Stein-type MA estimator achieves the performance of the optimal convex combination of all the nested models (i.e., the full asymptotic optimality). The improvement of the proposed Stein-type MA over the existing MA approaches is illustrated theoretically and numerically. Note that a limitation of our Stein-type MA method is that it requires the variance of the error terms to be known. Thus, extending our results to the case of unknown variance is a pressing topic for future research.
Despite the extensive theoretical work and wide applications of MA, there is a commonly held viewpoint that MA is essentially a shrinkage estimator, and that other shrinkage methods can also achieve the objectives of MA. This view has been substantiated by several studies. For instance, the results in Section 5.1 of Kneip, (1994) indicate that combining two linear smoothers by minimizing Mallows’ Cpsubscript𝐶𝑝C_{p}italic_C start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT yields a James-Stein estimator. The relationship between Mallows model averaging (MMA) and Stein shrinkage estimation has been further explored in Blaker, (1999), Hansen, (2007), and Hansen, (2014) in the context of two nested models. In a semiparametric regression setting, Ullah et al., (2017) established the connection between MA and ridge shrinkage on the basis of the orthogonal model. Additionally, in a Gaussian location model, Green and Strawderman, (1991) proposed a James-Stein type estimator to estimate the best linear combination of two independent biased and unbiased estimators. The methodology in Green and Strawderman, (1991) has been further explored by Kim and White, (2001), Judge and Mittelhammer, (2004), and Mittelhammer and Judge, (2005). More recently, Hansen, (2016) proposed a Stein method to combine the restricted and unrestricted maximum likelihood estimators in a local asymptotic framework, and showed the asymptotic risk of this shrinkage estimator is strictly less than that of the maximum likelihood estimator.
This paper addresses the previously mentioned questions in a general linear model setting with multiple nested candidate models. The main contribution is twofold. First, we demonstrate that the optimal MA estimator is equivalent to the optimal linear estimator with monotonically non-increasing weights in a specific Gaussian sequence model. And the MMA estimator (Hansen,, 2007), which targets the optimal MA risk, can be regarded as a variation of the sum of a set of positive-part Stein estimators from multiple mutually orthogonal subspaces. Specifically, both the MMA estimator and the positive-part Stein estimator share the common objective of minimizing unbiased risk estimation, albeit within different weight constraints. Second, we introduce a novel MA procedure to achieve asymptotic optimality by adapting the blockwise Stein rules from prior works (Donoho and Johnstone,, 1995; Nemirovski,, 1998; Cavalier and Tsybakov,, 2001) to linear regression. In particular, when the candidate model set is properly constructed, this Stein-type MA estimator achieves the full potential of MA (i.e., the minimal MA risk over all the nested models) in a sufficiently large parameter space. The results of finite sample simulations support our theories.
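To make the MMA side of this connection concrete, the following sketch computes Mallows model averaging weights over nested linear candidate models by minimizing the Mallows criterion over the unit simplex, assuming the error variance $\sigma^{2}$ is known (as in the paper's setting). It is a generic MMA implementation, not the Stein-type estimator proposed here.

```python
import numpy as np
from scipy.optimize import minimize

def mma_weights(X, y, sigma2):
    """Mallows model averaging over nested candidate models (minimal sketch)."""
    n, p = X.shape
    fits, ks = [], []
    for k in range(1, p + 1):                       # nested model k uses the first k regressors
        Xk = X[:, :k]
        beta = np.linalg.lstsq(Xk, y, rcond=None)[0]
        fits.append(Xk @ beta)
        ks.append(k)
    F = np.column_stack(fits)                       # n x M matrix of candidate fitted values
    ks = np.array(ks, dtype=float)

    def mallows(w):                                 # Mallows' Cp criterion for the weighted fit
        resid = y - F @ w
        return resid @ resid + 2.0 * sigma2 * (w @ ks)

    M = F.shape[1]
    w0 = np.full(M, 1.0 / M)
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    res = minimize(mallows, w0, bounds=[(0.0, 1.0)] * M,
                   constraints=cons, method="SLSQP")  # restrict weights to the unit simplex
    return res.x
```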
This paper establishes an explicit link between MA and shrinkage in a multiple model setting, which significantly enhances the previous understanding of the relationship between MA and shrinkage in the two-model settings. It is revealed that the MMA estimator can be viewed as a variant of the positive-part Stein estimator, as both are derived from the principle of URE. The key distinction lies in the optimization approach: MMA minimizes the principle of URE within a unit simplex, whereas the latter operates within a more relaxed weight set. Building upon the established connections, we extend the penalized blockwise Stein rule to the linear regression setting to develop the asymptotically optimal MA estimators. We provide some specific candidate model sets on which the proposed Stein-type MA estimator achieves the performance of the optimal convex combination of all the nested models (i.e., the full asymptotic optimality). The improvement of the proposed Stein-type MA over the existing MA approaches is illustrated theoretically and numerically. Note that a limitation of our Stein-type MA method is that it requires the variance of the error terms to be known. Thus, extending our results to the case of unknown variance is a pressing topic for future research.
The unveiled connections between MA and shrinkage offer the possibility of novel methodological developments in the area of MA. The focus of this paper has been on a linear regression setting. It is of great interest to bridge the gap between MA and shrinkage in the generalized linear model setting, and then apply the Stein estimators in some general distribution families (e.g., see Chapter 5 of Hoffmann,, 2000, for a review) to combine models. In addition, given the approximate and exact distributions of the Stein-type estimators (Ullah,, 1982; Phillips,, 1984) and the techniques of constructing Stein-type confidence intervals (Hwang and Casella,, 1982; He,, 1992), it is greatly desirable to conduct inference for the asymptotically optimal MA estimators. Note that Hansen, (2014) and Zhang and Liu, (2019) have previously investigated the inference of MA but without the asymptotic optimality results. Another research direction is building a unified theory for Bayesian and frequentist MA. Indeed, the BIC weighting method considered in the frequentist literature (Buckland et al.,, 1997; Hjort and Claeskens,, 2003) can be seen as an approximation of Bayesian MA. We conjecture that the asymptotically optimal MA estimator may also have a Bayesian interpretation since the Stein-type estimation is essentially an empirical Bayes approach (see, e.g., Efron and Morris,, 1973). We leave these for future work.
D
Mathematically, consider a sample $\ddot{Y}_{1}(t_{i}),\ddot{Y}_{2}(t_{i}),\ldots,\ddot{Y}_{D}(t_{i})$ of multivariate time-series for $D$ metocean variables observed at regularly-spaced time points $t_{i}$, $i=1,2,3,\ldots$ over some period. Data preparation requires the isolation of a sample of values $\{\dot{y}_{i1},\dot{y}_{i2},\ldots,\dot{y}_{iD}\}_{i=1}^{N}$ of $N$ storm peak events (by convention for the first variable) and associated events (for the remaining variables) summarising the peak characteristics of each of the $N$ storms observed, and corresponding storm peak covariate values $\{x_{i1},x_{i2},\ldots,x_{iC}\}_{i=1}^{N}$ for $C$ storm peak covariates $X_{1},X_{2},\ldots,X_{C}$ defined on some domain $\mathcal{X}$. (Note that the “double dot” notation (e.g. $\ddot{Y}$) indicates time-series variables from which storm peak events, indicated by “single dot” notation (e.g. $\dot{Y}$), must be isolated.) Storm peak events are identified as local maxima (between successive up- and down-crossings of a given level) of $\ddot{Y}_{1}$. Associated values for a storm are the values of the other random variables at the time of occurrence of the storm peak event.
Storm peak and associated values are taken to be conditionally-independent given covariates, in the sense that $\dot{y}_{id}$ can be viewed as an independent draw from $\dot{Y}_{d}\,|\,(X_{1}=x_{i1},X_{2}=x_{i2},\ldots,X_{C}=x_{iC})$, for $i=1,2,\ldots,N$, $d=1,2,\ldots,D$.
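A minimal sketch of the storm-peak isolation step described above: find the maximum of the first variable between each up-crossing and the following down-crossing of a user-chosen level, and read the associated values of the other series at the same time indices. The function name and interface are illustrative, not part of the covXtreme software.

```python
import numpy as np

def storm_peaks(y, threshold):
    """Indices and values of storm-peak events: the maximum of y between each
    up-crossing and the following down-crossing of the threshold."""
    above = (y > threshold).astype(int)
    crossings = np.diff(above)                 # +1 marks an up-crossing, -1 a down-crossing
    ups = np.where(crossings == 1)[0] + 1
    downs = np.where(crossings == -1)[0] + 1
    peaks = []
    for u in ups:
        d = downs[downs > u]
        if len(d) == 0:                        # storm still in progress at the end of the record
            break
        seg = slice(u, d[0])                   # indices where the series exceeds the threshold
        peaks.append(u + int(np.argmax(y[seg])))
    peaks = np.array(peaks, dtype=int)
    return peaks, y[peaks]

# associated values for other metocean series are simply read at the peak times:
#   associated = other_series[peaks]
```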
Note that covXtreme also provides functionality to simulate data with known characteristics for checking of the performance of the statistical methodology.
In addition we require that the statistical model also describes the joint tail of all metocean variables in general. However the nature of extremal dependence between different metocean variables is generally unknown. The specification of the statistical model therefore needs to be sufficiently general to admit different extents of extremal dependence, the specifics of which are then estimated by fitting the model to data. The conditional extremes model of Heffernan and Tawn (2004) is an attractive candidate, because it admits different classes of extremal dependence, it has a simple form and is relatively easily interpretable. There is also evidence that the nature and extent of extremal dependence also varies systematically with covariates (e.g. Jonathan et al. 2013, Ross et al. 2018, Shooter et al. 2021).
The objective of the current article is to provide motivation and description of the covXtreme software, and illustrations of its use in the development of design conditions for ocean engineering. The layout of the article is as follows. Section 2 provides an overview of the software and the statistical methodology on which it is based. Sections 3 and 4 present case studies, involving a bivariate response and single covariate (Section 3), and trivariate response with 2-D covariate (Section 4). An accompanying user guide (available at Towe et al. 2023b) provides a detailed step-by-step description of developing a covXtreme model for ocean engineering data sets provided with the software.
The covXtreme methodology makes a number of simplifying assumptions, motivated by the authors’ experience of extreme value analysis applied to the ocean environment using a range of methodologies of different complexities. For example, covXtreme relies on sensible user-specified partitioning of the covariate domain into bins within which it is reasonable to assume common marginal tails and a common dependence structure; this simplifies inference considerably compared with competitor approaches. Moreover, we believe that inferences using covXtreme with good partitioning are competitive with alternatives using more sophisticated tools. Specifically, marginally, covXtreme is equivalent to a Voronoi set representation with pre-specified covariate partition, which was demonstrated by Zanini et al. (2020) to be competitive with P-spline and Bayesian adaptive regression spline covariate representations. covXtreme further assumes that the generalised Pareto shape parameter $\xi$ in each marginal model is constant with covariate; because of the relative difficulty of estimating the shape parameter compared with the scale, this would appear reasonable in the absence of strong evidence to the contrary, especially for small samples of data. Likewise, the $\beta$, $\mu$ and $\sigma$ parameters of the conditional extremes model are assumed stationary. This appears reasonable since $\beta$ is an exponent, again difficult to estimate. Moreover, $\mu$ and $\sigma$ are essentially nuisance parameters; any model misspecification caused by the assumption of stationarity will be accommodated to some extent by the adoption of residuals from model fitting for inferences under the model. It might be appropriate to relax some of the assumptions for specific applications, for example (a) when there is strong evidence that the generalised Pareto $\xi$ is unlikely to be constant (e.g. due to land shadow and fetch limitation effects on $H_{S}$), or (b) since parameter estimates for conditional extremes $\alpha$ and $\mu$ are highly correlated when $\beta$ is close to unity. We also note that it is not clear whether non-stationary marginal extreme value analysis is necessarily the best approach to estimate marginal return values from non-stationary data. Provided that a sufficient sample is available, so that a sufficiently high threshold for peaks-over-threshold analysis can be set, a stationary marginal analysis is often at least competitive if not preferable; see Mackay and Jonathan (2020) for further discussion.
A
Under mild regularity conditions on the density of the considered generative models, we prove the stability of iterative retraining provided that the initial generative model is close enough to the real data distribution and that the proportion of real data used in each retraining round is sufficiently large (Theorems 1 and 2).
We empirically validate our theory through iterative retraining on CIFAR10 and FFHQ using powerful diffusion models in OTCFM, DDPM, and EDM.
We then prove in Theorem 2 that, with high probability, iterative retraining remains within a neighborhood of the optimal generative model in parameter space when working in the stable regime. Finally, we substantiate our theory on both synthetic datasets and high dimensional natural images on a broad category of models that include continuous normalizing flows  (Chen et al., 2018) constructed using a conditional flow-matching objective (OTCFM, Tong et al. 2023), Denoising Diffusion Probabilistic Models (DDPM, Ho et al. 2020) and Elucidating Diffusion Models (EDM, Karras et al. 2022).
We perform experiments on synthetic toy data as found in Grathwohl et al. (2018), CIFAR-10 (Krizhevsky and Hinton, 2009), and FlickrFacesHQ $64\times 64$ (FFHQ-64) datasets (Karras et al., 2019). For deep generative models, we conduct experiments with continuous normalizing flows (Chen et al., 2018) constructed using a conditional flow-matching loss (CFM (Lipman et al., 2022; Tong et al., 2023)) and two powerful diffusion models in Denoising Diffusion Probabilistic Models (DDPM, Ho et al. 2020) and Elucidating Diffusion Models (EDM, Karras et al. 2022) where we relied on the original codebases torch-cfm (https://github.com/atong01/conditional-flow-matching), ddpm-torch (https://github.com/tqch/ddpm-torch) and edm (https://github.com/NVlabs/edm) for faithful implementations.
Our main contribution is showing that if the generative model initially trained on real data is good enough, and the iterative retraining is made on a mixture of synthetic and real data, then the retraining procedure Algorithm 1 is stable (Theorems 1 and 2). Additionally, we validate our theoretical findings (Theorems 1 and 2) empirically on natural image datasets (CIFAR-10 and FFHQ-64) with various powerful generative models (OTCFM, DDPM, and EDM).
A
The objective function in (1) linearly approximates electric losses. Eq. (2)-(5) describe the Linearized DistFlow model [1], which assumes lossless power balance (2)-(3) and approximates Ohm’s Law as a linear relationship between voltages and power (4)-(5). Eq. (5) accommodates switches in the model with a conditional constraint where Ohm’s Law is enforced for closed switches. The big-$\mathcal{M}$ constraint in (6) and (7) enforces power flows through open switches to be zero. Eq. (8) describes the nodal injection constraints. Eq. (9) sets the voltage constraints and the slack bus voltage. Eq. (10)-(11) describe radiality and connectivity constraints required for distribution grids under normal operations. We assume existing protection schemes are used which require radiality of the grid topology; we then enforce radiality in the reconfiguration problem. Note that (11) is not sufficient to enforce connectivity. To maintain a simple problem formulation, we leverage the fact that the power flow constraints implicitly enforce connectivity: a load cannot be supplied if it is disconnected from the grid.
The GNN models the distribution grid topology as an undirected graph, with switch embeddings modeling the switches in the electrical grid. The GNN’s message passing layers incorporate these embeddings as gates, which enables GraPhyR to learn the representation of linearized Ohm’s law of (5) across multiple topologies in a physics-informed way. The input to the GNN are the grid topology and nodal loads, and the output is a set of node and switch embeddings which will be used to make reconfiguration, power flow, and voltage predictions.
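One possible reading of this gated message passing, as a minimal PyTorch sketch: messages along switch edges are multiplied by a sigmoid gate computed from the switch embedding, while messages along fixed lines pass unchanged. The specific layer structure (GRU node update, scalar gates) is an assumption for illustration and is not claimed to match the GraPhyR implementation.

```python
import torch
import torch.nn as nn

class GatedMPLayer(nn.Module):
    """One message-passing layer where messages over switch edges are gated."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)   # message built from (sender, receiver) embeddings
        self.gate = nn.Linear(dim, 1)        # scalar gate from the switch embedding
        self.upd = nn.GRUCell(dim, dim)      # node update

    def forward(self, h, edge_index, switch_emb, is_switch):
        # h: (n, dim); edge_index: (2, E) long; switch_emb: (E, dim); is_switch: (E,) bool
        src, dst = edge_index
        m = torch.relu(self.msg(torch.cat([h[src], h[dst]], dim=-1)))
        g = torch.sigmoid(self.gate(switch_emb)).squeeze(-1)
        g = torch.where(is_switch, g, torch.ones_like(g))   # only switch edges are gated
        m = m * g.unsqueeze(-1)
        agg = torch.zeros_like(h).index_add_(0, dst, m)     # sum incoming messages per node
        return self.upd(agg, h)
```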
We propose GraPhyR, a physics-informed machine learning framework to solve (1)-(11). Our framework in Fig. 1 features four architectural components: (A) gated message passing to model switches, (B) local predictions to scale across nodes, (C) physics-informed rounding to handle binary variables, and (D) topology input data for adaptability during online deployment. We embed the physics of the distribution grid and reconfiguration problem within each component of the GraPhyR framework. First, the GNN embeds the topology of the underlying distribution grid, and explicitly models the switches using gated message passing. Second, the topology selection embeds the discrete open/close decision of the switches using the physics-informed rounding. Third, we use the power flow equations to predict a subset of variables (denoted as the independent variables), and compute the remaining variables in a recovery step. The GraPhyR framework uses these physics-informed layers to learn to optimize the reconfiguration task while satisfying equality and binarity constraints. The framework is presented in detail next.
After the $\mathcal{L}$ message passing layers, the embeddings extracted from the input data are used to predict the switch open/close status and a subset of the power flow variables, denoted as independent variables.
Our local predictors exploit the full flexibility of GNNs. They are permutation invariant to the input graph data; are independent of the size of the graph (scale-free); and are smaller than the corresponding global predictor for the same grid. The first feature means our framework is robust to changes in input data. The last two features mean our framework is lightweight and scalable. This would not be possible with a global predictor which predicts all independent variables from node and switch embeddings across the graph. The size of the input and output layers of a global predictor would depend on the size of the graph and the number of switches, which is the limitation in [3]. Table I summarizes the size of local and global predictors for the reconfiguration problem, where $h$ is the dimension of the hidden graph embeddings.
B
Our second example considers a classical dataset of wind catastrophes taken from Hogg and Klugman, (1984, p. 64). It represents 40 losses (in million U.S. dollars) due to wind-related disasters. Data are reported to the nearest million, including only losses of 2 million or more.
Thus, there is no concentration of mass and the family of Pareto distributions takes over the role of the exponential distributions in the first scenario.
This is not a contradiction to the above remark: the fact that the goodness-of-fit tests do not reject the hypothesis of a Pareto distribution does not prove that the hypothesis holds.
Brazauskas and Serfling, (2003) and Rizzo, (2009) proposed goodness-of-fit tests for the Pareto model, applied them to the de-grouped wind catastrophes data, and concluded that there was no evidence against the model.
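For reference, the Pareto tail fit used in such analyses reduces to a one-line maximum likelihood estimate of the shape parameter once a threshold is fixed. The sketch below uses synthetic loss values purely for illustration (they are not the Hogg and Klugman data), and the threshold choice is arbitrary.

```python
import numpy as np

def pareto_shape_mle(x, u):
    """MLE of the Pareto tail index alpha for exceedances of threshold u:
    alpha_hat = n_u / sum(log(x_i / u)) over observations x_i > u."""
    exc = x[x > u]
    return len(exc) / np.sum(np.log(exc / u))

# synthetic wind-catastrophe-style losses (million USD), threshold chosen below the smallest loss
losses = np.array([2, 2, 2, 3, 3, 4, 4, 5, 6, 8, 9, 15, 17, 22, 23, 25, 32.0])
print(pareto_shape_mle(losses, u=1.5))
```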
Table 3: Values of $\hat{t}_{n}(u)$ for the wildfire suppression cost data for specific values of the threshold $u$ and the corresponding shape parameter $\alpha$ under the assumption that the tail data follow a Pareto model.
C
$y^{*}=\beta_{1}\cdot\text{version}_{i}+\beta_{2}\cdot\text{prompt}_{i}+\beta_{3}\cdot\text{temperature}_{i}+\beta_{4}\cdot\text{role}_{i}+\beta_{5}\cdot\text{shot}_{i}$
This computational process requires priors to be placed on $R^{2}$, the
where $\zeta$ is a vector of cutpoints. The latent variable $y^{*}$ is
For the prior on $R^{2}$, we follow Gelman, Hill, and Vehtari (2020,
related to an underlying continuous latent variable, $y^{*}$, through a
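The fragments above describe an ordinal (cutpoint) model: the observed rating is obtained by slicing a continuous latent variable $y^{*}$ at the cutpoints $\zeta$. A minimal sketch of that mechanism, with hypothetical coefficients and cutpoints (not the paper's fitted values):

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative design: five experiment factors, as in the linear predictor above
X = rng.normal(size=(1000, 5))                 # columns: version, prompt, temperature, role, shot
beta = np.array([0.8, -0.5, 0.3, 0.1, 0.4])    # hypothetical coefficients
zeta = np.array([-1.0, 0.0, 1.2])              # cutpoints dividing y* into 4 ordered categories

y_star = X @ beta + rng.logistic(size=len(X))  # latent continuous variable (ordered-logit errors)
y = np.searchsorted(zeta, y_star)              # observed ordinal outcome in {0, 1, 2, 3}
print(np.bincount(y))
```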
A
This distinguishes GSBM from prior methods (e.g., Liu et al. (2022)) that learn approximate solutions to the same problem (3) but whose subsequent solutions only approach $(\mu,\nu)$ after final convergence.
This further results in a framework that relies solely on samples from $\mu,\nu$—without knowing their densities—and enjoys stable convergence,
This distinguishes GSBM from prior methods (e.g., Liu et al. (2022)) that learn approximate solutions to the same problem (3) but whose subsequent solutions only approach $(\mu,\nu)$ after final convergence.
such that $X_{0}\sim\mu$ and $X_{1}\sim\nu$ follow the (unknown) laws of two distributions $\mu,\nu$.
By default, we use the explicit matching loss (5) without path integral resampling, mainly due to its scalability, but ablate their relative performances in Sec. 4.4.
A
CTM, a novel generative model, addresses issues in established models. With a unique training approach accessing intermediate PF ODE solutions, it enables unrestricted time traversal and seamless integration with prior models’ training advantages. A universal framework for Consistency and Diffusion Models, CTM excels in both training and sampling. Remarkably, it surpasses its teacher model, achieving SOTA results in FID and likelihood for few-step diffusion model sampling on CIFAR-10 and ImageNet $64\times 64$, highlighting its versatility.
CTM poses a risk for generating harmful or inappropriate content, including deepfake images, graphic violence, or offensive material. Mitigating these risks involves the implementation of strong content filtering and moderation mechanisms to prevent the creation of unethical or harmful media content.
On ImageNet, CTM surpasses any previous non-guided generative model in FID. Also, CTM most closely resembles the IS of validation data, which implies that StyleGAN-XL tends to generate samples with a higher likelihood of being classified for a specific class, even surpassing the probabilities of real-world validation data, whereas CTM’s generation is statistically consistent in terms of the classifier likelihood. In sample diversity, CTM reports an intermediate level of recall, but the random samples in Figure 16 show that the actual samples are comparably diverse to those of EDM or CM. Furthermore, the high likelihood of CTM on CIFAR-10 indirectly indicates that CTM has no issue with mode collapse. Lastly, we emphasize that all results in Tables 5 and 5 are achieved within 30K training iterations, requiring only 5% of the iterations needed to train CM and EDM.
CTM poses a risk for generating harmful or inappropriate content, including deepfake images, graphic violence, or offensive material. Mitigating these risks involves the implementation of strong content filtering and moderation mechanisms to prevent the creation of unethical or harmful media content.
CTM’s anytime-to-anytime jump along the PF ODE greatly enhances its training flexibility as well. It allows the combination of the distillation loss with auxiliary losses, such as denoising score matching (DSM) and adversarial losses. These auxiliary losses measure statistical divergences between the data distribution and the sample distribution (the DSM loss is closely linked to the KL divergence (Song et al., 2021; Kim et al., 2022c), and the adversarial GAN loss is a proxy for $f$-divergence (Nowozin et al., 2016) or IPMs (Arjovsky et al., 2017)), which provides the student with a high-quality training signal for better jump learning. Notably, leveraging these statistical divergences in student training enables us to train the student to be as good as the teacher, reaffirming the conventional belief, established in the distillation community for classification tasks, that auxiliary losses beyond the distillation loss can enhance student performance. In experiments, we achieve new State-Of-The-Art (SOTA) performance in both density estimation and image generation for CIFAR-10 (Krizhevsky, 2009) and ImageNet (Russakovsky et al., 2015) at a resolution of $64\times 64$.
A
$s^{2}(\bm{y}):=\min_{\bm{x}}\ \|\bm{y}-\bm{K}\bm{x}\|_{2}^{2}\quad\text{subject to}\quad\bm{x}\geq\bm{0}.$
Comparison of (1.5) with (1.3) shows that Rust and Burrus proposed a “simultaneous-like” construction.
In Theorem 4.1, we leverage this novel interpretation to disprove the Burrus conjecture Rust and Burrus, (1972); Rust and O’Leary, (1994) in the general case, by refuting a previously proposed counterexample and providing a new, provably correct counterexample in Lemma 4.5.
In this setting, Burrus, (1965); Rust and Burrus, (1972) posited that the following interval construction yields valid $1-\alpha$ confidence intervals, a result now known as the Burrus conjecture Rust and O’Leary, (1994):
Rust and Burrus, (1972) and subsequently Rust and O’Leary, (1994) investigated the conjecture posed in Burrus, (1965).
A
In Tab. 3 we present further results on the challenging and widely-adopted ImageNet-1k dataset. The results are consistent with those found in the CIFAR100 case, strengthening the general applicability of our method and its scalability to larger models and more challenging datasets. We also stress the fact that, especially with this difficult dataset, even after finetuning, VF fails to recover a comparable accuracy, converging to suboptimal performance.
In Tab. 3 we present further results on the challenging and widely-adopted ImageNet-1k dataset. The results are consistent with those found in the CIFAR100 case, strengthening the general applicability of our method and its scalability to larger models and more challenging datasets. We also stress the fact that, especially with this difficult dataset, even after finetuning, VF fails to recover a comparable accuracy, converging to suboptimal performance.
We evaluate the quality of our approach with two prominent transformer-based architectures: the ViT (Dosovitskiy et al., 2020) and BERT (Devlin et al., 2018). Our focus is to assess the performance and robustness of our proposed fusion techniques in both image and NLP domains. These models offer a direct comparison as they share the same encoder-only architecture. We conducted our experiments on multiple well-known image classification datasets: CIFAR10, CIFAR100, Tiny ImageNet, and ImageNet-1k. We used Hugging Face both for the implementation of the ViT and for retrieving the datasets. Besides the image classification tasks, we showcase our fusion strategy on the BERT model for an NLP task. We train from scratch multiple BERT models on the masked language modeling (MLM) task over a subset of the Wikipedia dataset, publicly available on the Hugging Face Hub.
We show the finetuning results on the widely adopted datasets CIFAR100, and ImageNet-1k (results on Tiny ImageNet in the Appendix).
In this work, we focused on the vision application of the Transformer architecture, but our method is agnostic to architectural changes, and we demonstrate its wide applicability to the BERT model. Although preliminary explorations of our fusion strategy on the BERT model show some differences with respect to the ViT case (more details on this in App D), the results are on par with those presented above. In particular, the fused and finetuned model outperforms both parents and VF on the widely adopted GLUE benchmark (Wang et al., 2018). The results are presented in Tab. 17 of the App. E.
D
• Unified framework. The Ito chain equation 1 incorporates a variety of practical approaches and techniques – see Table 1. In particular, equation 1 can be used to describe:
The key and most popular MC is Langevin-based (Raginsky et al., 2017; Dalalyan, 2017; Cheng et al., 2018; Erdogdu et al., 2018; Durmus & Moulines, 2019; Orvieto & Lucchi, 2018; Cheng et al., 2020) (which corresponds to Langevin diffusion). Such a chain is found in most existing works. In this paper, we propose a more general Ito chain:
Dynamics. Primarily, chain equation 1 is suitable for analyzing Langevin Dynamics, which have a wide range of applications. Here we can note the classical results in sampling (Ma et al., 2019; Chatterji et al., 2020; Dalalyan, 2017; Durmus et al., 2019; Durmus & Moulines, 2019), continuous optimization (Gelfand et al., 1992), as well as modern and hot techniques in generative models (Gidel et al., 2018).
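For concreteness, the simplest Langevin-based chain of this kind is the unadjusted Langevin algorithm, sketched below on a Gaussian target; the step size and iteration count are illustrative.

```python
import numpy as np

def ula(grad_f, x0, step, n_iter, rng):
    """Unadjusted Langevin algorithm: x_{k+1} = x_k - step * grad f(x_k) + sqrt(2*step) * xi_k."""
    x = np.array(x0, dtype=float)
    samples = np.empty((n_iter, x.size))
    for k in range(n_iter):
        x = x - step * grad_f(x) + np.sqrt(2 * step) * rng.normal(size=x.size)
        samples[k] = x
    return samples

# sample from N(0, I) in 2D, where f(x) = ||x||^2 / 2 so grad f(x) = x
rng = np.random.default_rng(0)
s = ula(lambda x: x, x0=np.zeros(2), step=0.05, n_iter=20000, rng=rng)
print(s[5000:].mean(axis=0), s[5000:].var(axis=0))   # roughly zero mean, unit variance
```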
Non-normality of noise. The central and widely used assumption about noise in analyses of MC satisfying equation 1 (e.g., Langevin-based) is that it has a normal distribution (Raginsky et al., 2017; Dalalyan, 2017; Cheng et al., 2018; Durmus & Moulines, 2019; Ma et al., 2019; Feng et al., 2019; Orvieto & Lucchi, 2018; Chatterji et al., 2020; Xie et al., 2021). However, practice suggests otherwise. For example, stochastic gradient noise in the training of neural networks is not Gaussian for classical and small models (Simsekli et al., 2019), as well as for modern and large transformers (Zhang et al., 2020).
Without convexity and dissipativity assumptions. Note also that often, when dealing with Langevin MC, the authors consider the convex/monotone setup (Dalalyan, 2017; Erdogdu et al., 2018; Durmus & Moulines, 2019; Li et al., 2019b; Chatterji et al., 2020; Xie et al., 2021), which is possible and relevant, but at the same time restrictive. This is primarily because a large number of practical problems (including ML problems) are non-convex: neural networks (Goodfellow et al., 2016), adversarial training (Goodfellow et al., 2014), games (Hazan et al., 2017), problems with specific losses (Nguyen & Sanner, 2013) and many other examples. However, even those works (Raginsky et al., 2017; Cheng et al., 2018; Ma et al., 2019; Feng et al., 2019; Orvieto & Lucchi, 2018; Ankirchner & Perko, 2021; Hu et al., 2017; Ustimenko & Prokhorenkova, 2021; Cheng et al., 2020) which consider the non-convex case do so under the dissipativity assumption (see, for example, A.3 from (Cheng et al., 2020)). This assumption means non-convexity inside some ball and strong convexity outside the ball. However, it is not always fulfilled for practical problems and is primarily needed to simplify the analysis.
B
$K(\bm{x},\bm{x}^{\prime})=\bm{x}\cdot\bm{x}^{\prime}+\epsilon^{2}(\bm{x}\cdot\bm{x}^{\prime})^{2}$
These $\bar{\bm{w}}$ and $\bm{M}$ can be plugged into the loss decomposition in the main text. At initialization, the kernel has the form
The value of $\alpha$ controls the scale of the output, and consequently the speed of feature learning. The value of $\epsilon$ alters how difficult the task is for the initial NTK. We consider training on a fixed dataset $\{(\bm{x}_{\mu},y_{\mu})\}_{\mu=1}^{P}$ of $P$ samples. The inputs $\bm{x}$ are drawn from an isotropic Gaussian distribution $\bm{x}\sim\mathcal{N}(0,\frac{1}{D}\bm{I})$. It will be convenient to introduce the following two summary statistics
The Mercer eigenvalue problem for data distribution $p(\bm{x})$ has the form
$\int d\bm{x}\,p(\bm{x})\,K(\bm{x},\bm{x}^{\prime})\,\phi(\bm{x})=\lambda\,\phi(\bm{x}^{\prime})$
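The Mercer spectrum of this kernel under the Gaussian input distribution can be estimated by eigendecomposing the empirical Gram matrix on a Monte Carlo sample, as in the sketch below (the sample size, dimension, and $\epsilon$ are illustrative).

```python
import numpy as np

D, P, eps = 20, 2000, 0.3
rng = np.random.default_rng(0)
X = rng.normal(scale=1.0 / np.sqrt(D), size=(P, D))   # x ~ N(0, I/D)

G = X @ X.T
K = G + eps**2 * G**2                                 # K(x, x') = x.x' + eps^2 (x.x')^2

# Monte Carlo estimate of the Mercer eigenvalues: eigenvalues of K / P
lam = np.linalg.eigvalsh(K / P)[::-1]
print(lam[:5])                                        # leading kernel eigenvalues
```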
C
In the dataset, all subjects are diagnosed as healthy, so we sampled 3-second crops in order to capture at least one complete heartbeat. Due to the imbalanced nature of the dataset, we opted for a 60/20/20 split between training, validation, and test sets at the subject level. Importantly, we maintained the age-group distribution in each set as far as possible while simultaneously ensuring that every age group was represented in all three sets. For consistency, all models presented in this work use identical splits for their training, validation, and test data. Table I contains additional descriptive statistics of the processed dataset. See our source code for the dataset pre-processing steps [21] for more details.
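A minimal way to realize such a subject-level 60/20/20 stratified split with scikit-learn is sketched below; as noted, stratification can only be approximate, and the very small age groups may need to be handled manually so that every group appears in all three sets.

```python
from sklearn.model_selection import train_test_split

def split_subjects(subject_ids, age_groups, seed=0):
    """60/20/20 subject-level split, stratified by age group where possible."""
    train_ids, rest_ids, _, rest_groups = train_test_split(
        subject_ids, age_groups, test_size=0.4, stratify=age_groups, random_state=seed)
    val_ids, test_ids = train_test_split(
        rest_ids, test_size=0.5, stratify=rest_groups, random_state=seed)
    return train_ids, val_ids, test_ids
```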
Age-group distribution. Fig. 1 shows the age distribution across the dataset in terms of 15 age groups, where the first age group contains subjects aged 18 to 19, whereas all following age groups but the last cover age intervals of 5 years. There is a clear imbalance in the age distribution, with the majority in age group 20-24 with 422 samples, followed by age group 25-29 with 105 samples. On the contrary, the last four classes represent only 39 samples or correspondingly 3.4% of the full dataset. Furthermore, it is important to mention that there are no male samples available for the age groups 75-79 years and 85-92 years. As past studies did not indicate a strong interaction effect between gender and age prediction, and dividing the dataset by gender would worsen the imbalance in the smaller age groups, we decided to ignore gender as a covariate in this study.
Predictive performance results of the models per age group in terms of AUC on the test set, where yellow (left) represents XGBoost and blue (right) XResNet.
Age-group distribution in terms of age groups provided in the Autonomic Aging dataset[11]. The age groups span a range from 18 to 92 years, where the majority of patients are between 20 to 50 years old.
Beat-level descriptive analysis. First, we explore superimposed mean heartbeats for all age groups in Fig. 4 as a plausibility test and to compare with statements in the literature. The amplitude of the T-wave decreases with age and shifts to the right, indicating an overall longer cardiac cycle, i.e., a slower heart rate. Furthermore, the T- and P-wave intervals shorten with age; moreover, the absolute magnitude of the S-peak, Q-peak, and P-wave appears to diminish with age as well, which is in accordance with [38][39]. It is noteworthy that the amplitude of the R-peak shows no conclusive trend with age.
A
In addition, we show how our results can be used to conduct inference on the mode of the target distribution, and how the permissible range for $\gamma$ changes when the preconditioning matrix is no longer spatially varying. This is shown in Corollary 2.1 and Propositions 2.1 and 2.3.
We establish a fast sampling bound, in the Wasserstein distance, for the preconditioned LMC algorithm to the target distribution when the preconditioning is spatially invariant. This is given in Theorem 4.
We establish the convergence of the preconditioned LMC algorithm for general preconditioning matrices to a stationary distribution dependent on the step size in total variation. This is given in Theorem 1.
In our work, as mentioned previously, we consider the problem of inferential and approximate sampling guarantees using the preconditioned LMC algorithm. In this regard, we establish a Central Limit Theorem for preconditioned LMC around the mode, which may be used for the purposes of statistical inference. We also establish explicit convergence bounds of the algorithm to some stationary distribution in the Total Variation norm, as well as approximate sampling bounds, in the Wasserstein distance, for a specific target as a function of the step size and the dimension. These results appear to be new in the literature and are the main theoretical contributions of our paper.
There has been some recent work on the analysis of preconditioned algorithms [24, 11, 4]. These works mainly address the problem of establishing guarantees for fast sampling using preconditioned LMC in KL-divergence or in Wasserstein distance in the dissipative setting, and also establishing geometric ergodicity conditions for the purpose of sampling using preconditioned MALA. The novelty of our results in the fast sampling case is the existence of non-asymptotic bounds in the Wasserstein distance, in the strongly convex regime, in terms of the dimension and the step size, which we believe are new. In the case of inference, the novelty of our results lies in establishing a Central Limit Theorem and in obtaining exact convergence bounds for the convergence of the preconditioned LMC algorithm to a stationary distribution, in total variation, dependent on the step size. Again, we believe that these results have not been established for the preconditioned algorithm and hence provide some addition to the already rich literature on fast sampling.
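For reference, a preconditioned LMC iteration with a constant (spatially invariant) preconditioning matrix can be sketched as below; the target, preconditioner, and step size are illustrative, and no claim is made that this matches the paper's exact scheme.

```python
import numpy as np

def preconditioned_lmc(grad_f, M, x0, step, n_iter, rng):
    """Preconditioned LMC with a constant preconditioner M:
    x_{k+1} = x_k - step * M grad f(x_k) + sqrt(2*step) * M^{1/2} xi_k."""
    w, V = np.linalg.eigh(M)
    M_half = V @ np.diag(np.sqrt(w)) @ V.T        # symmetric square root of M
    x = np.array(x0, dtype=float)
    out = np.empty((n_iter, x.size))
    for k in range(n_iter):
        noise = M_half @ rng.normal(size=x.size)
        x = x - step * (M @ grad_f(x)) + np.sqrt(2 * step) * noise
        out[k] = x
    return out

# target N(0, diag(1, 10)); preconditioning by the covariance equalises the scales
Sigma = np.diag([1.0, 10.0])
grad_f = lambda x: np.linalg.solve(Sigma, x)      # gradient of -log density
rng = np.random.default_rng(0)
chain = preconditioned_lmc(grad_f, Sigma, np.zeros(2), 0.05, 20000, rng)
print(chain[5000:].var(axis=0))                   # roughly (1, 10)
```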
A
$-\frac{1}{2}\sum_{k=1}^{d}\Sigma_{jk}\frac{\partial^{2}f}{\partial x_{j}\partial x_{k}}.$
Input features are often correlated with one another. When this is the case, it may be imprudent to falsely assume independence for the sake of computational convenience. Doing so may produce Shapley values that misrepresent the relationship between inputs and predictions, and that rely on the predictions a machine learning function makes in regions where it has very little data [18].
While ControlSHAP is generally strong for neural networks, it tends to work better in the dependent features case. On the simulated, bank, and census datasets, variance reductions are typically below 25% assuming independence and above 50% otherwise. Presumably, this owes to the fact that neural networks are less smooth. The quadratic model from the independent features case (Thm. 3.1) is more prone to neural networks’ steep gradients and hessians, and may poorly approximate model behavior as a result.
We seek to mitigate this issue by employing Monte Carlo variance reduction techniques. In particular, we use control variates, a method that adjusts one random estimator based on the known error of another. Here, the related estimator approximates the Shapley values of a first or second order Taylor expansion to the original model, depending on whether the value function assumes features are correlated or independent. In the independent case, these estimates entail essentially no additional computation; otherwise we must put some effort into pre-computing terms which can then be used for any query point for which we need to calculate Shapley values. While variations on our methods are possible, from tuning parameters to more complex Monte Carlo schemes [29], our goal is to provide a default scheme that requires minimal computational or intellectual effort.
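A minimal sketch of the control-variate adjustment itself, with a toy demonstration of the resulting variance reduction; the surrogate values and the coefficient estimate here are placeholders rather than the paper's estimators.

```python
import numpy as np

def control_variate_adjust(phi_mc, phi_lin_mc, phi_lin_exact, cov, var_lin):
    """Correct a Monte Carlo Shapley estimate of the model using the known error of the
    same estimator applied to a Taylor surrogate of the model:
        phi_adj = phi_mc - c * (phi_lin_mc - phi_lin_exact),  c = cov / var_lin
    """
    c = cov / var_lin
    return phi_mc - c * (phi_lin_mc - phi_lin_exact)

# toy demonstration of the variance reduction
rng = np.random.default_rng(0)
z = rng.normal(size=(10000, 2)) @ np.array([[1.0, 0.9], [0.0, 0.44]])   # correlated noise
phi_mc = 1.0 + z[:, 0]            # noisy Shapley estimate of the model
phi_lin_mc = 0.8 + z[:, 1]        # noisy Shapley estimate of the Taylor surrogate
adj = control_variate_adjust(phi_mc, phi_lin_mc, 0.8,
                             np.cov(phi_mc, phi_lin_mc)[0, 1], phi_lin_mc.var())
print(phi_mc.var(), adj.var())    # the adjusted estimator has much smaller variance
```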
ControlSHAP can be employed as a relatively “off-the-shelf” tool, in the sense that it stabilizes Shapley estimates with close to no extra computational or modeling work. The only model insight required is the gradient, as well as the hessian in the independent features case. Computationally, the single substantial cost is in the correlated features case, which requires accurate estimation of precomputed matrices. Otherwise, ControlSHAP only necessitates passing each $X|X_{S}$ through the Taylor approximation and computing the Shapley values’ covariance - both of which are extremely quick tasks.
A
When $T$ is sufficiently large, the term $\frac{1}{\sqrt{nT}}$ (or $\frac{1}{nT}$ for the strongly convex setting) will dominate the rate. In this scenario, ProxSkip requires $T=\Omega\left(\frac{1}{n\epsilon^{2}}\right)$ (or $T=\Omega\left(\frac{1}{n\epsilon}\right)$) iterations to reach a desired $\epsilon$-accurate solution, thus the convergence accuracy improves linearly with $n$.
Step 2. (Lemma 2) Based on this equivalent update of ProxSkip and by the $L$-smoothness of $f_{i}$, we establish the following descent inequality.
In addition, based on Theorem 2, we can even get a tighter rate by carefully selecting the stepsize to obtain the following result.
where $\alpha$ is the stepsize of ProxSkip, $\sigma^{2}$ denotes the variance of the stochastic gradient, $1-\lambda_{2}$ is a topology-dependent quantity that approaches $0$ for a large and sparse network, $\mu$ is the strongly convex constant, and $a_{0}$ is a constant that depends on the initialization. To the best of our knowledge, it is the first work that establishes the convergence rate of probabilistic decentralized methods for non-convex settings. We offer a comparison of convergence rates of ProxSkip for problem (1) in Table 1.
Achieving linear speedup by $n$ and $1/p$. We choose the regularizer $r(\mathbf{x})=\frac{1}{2}\|\mathbf{x}\|^{2}$ to demonstrate the results in the convex setting. The results are shown in Fig. 1. The relative error $\|\bar{\mathbf{x}}^{t}-\mathbf{x}^{\star}\|/\|\mathbf{x}^{\star}\|$ is shown on the $y$-axis. Here, we set $\alpha=\frac{1}{2L}$, which is independent of the network topology. We show the performance of ProxSkip at different numbers of nodes $n$, network connectivity $\iota$, and communication probability $p$. The results show that, when the number of nodes is increased, the relative error of ProxSkip is reduced under a constant and network-independent stepsize, which validates our results about linear speedup. Moreover, Fig. 1 shows that we can save on communication rounds by reducing $p$, i.e., increasing the number of local steps reduces the amount of communication required to achieve the same level of accuracy.
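A highly simplified simulation in the spirit of this experiment is sketched below: each node takes local stochastic gradient steps and, with probability p, a mixing (communication) step. It deliberately omits ProxSkip's prox-correction/control-variate terms and uses a fully connected mixing matrix, so it illustrates the communication-probability mechanism only, not the actual algorithm or the paper's problem (1).

```python
import numpy as np

def local_steps_with_prob_comm(grads, W, x0, alpha, p, T, rng):
    """Local stochastic gradient steps; with probability p, a mixing step via W."""
    n, d = x0.shape
    X = x0.copy()
    for t in range(T):
        G = np.stack([grads[i](X[i], rng) for i in range(n)])  # local stochastic gradients
        X = X - alpha * G
        if rng.random() < p:                                   # communicate with probability p
            X = W @ X
    return X.mean(axis=0)                                      # averaged iterate

# toy consensus example: node i minimizes ||x - b_i||^2 / 2, optimum is mean(b_i)
n, d = 8, 3
rng = np.random.default_rng(0)
b = rng.normal(size=(n, d))
grads = [lambda x, rng, bi=b[i]: (x - bi) + 0.01 * rng.normal(size=d) for i in range(n)]
W = np.full((n, n), 1.0 / n)                                   # fully connected mixing for simplicity
x_bar = local_steps_with_prob_comm(grads, W, np.zeros((n, d)), alpha=0.1, p=0.3, T=2000, rng=rng)
print(np.linalg.norm(x_bar - b.mean(axis=0)))                  # averaged iterate near the optimum
```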
B
$\hat{\mathbf{y}}=\operatorname*{argmax}_{\mathbf{y}\in\mathcal{Q}}\sum_{i=1}^{m}t_{1,(i)}\log y_{i},\quad\text{subject to }\sum_{i=1}^{m}\{p_{1,(i)}-p_{1,(i-1)}\}y_{i}=1.$
To get the rate of convergence of the MLE based on the Hellinger distance (13), we use the bracketing entropy of $\bar{\mathcal{F}}^{1/2}$ employed with the metric $\|\cdot\|_{2}$.
The estimation of $f_{2}$ can be obtained by solving the optimization problem (11) in the same way, and we omit the details.
In this paper, we develop a robust and powerful empirical Bayes approach for high dimensional replicability analysis. We assume that the data are summarized in $p$-values for each study. We use $p$-values mainly for versatility. Without loss of generality, we use two studies to illustrate. To account for the heterogeneity of different studies, we use four different states to model hidden states of $p$-values from two studies. The composite null of replicability analysis comprises three different states: zero effect in both studies, zero effect in the first study and non-zero effect in the second study, and vice versa. The empirical Bayes approach allows us to enumerate different states of the composite null. Conditional on hidden states, the distribution of paired $p$-values is modeled by a four-group mixture model (Efron, 2012). We allow the density functions of $p$-values under the non-null to vary across different studies. Furthermore, instead of the predominant parametric modeling of the $p$-value density function under the non-null, we use a non-parametric density function estimation under the shape constraint (Grenander, 1956). This has the flexibility of non-parametric modeling and the convenience of no tuning parameters. The local false discovery rate (Lfdr), defined as the posterior probability of being replicability null, is used as a test statistic. We combine the EM algorithm (Dempster
We first ignore the monotonic constraint $\mathcal{Q}$. By applying a Lagrange multiplier, the objective function to maximize is
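Ignoring the monotonicity constraint, the Lagrangian argument gives a closed form, sketched below under the assumption that $p_{1,(0)}=0$; restoring monotonicity (the constraint $\mathcal{Q}$) would require an additional step, such as pooling adjacent violators, which is not shown.

```python
import numpy as np

def unconstrained_mle_step(t, p_sorted):
    """Maximize sum_i t_i * log(y_i) subject to sum_i (p_(i) - p_(i-1)) y_i = 1,
    ignoring monotonicity. The Lagrangian gives y_i = t_i / (lambda * dp_i),
    with lambda = sum_i t_i from the constraint. Assumes p_(0) = 0 < p_(1)."""
    dp = np.diff(np.concatenate(([0.0], p_sorted)))   # p_(i) - p_(i-1)
    lam = t.sum()
    return t / (lam * dp)

t = np.array([5.0, 3.0, 2.0])
p = np.array([0.2, 0.5, 1.0])
y = unconstrained_mle_step(t, p)
dp = np.diff(np.concatenate(([0.0], p)))
print(y, (dp * y).sum())            # the constraint holds: weighted sum equals 1
```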
D
An example is the $f$-divergence subclass [21], commonly employed as an extension of Shannon entropy for various purposes in statistics, such as variational inference [45, 1], surrogate model design [52], PAC-Bayesian learning [54] and Differential Privacy [46].
The reference prior in the sense of Bernardo [8] is an asymptotic maximal point of the mutual information defined in equation (3). Using the formalization built in [10] for such a notion of asymptotic maximization as a reference, we suggest the following definition for what we call generalized reference priors, i.e. optimal priors within the consideration of our generalized mutual information.
A study of those divergences within our generalized mutual information is also a main contribution of this paper, with the goal of deriving what one is invited to call generalized reference priors.
The following definition results from the use of $f$-divergences as dissimilarity measures. They constitute the class of generalized mutual information we focus on within this paper.
In the next section, we formalize the usual Bayesian framework that we consider in our work. Our motivation, supported by a Global Sensitivity Analysis viewpoint, for enriching the mutual information is elucidated in Section 2. Afterwards, a sub-class of this generalized mutual information is studied in Section 3 to define and derive what we call generalized reference priors.
B
Data-driven Koopman learning methods are founded on the assumption that a non-trivial finite-dimensional Koopman invariant subspace exists [20]. Even if this assumption holds true, it has proven to be exceedingly challenging to resolve this finite set of observables that completely closes the dynamics [5]. In order to obtain closed dynamics, we need to account for the effects of the unresolved observables that complete the invariant Koopman subspace. Mori and Zwanzig introduced a general framework for the closed equations of the resolved observables. They demonstrated that the interactions between resolved and unresolved observables manifest themselves as non-Markovian, non-local effects on the resolved observables. To accommodate these interactions, the framework decomposes the dynamics into three parts – a Markovian term, a non-Markovian or memory term, and a noise term – which together form a so-called Generalised Langevin Equation (GLE). In this decomposition, the memory and the noise terms are responsible for the effects of the unresolved observables. While the evolution equations obtained for the resolved observables are formally exact, they do not provide reduced computational complexity without approximations. This is primarily because deriving the analytical form of the memory kernel, which accounts for the non-Markovian effect, is an arduous task. Further, there is no information available for the noise term since it accounts for the dynamics of unresolved observables and is generally neglected or modelled as noise in statistical mechanics. However, the GLE provides an excellent starting point to model closure terms in a non-Markovian form.
Data-driven Koopman learning methods are founded on the assumption that a non-trivial finite-dimensional Koopman invariant subspace exists [20]. Even if this assumption holds true, it has proven to be exceedingly challenging to resolve this finite set of observables that completely closes the dynamics [5]. In order to obtain closed dynamics, we need to account for the effects of the unresolved observables that complete the invariant Koopman subspace. Mori and Zwanzig introduced a general framework for the closed equations of the resolved observables. They demonstrated that the interactions between resolved and unresolved observables manifest themselves as non-Markovian, non-local effects on the resolved observables. To accommodate these interactions, the framework decomposes the dynamics into three parts – a Markovian term, a non-Markovian or memory term, and a noise term – which together form a so-called Generalised Langevin Equation (GLE). In this decomposition, the memory and the noise terms are responsible for the effects of the unresolved observables. While the evolution equations obtained for the resolved observables are formally exact, they do not provide reduced computational complexity without approximations. This is primarily because deriving the analytical form of the memory kernel, which accounts for the non-Markovian effect, is an arduous task. Further, there is no information available for the noise term since it accounts for the dynamics of unresolved observables and is generally neglected or modelled as noise in statistical mechanics. However, the GLE provides an excellent starting point to model closure terms in a non-Markovian form.
It has been shown that a higher-order correction to the approximate Koopman operator can be obtained using the Mori-Zwanzig formalism by accounting for the residual dynamics through the non-Markovian term. Lin et al. [21] proposed a data-driven method for this purpose that recursively learns the memory kernels using Mori’s linear projection operator. This work was further extended by using a regression-based projection operator in [22]. Curtis et al. [23] used the popular optimal prediction framework[24] to provide higher-order correction terms for DMD. This was further improved
Observing these problems, this work proposes an interpretable data-driven reduced order model termed the Mori-Zwanzig autoencoder (MZ-AE), which exploits the Mori-Zwanzig formalism and approximates the invariant Koopman subspace in the latent manifold of a nonlinear autoencoder. A higher-order non-Markovian correction is provided to the approximate Koopman operator, which guides it back to the true trajectory upon deviation. Through this approach, we tackle the following challenges:
passed through a nonlinear autoencoder to produce a small set of observables enriched with the nonlinearities of the dynamical system. To ensure the observables lie in the linearly invariant subspace, an approximate Koopman operator is obtained through linear regression in time over these observables. The motivating idea behind this approach is a two-step identification in which (i) the autoencoder learns energetically dominant modes, and (ii) the approximate Koopman operator learns dynamically important features. Otto et al. [14] used a Linear Recurrent Neural Network framework in which the error of the learned Koopman operator is minimized over multiple timesteps. Lusch et al. [15] extended this work to dynamical systems with continuous frequency spectra. They obtained the parametric dependence of the Koopman operator on the continuously varying frequency using an auxiliary network. A DMD-based approach that uses the Moore-Penrose pseudo-inverse to approximate the finite Koopman operator has also been tested with these neural network-based dictionaries [18]. Pan et al. [19] proposed a probabilistic Koopman learning framework based on Bayesian neural networks for continuous dynamical systems while offering a stability constraint on their Koopman parameterization.
B
We now compare our contributions to [13], which is the closest related work. In the aforementioned reference, the authors derive a minimax sample-complexity lower bound of $\Omega\left(\frac{1}{\epsilon}\right)$ in a probabilistic sense for estimating the mean of the infinite-horizon discounted cost of an MCP; see row 1 of Table 2. Their proof involves a two-state Markov chain with $\{0,1\}$ rewards. In this setting, the mean of the cumulative discounted cost can be explicitly written as a function of the transition probabilities.
In Section 3.1, we looked at the case where the cost function $f$ was deterministic. However, to get the $\Omega(\epsilon^{-2})$ sample complexity, we require that the cost $f_{1}(A)$ increase suitably as $\epsilon$ decays to $0$; see (24). In this subsection, we show that, by allowing the single-stage costs to be stochastic, we can obtain similar sample complexity lower bounds as in Theorem 3.1 even when these costs have bounded mean. To derive these bounds, we use a proof idea that differs radically from that of Theorem 3.1 and also from the one used in [13].
Lower bounds: We derive a minimax sample complexity lower bound of $\Omega(1/\epsilon^{2})$ for risk estimation in two types of MCP problem instances: one with deterministic costs and the other with stochastic costs. In either case, our bound is order optimal and the first of its kind for risk estimation. Our first bound applies to VaR and CVaR of the infinite-horizon discounted cost, while the second applies even to its mean and variance.
It now remains to derive the lower bounds for the CVaR case. The above proof works in more or less the same way, except for some minor modifications. In the CVaR case, consider the optimization problem that is analogous to (33). Due to (2), any $(p,q)$-pair that is feasible for (33) is also feasible for this new optimization problem. Hence, the VaR lower bounds hold for CVaR estimation as well.
In contrast, the proofs of our lower bounds are more challenging owing to the lack of a closed-form expression for the risk measures we consider. Moreover, our lower bounds, when specialized to mean estimation, lead to an improvement in comparison to [13].
D
$dV_{t}=b(V_{t})\,dt,\qquad V_{0}=\frac{1}{n_{\mathrm{in}}}\bigl[\langle x^{\alpha},x^{\beta}\rangle\bigr]_{\alpha,\beta=1}^{m}\,,$
Starting with the Markov chain in Lemma 3.2, we will treat the random term of order $O(n^{-1/2})$ as part of the drift instead.
At the same time, since the higher order term in the Markov chain is at the desired order of $O(n^{-3p})$, which will vanish in the limit, we get the desired result.
We will start by deriving the precise Markov chain update up to a term of size $O(n^{-3p})$, which will be a slight modification of the Euler discretization we saw in Equation 2.4.
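For orientation, the Euler-type discretization referred to above can be contrasted with the generic Euler-Maruyama scheme for an SDE $dV_t=b(V_t)\,dt+\sigma(V_t)\,dW_t$ (a textbook form in our notation; the paper's Equation 2.4 and its specific $n^{-p}$ scaling are not reproduced here):

```latex
% Generic Euler-Maruyama step with step size h and i.i.d. standard Gaussian increments.
\begin{equation*}
  V_{k+1} = V_k + b(V_k)\, h + \sigma(V_k)\, \sqrt{h}\, \xi_k,
  \qquad \xi_k \sim \mathcal{N}(0, I),
\end{equation*}
```

whose drift-only case $\sigma\equiv 0$ recovers a discretization of the ODE displayed above.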
In view of the SDE convergence theorem Proposition A.7, if we eventually reach an SDE, we will only need to keep track of the expected drift $\mu_{r}$ instead of the random drift.
A
It is shown that a sequence of slowly decreasing step sizes $\gamma_{i}=Ai^{-\zeta}$ leads to rate-optimal estimators, where $A$ is a positive constant and $\zeta$ is typically small, $\zeta\in(0,1/2)$. However, the best choice of $\zeta$ depends on the expansion coefficients $\langle f_{0},\phi_{k}\rangle_{L_{2}(P(X))}$ and their magnitude relative to the eigenvalues $q_{k}$, which is in general not available to the algorithm. Data-adaptive procedures for choosing $\gamma_{i}$ are needed for better performance of kernel-SGD in practice. Since the theory tells us that a polynomially decreasing step size is enough, the task of selecting $\gamma_{i}$ can be reduced to selecting proper $A$ and $\zeta$. Our framework also allows a varying reproducing kernel, such as Gaussian kernels with a varying bandwidth. In the language of (2), the hyperparameter $\lambda_{i}=\gamma_{i}$ is one-dimensional when the kernel function is pre-specified, and is two-dimensional, $\lambda_{i}=(\gamma_{i},\sigma_{i})^{\top}$, when there is a varying kernel bandwidth $\sigma_{i}$.
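As a hedged illustration of the step-size schedule discussed above (not the authors' implementation; the Gaussian kernel, the synthetic data, and the constants `A` and `zeta` are placeholders), a minimal functional kernel-SGD with $\gamma_i = A i^{-\zeta}$ might look as follows:

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    """Gaussian (RBF) kernel; the bandwidth is a placeholder hyperparameter."""
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * bandwidth ** 2))

def kernel_sgd(X, y, A=1.0, zeta=0.3, bandwidth=1.0):
    """One pass of functional kernel-SGD with step sizes gamma_i = A * i**(-zeta).

    The estimate is kept as a kernel expansion f(x) = sum_j c_j K(x_j, x), which is
    one common way to realize the update f_i = f_{i-1} + gamma_i * (y_i - f_{i-1}(x_i)) * K(x_i, .).
    """
    coefs, centers = [], []
    for i, (xi, yi) in enumerate(zip(X, y), start=1):
        # Evaluate the current estimate at the new sample point.
        pred = sum(c * gaussian_kernel(xj, xi, bandwidth) for c, xj in zip(coefs, centers))
        gamma_i = A * i ** (-zeta)           # slowly decaying step size
        coefs.append(gamma_i * (yi - pred))  # residual-weighted new coefficient
        centers.append(xi)
    return centers, coefs

# Tiny synthetic usage example.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)
centers, coefs = kernel_sgd(X, y)
```

A rolling-validation wrapper, as discussed in the surrounding cells, would run several such updates with different $(A,\zeta)$ candidates and score their one-step-ahead predictions online.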
The second line of the kernel-SGD update (8) also suggests a direct way to construct the update from basis expansions without specifying a kernel.
The performance of kernel SGD depends on the chosen kernel function $\mathcal{K}$ and the learning rate sequence $\gamma_{i}$.
In a stochastic approximation-type estimator (2), each new sample point is used to update the current estimates. Our method for tuning sequence selection is based on the idea of “rolling validation”, which, in addition to the estimate update, also uses the new sample point to update the online prediction accuracy of each candidate tuning sequence.
We illustrate this issue using the reproducing kernel stochastic gradient descent estimator (kernel-SGD). Let $\mathcal{K}(\cdot,\cdot):\mathbb{R}^{p}\times\mathbb{R}^{p}\rightarrow\mathbb{R}$ be a reproducing kernel, with eigenvalue/eigenfunction pairs $(q_{k},\phi_{k}(\cdot):k\geq 1)$. The kernel-SGD update rule is
A
Here, we mimic outbreak data collected from households (Walker et al., 2017) by generating $h$ observations with household sizes uniformly sampled between 2 and 7. We use values $h=100$, $200$ and $500$, assume that all households are independent, and that all outbreaks are governed by the same values of $R_{0}$ and $1/\gamma$, which have been chosen to be typical for this problem. The likelihood of an observation where $y_{\tau}=1$, or where the household size is 2, can be calculated analytically, so we incorporate these terms into inference manually. We therefore train the autoregressive model to approximate $p(y_{1:\tau}\mid\bm{\theta},y_{\tau}>1)$ for household sizes of 3 through 7.
Table 2 shows a comparison of the posterior statistics for the parameters $R_{0}$ and $\kappa$. The metrics for the other parameters are deferred to Appendix C. The bias in the mean value of $R_{0}$ is consistently small, but the standard deviations are slightly inflated, more so with larger population sizes. The $R_{0}$ posterior for these experiments is quite right skewed, and SNL produces posteriors which are slightly too heavy in the tail, which explains the inflated variance estimates. For $\kappa$, the posterior means do exhibit a noticeable bias, but the posterior standard deviations do not on average. The bias in $\kappa$ is lowest for the $N=500$ experiment, where the posterior has a mode near $\kappa=0$. The parameter $\kappa$ only has a weak relationship with the data, being the most informative near 0 or 1, hence the effect of $\kappa$ on the likelihood is presumably difficult to learn. This would explain the biases in the $\kappa$ posteriors, and would also explain why the $N=500$ experiment had the lowest bias on average. SNL manages to correctly reproduce the banana shaped posterior between $R_{0}$ and $q$, which can be seen for $N=500$ in Figure 5. In terms of runtime, the centre panel of Figure 4 shows that the performance of SNL drops off at a slower rate than PMMH with increasing $N$, and is noticeably faster for the $N=1000$ and $2000$ experiments.
Our proof of concept example is a single observed outbreak in a population of size 50. The left of Figure 3 shows the posterior pairs plot of SNL compared to PMMH, which clearly shows that SNL provides an accurate approximation of the true posterior for this experiment; see Appendix C for a quantitative comparison. For this particular experiment, PMMH obtains an ESS/s of 69.38, which outperforms SNL with only 3.74. This is unsurprising, as we are not using a large amount of data and the particle filter we used was specifically designed to perform well for observations of this form.
Table 3 shows the comparison between the posterior statistics for the experiments with $r=0.9$, which suggests negligible differences between the posterior mean and standard deviations. It is also worth noting that SNL correctly reproduces a highly correlated posterior between $d_{1}$ and $p_{1}$. Furthermore, the $d_{2}$ posterior is almost indistinguishable from the prior, suggesting that the autoregressive model can learn to reparameterise $\bm{\theta}$ into a set of identifiable parameters and suppress dependence on unidentifiable parameters. The accuracy of SNL increases with increasing $r$ for this model (see Appendix C for the other metrics), which is unsurprising, as there is a higher ‘signal-to-noise ratio’ in the training data for larger $r$, which leads to training data which is more informative for $p(\bm{y}_{1:n}\mid\bm{\theta})$. This is in direct contrast to PMMH, where more noise makes the particle filter less prone to weight degeneracy. The right of Figure 4 shows that SNL is an order of magnitude more efficient than PMMH for these experiments, and the ESS/s does not drop off with increasing $r$.
Table 1 shows the quantitative comparison between the two posteriors, indicating a negligible bias in the SNL means on average. The SNL variance appears to be slightly inflated on average, mainly for the $h=500$ experiments, though this is partly due to the PMMH variance shrinking with $h$, so the difference between the PMMH and SNL posteriors is still small, as indicated by the right of Figure 3. The ESS/s of SNL and PMMH are compared in the left panel of Figure 4, which clearly shows that SNL outperforms PMMH across each experiment, and that the drop off in SNL’s performance is less significant for increasing $h$. This is mainly because the autoregressive model can be evaluated efficiently on the observations in parallel.
D
$(\mathscr{E}_{\ell_{h_{\mathscr{Y}}}}(h_{n+1})-\mathscr{E}_{\ell_{h_{\mathscr{Y}}}}^{*}(\mathscr{H}_{n+1}))$ to
$\Gamma_{1}(\epsilon_{1})+\Gamma_{2}(\epsilon_{2})$.
constant, the score-based abstention estimation loss $(\mathscr{E}_{\mathsf{L}_{\rm abs}}(h)-\mathscr{E}_{\mathsf{L}_{\rm abs}}^{*}(\mathscr{H}))$
$\epsilon_{2}$, then, modulo constant factors, the score-based abstention
calibration gap of the score-based abstention loss $\mathsf{L}_{\rm abs}$ and that
C
(Mozannar and Sontag, 2020; Cao et al., 2022; Mao et al., 2024b). Another problem closely related to
(Madras et al., 2018; Raghu et al., 2019a; Mozannar and Sontag, 2020; Okati et al., 2021; Wilder et al., 2021; Verma and Nalisnick, 2022; Narasimhan et al., 2022; Verma et al., 2023; Mao et al., 2023a; Cao et al., 2023; Mao et al., 2024a; Chen et al., 2024; Mao et al., 2024d).
(Cortes et al., 2016a, b, 2023; Cheng et al., 2023; Mohri et al., 2024; Li et al., 2024); and a more
Awasthi et al. (2021a, c, 2022a, 2022b, 2023, 2024); Mao et al. (2023c, d, e); Zheng et al. (2023); Mao et al. (2023b, f, 2024e, 2024c).
(Mozannar and Sontag, 2020; Cao et al., 2022; Mao et al., 2024b). Another problem closely related to
A
$\mathscr{E}_{\ell_{2}}(h)-\mathscr{E}_{\ell_{2}}^{*}(\mathscr{H})+\mathscr{M}_{\ell_{2}}(\mathscr{H})\leq\Gamma\left(\mathscr{E}_{\ell_{1}}(h)-\mathscr{E}_{\ell_{1}}^{*}(\mathscr{H})+\mathscr{M}_{\ell_{1}}(\mathscr{H})\right).$
$\mathscr{M}_{\ell_{2}}(\mathscr{H})=\mathscr{A}_{\ell_{2}}(\mathscr{H})$. For a surrogate loss function
$\mathscr{M}_{\ell_{2}}(\mathscr{H})=\mathscr{A}_{\ell_{2}}(\mathscr{H})$ when $\ell_{2}$ represents the
For a target loss function $\ell_{2}$ with discrete outputs, such as the
target loss function $\ell_{2}$ and a surrogate loss function $\ell_{1}$,
C
2. We obtain fast rates for empirical risk minimization procedures under an additional classical assumption called a Bernstein condition. Namely we prove upper bounds on the excess risk scaling as $1/(np)$, which matches fast rate results in the standard, balanced case, up to replacing the full sample size $n$ with the expected minority class size $np$. To our best knowledge such fast rates are the first of their kind in the imbalanced classification literature.
The argument from the cited reference relies on a fixed point technique relative to a sub-root function upper bounding some local Rademacher complexity. Leveraging fine controls of the latter (Section D.1), we establish that the fixed point of the sub-root function is of order $O(\log(n)/n)$, and we obtain an explicit control of the deviations of the (standard) empirical measure under a Bernstein condition (see Proposition D.5). Finally, the main result is obtained by applying the latter proposition to the specific class of convex combinations defined in the statement, and rescaling the obtained bound by the quantity $2q(1-q)$; see Section D.1 for details.
Outline. Some mathematical background about imbalanced classification and some motivating examples are given in Section 2. In Section 3, we state our first non-asymptotic bound on the estimation error over VC classes of functions and consider an application to $k$-nearest neighbor classification rules. In Section 4, fast convergence rates are obtained and an application to ERM is given. Finally, some numerical experiments are provided in Section 5 to illustrate the theory developed in the paper. All proofs of the mathematical statements are in the supplementary material.
The previous result shows that whenever $np\to\infty$, learning from ERM based on a VC-type class of functions is consistent. Another application of our result pertains to $k$-nearest neighbor classification algorithms. In this case the sharpness of our bound is fully exploited by leveraging the variance term $\sigma_{+}$. This is the subject of the next section.
Our purpose is to obtain upper bounds on the deviations of the empirical risk (and thus on the empirical risk minimizer) matching the state of the art, up to replacing the sample size $n$ with $np$, the mean size of the rare class. To our best knowledge, the theoretical results which come closest to this goal are normalized Vapnik-type inequalities (Theorem 1.11 in Lugosi, (2002)) and relative deviations (Section 5.1 in Boucheron et al., (2005)). However, the latter results only apply to binary valued functions and as such do not extend immediately to the general real-valued loss functions which we consider in this paper, nor do they yield fast rates for imbalanced classification problems, although relative deviations play a key role in establishing fast rates in standard classification, as reviewed in Section 5 of Boucheron et al., (2005). Also, as explained above, we have not found any theoretical result regarding imbalanced classification which would leverage these bounds in order to obtain guarantees with leading terms depending on $np$ instead of $n$.
B
We have presented novel empirical evidence for the existence of grokking in non-neural architectures and discovered a data augmentation technique which induces the phenomenon. Relying upon these observations and analysis of training trajectories in a GP and BNN, we suggested a mechanism for grokking in models where solution search is guided by complexity and error. Importantly, we argued that this theory is congruent with previous empirical evidence and many previous theories of grokking. In future, researchers could extend the ideas in this paper by undertaking a theoretical analysis of the concealment strategy discovered and by conducting further studies to assess the role of complexity penalties.
For both GP learning scenarios we also completed experiments without the complexity term arising under the variational approximation. The results of these experiments, namely a lack of grokking, can be seen in Appendix L. This demonstrates that some form of regularisation is needed in this scenario and provides further evidence for the possible necessity of the grokking mechanism we propose in Section 3.
All experiments can be found at this GitHub page. They have descriptive names and should reproduce the figures seen in this paper. For Figure 6, the relevant experiment is in the feat/info-theory-description branch.
To discover the relationship between concealment and grokking, we measured the “grokking gap” $\Delta_{k}$. In particular, we considered how an increase in the number of spurious dimensions relates to this gap. The algorithm used to run the experiment is detailed in Algorithm 1 (Appendix I.1). The result of running this algorithm can be seen in Figure 4. In addition to visual inspection of the data, a regression analysis was completed to determine whether the relationship between grokking gap and additional dimensionality might be exponential. The details of the regression are provided in Appendix F and its result is denoted as Regression Fit in the figure. The Pearson correlation coefficient (Pearson, 1895; SciPy developers, 2023) was also calculated in log space for all points available and for each dataset individually. Further, we completed a test of the null hypothesis that the distributions underlying the samples are uncorrelated and normally distributed. The Pearson correlation $r$ and $p$-values are presented in Table 2 (Appendix F.2). The Pearson correlation coefficients are high in aggregate and individually, indicating a positive linear trend in log space. Further, $p$-values in both the aggregate and individual cases are well below the usual threshold of $\alpha=0.05$.
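The correlation analysis described above can be reproduced in spirit with a few lines of SciPy; the arrays below are synthetic placeholders rather than the paper's measurements:

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder data: number of spurious dimensions vs. measured grokking gap (hypothetical epochs).
extra_dims = np.array([1, 2, 4, 8, 16, 32])
grokking_gap = np.array([120, 210, 450, 900, 2100, 4300])

# Correlation computed in log space, as described above; the p-value tests the
# null hypothesis of uncorrelated, normally distributed samples.
r, p_value = pearsonr(np.log(extra_dims), np.log(grokking_gap))
print(f"log-space Pearson r = {r:.3f}, p = {p_value:.3g}")
```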
Having been proposed to explain the empirical observation we have uncovered in this paper, Mechanism 1 should be congruent with these new findings – the first of which is the existence of grokking in non-neural models. Indeed, one corollary of our theory (Corollary 1) is that grokking should be model agnostic. This is because the proposed mechanism only requires certain properties of error and complexity landscapes during optimisation. It is blind to the specific architecture over which optimisation occurs.
B
Further, for any bivariate copula $C$ and for all univariate distribution functions $F_{1}$ and $F_{2}$, the right-hand side of (2) defines a bivariate distribution function.
Furthermore, the upper orthant order on $\mathcal{C}_{2}$ is defined by the pointwise comparison of survival functions of bivariate copulas, i.e.,
To be more precise, consider for a bivariate copula $E$ the subclass $\mathcal{C}^{E}:=\{C\in\mathcal{C}_{2}\mid C\leq_{\partial_{1}S}E\}$ of bivariate copulas that are smaller than $E$ or equal to $E$ in the Schur order for copula derivatives with respect to the first component.
Statement (i) is equivalent to $\mathcal{C}^{D}\subseteq\mathcal{C}^{E}$.
Denote by $\mathcal{C}_{2}$ the class of bivariate copulas.
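For reference, the constructions alluded to in the cells above can be written out as follows; these are textbook definitions stated in our notation, not formulas reproduced from the source:

```latex
% Sklar-type construction, survival function of a copula, and the upper orthant order.
\begin{align*}
  H(x_1,x_2) &= C\bigl(F_1(x_1),\,F_2(x_2)\bigr),\\
  \bar{C}(u_1,u_2) &= 1 - u_1 - u_2 + C(u_1,u_2),\\
  C \leq_{\mathrm{uo}} C' \ &:\Longleftrightarrow\ \bar{C}(u_1,u_2)\leq\bar{C}'(u_1,u_2)
  \ \text{ for all } (u_1,u_2)\in[0,1]^2 .
\end{align*}
```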
D
Theorem 1 implies that our proposed weights, $\frac{\mathbb{E}[Z\mid X_{E}]}{p}$ and $\frac{1-\mathbb{E}[Z\mid X_{E}]}{1-p}$, achieve
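Since the cell above is truncated, the following is only a hedged sketch of how such weights could be attached to training samples. The estimate `propensity_hat` of $\mathbb{E}[Z\mid X_E]$, the treated/control assignment of the two weights, and all variable names are placeholders rather than the paper's notation or implementation:

```python
import numpy as np

def training_weights(propensity_hat: np.ndarray, z: np.ndarray, p: float) -> np.ndarray:
    """Per-sample training weights for an A/B test with treatment fraction p.

    One plausible reading (an assumption, not stated in the excerpt): treated
    samples (z == 1) receive E[Z | X_E] / p and control samples receive
    (1 - E[Z | X_E]) / (1 - p).
    """
    w_treated = propensity_hat / p
    w_control = (1.0 - propensity_hat) / (1.0 - p)
    return np.where(z == 1, w_treated, w_control)

# Hypothetical usage: propensity_hat would come from a model of E[Z | X_E].
rng = np.random.default_rng(1)
z = rng.binomial(1, 0.5, size=1000)
propensity_hat = np.clip(rng.beta(2, 2, size=1000), 0.01, 0.99)
weights = training_weights(propensity_hat, z, p=0.5)
```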
In this section, we construct a potential outcomes model (Imbens and Rubin, 2015) for A/B tests that incorporate the training
In this section, we present simulation results. In subsection 5.1, we specify the simulation setup and the implementation
Once again, our approach demonstrates the lowest bias and reasonable variance. However, it’s important to note that in this case with $p=0.2$, the data splitting method exhibits higher bias and variance compared to the simulation with $p=1/2$.
The rest of the paper is organized as follows: Section 2 discusses related literature on interference in A/B tests. Section 3 introduces a potential outcome framework modeling interference caused by data training loops. Section 4 presents our weighted training approach along with theoretical justification. Section 5 showcases extensive simulation studies to demonstrate the performance of our proposed approach. Finally, we conclude with future works in Section 6.
B
Each of the five factors is accompanied by a designated set of predefined levels of variation, which are listed in Table 5.1. These levels were determined to cover a range of values that would effectively capture the variability and impact of these factors on the desired coating properties. The chosen levels allow for a systematic and comprehensive exploration of the parameter space.
Figure 5.3: Photograph illustrating the experimental setup during the HVOF coating process, showing the robot, turning lathe, and coating stream in action.
Thermal spraying is a versatile and widely used surface engineering technique that involves the deposition of coatings on the surface of a substrate to enhance its functional properties, such as wear resistance, corrosion resistance, and thermal insulation. The thermal spray coating process typically involves the application of thermal and kinetic energy to induce partial liquefaction of the coating material, thereby accelerating its projection towards the substrate surface. The amount of thermal and kinetic energy depends on the thermal spray coating technique. Various techniques, such as flame spraying, plasma spraying, arc spraying, and high-velocity oxygen fuel (HVOF) spraying can be used for coating using different types of coating material such as powder or wire. In this work we focus on the gas-fuel HVOF technology, which is described in more detail below.
The HVOF coatings were produced using an Oerlikon Metco thermal spraying equipment, namely the DJ 2700 gas-fuel HVOF system with water-cooled gun assembly. The fuel gas used for these tests was propane, its amount and ratio defined by the two key factors TGF and Lambda. For the process preparation, steel plates of type 1.4404 were welded onto an axis mounted on a turning lathe for rotational spraying. All samples were degreased with acetone and sandblasted with alumina before thermal spraying. The powder used for the spraying process was an agglomerated sintered tungsten carbide powder (WC-10Co-4Cr) with a grain size in the range of -45+15 µm, supplied by Oerlikon Metco. The photograph presented in Figure 5.3 showcases the experimental setup employed during the HVOF coating process, wherein the dynamic engagement of the robot, turning lathe, and coating stream can be observed.
The selected factors play a critical role in the HVOF coating process, exerting significant influence on the quality and performance of the resultant coatings. The PFR governs the amount of coating material supplied, while the SOD regulates the spacing between the spray gun and the substrate. The stoichiometric ratio of oxygen to fuel ($\lambda$) ensures specific combustion conditions. Furthermore, the CV, determined by the combined influence of the robot traverse speed and the rotational speed of the turning lathe (cf. Figure 5.2), enables precise control over the deposition process. Finally, the TGF is constituted by the summed gas flow of fuel, oxygen, and air, collectively governing the overall flow rate of the combustion gases.
C
$\operatorname*{Minimize}_{c\in\mathcal{C}}\;|c|,\quad\text{subject to}\quad\Delta_{c}\geq\varepsilon\cdot\mathbb{E}\left[f(x)\right].$
In essence, using Eq. (1), we attribute to any candidate the drop in prediction of samples where the candidate is perturbed.
This definition guarantees that, for a large amount of samples, the empirical drop is a good estimate of Eq. (1), as expressed by the following:
Calculating the prediction drop for each candidate in closed form, as formulated in Eq. (1), necessitates an exhaustive search and evaluation of $2^{b}$ candidates—an impractical endeavor for large documents.
The optimal candidate, denoted as $c^{\star}$, is determined by minimizing the size of the candidate subset while ensuring that it causes the average prediction $\mathbb{E}\left[f(x)\right]$ to drop by a significant amount, i.e., it is such that $\Delta_{c^{\star}}\geq\varepsilon\cdot\mathbb{E}\left[f(x)\right]$, as formulated by the optimization problem:
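A hedged sketch of how the empirical drop $\Delta_c$ could be estimated by Monte Carlo and screened against the $\varepsilon$-threshold above; the model `f`, the perturbation routine, and the toy data are placeholders, not the authors' pipeline:

```python
import numpy as np

def empirical_drop(f, samples, candidate, perturb):
    """Estimate Delta_c: the average drop in f when `candidate` is perturbed.

    `perturb(x, candidate)` should return a copy of sample x with the candidate's
    positions perturbed; both callables are user-supplied placeholders here.
    """
    base = np.mean([f(x) for x in samples])
    perturbed = np.mean([f(perturb(x, candidate)) for x in samples])
    return base - perturbed, base

def passes_threshold(f, samples, candidate, perturb, eps=0.2):
    """Check the constraint Delta_c >= eps * E[f(x)] from the optimization above."""
    drop, base = empirical_drop(f, samples, candidate, perturb)
    return drop >= eps * base

# Toy usage with a linear "model" over binary token-presence vectors.
f = lambda x: float(np.dot(x, np.arange(len(x))))
perturb = lambda x, c: np.where(np.isin(np.arange(len(x)), list(c)), 0, x)
samples = [np.random.default_rng(i).integers(0, 2, size=8) for i in range(50)]
print(passes_threshold(f, samples, candidate={3, 7}, perturb=perturb))
```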
C
From 2016 to 2023, a noticeable shift in price dynamics emerges towards the end of 2021. As a result, we can observe three distinct phases: a period of stability, a subsequent phase characterized by increased volatility, and an intermediate transitory interval.
While the electricity market has been gaining attention over the years (Hong et al., 2020), and a rich literature related to the Day-Ahead market price forecasting has been developed (Lago et al., 2021), most studies focus on older stable periods that do not reflect the peculiarities of the current market.
In this work, a new framework for a price prediction model in the Day-Ahead market based on the price dynamics has been proposed. This new approach has been thoroughly studied, demonstrating improved results across various metrics and showing statistical improvement in five different markets and two distinct market periods. The outlier mitigation process has proven to be vital for achieving these results for the proposed models, although it could still be improved for better results. Additionally, two new and recent datasets have been made available to the community, aiming to explore new models on datasets that are closer to the current market situation. To get the best potential out of the models, combining them has proven to be key. In particular, combining the two methodologies evaluated produces the best results.
EPF is an open field in which a wide variety of tasks are included, mainly depending on the market being dealt with: the Day-Ahead market, Intra-Day markets or Balancing markets. Among these, the Day-Ahead market has garnered the most significant attention. While the regulatory framework of this market varies across countries, its structure follows a standard pattern in Europe: on day D, prior to a designated hour H, all market participants are required to submit their bids for purchasing or selling energy for each hour of the subsequent day, D+1. The price of each hourly period is established independently through an auction-based format.
Although probabilistic forecasting is beyond the scope of this study, it is worth noting that a significant portion of the EPF field is devoted to this area. Therefore, it is necessary to mention some of the main works within this particular trend. A satisfactory idea was introduced in Nowotarski and Weron (2015): applying quantile regression (Koenker and Bassett Jr, 1978) using the predictions obtained by point estimators as explanatory variables (QRA). Given the good results of the LEAR model in point estimation, Uniejewski and Weron (2021) propose the use of quantile regression following the same philosophy, but applying L1 regularisation on the loss function, so that an automatic selection of variables is performed, improving the results. Complex recurrent or convolutional neural network structures in a probabilistic context are compared in Mashlakov et al. (2021) but focusing on the electricity market in general, not only on price. More classical techniques such as bootstrap over residuals (Efron, 1992) to obtain probabilistic results have also been used for price forecasting in the electricity market in Narajewski and Ziel (2021). A more in-depth study on the price appears in Marcjasz et al. (2023) through the use of distributional neural networks, used for the first time in this field. The results are better than those obtained by applying QRA to the LEAR model or to the neural network of Lago et al. (2018), although not entirely satisfactory as observed by the number of hours that pass the Kupiec test (Kupiec et al., 1995) at 50% and 90%. It is also worth mentioning that the use of the novel framework of conformal prediction (Vovk et al., 2005) has also been applied in EPF (Kath and Ziel, 2021), where it is concluded that valid prediction intervals can be obtained on the predictions made, improving on several metrics of QRA.
A
For the nonmixing case, Config3 (Fig. 4, third row), the univariate estimator $\underline{\hat{H}}^{\rm U}$ naturally shows much smaller biases compared to the multivariate estimators $\underline{\hat{H}}^{\rm M}$ and $\underline{\hat{H}}^{\rm M,bc}$. Yet, it also displays larger variances. Thus, as a result of the bias-variance trade-off, the MSEs of all three estimators are equivalent.
The asymptotic estimation performance is studied theoretically in Section IV-B. The finite-size estimation performance is investigated numerically in Section V.
The estimation performance of the proposed multivariate estimator is compared, in terms of biases, variances, MSE, and covariance structures, to an earlier multivariate estimator defined in [25, 26] and also to the univariate estimator defined in [19].
Importantly, this shows that even for data corresponding to nonmixing situations, there is no cost in estimation performance associated with the use of multivariate estimators.
For the nonmixing case with equal $H_{m}$, Config4 (Fig. 4, fourth row), the univariate estimator $\underline{\hat{H}}^{\rm U}$ and bias-corrected multivariate estimator $\underline{\hat{H}}^{\rm M,bc}$ perform similarly to Config2. This suggests that, for $H_{m}$ all equal, the presence of mixing has no impact in terms of bias or variance. However, estimation performance of the multivariate estimator $\underline{\hat{H}}^{\rm M}$ is slightly more affected than in Config2, showing that the repulsion bias is more substantial in the absence of mixing.
C
The section culminates in a result showing that the empirical variance is a distribution-uniform almost-surely consistent estimator for the true variance and its convergence rate is polynomial in the sample size (Section 3.2), which, when combined with Eq. 16 from Section 2 yields our main result in Section 3.3.
We will now shift our focus to sequential conditional independence testing with anytime-valid type-I error guarantees. Before deriving an explicit test, we first demonstrate in Section 4.3 that the hardness of conditional independence testing highlighted in (42) has a similar analogue in the anytime-valid regime.
Section 4 applies the content of the previous sections to the problem of anytime-valid conditional independence testing. We first show that distribution-uniform anytime-valid tests of conditional independence are impossible to derive without imposing structural assumptions, a fact that can be viewed as a time-uniform analogue of the hardness result due to Shah and Peters [38, §2]. We then develop a sequential version of the Generalized Covariance Measure test due to Shah and Peters [38, §3] and show that it distribution- and time-uniformly controls the type-I error (and has nontrivial power) as long as certain regression functions are estimated at sufficiently fast rates. To the best of our knowledge, this is the first anytime-valid test of conditional independence that does not rely on Model-X assumptions.
While Section 2 is a natural extension of distribution-uniform inference to the anytime-valid setting, it is deceptively challenging to derive procedures satisfying Section 2 even for the simplest of statistical problems such as tests for the mean of independent and identically distributed random variables and the main results of this section themselves rely on certain technical underpinnings such as distribution-uniform almost-sure consistency and strong Gaussian approximations.
The proof can be found in Section A.4. It should be noted that Section 4.3 is not an immediate consequence of S&P’s fixed-$n$ hardness result in (42) since while it is true that the time-uniform type-I error in the right-hand side of (51) is always larger than its fixed-$n$ counterpart, the time-uniform power in the left-hand side of (51) is typically much larger than the fixed-$n$ power. Indeed, while an important facet of hypothesis testing is to find tests with power as close to 1 as possible, the time-uniform power of anytime-valid tests is typically equal to 1, and such tests are sometimes referred to explicitly as “tests of power 1” for this reason [34]. This should not be surprising since the ability to reject at any stopping time (data-dependent sample size) larger than $m$ introduces a great deal of flexibility. The fact that this flexibility is insufficient to overcome $\mathcal{P}_{0}^{\star}$-uniform control of the time-uniform type-I error is what makes Section 4.3 nontrivial.
B
By applying the loss function shown in Eq. 5, we obtain representations $q_{\phi_{u}}(r_{u}\mid x)$ and $q_{\phi_{i}}(r_{i}\mid x)$ of the unmeasured confounders that are independent of the user and the item.
Thanks to the availability of the Reasoner dataset, we are able to have both general feedback data and real user preference labels. We evaluate the performance of all baselines and SLFR on real user preference label data, trained with general feedback data. The results are shown in Table 4. The methods that fit the data purely perform worse on real labels than on regular data, because the resulting preferences are mixtures with biases that differ from the true preferences or are even diametrically opposite. Even if user preferences are disentangled in the preference modeling phase (e.g. DICE), performance is still not guaranteed. Other debiasing methods can provide some performance guarantees on real label data due to their ability to mitigate the effects of one or more biases, among which InvPref, which uses general debiasing, has outstanding performance. SLFR achieves the best results on the real label dataset, which demonstrates that SLFR performs excellently in capturing the real preferences of users compared to other debiasing methods.
where $\gamma$ is a temperature hyperparameter used to control the debiasing strength of the model; higher values of $\gamma$ imply stronger debiasing. We will discuss the effect of the value of $\gamma$ in the following experiments. The framework of SLFR is shown in Figure 3.
(1) We investigate the new problem of debiasing in recommender systems when incorporating the effects of former recommender systems and unmeasured confounders. (2) We state the assumption of independence of confounders and user preferences, the basis for separating them in the latent parameter space. (3) We propose a novel framework, Separating and Learning Latent Confounders For Recommendation (SLFR), which obtains the representation of latent confounders to assist the model in capturing the true preferences of users. (4) We conduct extensive experiments that include both general and specific debiasing scenarios to validate the advantages of our method.
In order to address the General Debiasing Recommendation Problem, we propose a novel debiasing framework named SLFR, which consists of two stages.
D
$Q_{\tau}(Y\mid\emptyset)=Q_{\tau}(Y)$, which is the $\tau$-th unconditional quantile of $Y$.
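For completeness, the conditional quantile underlying the statement above is the standard one (our notation, not a formula from the source):

```latex
% tau-th conditional quantile of Y given X.
\begin{equation*}
  Q_{\tau}(Y \mid X) \;=\; \inf\bigl\{\, y \in \mathbb{R} : F_{Y\mid X}(y \mid X) \geq \tau \,\bigr\},
  \qquad \tau \in (0,1),
\end{equation*}
```

so that conditioning on the empty set recovers the unconditional quantile $Q_{\tau}(Y)$.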
This condition is often used in the literature; See, for example, Huang et al., (2010), Fan et al., (2011), He et al., (2013), Zhong et al., (2020), and references therein.
also widely used in the variable screening literature; see, for example, Fan and Lv, (2008), Fan et al., (2011), Li et al., (2012), He et al., (2013), Ma et al., (2017),
The B-spline approximation technique has been widely used to approximate unknown functions in nonparametric regression; see, for example, Sherwood and Wang, (2016), Fan et al., (2011), He et al., (2013),
also widely used in the variable screening literature; see, for example, Fan and Lv, (2008), Fan et al., (2011), Li et al., (2012), He et al., (2013), Ma et al., (2017),
C
$Q^{3}_{b,c}$: a locally defined index-set (see (3.81) or (4.25))
$S_{\bullet}$: an operator on layouts (see (2.1)-(2.3) and (4.2))
$J^{1}_{(k,q)}$: a locally defined operator on route sequences (see (2.24) or (3.21))
$J^{2}_{(k,q)}$: a locally defined operator on route sequences (see (2.41) or (3.52))
$\mathcal{S}_{\bullet}$: an operator on layouts (see (3.1)-(3.4))
A
Additionally, one can notice that the rows of the generated matrix mirror those of the Hadamard matrix $H_{N}$, but in a different order. Essentially, there exists a permutation matrix $O_{ij}$ such that the expression $O_{ik}H_{N,kj}$ gives the matrix of the stored orthogonal binary patterns. This leads to an alternative process for generating the patterns.
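A minimal sketch of the alternative generation process suggested above: permuting the rows of a Hadamard matrix yields mutually orthogonal binary patterns. The matrix size and the random permutation below are placeholders:

```python
import numpy as np
from scipy.linalg import hadamard

N = 16                                         # must be a power of two for this construction
H = hadamard(N)                                # rows are mutually orthogonal +/-1 vectors
rng = np.random.default_rng(0)
O = np.eye(N, dtype=int)[rng.permutation(N)]   # a permutation matrix O
patterns = O @ H                               # rows of H in a different order: O_ik H_{N,kj}

# Orthogonality is preserved under row permutation.
assert np.array_equal(patterns @ patterns.T, N * np.eye(N, dtype=int))
```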
Figure 5 illustrates the relative frequency distribution of the energies of discovered solutions in relation to the planted energies. The symbol $\Omega$ represents the entire probability space, which is partitioned based on the likelihood of events associated with finding a solution within specified energy ranges. The probability of finding patterns with low enough energies relative to the planted energies is comparatively low in the primary region $10<K<55$, where most obtained energies are localized below the average value of the planted energies (red colour). $0.5\Delta E$ is the actual average of the planted energies and it depends on the parameter $K$. Another significant measure consists of patterns above the mean but below three-quarters of the planted energies range. Solutions that lie outside of the given range are found for some $K$. For other values of $K<10$ or $K>55$, the dynamics are trivial, and the solutions are closer to the ground state. A more detailed picture can be found in the Supplementary Material.
One of the benchmarks for the initial testing of the CIM was the specific Möbius ladder graph instance [15]. However, it appeared that the minimisation of Ising Hamiltonian on such graphs does not pose serious difficulty, and many optical optimization machines show a good performance using such instances. This problem can be made harder by introducing the rewiring procedure for the connectivity graphs to increase the complexity of the problem and the specific simplicity criteria to measure this complexity [23]. The statistical approach is another way to measure the complexity of the computational problems [24, 25]. The complexity can be elucidated in the vicinity of the easy-hard-easy complexity transition in the SAT tasks [26] or in the generalization in neural networks (NNs) [27, 28]. This statistical approach draws the correspondence between models of phase transitions in physics and complexity transitions in computational problems. Wishart planted ensemble [29] and 3D tiling problem [30] were proposed as the problem instances with a tunable hardness to address the benchmarking issues. The statistical approach also has many results concerning inference [25], ML-related tasks [31, 32, 33] and compressed sensing [34, 35], that later was used to reevaluate the performance of the CIM [36]. The research on different optimization problems over random structures beyond QUBO condensed in the general criteria of statistical hardness. A new approach for algorithmic intractability is called the Overlap Gap Property and is based on the topological disconnectivity property of the set of pairwise distances of near-optimal solutions [37]. It emerges in many models with the prior random structures, coincides with the conventional hardness phase transition and is related to the stability of the algorithms. This property can be applied to the description of the hardware operating principles. See also the review article [38], which highlights connections between the physics of disordered systems, phase transitions in inference problems, and computational hardness.
Models with planted solutions are an old subject in information theory and statistical physics [55, 56, 57]. Addressing the planted solution problems appeared in many other domains beyond optimization, e.g. inference-related tasks [25] or image-reconstruction [58, 59]. For instance, the Wishart planted ensemble was introduced to check the performance of optimization algorithms and related statistical properties [29].
However, many of the suggested benchmarking instances have specific drawbacks and cannot characterise the physical systems’ evolution. Some of them are specifically tailored to particular hardware to highlight its strengths (e.g. Möbius ladder instances) or inherently possess statistical properties that make them hard to analyze (e.g. the Wishart planted ensemble or 3D tiling), i.e. to characterize the solution space properties (e.g. the unified framework via generic tensor networks in [39]). We aim to construct instances not only with controllable hardness, but also with controllable distances between clusters of low-energy solutions and a controllable energy difference between them. Our construction uses a methodology based on the well-known associative memory model [40, 41] with additional modifications for the optimization context. It has similar advantages and also eliminates many drawbacks of the previous models. By introducing asymmetry among the planted memory patterns, we eliminate the degeneracy between multiple ground states. This not only allows for more expressive representations of results in various optimization contexts but also enables easy comparison using the corresponding distributions. Moreover, it is possible to study the solution space properties and even go beyond, i.e. to study individual dynamical trajectories and transformations of the phase space of possible solutions.
C
For the first component $z_{1}$, we select the parameters $\boldsymbol{d}_{\sigma}=(1,1)$, $\alpha_{\sigma}=2$, $\boldsymbol{d}_{\nu}=(0,1)$, $\alpha_{\nu}=5$, $\boldsymbol{d}_{\phi}=(1,1)$, $\alpha_{\phi}=5$, and $c_{\phi}=10$; for $z_{2}$, we select $\boldsymbol{d}_{\sigma}=(0,1)$, $\alpha_{\sigma}=1.5$, $\boldsymbol{d}_{\nu}=(1,0)$, $\alpha_{\nu}=4$, $\boldsymbol{d}_{\phi}=(1,1)$, $\alpha_{\phi}=-8$, and $c_{\phi}=40$; for $z_{3}$, we select $\boldsymbol{d}_{\sigma}=(1,0)$, $\alpha_{\sigma}=2$, $\boldsymbol{d}_{\nu}=(1,1)$, $\alpha_{\nu}=3$, $\boldsymbol{d}_{\phi}=(0,1)$, $\alpha_{\phi}=4$, and $c_{\phi}=10$. By using the above correlation structure, the variance, range, and scale parameters of the random field vary in space differently within each component.
While we have theoretical identifiability results when the variances of the latent components vary enough based on the auxiliary variable $\boldsymbol{u}$, these results apply only in the limit of infinite data. In real-life applications with finite data, there is however no guarantee that the identifiability conditions are fulfilled. In this section, we aim to study the performance of iVAE in various spatial scenarios with finite sample size using simulation studies. We consider six different simulation settings, where the observed data are generated from the nonlinear ICA model (2). The underlying latent components $\boldsymbol{z}$ are generated differently in each setting. Some of the settings exhibit nonstationary variance, meaning that the identifiability conditions are fulfilled, while some settings are stationary, for which the identifiability conditions are not fulfilled. We compare iVAE to SBSS and SNSS, although they are developed for linear mixing and are thus not optimal when the mixing function is nonlinear. In addition, iVAE is compared against a modified version of time contrastive learning (TCL) [16], where the auxiliary variable is a spatial segmentation instead of a time segmentation. TCL exploits nonstationarity in variance when solving the BSS problem and can estimate nonlinear unmixing transformations. The simulations can be reproduced using R 4.3.0 [24] together with the R packages SpatialBSS [25], gstat [26] and NonlinearBSS. The NonlinearBSS package, which is available at https://github.com/mikasip/NonlinearBSS, contains an R implementation of spatial iVAE and some methods to generate nonstationary spatial data from the nonlinear ICA model (2). The simulations were executed on the CSC Puhti cluster, a high-performance computing environment.
Settings 1 and 2 are considered the easiest settings for the iVAE model, as the variances (and, in Setting 2, also the means) of the latent fields explicitly change between the clusters. These settings are spatial variants of the time series settings of previous simulation studies, such as [13, 16], where the latent components have varying means/variances across time segments. They set a baseline performance under conditions where the variances change explicitly with the chosen auxiliary variable. Settings 4 and 5 measure the performance when the latent fields are stationary and thus not identifiable in theory. Of these two, Setting 4 has higher variability in the sample mean and sample variance across the latent fields, making it more favorable for iVAE. Settings 3 and 6 illustrate the performance when the latent fields are nonstationary.
In Settings 1, 2, 3, and 6, where the variances of the latent fields vary with the spatial location and thus fulfill the identifiability conditions, iVAE showed superior performance compared with SBSS, SNSS, and TCL. In Setting 1, where the latent fields are zero-mean Gaussian with varying variances between the clusters, SNSS performed almost as well as iVAE, especially when the number of mixing layers was one. However, in Setting 2, where the mean also changed between the clusters, the performance of SNSS dropped considerably, whereas the performance of iVAE remained good. In Setting 3, SBSS and SNSS had similar performances, but neither outperformed iVAE, especially when the number of mixing layers was high. Settings 4 and 5 are stationary and thus, in theory, more favorable for SBSS. Surprisingly, in Setting 4, where the Matern covariance function's range parameters $\phi$ were high and the scale parameters $\nu$ were low, iVAE outperformed SBSS. This might be because such Matern parameters lead to high spatial dependence and larger variability in the sample variance compared to Setting 5, meaning that the mean and the variance change to some degree throughout the spatial domain, which allows identifiability. However, in Setting 5, where the parameters were selected so that the mean and the variance were more stable throughout the field, SBSS slightly outperformed iVAE. In Setting 6, iVAE was the only method that reliably separated the sources. In conclusion, as long as the sample variance varies enough in space, iVAE recovers the latent components well and outperforms the competing methods.
Based on the results of this paper, iVAE is the preferable method in settings where the variances of the latent fields are not stable across space. However, in stationary settings where the sample mean and sample variance did not vary enough, SBSS still performed better. In practice, this corresponds, for example, to a Matern covariance function with a small range parameter and a large shape parameter. Many real-world spatial phenomena, such as temperature or humidity, often show high spatial dependence, making iVAE the preferable method. Overcoming the performance drop in stationary settings with low spatial dependence requires the development of new models/methods that do not assume nonstationarity for identifiability; this is left for future work.
B
Hence, since the comonotonic coupling is optimal for submodular cost functions [24], the result follows.
The optimal coupling of the MK minimisation problem induced by the score given in (11) is the comonotonic coupling.
The comonotonic coupling is also optimal when the cost function is a score that elicits the $\alpha$-expectile.
The optimal coupling of the MK minimisation problem induced by the scoring function given in (14) is the comonotonic coupling.
The optimal coupling of the MK minimisation problem induced by any consistent scoring function for the entropic risk measure is the comonotonic coupling.
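As a quick sanity check of why comonotonicity is the optimal arrangement, the Python sketch below (an illustration, not the proof referenced above) compares the expected cost of the empirical comonotonic coupling of two samples with random couplings, using $c(x,y)=(x-y)^2$ as one example of a cost satisfying the submodularity (Monge) condition; the scores in (11) and (14) are other such costs.

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_cost(x, y):
    """Average of the submodular cost c(x, y) = (x - y)^2 over paired samples."""
    return np.mean((x - y) ** 2)

# Two arbitrary marginals (illustrative choices).
x = rng.gamma(shape=2.0, scale=1.0, size=10_000)
y = rng.normal(loc=1.0, scale=2.0, size=10_000)

# Comonotonic coupling: pair the sorted samples (empirical quantile coupling).
comonotonic = expected_cost(np.sort(x), np.sort(y))

# Random couplings obtained by shuffling y.
random_couplings = [expected_cost(x, rng.permutation(y)) for _ in range(20)]

print(f"comonotonic cost:          {comonotonic:.3f}")
print(f"min over random couplings: {min(random_couplings):.3f}")
# The comonotonic pairing attains the smallest expected cost, consistent with
# the optimality of the comonotonic coupling for submodular cost functions.
```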
B
$\|u-v\|_{K}^{2}/\|u\|_{K}^{2} \sim \log p\big(\xi(X_{u})=f(X_{u}) \mid \xi(X_{v})=f(X_{v})\big)/\log p\big(\xi(X_{u})=f(X_{u})\big)$
Hence, the relative error in KFs behaves like a log-likelihood ratio. This fact allows the application of tools from AIT, as explained below, to show that the relative error, and consequently KFs, can be viewed as a measure of data compression in AIT.
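For orientation, here is a minimal Python sketch of how a KF-style relative error can be computed for a candidate kernel: interpolate the data using all points and using a random half, and measure the relative loss in RKHS norm, $\rho = 1 - y_s^{\top} K_{ss}^{-1} y_s / (y^{\top} K^{-1} y)$. The Gaussian kernel, the jitter, and the toy data are illustrative assumptions; the formula follows the standard KF construction and may differ in detail from the variants discussed in the text.

```python
import numpy as np

rng = np.random.default_rng(2)

def gaussian_kernel(A, B, lengthscale):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * lengthscale ** 2))

def kf_relative_error(X, y, lengthscale, jitter=1e-8):
    """Relative RKHS-norm loss when half of the data points are removed:
    rho = 1 - y_s^T K_ss^{-1} y_s / (y^T K^{-1} y), which lies in [0, 1]."""
    n = len(y)
    K = gaussian_kernel(X, X, lengthscale) + jitter * np.eye(n)
    full = y @ np.linalg.solve(K, y)

    sub = rng.choice(n, size=n // 2, replace=False)
    K_s = gaussian_kernel(X[sub], X[sub], lengthscale) + jitter * np.eye(len(sub))
    half = y[sub] @ np.linalg.solve(K_s, y[sub])
    return 1.0 - half / full

# Toy regression data (illustrative).
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(2 * X[:, 0]) + 0.05 * rng.standard_normal(200)

for ls in (0.01, 0.5, 5.0):
    print(f"lengthscale={ls}: rho = {kf_relative_error(X, y, ls):.3f}")
# A kernel is "good" for the data when rho stays small, i.e. halving the
# data barely changes the interpolant.
```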
In Section 2 we show that the relative error used to learn the kernel in the original version of Kernel Flows can be viewed as a log-likelihood ratio. In Section 3, we give a brief introduction to AIT and introduce Kolmogorov Complexity (KC) and the Minimal Description/Message Length (MDL/MML) principle. In Section 4 we establish the link between MDL and KFs,
Now, let us consider the problem of learning the kernel from data. As introduced in [OY19], the method of KFs is based on the premise that a kernel is good if there is no significant loss in prediction accuracy when the number of data points is halved. This led to the introduction of the KF relative error. (Footnote 3: A variant of KFs based on Lyapunov exponents, designed to capture the long-term behavior of the system, is given in [HO21]; another, based on the Maximum Mean Discrepancy, which captures the statistical properties of the system, is also in [HO21]; and another, based on the Hausdorff distance, which allows reconstructing attractors, is in [YHK+23].)
In this paper, we look at the problem of learning kernels from data from an AIT point of view and show that it can be viewed as a data compression problem. In particular, using the Minimal Description Length (MDL) principle, we show that Sparse Kernel Flows [YSH+24] is a natural approach for learning kernels from data, and that a cross-validation argument is not necessary to justify its efficiency, thus giving it a more solid theoretical foundation.
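As background for the MDL argument (a standard statement of the principle, not a result specific to [YSH+24]), the two-part code length that MDL minimizes can be written as:

```latex
% Two-part MDL: choose the hypothesis (here, a kernel K within a family) that
% minimizes the total description length of the model plus the data given the model.
\[
  \hat{K} \;=\; \operatorname*{arg\,min}_{K \in \mathcal{K}}
  \Big\{ \underbrace{L(K)}_{\text{bits for the kernel}}
       + \underbrace{L(\mathcal{D} \mid K)}_{\text{bits for the data given the kernel}} \Big\},
  \qquad
  L(\mathcal{D} \mid K) \;=\; -\log p(\mathcal{D} \mid K).
\]
% A sparse kernel (few active terms in a dictionary of base kernels) keeps L(K)
% small, while a kernel that fits the data well keeps L(D | K) small; Sparse
% Kernel Flows can then be read as trading off these two terms.
```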
A
This work contains three sections along with the introductory Section 1. In the preliminary Section 2, we present all our $q$-definitions.
In the main Section 3, we state and prove our results concerning the $q$-order statistics and their distributional properties.
We have studied their main properties concerning the $q$-distribution functions and $q$-density functions of the relative $q$-ordered random variables.
Order statistics and their properties have been studied thoroughly over the last decades. The literature devoted to order statistics
The main objective of this work is to introduce $q$-order statistics, for $0<q<1$, arising from dependent and not identically distributed $q$-continuous random variables, and to study their distributional properties. We introduce $q$-order statistics as $q$-analogues of the classical order statistics.
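For orientation, recall the classical i.i.d. fact that the $q$-analogue generalizes (the dependent, not identically distributed $q$-version studied here differs): for i.i.d. continuous random variables $X_1,\dots,X_n$ with distribution function $F$, the $k$-th order statistic satisfies

```latex
\[
  F_{X_{(k)}}(x) \;=\; P\big(X_{(k)} \le x\big)
  \;=\; \sum_{j=k}^{n} \binom{n}{j}\, F(x)^{j}\,\big(1 - F(x)\big)^{\,n-j}.
\]
```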
A