robench-2024b Collection · 48 items · Updated
text_with_holes stringlengths 148–4.35k | text_candidates stringlengths 68–2.48k | A stringclasses 6 values | B stringclasses 6 values | C stringclasses 6 values | D stringclasses 6 values | label stringclasses 4 values |
---|---|---|---|---|---|---|
<|MaskedSetence|> <|MaskedSetence|> In total, the mice came from 85 distinct families. The obvious confounding variable is genetic inheritance due to family relationships. <|MaskedSetence|> These 27 response variables fall into six different categories, relating to the glucose level, insulin level, immunity, EPM, FN and OFT respectively.
.
|
**A**:
V-A2 Heterogeneous Stock Mice
The heterogeneous stock mice data set contains measurements from around 1700 mice, with 10,000 genetic variables [51].
**B**: We study the association between the genetic variables and a set of 27 response variables that could possibly be affected by inheritance.
**C**: These mice were raised in cages across four generations over a two-year period.
|
CBA
|
ACB
|
ACB
|
ACB
|
Selection 3
|
Figure 2 presents the three population generating models that we use, labeled Simulations 1, 2 and 3. In all cases the true model includes both the solid and dashed lines. <|MaskedSetence|> Simulated data were generated from a multivariate normal with mean vector of 0 and a covariance matrix implied by the population generating models. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: All data were simulated using the lavaan R package (Rosseel, \APACyear2012).
**B**: For both Simulations 1 and 2, we are interested in estimating the factor loading for $Y_{2}$ in the latent to observed variable transformed equation, which corresponds to
.
**C**: The misspecified model omits the parameters represented by the dashed lines.
|
CAB
|
CAB
|
CAB
|
BCA
|
Selection 2
|
<|MaskedSetence|> <|MaskedSetence|> The level of stochasticity is game dependent; however, it can be observed in many Atari games.
An example of such behavior can be observed in the game Kung Fu Master – after eliminating the current set of opponents, the game screen always looks the same (it contains only player’s character and the background). The game dispatches diverse sets of new opponents, which cannot be inferred from the visual observation alone (without access to the game’s internal state) and thus cannot be predicted by a deterministic model. Similar issues have been reported in Babaeizadeh et al. (2017a), where the output of their baseline deterministic model was a blurred superposition of possible random object movements. <|MaskedSetence|>
|
**A**: As can be seen in Figure 11 in the Appendix, the stochastic model learns a reasonable behavior – samples potential opponents and renders them sharply.
.
**B**:
A crucial decision in the design of world models is the inclusion of stochasticity.
**C**: Although Atari is known to be a deterministic environment, it is stochastic given only a limited horizon of past observed frames (in our case 4 frames).
|
BCA
|
BCA
|
ABC
|
BCA
|
Selection 4
|
<|MaskedSetence|> <|MaskedSetence|> They studied the stochastic form of the Lanchester model and asked whether the attacking and defending armies play any role in the number of casualties of the battle. They compared their results with those of Bracken and Fricker and found them to be different. They concluded that the logarithmic and linear-logarithmic forms fit more appropriately than the linear form found by Bracken. <|MaskedSetence|> They applied the Gibbs sampling approach along with Monte Carlo simulation to derive the distribution patterns of the parameters involved.
|
**A**: This was an air battle between Germany and Britain.
Wiper, Pettit and Young [44] applied Bayesian computational techniques to fit the Ardennes Campaign data.
**B**:
NR Johnson and Mackey [22] analysed the Battle of Britain using the Lanchester model.
**C**: They also concluded that the Bayesian approach is more appropriate to make inferences for battles in progress as it uses the prior information from experts or previous battles.
|
BAC
|
ABC
|
BAC
|
BAC
|
Selection 1
|
We will consider the case of unidirectional manipulation. <|MaskedSetence|> If being part of the treatment group is beneficial, one faces incentives to manipulate the running variable to be eligible but not to be ineligible. Similarly, if being in the treatment group is detrimental, people face incentives to manipulate the running variable to be ineligible but not to be eligible. Furthermore, unidirectional manipulation of the running variables leads to bunching at the threshold, which is observed in many empirical applications. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: In Diamond and Persson (2016), teachers have incentives to inflate students’ scores but have no incentives to reduce students’ scores (see section 2.2 where teachers’ incentives are discussed).
.
**B**: For instance, taxpayers benefit by misreporting income below kink points but do not have any reason to misreport income above those kink points (Saez, 2010).
**C**: In most applications, assuming the manipulation to be unidirectional is well-grounded and fits empirical evidence.
|
CBA
|
CBA
|
CBA
|
CAB
|
Selection 1
|
<|MaskedSetence|> Thus we ran ten consecutive learning trials and averaged them. <|MaskedSetence|> The game of CARTPOLE was selected due to its widespread use and the ease with which the DQN can achieve a steady-state policy.
For the experiments, a fully connected neural network architecture was used. <|MaskedSetence|> To minimize the
DQN loss, the ADAM optimizer was used [25].
|
**A**: We have evaluated Dropout-DQN algorithm on CARTPOLE problem from the Classic Control Environment.
**B**: It was composed of two hidden layers of 128 neurons and two Dropout layers between the input layer and the first hidden layer and between the two hidden layers.
**C**:
To evaluate the Dropout-DQN, we employ the standard reinforcement learning (RL) methodology, where the performance of the agent is assessed at the conclusion of the training epochs.
|
CAB
|
BCA
|
CAB
|
CAB
|
Selection 1
|
<|MaskedSetence|> Zhang et al. <|MaskedSetence|> <|MaskedSetence|> (2020) analyze the performance across different dataset sizes.
Olson et al. (2018) evaluate the performance of modern neural networks using the same test strategy as Fernández-Delgado et al. (2014) and find that neural networks achieve good results but are not as strong as random forests.
.
|
**A**: Bornschein et al.
**B**:
Neural networks are universal function approximators.
The generalization performance has been widely studied.
**C**: (2017) demonstrate that deep neural networks are capable of fitting random labels and memorizing the training data.
|
BCA
|
BCA
|
BCA
|
ABC
|
Selection 3
|
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019; Ayoub et al., 2020; Zhou et al., 2020), as well as nonlinear settings involving general function approximators (Wen and Van Roy, 2017; Jiang et al., 2017; Du et al., 2019b; Dong et al., 2019). <|MaskedSetence|> (2020); Zhou et al. (2020), which generalizes the one proposed by Yang and Wang (2019a). We remark that our setting differs from the linear setting studied by Yang and Wang (2019b); Jin et al. (2019). <|MaskedSetence|> Also, our setting is related to the low-Bellman-rank setting studied by Jiang et al. (2017); Dong et al. <|MaskedSetence|> In comparison, we focus on policy-based reinforcement learning, which is significantly less studied in theory. In particular, compared with the work of Yang and Wang (2019b, a); Jin et al. (2019); Ayoub et al. (2020); Zhou et al. (2020), which focuses on value-based reinforcement learning, OPPO attains the same $\sqrt{T}$-regret even in the presence of adversarially chosen reward functions. Compared with optimism-led iterative value-function elimination (OLIVE) (Jiang et al., 2017; Dong et al., 2019), which handles the more general low-Bellman-rank setting but is only sample-efficient, OPPO simultaneously attains computational efficiency and sample efficiency in the linear setting. Despite the differences between policy-based and value-based reinforcement learning, our work shows that the general principle of “optimism in the face of uncertainty” (Auer et al., 2002; Bubeck and Cesa-Bianchi, 2012) can be carried over from existing algorithms based on value iteration, e.g., optimistic LSVI, into policy optimization algorithms, e.g., NPG, TRPO, and PPO, to make them sample-efficient, which further leads to a new general principle of “conservative optimism in the face of uncertainty and adversary” that additionally allows adversarially chosen reward functions.
.
|
**A**: In particular, our setting is the same as the linear setting studied by Ayoub et al.
**B**: (2019).
**C**: It can be shown that the two settings are incomparable in the sense that one does not imply the other (Zhou et al., 2020).
|
BCA
|
ACB
|
ACB
|
ACB
|
Selection 2
|
<|MaskedSetence|> <|MaskedSetence|> The top 6 representatives (according to a user-selected quality measure) are still shown at the top of the main view (Figure 1(e)), and the projection can be switched at any time if the user is not satisfied with the initial choice. We also provide the mechanism for a selection-based ranking of the representatives. During the exploration of the projection, if the user finds a certain pattern of interest (i.e., cluster, shape, etc.), one possible question might be whether this specific pattern is better visible or better represented in another projection. After selecting these points, the list of top representatives can be ranked again to contain the projections with the best quality regarding the selection (as opposed to the best global quality, which is the default). The way this “selection-based quality” is computed is by adapting the global quality measures we used, taking advantage of the fact that they all work by aggregating a measure-specific quality computation over all the points of the projection. In the case of the selection-based quality, we aggregate only over the selected points to reach the final value of the quality measure, which is then used to re-rank the representatives.
Figure 2: Hyper-parameter exploration (presented in a dialog at the beginning of an analytical session), with 25 representative projections from a pool of 500 alternatives obtained through a grid search. <|MaskedSetence|> The thumbnails are sorted according to the QMA and ordered row-wise from top to bottom. The currently-selected projection is indicated by a red box (top row, third column).
.
|
**A**: Five quality metrics, plus their Quality Metrics Average (QMA), are also displayed to support the visual analysis.
**B**: After choosing a projection, users will proceed with the visual analysis using all the functionalities described in the next sections.
**C**: However, the hyper-parameter exploration does not necessarily stop here.
|
BAC
|
BCA
|
BCA
|
BCA
|
Selection 4
|
However, the existing methods are limited to graph-type data, while no graph is provided for general data clustering. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> We analyze the degeneration theoretically and experimentally to understand the phenomenon. We further propose a simple but effective strategy to avoid it.
(3) AdaGAE is a scalable clustering model that works stably on datasets of different scales and types, while the other deep clustering models usually fail when the training set is not large enough. Besides, it is insensitive to different parameter initializations and needs no pretraining.
|
**A**: The main contributions are listed as follows:
(1) Via extending the generative graph models into general type data, GAE is naturally employed as the basic representation learning model and weighted graphs can be further applied to GAE as well.
**B**: Since a large proportion of clustering methods are based on the graph, it is reasonable to consider how to employ GCN to promote the performance of graph-based clustering methods.
In this paper, we propose an Adaptive Graph Auto-Encoder (AdaGAE) to extend graph auto-encoder into common scenarios.
**C**: The connectivity distributions given by the generative perspective also inspire us to devise a novel architecture for decoders.
(2) As we utilize GAE to exploit the high-level information to construct a desirable graph, we find that the model suffers from a severe collapse due to the simple update of the graph.
|
BAC
|
BAC
|
BAC
|
CAB
|
Selection 2
|
Once the lasso estimation has been performed, the corresponding residuals are plugged into the variance-covariance matrix. This, in turn, is used to construct the simultaneous confidence bands via a multiplier bootstrap procedure. <|MaskedSetence|> standard normal distributed random variable. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: It is generally recommended to use a large number of bootstrap repetitions, $B \geq 500$.
.
**B**: The latter is based on a random perturbation of the score function, for example, by an i.i.d.
**C**: This procedure is very appealing from a computational point of view as it does not require resampling and reestimation of the parameters, as for example in classical bootstrap procedures.
|
ABC
|
BCA
|
BCA
|
BCA
|
Selection 4
|
<|MaskedSetence|> (a) presents the selection of appropriate validation metrics for the specification of the data set. <|MaskedSetence|> (c) presents the per-class performance of all the models vs. the active ones per algorithm.
Figure 7: The exploration of the models’ and predictions’ spaces and the metamodel’s results. (a) presents the initial models’ space and how it can be simplified with the removal of unnecessary models. <|MaskedSetence|> This leads to an updated models’ space in (c), where we can even fine-tune and choose diverse concrete models. The results of our actions can always be monitored in the performance line chart and the history preservation for stacks views in (d)..
|
**A**: (b) aggregates the information after the exploration of different models and shows the active ones which will be used for the stack in the next step.
**B**:
Figure 6: The process of exploration of distinct algorithms in hypotheticality stance analysis.
**C**: The predictions’ space is then updated, and the user is able to select instances that are not well classified by the stack of models in (b).
|
BAC
|
BAC
|
BAC
|
CBA
|
Selection 3
|
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> We defer the detailed discussion on the approximation analysis to §B. Proposition 3.1 allows us to convert the TD dynamics over the finite-dimensional parameter space to its counterpart over the infinite-dimensional Wasserstein space, where the infinitely wide neural network $Q(\cdot;\rho)$ in (3.2) is linear in the distribution $\rho$.
Feature Representation. We are interested in the evolution of the feature representation
.
|
**A**: Thus, their analysis is not directly applicable to our setting.
**B**: (2018, 2019), the PDE in (3.4) cannot be cast as a gradient flow, since there does not exist a corresponding energy functional.
**C**: In contrast to Mei et al.
|
ACB
|
CBA
|
CBA
|
CBA
|
Selection 2
|
The structure of the work is the following: in Sec. 2 we provide an extension to the theory and we define a new set of Finite Change Sensitivity Indices (FCSIs) for functional-valued responses, while in Sec. 3 we then proceed to present and develop the methodology to assess the uncertainty associated with these FCSIs.
Finally, in Sec. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
|
**A**: Sec.
**B**: 5 concludes and devises additional research directions.
In the Supplementary Material to this paper the interested reader can find an extensive simulation study that puts the proposed indices, estimation and inference technique to the test..
**C**: 4 we tackle the motivating problem: moving from [17], we extend their results by providing, using the previously developed theory, an analysis of the variability of sensitivities over time, as well as a quantification of the statistical significance and an analysis of its sparsity.
|
CAB
|
CAB
|
CBA
|
CAB
|
Selection 1
|
<|MaskedSetence|> <|MaskedSetence|> (2009); Huang et al. <|MaskedSetence|> (2012); Fan et al. (2011); Chen et al. (2018)
may enable consistent estimation of the regression function.
Nevertheless, general sparse estimators, when applied to a vectorized tensor covariate, ignore the potential tensor structure and may produce a large bias, especially when the sample size $n$ is much smaller than $s$.
.
|
**A**: (2010); Raskutti et al.
**B**: (2009); Ravikumar et al.
**C**: In this case, the sparsity assumption
Lin and Zhang (2006); Meier et al.
|
CBA
|
ACB
|
CBA
|
CBA
|
Selection 1
|
<|MaskedSetence|> <|MaskedSetence|> For example, UCB-type exploration has no incentive to take actions other than the one with the largest upper confidence bound of the $Q$-value, and if it has collected a sufficient number of samples, it very likely never explores the new optimal action, thereby taking the former optimal action forever. On the other hand, in a gradually-changing environment, LSVI-UCB and Epsilon-Greedy can perform well in the beginning when the drift of the environment is small. However, when the change of the environment is greater, they no longer yield satisfactory performance since their $Q$-function estimate is quite off. This also explains why LSVI-UCB and Epsilon-Greedy outperform ADA-LSVI-UCB at the beginning in the gradually-changing environment, as shown in Figure 1.
Figure 2 shows that the running times of LSVI-UCB-Restart and Ada-LSVI-UCB-Restart are roughly the same. <|MaskedSetence|> This is because LSVI-UCB-Restart and Ada-LSVI-UCB-Restart can automatically restart according to the variation of the environment and thus have a much smaller computational burden, since they do not need to use the entire history to compute the current policy at each time step. The running time of LSVI-UCB-Unknown is larger than that of LSVI-UCB-Restart since the epoch length is larger due to the lack of knowledge of the total variation $B$, but it still does not use the entire history to compute its policy. Although Random-Exploration takes the least time, it cannot find the near-optimal policy. This result further demonstrates that our algorithms are not only sample-efficient, but also computationally tractable.
|
**A**: They are much smaller than those of MASTER, OPT-WLSVI, LSVI-UCB, and Epsilon-Greedy.
**B**:
From Figure 1, we find that the restart strategy works better under abrupt changes than under gradual changes, since the gap between our algorithms and the baseline algorithms designed for stationary environments is larger in this setting.
**C**: The reason is that the algorithms designed to explore in stationary MDPs are generally insensitive to abrupt change in the environment.
|
BCA
|
BCA
|
BCA
|
BAC
|
Selection 1
|
I think I would make what these methods are doing clearer. <|MaskedSetence|> <|MaskedSetence|> If the latent space is heavily regularized, not allowing enough capacity for the nuisance variables, reconstruction quality is diminished. On the other hand, if the unconstrained nuisance variables have enough capacity, the model can use them to achieve a high-quality reconstruction while ignoring the latent variables related to the disentangled factors. <|MaskedSetence|>
|
**A**: They aren’t really separating into nuisance and independent only..
**B**: This phenomenon is sometimes called the "shortcut problem" and has been discussed in previous works [DBLP:conf/iclr/SzaboHPZF18].
.
**C**: They are also throwing away nuisance.
While the aforementioned models made significant progress on the problem, they suffer from an inherent trade-off between learning DR and reconstruction quality.
|
ACB
|
ACB
|
CBA
|
ACB
|
Selection 2
|
<|MaskedSetence|> In our simulations, the interpolating predictor often performed worse than the other meta-learners on at least one outcome measure. For example, when the sample size was larger than the number of views, the interpolating predictor often had the lowest TPR in view selection, as well as the lowest test accuracy, particularly when there was no correlation between the different views. When the sample size was smaller than the number of views, the interpolating predictor had an FPR in view selection that was considerably higher than that of all other meta-learners. In terms of accuracy, it performed very well in the breast cancer data, but less so in the colitis data. However, in both cases it produced very dense models, which additionally had low view selection stability. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: 6 Discussion
In this article we investigated how different view-selecting meta-learners affect the performance of multi-view stacking.
**B**: The fact that its behavior varied considerably across our experimental conditions, combined with its tendency to select very dense models when the meta-learning problem is high-dimensional, suggests that the interpolating predictor should not be used when view selection is among the goals of the study under consideration.
**C**: However, it may have some use when its interpretation as a weighted mean of the view-specific models is of particular importance.
.
|
ABC
|
ABC
|
ABC
|
ACB
|
Selection 2
|
<|MaskedSetence|> in Abbasi-Yadkori et al. [2011]), which is in contrast to the use of an exploration bonus as seen in Faury et al. [2020], Filippi et al. <|MaskedSetence|> <|MaskedSetence|> In non-linear reward models, both approaches may not follow a similar trajectory but may have overlapping analysis styles (see Filippi et al. [2010] for a short discussion).
.
|
**A**: Optimistic parameter search provides a cleaner description of the learning strategy.
**B**:
CB-MNL enforces optimism via an optimistic parameter search (e.g.
**C**: [2010].
|
BAC
|
BCA
|
BCA
|
BCA
|
Selection 3
|
5 Use Case
Figure 5: The exploration of clusters of interest that contain performant ML models. View (a) presents the user’s selection that drives the analyses performed in the remaining subfigures. (b.1) provides an overview of the performance, showing that C3 has underperforming KNN and GradB models. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> (c.2) gives supporting evidence for the user’s selection, since all validation metrics are higher than the average values for all models, along with the in-depth visualization in (d.2).
.
|
**A**: On the other hand, (b.2) shows that the user’s choice of models retains both performance and diversity.
**B**: Those models appear to perform better for the hard-to-classify instances; however, this is a misconception.
**C**: In (c.1), we observe that g-mean and ROC AUC scores are very low, which is a problem investigated further in view (d.1).
|
ACB
|
ACB
|
BCA
|
ACB
|
Selection 4
|
The stochastic blockmodel (SBM) (SBM) is one of the most used models for community detection in which all nodes in the same community are assumed to have equal expected degrees. Some recent developments of SBM can be found in (abbe2017community) and references therein. <|MaskedSetence|> DCSBM is widely used for community detection for non-mixed membership networks (zhao2012consistency; SCORE; cai2015robust; chen2018convexified; chen2018network; ma2021determining). MMSB constructed a mixed membership stochastic blockmodel (MMSB) which is an extension of SBM by letting each node have different weights of membership in all communities. However, in MMSB, nodes in the same communities still share the same degrees. <|MaskedSetence|> <|MaskedSetence|> In this paper, we design community detection algorithms based on the DCMM model.
.
|
**A**: To overcome this shortcoming, mixedSCORE proposed a degree-corrected mixed membership (DCMM) model.
**B**: Since in empirical network data sets, the degree distributions are often highly inhomogeneous across nodes, a natural extension of SBM is proposed: the degree-corrected stochastic block model (DCSBM) (DCSBM, ) which allows the existence of degree heterogeneity within communities.
**C**: The DCMM model allows nodes in the same community to have different degrees and allows some nodes to belong to two or more communities; thus it is more realistic and flexible.
|
BAC
|
BAC
|
BAC
|
ACB
|
Selection 1
|
<|MaskedSetence|> (2009); Ring and Wirth (2012); Bonnabel (2013); Zhang and Sra (2016); Zhang et al. (2016); Liu et al. (2017); Agarwal et al. (2018); Zhang et al. <|MaskedSetence|> (2018); Boumal et al. (2018); Bécigneul and Ganea (2018); Zhang and Sra (2018); Sato et al. <|MaskedSetence|> (2019); Weber and Sra (2019) and the references therein..
|
**A**: (2019); Zhou et al.
**B**:
Related Works.
There is a large body of literature on manifold optimization where the goal is to minimize a functional defined on a Riemannian manifold.
See, e.g., Udriste (1994); Ferreira and Oliveira (2002); Absil et al.
**C**: (2018); Tripuraneni et al.
|
BCA
|
BCA
|
BCA
|
CAB
|
Selection 2
|
<|MaskedSetence|> This will help the system to reduce the computational time needed to train the model—something significant in a real-world scenario. <|MaskedSetence|> Thus, we exclude those features one by one with the interactive cells from the # Action # column. <|MaskedSetence|> The remaining open question that we will examine in Section 4.3 and Section 4.4 is: shall we continue with the removal of features (e.g., F4 marked with the red ellipsoid shape) or stop at this point?
.
|
**A**: Indeed, if we take a closer look, the last five features underperform and have only a shallow impact on the final result (see Fig. 3(b)).
**B**: At this phase, we want to identify any number of features that can be excluded from the analysis because they contribute only slightly to the final outcome.
**C**: In Fig. 3(c), we can notice the black-and-white stripe pattern indicating that a feature is excluded.
|
CBA
|
BAC
|
BAC
|
BAC
|
Selection 2
|
So far, there is no study comparing methods from either group comprehensively. Often papers fail to compare against recent methods and vary widely in the protocols, datasets, architectures, and optimizers used. <|MaskedSetence|> <|MaskedSetence|> For CelebA, [46] uses ResNet-18 whereas [50] uses ResNet-50, but the comparison was done without taking this architectural change into account. <|MaskedSetence|>
|
**A**: These discrepancies make it difficult to judge the methods on an even ground.
.
**B**: For instance, the widely used Colored MNIST dataset, where colors and digits are spuriously correlated with each other, is set up differently across papers.
**C**: Some use it as a binary classification task (class 0: digits 0-4, class 1: digits: 5-9) [5, 50], whereas others use a multi-class setting (10 classes) [37, 40].
|
BCA
|
ACB
|
BCA
|
BCA
|
Selection 3
|
<|MaskedSetence|> To predict the model output time series, sample paths from the emulated flow map are drawn and employed in an iterative fashion for one-step-ahead predictions. <|MaskedSetence|> However, obtaining a GP sample path that can be evaluated at any location $x \in \mathcal{X}$ in closed form is not possible [31, 7]. To overcome this issue, we employ RFF, which is a popular technique for approximating the kernel and generating GP samples in an approximate manner, leveraging both theoretical guarantees and computational efficiency. <|MaskedSetence|> Other applications of kernel approximation with RFF can be found in Bayesian optimisation (where it is referred to as Thompson sampling) [51], deep learning [37], and big data modelling [34]. We start the discussion by introducing RFF, which offers an effective way to approximate stationary kernels.
3.1 Kernel approximation with RFF.
|
**A**: The resulting approximate GP sample paths are analytically tractable.
**B**: This section provides the material necessary for sampling from the GP posterior distribution using RFF.
**C**: In this framework, the GP sample paths need to effectively represent the flow map function across its entire domain.
|
BCA
|
BCA
|
BCA
|
BCA
|
Selection 1
|
[Bach and Jordan (2003)], [Chen and Bickel (2006)], [Samworth and Yuan (2012)] and [Matteson and Tsay (2017)]. <|MaskedSetence|> (2014)],
[Pfister et al. <|MaskedSetence|> The traditional approach for testing independence is based on Pearson’s correlation coefficient; for instance, refer to Binet and Vaschide (1897), Pearson (1920), Spearman (1904), Kendall (1938). However, its lack of robustness to outliers and departures from normality eventually led researchers to consider alternative nonparametric procedures.
To overcome such a problem, a natural approach is to consider the functional difference between
the empirical joint distribution and the product of the empirical marginal distributions, see Hoeffding (1948), Blum, Kiefer and Rosenblatt (1961) and Bouzebda (2011).
This approach can also use characteristic empirical functions; see Csörgő (1985). <|MaskedSetence|>
|
**A**: Testing independence also has many applications, including causal inference ([Pearl (2009)], [Peters et al.
**B**: (2018)], [Chakraborty and Zhang (2019)]), graphical modeling ([Lauritzen (1996)], [Gan, Narisetty and Liang (2019)]), linguistics ([Nguyen and Eisenstein (2017)]), clustering (Székely and Rizzo, 2005), dimension reduction (Fukumizu, Bach and Jordan, 2004; Sheng and Yin, 2016).
**C**: Inspired by the work of Blum, Kiefer and Rosenblatt (1961) and Dugué (1975), Deheuvels (1981) studied a test of multivariate independence based on the Möbius decomposition, generalized in Bouzebda (2014)..
|
ABC
|
ABC
|
ABC
|
ABC
|
Selection 2
|
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> This was also the case in Ostrovskii & Bach [2021] and Tran-Dinh et al. [2015], in which more general properties of these
pseudo-self-concordant functions were established. This was fully formalized in Sun & Tran-Dinh [2019], in which the concept of generalized self-concordant functions was introduced, along with key bounds, properties, and variants of Newton methods for the unconstrained setting which make use of this property.
.
|
**A**: For example, the logistic loss function used in logistic regression is not strictly self-concordant, but it fits into a class of pseudo-self-concordant functions, which allows one to obtain similar properties and bounds as those obtained for self-concordant functions [Bach, 2010].
**B**:
Self-concordant functions have received strong interest in recent years due to the attractive properties that they allow to prove for many statistical estimation settings [Marteau-Ferey et al., 2019, Ostrovskii & Bach, 2021].
**C**: The original definition of self-concordance has been expanded and generalized since its inception, as many objective functions of interest have self-concordant-like properties without satisfying the strict definition of self-concordance.
|
BCA
|
CAB
|
BCA
|
BCA
|
Selection 4
|
Another line of work (e.g., Gehrke et al. (2012); Bassily et al. <|MaskedSetence|> (2011)) proposes relaxed privacy definitions that leverage the natural noise introduced by dataset sampling to achieve more average-case notions of privacy. This builds on intuition that average-case privacy can be viewed from a Bayesian perspective, by restricting some distance measure between some prior distribution and some posterior distribution induced by the mechanism’s behavior (Dwork et al., 2006; Kasiviswanathan and Smith, 2014). <|MaskedSetence|> Unfortunately, these definitions have at best extremely limited adaptive composition guarantees. Bassily and Freund (2016) connect this Bayesian intuition to statistical validity via typical stability, an approach that discards “unlikely” databases that do not obey a differential privacy guarantee, but their results require a sample size that grows linearly with the number of queries even for iid distributions. Triastcyn and Faltings (2020) propose the notion of Bayesian differential privacy which leverages the underlying distribution to improve generalization guarantees, but their results still scale with the range in the general case.
An alternative route for avoiding the dependence on worst case queries and datasets was achieved using expectation based stability notions such as mutual information and KL stability Russo and Zou (2016); Bassily et al. <|MaskedSetence|> Using these methods Feldman and Steinke (2018) presented a natural noise addition mechanism, which adds noise that scales with the empirical variance when responding to queries with known range and unknown variance. Unfortunately, in the general case, the accuracy guarantees provided by these methods hold only for the expected error rather than with high probability.
.
|
**A**: This perspective was used by Shenfeld and Ligett (2019) to propose a stability notion which is both necessary and sufficient for adaptive generalization under several assumptions.
**B**: (2021); Steinke and Zakynthinou (2020).
**C**: (2013); Bhaskar et al.
|
CAB
|
CAB
|
BAC
|
CAB
|
Selection 4
|
<|MaskedSetence|> This automatically incorporates out-of-distribution uncertainty. The default implementation from the GPyTorch library [gardner2018gpytorch] was used. <|MaskedSetence|> A heteroscedastic likelihood function with a trainable noise parameter was used and the model was trained for 50 epochs (following the library’s heuristics). <|MaskedSetence|>
|
**A**: Optimization of the noise parameter was handled internally, so no additional validation set was required.
.
**B**: This library provides a multitude of different approximations and deep learning adaptions for Gaussian processes.
**C**:
(i)
Gaussian Process: As the kernel, a standard radial basis function (RBF) kernel was used.
|
CBA
|
CBA
|
BCA
|
CBA
|
Selection 1
|
On the other hand, the second component of reciprocity places positive weight on questions involving positive reciprocity and negative weight on questions involving negative reciprocity or punishment. <|MaskedSetence|> While there is some tradeoff in the treatment, the sign of the aggregate interaction term remains negative in the treatment suggesting that these players are still behaving more altruistically than average. <|MaskedSetence|> These together suggest that their increased sharing is not conditional on having received more benefits from their group, possibly representing a tendency to share in anticipation that others will behave reciprocally. <|MaskedSetence|> In other words, these individuals reciprocate by sharing with the entire group, and trusting in the reciprocity of others, rather than by using new information as a tool for punishment.
.
|
**A**: Individuals who align with this characteristic place much lower weight on the actual cost of contributing, suggesting some altruism.
**B**: This interpretation is reinforced by a large positive effect of the treatment on generalized reciprocity for this group, offset by a small decrease in direct reciprocity.
**C**: Perhaps surprisingly, there is a strong negative coefficient on the interaction between positive reciprocity and generalized reciprocity in the baseline.
|
ACB
|
ACB
|
ACB
|
ACB
|
Selection 3
|
By looking at the picture, an immediate observation is that areas at the corners are simply too far from any of the city center hospitals, meaning that going towards the center from there would be impractical. <|MaskedSetence|> <|MaskedSetence|> Lastly, the always-failing areas at the corners of the grid and at the bottom-center tell us something different: for the spatial configuration we are considering, they always violate the requirement to reach a hospital in the city center in $d_{\text{P.}4}$. <|MaskedSetence|>
|
**A**: This can be surprising at first, but a look at the broader map of the city clarifies that they are closer to hospitals that are not in our grid and, therefore cannot be fully analyzed by our model.
.
**B**: Even worse is the 3 Duomo area, which, despite being quite close to a hospital, experiences such high levels of crowdedness that make reaching the hospital almost impossible in crowded times (in terms of the requirement we have defined), while it is relatively easier in medium-crowded times.
**C**: However, it is interesting to see that, while it is never very easy to get to hospitals in busy times (like at 18:20), the 2 Central Station is still in a good spot (as it is not too far, and not too crowded), conversely the 1 Garibaldi Station is in a less favorable location, as it becomes practically inaccessible in crowded times.
|
CBA
|
CBA
|
CBA
|
CAB
|
Selection 2
|
Non-Business day. Values are in percentage.
We remark that this example is just for illustration and showcasing the interpretation of the proposed tensor factor model. Again we note that for the TFM-tucker model, one needs to identify a proper representation of the loading space in order to interpret the model. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
|
**A**: In Chen et al. (2022), varimax rotation was used to find the sparsest loading matrix representation for model interpretation.
**B**: For TFM-cp, the model is unique hence interpretation can be made directly.
**C**: Interpretation is impossible for the vector factor model in such a high dimensional case.
.
|
ABC
|
CAB
|
ABC
|
ABC
|
Selection 1
|
The green color in the center of a point indicates that a decision is from RF, while blue is for AB. <|MaskedSetence|> The size maps the number of training instances that are classified by a specific decision, and the opacity encodes the impurity of each decision. Low impurity (with only a few training instances from other classes) makes the points more opaque. The positioning of the points can be used to observe if the RF and AB models produced similar rules, offering a comparison between algorithm decisions. The histogram in Figure 1(c) shows the number of decisions (y-axis) and the distribution of training instances in these paths (x-axis), and can also be used to filter the number of visible decisions in the projection-based view to avoid overfitting rules containing only a few instances (as shown in Figure 6(a)) or general rules that might not apply in problematic cases.
UMAP is initiated with a variable n_neighbors and min_dist fixed to 0.1. To determine the optimal number of clusters to be visualized, DBSCAN [Ester1996A] is used to compute an estimated number of core clusters from the derived decisions, which is then used to tune n_neighbors, with a minimum of 2 and a maximum of 100 neighbors (the aim is to have the same magnitude in both). <|MaskedSetence|> <|MaskedSetence|>
|
**A**: On the other hand, in the usage scenario of Section Usage Scenario, DBSCAN estimated 477 clusters, which tuned the hyperparameter to the maximum value.
.
**B**: The outline color reflects the training instances’ class based on a decision’s prediction.
**C**: For the first experiment in Section Use Case, n_neighbors was automatically set to 20.
|
BCA
|
BCA
|
BCA
|
CAB
|
Selection 1
|
Data used in the preparation of this article were obtained from two sources: the Alzheimer’s Disease Neuroimaging Initiative (ADNI) and the Parkinson’s Progression Markers Initiative (PPMI). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from the following: AbbVie, Alzheimer’s Association; Alzheimer’s Drug Discovery Foundation; Araclon Biotech; BioClinica, Inc.; Biogen; Bristol-Myers Squibb Company; CereSpir, Inc.; Cogstate; Eisai Inc.; Elan Pharmaceuticals, Inc.; Eli Lilly and Company; EuroImmun; F. <|MaskedSetence|> <|MaskedSetence|> Private sector contributions are facilitated by the Foundation for the National Institutes of Health (www.fnih.org). <|MaskedSetence|> ADNI data are disseminated by the Laboratory for Neuro Imaging at the University of Southern California.
PPMI — a public-private partnership — is funded by the Michael J. Fox Foundation for Parkinson’s Research and funding partners..
|
**A**: The grantee organization is the Northern California Institute for Research and Education, and the study is coordinated by the Alzheimer’s Therapeutic Research Institute at the University of Southern California.
**B**: The Canadian Institutes of Health Research is providing funds to support ADNI clinical sites in Canada.
**C**: Hoffmann-La Roche Ltd and its affiliated company Genentech, Inc.; Fujirebio; GE Healthcare; IXICO Ltd.; Janssen Alzheimer Immunotherapy Research & Development, LLC.; Johnson & Johnson Pharmaceutical Research & Development LLC.; Lumosity; Lundbeck; Merck & Co., Inc.; Meso Scale Diagnostics, LLC.; NeuroRx Research; Neurotrack Technologies; Novartis Pharmaceuticals Corporation; Pfizer Inc.; Piramal Imaging; Servier; Takeda Pharmaceutical Company; and Transition Therapeutics.
|
CBA
|
CBA
|
CBA
|
CBA
|
Selection 2
|
<|MaskedSetence|> <|MaskedSetence|> We see that DFSP returns larger fuzzy weighted modularity than its competitors except for the Karate-club-weighted network. Meanwhile, according to the fuzzy weighted modularity of DFSP in Table 3, we also find that Gahuku-Gama subtribes, Karate-club-weighted, Slovene Parliamentary Party, Les Misérables, and Political blogs have a clearer community structure than Train bombing, US Top-500 Airport Network, US airports, and Cond-mat-1999, owing to their larger fuzzy weighted modularity. Hence, DFSP runs faster than its competitors.
.
|
**A**: Furthermore, the running times of DFSP, GeoNMF, SVM-cD, and OCCAM for the Cond-mat-1999 network are 29.06 seconds, 32.33 seconds, 90.63 seconds, and 300 seconds, respectively.
**B**: From now on, we use the number of communities determined by KDFSP in Table 2 for each data to estimate community memberships.
**C**: We compare the fuzzy weighted modularity of DFSP and its competitors, and the results are displayed in Table 3.
|
BCA
|
BCA
|
CAB
|
BCA
|
Selection 2
|
<|MaskedSetence|> We then introduce our model for semiparametric CCA, a Gaussian transformation model whose multivariate margins are parameterized by cyclically monotone functions. In Section 3, we define the multirank likelihood and use it to develop a Bayesian inference strategy for obtaining estimates and confidence regions for the CCA parameters. We then discuss the details of the MCMC algorithm allowing us to simulate from the posterior distribution of the CCA parameters. In Section 4 we illustrate the use of our model for semiparametric CCA on simulated datasets and apply the model to two real datasets: one containing measurements of climate variables in Brazil, and one containing monthly stock returns from the materials and communications market sectors. We conclude with a discussion of possible extensions to this work in Section 5. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: In the first part of Section 2 of this article, we describe a CCA parameterization of the multivariate normal model for variable sets, which separates the parameters describing between-set dependence from those determining the multivariate marginal distributions of the variable sets.
**B**: However, where necessary, we use italicized and un-italicized roman characters to distinguish between random variables and elements of their sample spaces.
2 Semiparametric CCA.
**C**: By default, roman characters referring to mathematical objects in this article are italicized.
|
BCA
|
ACB
|
ACB
|
ACB
|
Selection 4
|
<|MaskedSetence|> Contrary to generating paths, we will produce “quasi-paths” from discrete observations of the Lévy process. This idea is similar to bootstrapping. <|MaskedSetence|> <|MaskedSetence|> Additionally, we can provide sufficient conditions to ensure that the M-estimator can be consistent and asymptotically normal, which are the main results of this study.
The advantage of this method lies in its versatility. It can be applied to any complex random variable or loss function because it approximates the distribution by a path-base without specifying a concrete model of the Lévy process..
|
**A**: By shuffling the order of the increments, we can create different step functions, which can be regarded as different (discrete) paths from the true distribution.
Using these “quasi-paths”, we can estimate the expected functions of $X$ and construct an estimator of $\vartheta_0$ as an M-estimator.
**B**: Given discrete samples of $X=(X_t)_{t\geq 0}$, say $(X_{t_0},X_{t_1},\dots,X_{t_n})$ with $0=t_0<t_1<\dots<t_n=T$ and $h_n\equiv t_k-t_{k-1}\ (k=1,2,\dots,n)$, we approximate the path by a step function that jumps at $t_k\ (k=1,2,\dots,n)$ with jump size $\Delta_k X=X_{t_k}-X_{t_{k-1}}$; see Section 2 for details.
**C**: This paper proposes a new methodology to solve such problems.
|
CBA
|
CBA
|
CBA
|
ABC
|
Selection 2
|
<|MaskedSetence|> Organization of the paper
In Section 2, we include the definitions of adjacency matrices of hypergraphs. <|MaskedSetence|> The algorithms for partial recovery are presented in Section 4. The proof for the correctness of our algorithms for Theorem 1.7 and Corollary 1.9 are given in Section 5. <|MaskedSetence|>
|
**A**: The proof of Theorem 1.6, as well as the proofs of many auxiliary lemmas and useful lemmas in the literature, are provided in the supplemental materials..
**B**: The concentration results for the adjacency matrices are provided in Section 3.
**C**:
1.4.
|
CBA
|
CBA
|
BCA
|
CBA
|
Selection 2
|
Since it is natural to expect the adjustment of migratory flows in response to climate change is not instantaneous, especially in the case of gradual phenomena, most of the studies use a panel structure with a macroeconomic focus and attempt to assess the impact of changes in climatic conditions on human migratory flows in the medium-long term. Microeconomic analyses mostly use cross-section data to explain causal relationships between specific features of individuals, collected through surveys and censuses, and various factors determining migration by isolating the net effect of the environment. <|MaskedSetence|> As already said, for micro-level analyses in Cluster 1 controls related to sample characteristics have opposite signs.
Looking at dummies for the estimation techniques, our evidence suggests that the diversity in the effect sizes is in part explained by differences in techniques. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: Analyses at Individual level tend to capture a more negative impact of climate changes on migration, whereas analyses at Country level tend to find a more positive effect.
**B**: In particular, positive and significant coefficients are found for controls such as OLS and ML estimators for cross-section analyses, the same for panel studies that use Panel estimation techniques, and Instrumental Variables (IV) or GMM estimators to correct for endogeneity.
**C**: Micro-economic analyses (Cluster 1) use more disaggregated data, while the high presence of zeros in the dependent variable is treated with a Poisson estimator, which tends to produce lower estimates.
.
|
ABC
|
ABC
|
ABC
|
BAC
|
Selection 2
|
5 Discussion
We have established the asymptotic theory in a class of directed random graph models parameterized by the differentially private bi-sequence and illustrated its application to the Probit model. The result shows that statistical inference can be made using the noisy bi-sequence. We assume that the edges are mutually independent in this work. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> In the future, we may relax our theoretical conditions to drop the independence assumption on the edges.
|
**A**: To avoid this problem, we need to appropriately select a probability distribution for directed random graphs when using the existing method.
**B**: We should be able to obtain a consistent conclusion if the edges are dependent, provided that the conditions stated in Theorem 1 are met.
**C**: However, the asymptotic normality of the estimator is not clear.
|
BCA
|
BCA
|
BCA
|
BAC
|
Selection 1
|
There are considerable challenges in these contexts for efficient Bayesian computation when avoiding Gaussian distributional assumptions on the outcomes. General purpose Markov chain Monte Carlo (MCMC) methods can in principle be used to draw samples from the posterior distribution of the latent process by making local proposals within accept/reject schemes. However, due to the huge dimensionality of the parameter space, poor mixing and slow convergence are likely.
For instance, random-walk Metropolis proposals are cheaply computed but lack in efficiency as they overlook the local geometry of the high dimensional posterior.
Alternatively, one may consider gradient-based MCMC methods such as the Metropolis-adjusted Langevin algorithm (MALA; Roberts and Stramer 2002), Hamiltonian Monte Carlo (HMC; Duane et al. 1987; Neal 2011; Betancourt 2018) and others such as MALA and HMC on the Riemannian manifold (Girolami and Calderhead, 2011) or the no-U-turn sampler (NUTS; Hoffman and Gelman, 2014) used in the Stan probabilistic programming language (Carpenter et al., 2017). <|MaskedSetence|> <|MaskedSetence|> Although it is common in other contexts to rely on subsamples to cheaply approximate gradients, Johndrow et al. <|MaskedSetence|>
|
**A**: (2020)
show that such approximate MCMC algorithms are either slow or have large approximation error.
.
**B**: These methods are appealing because they modulate proposal step sizes using local gradient and/or higher order information of the target density.
**C**: Unfortunately, their performance very rapidly drops with parameter dimension (Dunson and Johndrow, 2020).
|
BCA
|
BCA
|
BCA
|
ABC
|
Selection 2
|
<|MaskedSetence|> <|MaskedSetence|> Then, we propose our central notion of causal validity, including an example of how it may fail. Proposition 1 establishes the key role of re-weighting with a likelihood-ratio process to obtain the interventional distribution. The central result providing graphical rules for non-parametric identifiability is given in Section 4. Proposition 2 then gives general sufficient (graphical) conditions, essentially only assuming that intensities exist, for causal validity in the context of censoring. Section 6 joins all these pieces for our main result on how to fit a marginal structural model using suitable re-weighting, provided a sufficient set of covariate processes.
.
|
**A**:
The outline of our paper is as follows.
**B**: We conclude by an illustration of our approach with the example of HPV-testing for cervical cancer screening; this is a more advanced analysis, and provides a formal justification for the analysis in Nygård et al.
**C**: We begin with some background, focusing on local independence as a dynamic notion of independence and its graphical representation.
|
ACB
|
BCA
|
ACB
|
ACB
|
Selection 1
|
Note that this discrepancy between Bayesian and frequentist measures differs considerably from the situation in standard statistical inference with non-adaptive samples. For a fixed-sample problem, the Bernstein–von Mises theorem describes the asymptotic equivalence of Bayesian and frequentist inference. <|MaskedSetence|> <|MaskedSetence|> Bayesian algorithms are robust up to a polynomially small underestimation, whereas frequentist algorithms are robust up to an exponentially small underestimation.
Our results offer analytical innovation by establishing foundational principles for a formal analysis that yields exact solutions in dynamic programming. <|MaskedSetence|> We demonstrate several instances in which it is feasible to perform exact analyses of dynamic programming regardless of the need to project the evolution of the posterior over extended future periods.
|
**A**: Bayesian and frequentist BAI algorithms are both robust against such randomness but with different confidence levels.
**B**: This is notable because it enables solutions that extend beyond the conventional one- or two-step lookahead.
**C**: However, in an adaptive sampling scheme, underestimation of an arm due to the randomness of the empirical mean results in a smaller number of samples of the arm in the future.
|
CAB
|
CAB
|
CBA
|
CAB
|
Selection 1
|
<|MaskedSetence|> <|MaskedSetence|> [2019], which shows how large scale genomic data provide a fertile ground for SSPs. Although sequencing technologies have advanced the understanding of genome biology, observed samples may not be perfectly representative of the molecular heterogeneity or species composition of the underlying DNA library, often providing a poor representation due to low-abundance molecules that are hard to sample. <|MaskedSetence|> Deng et al. [2019] identified three major questions of interest:
Q1)
.
|
**A**: Due to the impossibility of sequencing DNA libraries up to complete saturation, it is common to make use of the observed samples, typically collected under suitable budget constraints, to infer the molecular heterogeneity of additional unobserved samples from the library, as well as of the library itself.
**B**: This is testified by the work of Deng et al.
**C**: Biological sciences are the field where SSPs have been most investigated over the past three decades, raising several challenges in both methods and applications.
|
BAC
|
CBA
|
CBA
|
CBA
|
Selection 4
|
The appealing properties of the Lorenz curve are well captured by the formulation given in Gastwirth (1971). <|MaskedSetence|> The relation to majorization and the convex order follows immediately, as shown in section C of Marshall et al. (2011). As pointed out by Arnold (2008), this makes the Lorenz ordering an uncontroversial partial inequality ordering of univariate distributions, and most open questions concern the higher dimensional case.
Dispersion in multivariate distributions is not adequately described by the Lorenz curve of each marginal, and a genuinely multidimensional approach is needed. <|MaskedSetence|> More generally, the literature on multidimensional inequality of outcomes and its measurement is vast, as evidenced by many recent surveys, see for instance Decancq and Lugo (2012), Aaberge and Brandolini (2014), Andreoli and Zoli (2020). <|MaskedSetence|>
|
**A**: In that formulation, the Lorenz curve is the graph of the Lorenz map, and the latter is the cumulative share of individuals below a given rank in the distribution, i.e., the normalized integral of the quantile function.
**B**: Even for utilitarian welfare inequality, Atkinson and Bourguignon (1982) motivate the need for the multidimensional approach initiated by Fisher (1956).
**C**: We only discuss it insofar as it relates to the Lorenz curve..
|
ABC
|
ABC
|
ABC
|
BCA
|
Selection 2
|
<|MaskedSetence|> She receives a manually-labeled data set with 9 features related to breast cancer [DG17b]. <|MaskedSetence|> From her experience, she knows that instance hardness and class imbalance can be troublesome for the ML model. Thus, she wants to experiment with well-known algorithms for undersampling and oversampling the data. However, especially with medical records, the use of merely automated methods is questionable because they cannot be trusted blindly. <|MaskedSetence|> In reality, patients who are healthy but predicted as ill will undergo extensive follow-up diagnostic tests before treatments such as surgery and chemotherapy are advised; however, the opposite is not true. To accomplish this main objective and to control the sampling techniques, Zoe deploys HardVis..
|
**A**: This data set is rather imbalanced, with 458 benign and 241 malignant cases.
**B**: The doctors need explanations, and the minority class in this binary classification problem is of more importance than the majority consisting of healthy patients.
**C**:
5.1 Usage Scenario: Local Assessment of Undersampling
Supposedly Zoe is a data analyst in a hospital, working primarily with healthcare data.
|
BCA
|
CAB
|
CAB
|
CAB
|
Selection 2
|
Although many recent works focus on learning in the presence of strategic behavior, learning in the presence of capacity constraints and strategic behavior has not previously been studied in depth. <|MaskedSetence|> Competition for the treatment arises when agents are strategic and the decision maker is capacity-constrained, complicating estimation of the optimal policy.
We adopt a flexible model where agents are heterogenous in their raw covariates and their ability to modify them. <|MaskedSetence|> In some applications, strategic behavior may be a form of “gaming the system,” e.g. cheating on exams in the context of college admissions, and the decision maker may not want to assign treatment to agents who have high ability to modify their covariates. In other applications, the decision maker may want to accept such agents because the agents who would benefit the most from the treatment are those who can invest effort to make themselves look desirable. Lastly, as demonstrated by Liu et al. <|MaskedSetence|> Our model permits all of these interpretations because we allow for potential outcomes to be flexibly related to the agent’s type..
|
**A**: Many motivating applications for learning with strategic behavior, such as college admissions and hiring, are precisely settings where the decision maker is capacity-constrained.
**B**: Depending on the context, strategic behavior may be harmful, beneficial, or neutral for the decision maker.
**C**: (2022), when all agents have identical ability to modify their covariates, the strategic behavior may be neutral for the decision maker because it does not affect which agents are assigned treatment.
|
ABC
|
ABC
|
ABC
|
BCA
|
Selection 1
|
We experiment with Voronoi diagrams in which the cells in the center of the diagram tend to be smaller. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Further details of the data generation process may be found in Appendix B.
We compare the performances of the proposed filtration against that of the distance-to-measure filtration. The sample points are shown in Figure 9, the persistence diagrams are shown in Figure 11, and the significant loops found by oracle and subsample bootstrapping are shown in Figure 10.
.
|
**A**: This results in a higher sampling density on boundaries of smaller cells.
**B**: We further inject additive noise.
**C**: A point is sampled by first choosing a random cell and then choosing a uniform point on its boundary.
|
CAB
|
CAB
|
CAB
|
CAB
|
Selection 3
|
<|MaskedSetence|> (2019) collected data before, during, and after the intervention period. The pre-intervention ran between January 2009 and April 2010. During this period, the sample average of the week of the first prenatal visit is 16.97, and only 34.46% of these visits occur before week 13. Figure 2 shows a histogram of this distribution. <|MaskedSetence|> While the treatment and control group affected approximately the same number of facilities, the number of treated and control patients are very different due to the unequal number of patients across facilities. <|MaskedSetence|> This distribution has a mean and a standard deviation of 33.6 and 16.3 patients per clinic, respectively. Finally, the post-intervention period goes between January 2011 and March 2012.
.
|
**A**:
Celhay et al.
**B**: The aforementioned treatment occurred exclusively during the intervention period, which ran between May 2010 and December 2010.
**C**: Figure 3 provides a histogram of the number of patients attending each clinic for their first prenatal visit during the intervention period.
|
ABC
|
ACB
|
ABC
|
ABC
|
Selection 3
|
More specifically, partial observability poses both statistical and computational challenges. <|MaskedSetence|> In particular, predicting the future often involves inferring the distribution of the state (also known as the belief state) or its functionals as a summary of the history, which is already challenging even assuming the (observation) emission and (state) transition kernels are known (Vlassis et al., 2012; Golowich et al., 2022). Meanwhile, learning the emission and transition kernels faces various issues commonly encountered in causal inference (Zhang and Bareinboim, 2016). For example, they are generally nonidentifiable (Kallus et al., 2021). <|MaskedSetence|> <|MaskedSetence|> From a computational perspective, it is known that policy optimization is generally intractable (Vlassis et al., 2012; Golowich et al., 2022). Moreover, infinite observation and state spaces amplify both statistical and computational challenges. On the other hand, most existing results are restricted to the tabular setting (Azizzadenesheli et al., 2016; Guo et al., 2016; Jin et al., 2020a; Xiong et al., 2021), where the observation and state spaces are finite.
.
|
**A**: Such statistical challenges are already prohibitive even for the evaluation of a policy (Nair and Jiang, 2021; Kallus et al., 2021; Bennett and Kallus, 2021), which forms the basis of policy optimization.
**B**: From a statistical perspective, it is challenging to predict future rewards, observations, or states due to a lack of the Markov property.
**C**: Even assuming they are identifiable, their estimation possibly requires a sample size that scales exponentially in the horizon and dimension (Jin et al., 2020a).
|
CAB
|
BCA
|
BCA
|
BCA
|
Selection 2
|
The primary implication of this analysis is that if the standard of evidence required by the FDA is loosened, it may cease to be incentive-aligned for the more profitable drugs. The right standard of evidence for the FDA is a source of ongoing debate, and some call for much looser protocols. <|MaskedSetence|> <|MaskedSetence|> We emphasize that their analysis does not consider how the incentive landscape is impacted by adopting this looser standard of evidence. <|MaskedSetence|>
|
**A**: For example, the Bayesian decision analysis of Isakov
et al.
**B**: (2019) prompts the authors to call for thresholds from 1% to 30%, depending on the class of drug.
**C**: In particular, in view of Table 1, we worry that greatly loosening the standard of evidence may incentivize
clinical trials for unpromising candidates, resulting in too many false positives.
.
|
ABC
|
ABC
|
CAB
|
ABC
|
Selection 2
|
For the bounds on the marginal probability an individual benefits from treatment which are estimated with data from a randomized experiment or observational study with known treatment probabilities, we derive in Section 2 a closed-form concentration inequality depending on only the sample size and the desired frequentist confidence level. As discussed in Section 2.4, this allows for a formal statistical power analysis, albeit conservative, but notably without the requirement of an asymptotic limiting distribution nor the specification of any unknown parameters (e.g. plausible effect sizes). <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
|
**A**: Our main results are presented in a general manner in terms of sub-groups, delineated by pre-treatment features, and estimators for bounds on pibt based on inverse probability of treatment weighting (IPTW) (Imbens and Rubin, 2015; Hernán and Robins, 2020).
**B**: Also different from a margin of error that can be obtained via Bootstrap, our approach can be up to an order $M$ (the number of bootstrap samples) faster; see Remark 2.1 for details on the relatively low computational complexity of our approach.
.
**C**: The inference on pibt with a randomized experiment is handled as a specific case.
Different from the non-asymptotic margin of error that can be obtained with bootstrap re-sampling (Efron and Tibshirani, 1994; Bickel et al., 1997), our non-asymptotic margin of error will be closed-form and simultaneous for all thresholds $\delta$ that can be used to define pibt, thus allowing for a form of sensitivity analysis on its definition, and even cherry-picking.
|
ACB
|
ACB
|
CBA
|
ACB
|
Selection 1
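
The estimators discussed above are based on inverse probability of treatment weighting with known treatment probabilities. The sketch below shows only the basic IPTW idea on synthetic data with a known propensity; it is not the paper's bound on the probability of benefit, and the data-generating values are made up.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=n)                        # pre-treatment feature
e = 1.0 / (1.0 + np.exp(-x))                  # known treatment probabilities
t = rng.binomial(1, e)                        # treatment assignment
y = 0.5 * t + x + rng.normal(size=n)          # outcome; true average effect is 0.5

# Horvitz-Thompson style IPTW estimate of the average treatment effect.
ate_iptw = np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))
print(f"IPTW ATE estimate: {ate_iptw:.3f}")
```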
|
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Since then, some KD methods regard knowledge as final responses to input samples [3, 31, 58], some regard knowledge as features extracted from different layers of neural networks [24, 23, 41], and some regard knowledge as relations between such layers [57, 40, 9]. The purpose of defining different types of knowledge is to efficiently extract the underlying representation learned by the teacher model from the large-scale data. If we consider a network as a mapping function of input distribution to output, then different knowledge types help to approximate such a function.
Based on the type of knowledge transferred, KD can be divided into response-based, feature-based, and relation-based [15].
The first two aim to derive the student to mimic the responses of the output layer or the feature maps of the hidden layers of the teacher, and the last approach uses the relationships between the teacher’s different layers to guide the training of the student model.
Feature-based and relation-based methods [24, 57], depending on the model utilized, may leak the information of structures and parameters through the intermediate layers’ data.
For example, we can reconstruct a ResNet [18] based on the feature dimensions of different layers, and calculate each neuron’s parameter using specific images and their responses in the feature maps..
|
**A**: [19] propose an original teacher-student architecture that uses the logits of the teacher model as the knowledge.
**B**: Hinton et al.
**C**: Knowledge Distillation (KD).
|
CBA
|
CBA
|
CBA
|
CBA
|
Selection 2
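
The passage above describes response-based knowledge distillation, where the teacher's logits are the transferred knowledge. A minimal sketch of the temperature-softened distillation loss is given below in plain NumPy; the temperature and the random logits are placeholders, not values from the cited papers.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def response_distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between temperature-softened teacher and student responses."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=1)
    return float(np.mean(kl) * T * T)   # T^2 rescaling keeps loss magnitudes comparable

rng = np.random.default_rng(3)
teacher_logits = rng.normal(size=(8, 10))   # 8 samples, 10 classes
student_logits = rng.normal(size=(8, 10))
print(response_distillation_loss(student_logits, teacher_logits))
```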
|
<|MaskedSetence|> Such approaches typically recover the filtering of predictive representations by solving moment equations. <|MaskedSetence|> <|MaskedSetence|> (2016) establishes such moment equations based on structural assumptions on the filtering of such predictive states. Similarly, Anandkumar et al. (2012); Jin et al. (2020a) establishes a sequence of observation operators and recovers the trajectory density via such observation operators.
Motivated by the previous work, we aim to construct an embedding that is both learnable and sufficient for control. A sufficient embedding for control is the density of the trajectory, namely,
.
|
**A**: In particular, Hefny et al.
**B**: (2015); Sun et al.
**C**: In the case that maintaining a belief or conducting the prediction is intractable, previous approaches establish predictive states (Hefny et al., 2015; Sun et al., 2016), which is an embedding that is sufficient for inferring the density of future observations given the interaction history.
|
CAB
|
CAB
|
CAB
|
CAB
|
Selection 4
|
(Mariadassou et al., 2010). The block memberships are not known a priori; they are recovered a posteriori by the inference algorithm.
In social (resp. <|MaskedSetence|> species) with the same block membership play the same social/ecological role in its system (Boorman and White, 1976; Luczkovich et al., 2003). <|MaskedSetence|> When analysing the roles in food webs, Luczkovich et al. (2003) use the notion of regular equivalence to define trophic role. Two species are said to be regularly equivalent if they feed on equivalent species and are preyed on by equivalent species. <|MaskedSetence|>
|
**A**: In food webs, species playing the same ecological role are said to be ecologically equivalent (see Cirtwill et al., 2018, for a review of species role concepts in food webs).
**B**: ecological) networks, individuals (resp.
**C**: This notion of regular equivalence is a relaxation of structural equivalence which imposes that structurally equivalent species have exactly the same trophic relations in the food web.
In practice, Luczkovich et al. (2003) find that species are grouped into blocks by trophic level and some separation might occur based on trophic chains..
|
BAC
|
BAC
|
BAC
|
BAC
|
Selection 2
|
<|MaskedSetence|> <|MaskedSetence|> It aims essentially at demonstrating the interest of carrying out the selection of covariates from the data of all the individuals simultaneously thanks to the mixed effects model. <|MaskedSetence|> To both show the flexibility of our approach and simplify the presentation of this study, this second part is conducted on another nonlinear mixed effects model, this time with a one-dimensional random effect. In the third part, a comparison of SAEMVS in terms of computation time with an MCMC implementation is presented, to quantify precisely the speed improvement afforded by the SAEM algorithm.
5.1 Comparison with a two-step approach.
|
**A**: The first part is a comparison with strategies that can be easily implemented from existing methods.
**B**: The numerical study is divided into three parts.
**C**: The second part studies in great detail the influences of the number of subjects, the number of covariates, the signal-to-variability ratio and the collinearity between covariates on the performance of SAEMVS.
|
ACB
|
BAC
|
BAC
|
BAC
|
Selection 4
|
Lamy et al. (2019) study fairness for binary fairness attributes under a noise model of Scott et al. <|MaskedSetence|> <|MaskedSetence|> They then show how this model can be combined with previously proposed fairness constraints to yield an adapted constrained optimization formulation for this restricted model.
An earlier short version of this work appeared as Sabato and Yom-Tov (2020). That version studied the case of binary classification, but did not study the case of multiclass classification. <|MaskedSetence|> The current work also provides a new fast minimization algorithm for the binary case, additional experiments, and significantly expanded and in-depth discussions.
.
|
**A**: In this model, it is assumed that the fairness attribute is noisy independently of the classifier’s prediction, leading to a model in which the confusion matrix conditioned on the attribute value is a mixture of the two true confusion matrices conditioned on the two attribute values.
**B**: As we show below, multiclass classification with more than two labels requires a different approach.
**C**: (2013).
|
CAB
|
BAC
|
CAB
|
CAB
|
Selection 3
|
Although regularization is an effective method to deal with linear regression problems, it brings essential difficulties to the convergence analysis of the algorithm. Compared with the non-regularized decentralized linear regression algorithm, the estimation error equation of this algorithm contains a non-martingale difference term with the regularization parameter, which cannot be directly analyzed by using the martingale convergence theorem as in [39].
We no longer require that the sequences of regression matrices and graphs satisfy special statistical properties, such as mutual independence, spatio-temporal independence and stationarity. Compared with the case with i.i.d. data, dependent observations and data contain less information and therefore lead to more unstable learning errors as well as the performance degradation [40].
Besides, we consider both additive and multiplicative communication noises in the process of the information exchange among nodes. All these challenges make it difficult to analyze the convergence and performance of the algorithm, and the methods in the existing literature are no longer applicable.
For example, the methods in [27]-[30] and [34] are applicable for the case that the graphs, regression matrices and noises are i.i.d. <|MaskedSetence|> [25] studied the decentralized regularized gossip gradient descent algorithm for linear regression models, where the method is applicable for the case that only two nodes exchange information at each instant. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: and bounded..
**B**: and mutually independent and it is required that the expectations of the regression matrices be known in [28]-[29].
Liu et al.
**C**: In addition, they require that the graphs be strongly connected and the observation vectors and the noises be i.i.d.
|
BCA
|
ABC
|
BCA
|
BCA
|
Selection 3
|
While (III) and (IV) hold for mmd with a bounded kernel without additional assumptions, the currently–available bounds on the Rademacher complexity ensure that mmd with an unbounded kernel meets the above conditions only under specific models and data generating processes, even within the i.i.d. setting. <|MaskedSetence|> <|MaskedSetence|> In particular, as shown in Proposition 4.1, under mmd with an unbounded kernel, the existence Assumptions 1 and 2 in Bernton et al. (2019) can be directly related to constructive conditions on the kernel, inherently related to our Assumption (IV). <|MaskedSetence|> Notice that these inequalities also hold for summary–based abc with routinely–used unbounded summaries (e.g., moments) as a direct consequence of the discussion in Example 3.3.
.
|
**A**: In this context, it is however possible to revisit the results for the Wasserstein case in Proposition 3 of Bernton et al.
**B**: (2019) under the new Rademacher complexity framework introduced in the present article.
**C**: This in turn yields informative concentration inequalities that are reminiscent of those in Theorem 3.2 and Corollary 4.2.
|
ACB
|
ABC
|
ABC
|
ABC
|
Selection 3
|
<|MaskedSetence|> <|MaskedSetence|> Any continuous function over a bounded domain can be approximated by a depth-2 network [3, 11, 22] and this universality result holds for networks with threshold or ReLU as activation functions. Our first main result runs contrary to this belief.
.
|
**A**:
Given the above result, it may seem that, similarly to the case of monotone networks with ReLU activations, the class of monotone networks with threshold activations is too limited, in the sense that it cannot approximate any monotone function with a constant depth (allowing the depth to scale with the dimension was considered in [12], see below).
**B**: We establish a depth separation result for monotone threshold networks and show that monotone networks can interpolate arbitrary monotone data sets by slightly increasing the number of layers.
**C**: One reason for such a belief is that, for non-monotone networks, depth 2 suffices to ensure universality.
|
BAC
|
ACB
|
ACB
|
ACB
|
Selection 3
|
This is, of course, not a novel observation, and the disconnection between evaluating raters’ reliability and whether the best applicants were selected was noted earlier (e.g., kraemer1991we, nelson1991process, mayo2006peering). In cases where applicant selection is based on a fixed threshold (i.e., pass/fail tests), the expected classification accuracy can be estimated by methods outlined by [107, 108] and [79] (also see lee2010classification, livingston1995estimating, hanson1990investigation). We extend the aforementioned approach to settings where a proportion of the best candidates is selected and show that, under the assumption of a normally distributed latent variable, the expected classification accuracy can be directly obtained from IRR and the proportion of selected candidates. Subsequently, the selection procedures can be characterized as binary classification and evaluated via well-known and interpretable metrics such as sensitivity or false positive/negative rates. Furthermore, the binary classification framework allows researchers and stakeholders to evaluate and improve selection procedures while incorporating the costs of incorrect decisions, increasing the number of raters, or modifying the rating procedure. <|MaskedSetence|> Compared to typical classification tasks, which aim to separate subjects into different categories [73], we assume the existence of a single continuous latent trait (or a composite score based on a multidimensional assessment) measured by the ratings. <|MaskedSetence|> <|MaskedSetence|> However, such a “gold standard” measure of success needed for directly assessing the validity of the selection procedure is often not available ([86, 100, 111], also see [78] for alternatives); either because the success measure is located too far in the future, or it is difficult to agree on the success measure itself (see [85, 89] for suggestions to use bibliometric measures in grant reviews, and [76, 90] for a subsequent critique). Consequently, we are often left to assess the reliability of the selection procedures, as reliability limits the usefulness even of completely valid indicators [113]. Therefore, our approach results in the lower bound on the corresponding error probabilities as it does not account for additional misclassifications due to (a lack of) validity.
.
|
**A**: In fact, connecting reliability to binary classification recalls classical models that evaluate selection procedures based on validity (e.g., taylor1939relationship, cronbach1957psychological).
While our approach is related to univariate classification under measurement error, this use case differs.
**B**: Ideally, the validity of the observed indicators (and their combination into the overall assessment) would be evaluated directly, thus answering the question of how well the results of selection procedures predict applicants’ success.
**C**: Then, we aim to select the best applicants defined by the latent trait and evaluate the (mis)classification probabilities due to the measurement error contained in the observed ratings.
|
CAB
|
ACB
|
ACB
|
ACB
|
Selection 2
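
The passage above turns inter-rater reliability (IRR) and the selected proportion into expected classification accuracy under a normal latent trait. The authors derive this analytically; the sketch below only checks the same quantity by simulation, with an arbitrary IRR and selection proportion.

```python
import numpy as np

def selection_sensitivity(irr, prop_selected, n=200_000, seed=4):
    """Monte Carlo version: probability that an applicant in the true top
    fraction is also in the top fraction of the error-contaminated ratings."""
    rng = np.random.default_rng(seed)
    latent = rng.normal(size=n)
    observed = np.sqrt(irr) * latent + np.sqrt(1 - irr) * rng.normal(size=n)
    k = int(prop_selected * n)
    best_true = np.argsort(-latent)[:k]
    best_observed = np.argsort(-observed)[:k]
    return len(np.intersect1d(best_true, best_observed)) / k

print(selection_sensitivity(irr=0.7, prop_selected=0.2))
```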
|
In contrast, as a special case of the low-rank model, linear MDPs have a similar form of structures but with an extra assumption that the linear representation is known a priori (Du et al., 2019b; Yang & Wang, 2019; Jin et al., 2020; Xie et al., 2020; Ayoub et al., 2020; Cai et al., 2020; Yang & Wang, 2020; Chen et al., ; Zhou et al., 2021a, b). <|MaskedSetence|> <|MaskedSetence|> In contrast, our work recovers the representation via contrastive self-supervised learning.
Upon acceptance of our work, we notice a concurrent work (Zhang et al., 2022) studies contrastive learning in RL on linear MDPs.
There is a large amount of literature studying contrastive learning in RL empirically. To improve the sample efficiency of RL, previous empirical works leverages different types of information for representation learning, e.g., temporal information (Sermanet et al., 2018; Dwibedi et al., 2018; Oord et al., 2018b; Anand et al., 2019; Schwarzer et al., 2020), local spatial structure(Anand et al., 2019), image augmentation(Srinivas et al., 2020), and return feedback(Liu et al., 2021). Our work follows the utilization of contrastive learning for RL to extract temporal information. Similar to our work, recent work by Misra et al. <|MaskedSetence|> In contrast, our work analyzes contrastive learning in RL under the more general low-rank setting, which includes Block MDP as a special case (Agarwal et al., 2020) for both MDPs and MGs..
|
**A**: Our theory is motivated by the recent progress in low-rank MDPs (Agarwal et al., 2020; Uehara et al., 2021), which show that the transition dynamics can be effectively recovered via maximum likelihood estimation (MLE).
**B**: Our work focuses on the more challenging low-rank setting and aims to recover the unknown state-action representation via contrastive self-supervised learning.
**C**: (2020) shows that contrastive learning provably recovers the latent embedding under the restrictive Block MDP setting (Du et al., 2019a).
|
CAB
|
BAC
|
BAC
|
BAC
|
Selection 4
|
<|MaskedSetence|> In Section 2, we introduce the MV-SDE and associated notation, motivate MC methods to estimate expectations associated with its solution and set forth the problem to be solved. In Section 3, we introduce the decoupling approach for MV-SDEs (dos Reis et al., 2023) and formulate a DLMC estimator. Next, we state the optimal importance sampling control for the decoupled MV-SDE derived using stochastic optimal control and introduce the DLMC estimator with importance sampling from (Ben Rached et al., 2023) in Section 4. <|MaskedSetence|> We combine the multilevel DLMC estimator with the proposed importance sampling scheme and develop an adaptive multilevel DLMC algorithm that feasibly estimates rare-event quantities associated with MV-SDEs. <|MaskedSetence|>
|
**A**: Finally, we apply the proposed methods to the Kuramoto model from statistical physics in Section 6 and numerically verify all assumptions in this work and the derived complexity rates for the multilevel DLMC estimator for two observables..
**B**:
The remainder of this paper is structured as follows.
**C**: Then, we introduce the novel multilevel DLMC estimator in Section 5, develop an antithetic sampler for it, and derive new complexity results for the estimator.
|
BCA
|
BCA
|
BCA
|
BCA
|
Selection 2
|
3.5 Predicting Abnormal Returns for Simulated Stock Market Events
In this exercise, we rely on data used in Baker and Gelbach (2020) to predict abnormal stock returns for simulated events. This exercise is well suited for assessing the performance of the various estimators for use in financial event study analyses – i.e., estimates of the causal effect of a shock (such as securities litigation or a merger announcement) on stock prices. <|MaskedSetence|> It also contains 10,000 randomly selected, unique firm-level pseudo-events (i.e., the events do not correspond to anything that would be expected to systematically affect the firms’ stock price). <|MaskedSetence|> <|MaskedSetence|>
|
**A**: For each firm-level event, we use returns data for the 250 trading days prior to the event to predict the returns on the event date.
**B**: As in Baker and Gelbach (2020), for each firm, our pool of control units contains all firms with the same four-digit SIC industry code; if there are fewer than eight such firms, we include peers with the same three-digit SIC industry code.
.
**C**: The data set includes returns for firms with a share price above $5 between 2009 and 2019.
|
CAB
|
CAB
|
CBA
|
CAB
|
Selection 1
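
The exercise above predicts event-date returns from a pre-event estimation window. As a simplified baseline (a plain market model rather than the peer-based estimators compared in the paper), the sketch below fits the 250-day window by OLS and reports the abnormal return for a single synthetic pseudo-event.

```python
import numpy as np

rng = np.random.default_rng(5)

# 250 pre-event trading days: market return and the firm's return (synthetic).
mkt = rng.normal(0.0005, 0.01, size=250)
firm = 0.0002 + 1.2 * mkt + rng.normal(0.0, 0.015, size=250)

# Market model r_firm = alpha + beta * r_mkt fitted on the estimation window.
X = np.column_stack([np.ones_like(mkt), mkt])
alpha, beta = np.linalg.lstsq(X, firm, rcond=None)[0]

# Abnormal return on the pseudo-event date = actual minus predicted return.
mkt_event, firm_event = 0.002, -0.01
abnormal_return = firm_event - (alpha + beta * mkt_event)
print(f"abnormal return: {abnormal_return:.4f}")
```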
|
<|MaskedSetence|> <|MaskedSetence|> This approach marks a departure from traditional descriptive inference by treating weights as random variables in analytic inference, thereby leveraging the weight model to improve the efficiency of parameter estimation.
Third, to mitigate bias arising from potential misspecification of weight models, we have developed a nonparametric estimation of the weight models. Utilizing the debiased/double machine learning method [6], this innovation is applied to our semiparametric adaptive estimators, enhancing their robustness. <|MaskedSetence|> As a result, this foundation enables statistical inference using confidence intervals, providing a scientific framework for assessing the precision and reliability of estimates derived from large samples..
|
**A**: Our proposed method, therefore, has broad applicability across numerous survey sampling scenarios where sampling weights are known.
Second, we have introduced adaptive estimators that asymptotically attain the semiparametric efficiency bound, rendering them asymptotically optimal within the extensive class of Regular Asymptotically Linear (RAL) estimators for the parameters under consideration.
**B**: By innovatively incorporating models on the sampling weights, our estimators utilize these weights more effectively, enhancing the estimation accuracy of various parameters.
**C**: Consequently, our methodology stands out not only for its efficiency, but also for its robustness, thanks to the nonparametric estimation of the weight model.
Moreover, we have rigorously established the large-sample properties of the adaptive efficient estimator, including
$\sqrt{n}$-consistency and asymptotic normality.
|
CAB
|
ABC
|
ABC
|
ABC
|
Selection 3
|
In the literature, the closest work to ours is the one of Bilokon et al. (2021). Given a set of paths, they propose to compute the signature of these paths in order to apply the clustering algorithm of Azran and Ghahramani (2006) and to ultimately identify market regimes. <|MaskedSetence|> Using Black-Scholes sample paths corresponding to four different configurations of the drift and volatility parameters, they show the ability of their methodology to correctly cluster the paths and to identify the number of different configurations (i.e. four). Let us point out that our contributions are different from theirs in several ways. <|MaskedSetence|> Second, our numerical results go beyond the Black-Scholes model and explore more sophisticated models. Moreover, we also provide numerical results on historical data. <|MaskedSetence|>
|
**A**: The similarity function underlying the clustering algorithm relies in particular on the Maximum Mean Distance.
**B**: Finally, our numerical experiments are conducted in a setting closer to our practical applications where the marginal one-year distributions of the two samples are the same or very close while the frequency of observation of the paths is lower in our case (we consider monthly observations over one year while they consider 100 observations).
Table 1: Summary of the studied risk factors and associated models in each framework (synthetic data and real historical data).
**C**: First, their objective is to cluster a set of paths in several groups while our objective is to statistically test whether two sets of paths come from the same probability distribution.
|
ACB
|
ACB
|
ACB
|
ACB
|
Selection 2
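
The passage above tests whether two sets of paths come from the same distribution. The sketch below is a generic kernel two-sample (MMD) permutation test applied directly to flattened paths of monthly observations; it omits the signature transform and uses arbitrary sample sizes and bandwidth.

```python
import numpy as np

def mmd2(x, y, bandwidth=1.0):
    """Biased squared MMD with a Gaussian kernel; each row is one flattened path."""
    def gram(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * bandwidth ** 2))
    return gram(x, x).mean() + gram(y, y).mean() - 2.0 * gram(x, y).mean()

def permutation_pvalue(x, y, n_perm=500, seed=6):
    rng = np.random.default_rng(seed)
    observed = mmd2(x, y)
    z = np.vstack([x, y])
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(z))
        exceed += mmd2(z[perm[:len(x)]], z[perm[len(x):]]) >= observed
    return (exceed + 1) / (n_perm + 1)

rng = np.random.default_rng(7)
paths_a = rng.normal(size=(50, 12)).cumsum(axis=1)             # 50 paths, 12 monthly steps
paths_b = rng.normal(scale=1.3, size=(50, 12)).cumsum(axis=1)  # different volatility
print(permutation_pvalue(paths_a, paths_b))
```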
|
Online learning methods enable model updates incrementally from sequential data, offering greater efficiency and scalability than traditional batch learning. Regularization techniques are widely used in online convex optimization problems [40]. <|MaskedSetence|> The adaptive subgradient method [42] dynamically adjusts the regularization term based on its current subgradient. Follow-the-Regularized-Leader [43, 44] is a stable extension of Follow-the-Leader [45, 46] obtained by adding a strongly convex regularization term to the objective function to achieve a sublinear regret bound.
In this work, we present an innovative methodology that combines changepoints detection with PINNs to address changes and instabilities in the dynamics of PDEs. This approach marks the first exploration into simultaneously detecting changepoints and estimating unknown parameters within PDE dynamics based on observed data. <|MaskedSetence|> This approach not only identifies the timing of changes but also facilitates the estimation of unknown system parameters. <|MaskedSetence|> By adaptively adjusting these weights, our method not only enhances the model’s estimation accuracy but also increases its robustness against the instabilities associated with rapid parameter variations. (iii) We present several theoretical results to show that our re-weighting approach minimizes the training loss function with a regularizer and demonstrates that the regret is upper bounded. The theoretical results also indicate that the weight update method does not alter the neural network’s optimization objective on average..
|
**A**: Online Mirror Descent, an extension of Mirror Descent [41], utilizes a gradient update rule in the dual space, leading to improved bounds.
**B**: We have three main contributions: (i) We introduce a novel strategy that leverages PINNs alongside the Total Variation method for detecting changepoints within PDE dynamics.
**C**: (ii) We propose an online learning technique aimed at optimizing the weights within the loss function during training.
|
ABC
|
ABC
|
ABC
|
CAB
|
Selection 3
|
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> The resulting statistical uncertainty, as we will argue here, inhibits how well quantum kernel methods may perform.
The heart of the problem is that, in a wide range of circumstances, the values of quantum kernels exponentially concentrate. That is, as the size of the problem increases, the differences between kernel values become increasingly small and so more shots are required to distinguish between kernel entries. With a polynomial shot budget this leads to an optimized model which is insensitive to the input data and cannot generalize well.
.
|
**A**: By virtue of their convex optimization landscapes, kernel methods are guaranteed to obtain the optimal model from a given Gram matrix.
**B**: However, due to the probabilistic nature of quantum devices, in practice the entries of the Gram matrix can only be estimated via repeated measurements on a quantum device.
**C**: Thus the model is only ever trained on a statistical estimate of the Gram matrix, $\hat{K}$, instead of the exact one, $K$.
|
ABC
|
ABC
|
ABC
|
CBA
|
Selection 1
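
To make the shot-noise argument above concrete, the toy sketch below treats a single Gram-matrix entry as a Bernoulli frequency estimated from a finite number of measurement shots; the "exact" kernel value and shot budget are invented for illustration, not taken from any specific quantum model.

```python
import numpy as np

rng = np.random.default_rng(8)

exact_value = 0.01   # a small, "concentrated" off-diagonal kernel value
shots = 1000         # measurement budget per Gram-matrix entry

# Each entry of the estimated Gram matrix is an empirical outcome frequency.
estimates = rng.binomial(shots, exact_value, size=10_000) / shots
print(f"exact value {exact_value:.4f}, std of the shot-based estimate {estimates.std():.4f}")
# When kernel values concentrate exponentially, differences between entries fall
# below this statistical noise unless the number of shots grows accordingly.
```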
|
II Related Work
Learning invariant representations. Due to the limitations of covariate shift, particularly in the context of image data, most current research on domain adaptation primarily revolves around addressing conditional shift. <|MaskedSetence|> (2016); Zhao et al. (2018); Saito et al. (2018); Mancini et al. (2018); Yang et al. (2020); Wang et al. (2020); Li et al. (2021); Wang et al. (2022b); Zhao et al. (2021). <|MaskedSetence|> The central challenge in these methods lies in enforcing the invariance of the learned representations. <|MaskedSetence|> However, all these methods assume label distribution invariance across domains. Consequently, when label distributions vary across domains, these methods may perform well only in the overlapping regions of label distributions across different domains, encountering challenges in areas where distributions do not overlap. To overcome this, recent progress focuses on learning invariant representations conditional on the label across domains (Gong et al., 2016; Ghifary et al., 2016; Tachet des Combes et al., 2020). One of the challenges in these methods is that the labels in the target domain are unavailable. Moreover, these methods do not guarantee that the learned representations align consistently with the true relevant information.
.
|
**A**: This approach focuses on learning invariant representations across domains, a concept explored in works such as Ganin et al.
**B**: Various techniques are employed to achieve this, such as maximum classifier discrepancy (Saito et al., 2018), domain discriminator for adversarial training (Ganin et al., 2016; Zhao et al., 2018, 2021), moment matching (Peng et al., 2019), and relation alignment loss (Wang et al., 2020).
**C**: These invariant representations are typically obtained by applying appropriate linear or nonlinear transformations to the input data.
|
ACB
|
ACB
|
BCA
|
ACB
|
Selection 1
|
Finally, and most recently, it was asked in previous work (Velychko et al., 2024) if entropy sums can also be used as learning objective.
This may not be intuitive at first because the entropy sums of Theorems 1 and 2
are by themselves no objectives (and for some models do not even depend on the data, see Damm et al., 2023). <|MaskedSetence|> <|MaskedSetence|> Velychko et al. <|MaskedSetence|> In that case, it was.
|
**A**: (2024) made use of this observation for a probabilistic sparse coding model.
**B**: However, considering Theorems 1 and 2 and their proofs, only a subset of parameters has to be at stationary points.
**C**: All remaining parameters can then be learned
using an entropy sum objective.
|
BCA
|
BCA
|
ACB
|
BCA
|
Selection 1
|
While the methods have different finite-sample properties, SVARs and LPs are equivalent in population, as established by Plagborg-Møller and Wolf (2021) who show that the underlying impulse response estimands are the same for both. This implies that the well-documented necessity for SVARs to add assumptions in order to achieve structural identification of the impulse responses, is equally true for LPs. Indeed, Plagborg-Møller and Wolf (2021) show that the identification strategies typically used for SVARs have an equivalent implementation for LPs, and vice versa. While identification issues therefore do not affect the choice between SVAR or LP, finite-sample considerations about estimation and inference problem do. <|MaskedSetence|> <|MaskedSetence|> While the system estimation of the SVAR can still be done linearly, the inversion to the VMA representation is a nonlinear operation that renders the impulse response coefficients a complex nonlinear functions of all VAR parameters. This makes inference considerably more cumbersome, even in
low-dimensional settings, where the complications and inaccuracies of applying the Delta method in finite samples have led to bootstrap inference becoming the norm (see e.g. Chapter 12 of Kilian and Lütkepohl, 2017, and the references therein). These problems are exacerbated when the dimensionality of the system grows, making LPs our method of choice to obtain impulse responses in high dimensions.
We consider high-dimensional local projections (HDLPs) in a general time series framework where the number of regressors can grow faster than the sample size. Impulse response analysis quickly becomes high-dimensional in macroeconomic research. <|MaskedSetence|> Similarly, additional regressors are often introduced to capture seasonal patterns in impulse responses (see e.g., quarter-dependent coefficients used in Blanchard and Perotti, 2002), or to permit nonlinearities to produce state-dependent impulse responses (Koop et al., 1996; Ramey and Zubairy, 2018)..
|
**A**: LP impulse responses are obtained by estimating only univariate linear regressions, and performing standard inference on (typically) a single parameter of interest across these univariate regressions.
**B**: In contrast, SVARs require estimating the whole system of equations, and transforming these into the Vector Moving Average (VMA) representation from which the impulse responses can be derived.
**C**: Even when considering impulse response analysis with few variables, the number of regressors in LPs or SVARs is often large due to the common practices of including many lags to control for autocorrelation (see e.g., Bernanke and Mihov, 1998, Romer and Romer, 2004 and Sims and Zha, 2006) or to robustify against (near) unit roots via lag augmentation as in Montiel Olea and Plagborg-Møller (2021).
|
ABC
|
ABC
|
ABC
|
ABC
|
Selection 3
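
The sketch below shows the basic local-projection recipe referenced above: one univariate OLS regression per horizon, with the impulse response read off the shock coefficient. It is deliberately low-dimensional (a handful of lags as controls) and does not implement the regularized high-dimensional HDLP estimator.

```python
import numpy as np

def local_projection_irf(y, shock, horizons=8, lags=4):
    """For each horizon h, regress y_{t+h} on the shock at t plus lagged controls."""
    irf = []
    T = len(y)
    for h in range(horizons + 1):
        rows, target = [], []
        for t in range(lags, T - h):
            rows.append([1.0, shock[t], *y[t - lags:t]])
            target.append(y[t + h])
        beta = np.linalg.lstsq(np.asarray(rows), np.asarray(target), rcond=None)[0]
        irf.append(beta[1])   # coefficient on the identified shock
    return np.array(irf)

rng = np.random.default_rng(9)
shock = rng.normal(size=400)
y = np.zeros(400)
for t in range(1, 400):
    y[t] = 0.6 * y[t - 1] + shock[t] + 0.2 * rng.normal()   # true IRF is 0.6**h
print(local_projection_irf(y, shock).round(3))
```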
|
<|MaskedSetence|> The recovery process is demonstrated in Figure 2(a). The recovery path starts from the fail vertex and ends in the AGAN vertex, which means the observed maintenance returns the status of the failed part to full working order. Suppose the CEG in Figure 1(b) faithfully models the unmanipulated bushing system, and we observe a failed bushing whose failure was caused by a cracked insulator. Then an example of a perfect remedial intervention is that the engineer replaced the cracked insulator with a new one.
If the root cause is not remedied but only a subset of the secondary or intermediate faults are remedied, then after the intervention the status of the repaired component will not return to AGAN. However it is better than ABAO. We call such an intervention an imperfect remedial intervention. <|MaskedSetence|> The recovery path consists of a black dashed edge and a red dashed edge. The black dashed edge points from the fail vertex to the interior vertex of the failure path, which means the status of the equipment is improved but not AGAN after maintenance. In order to fully restore the system, additional maintenance is needed. <|MaskedSetence|> As for what is further needed to fully restore the system is unknown at that time. This brings uncertainty into this type of remedial intervention. The recovery process corresponding to the additional remedial work is represented by the red dashed edge, which points from the interior vertex to the AGAN vertex..
|
**A**:
A remedial intervention is perfect if the root cause of the failure is correctly identified and successfully fixed by the observed maintenance so that the post-intervention status of the part being maintained is AGAN [45, 47].
**B**: We can visualise the status change of the maintained equipment from Figure 2(b).
**C**: If imperfect remedial work has been made at time $t$, then the maintenance log will record only that maintenance has happened.
|
BAC
|
ABC
|
ABC
|
ABC
|
Selection 2
|
<|MaskedSetence|> We download publicly available demographic, examination and laboratory data from (https://www.cdc.gov/nchs/nhanes.htm). We combine data from the 2015-2016 and 2017-2018 cycles and adjust the survey weights as instructed in the official documentation (https://wwwn.cdc.gov/nchs/nhanes/tutorials/module3.aspx). <|MaskedSetence|> We use a data subset that includes 5483 observations with no missing data and the following variables: age (RIDAGEYR), gender (RIAGENDR), household income (INDHHIN2), BMI (BMXBMI), Creatinine (URXUCR), and 16 phthalate metabolites. We choose this data subset because it has continuous, binary, and ordinal predictors and has extensive correlations between many of them. <|MaskedSetence|> We use the Bayesian Gaussian copula to simulate synthetic data that preserves the dependence structure among the predictors. Figure 4 shows similar Spearman’s correlation matrices between the original data (left) and a simulated data set of 500 observations (right):
.
|
**A**: The data is included in our package as ‘nhanes1518’.
**B**:
In the second example, we generate predictors by estimating multivariate associations in NHANES data.
**C**: We log-transform the phthalate metabolites and Creatinine.
|
BAC
|
BCA
|
BAC
|
BAC
|
Selection 3
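
The row above simulates synthetic predictors with a Gaussian copula so that the dependence structure of the observed NHANES variables is preserved. The sketch below is a non-Bayesian plug-in version of that idea on invented toy columns (age, BMI, gender stand-ins); it is not the package's Bayesian implementation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
n = 1000

# Toy "observed" predictors with some dependence between age and BMI.
age = rng.integers(18, 80, size=n).astype(float)
bmi = rng.lognormal(3.0 + 0.005 * age, 0.2)
gender = rng.binomial(1, 0.5, size=n).astype(float)
X = np.column_stack([age, bmi, gender])

# Estimate the latent normal correlation from Spearman's rho, draw correlated
# normals, then map each margin back through the empirical quantiles.
rho_s, _ = stats.spearmanr(X)
latent_corr = 2.0 * np.sin(np.pi * rho_s / 6.0)
Z = rng.multivariate_normal(np.zeros(3), latent_corr, size=500)
U = stats.norm.cdf(Z)
X_synth = np.column_stack([np.quantile(X[:, j], U[:, j]) for j in range(3)])

rho_check, _ = stats.spearmanr(X_synth)
print(np.round(rho_check, 2))   # should resemble the Spearman matrix of X
```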
|
The rest of the paper is organized as follows. <|MaskedSetence|> We present the $\mathcal{L}^2$ convergence of eigenfunctions in Section 3, and discuss the uniform convergence problem of functional data in Section 5. Asymptotic normality of eigenvalues is presented in Section 4. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: In Section 2, we give a synopsis of covariance and eigencomponents estimation in functional data.
**B**: The proofs of Theorem 1 can be found in Appendix, while the proofs of other theorems and lemmas are collected in the Supplementary Material.
.
**C**: Section 6 provides an illustration of the phase transition phenomenon in eigenfunctions with synthetic data.
|
ACB
|
ACB
|
ACB
|
BCA
|
Selection 1
|
Our purpose is to test the null hypothesis that the sample of temperatures from 1944 to 1981 comes from the same (functional) distribution as that of the period 1982-2019. The rejection of this null hypothesis could be interpreted as a hint of possible warming in the area. <|MaskedSetence|> This is hardly surprising, in view of Fig.
|
**A**: Indeed, we observe that, in absence of any significant climate change, one would expect that both samples are made of independent trajectories from the same underlying process.
All the considered tests give a nearly null $p$-value.
**B**: 5, where the temperature curves are displayed (the blue curves correspond to the earlier period).
**C**: While this is just a small experiment, presented here for illustration purposes, the results are consistent with those of many other deeper analysis published in recent years..
|
ABC
|
ACB
|
ABC
|
ABC
|
Selection 4
|
<|MaskedSetence|> <|MaskedSetence|> However, the information conveyed by the serial correlations is directly discarded, which prevents us from modeling the generative processes of observations. Another way is to explicitly develop the formation of serial correlations based on further assumptions. For example, dynamic regression models leverage linear regression and autoregressive integrated moving average (ARIMA) into a single regression model to forecast time series data [19]. For the main scope of this paper, i.e., calibration and simulation of car-following models, we use GP [20] to model serially correlated errors (i.e., for the model inadequacy part). <|MaskedSetence|>
|
**A**: In general, there are two ways to perform model estimation in the presence of serial correlations: (1) by directly processing the nonstationary data and eliminating serial correlations (e.g., performing the differencing operation), so that one can safely ignore the model inadequacy function and obtain stationary time series; or (2) by explicitly modeling the serial correlations based on specific model inadequacy functions.
**B**: For instance, Hoogendoorn [3] performed a differencing transformation to eliminate serial correlations, which then did not show significant differences between autocorrelation coefficients and zeros in the previously mentioned Durbin–Watson test [18].
**C**: GPs provide a solid statistical solution to learn the autocorrelation structure and, more importantly, they allow us to understand the temporal effect in driving behavior through the lengthscale parameter $l$, which partially explains the memory effect (see [21]) of human driving behaviors.
Figure 2: Physical settings of a car-following scenario.
.
|
ABC
|
ABC
|
BCA
|
ABC
|
Selection 4
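
Following the passage above, the model-inadequacy (serially correlated error) term can be given a GP prior whose lengthscale captures how long the correlation persists. The sketch below simulates such residuals from a squared-exponential kernel and smooths them with the GP posterior mean; all kernel hyperparameters are arbitrary.

```python
import numpy as np

def rbf_kernel(t1, t2, sigma=1.0, lengthscale=5.0):
    """Squared-exponential kernel; the lengthscale governs how long serial
    correlation in the residuals persists (the temporal/memory effect)."""
    d = t1[:, None] - t2[None, :]
    return sigma ** 2 * np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(11)
t = np.arange(100, dtype=float)                 # time stamps of the residuals
K = rbf_kernel(t, t) + 0.05 * np.eye(100)       # GP covariance plus observation noise
residuals = rng.multivariate_normal(np.zeros(100), K)   # serially correlated errors

# GP posterior mean at the observed times (a smoothed version of the error process).
posterior_mean = rbf_kernel(t, t) @ np.linalg.solve(K, residuals)
print(posterior_mean[:5].round(3))
```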
|
I-A Characterizing the MI of MIMO Systems by RMT
The MI of the full-rank MIMO channels has been characterized by setting up its CLT using RMT. <|MaskedSetence|> derived the closed-form expressions for the mean and variance of the MI over the i.i.d. MIMO fading channel. In [19], Hachem et al. derived the CLT for the MI of correlated Gaussian MIMO channels and gave the closed-form mean and variance. Hachem et al. <|MaskedSetence|> In [22], Bao et al. derived the CLT for the MI of independent and identically distributed (i.i.d) MIMO channels with non-zero pseudo-variance and fourth-order cumulant. In [23], Hu et al. set up the CLT for the MI of elliptically correlated (EC) MIMO channels and validated the effect of the non-linear correlation. <|MaskedSetence|>
|
**A**: In [24], Kamath et al.
**B**: extended the CLT to the non-Gaussian MIMO channel with a given variance profile and the non-centered MIMO channel in [20] and [21], respectively, which shows that the pseudo-variance and non-zero fourth order cumulant of the random fading affects the asymptotic variance.
**C**: Considering the non-centered MIMO with non-separable correlation structure, the authors of [25] set up the CLT for the MI of holographic MIMO channels..
|
ACB
|
ABC
|
ABC
|
ABC
|
Selection 2
|
geometric decay, whereas Ray et al. <|MaskedSetence|> So, we assume that the loss functions $l_t$ are adversarially
chosen, whereas Ray et al. <|MaskedSetence|> <|MaskedSetence|> (2022) assume.
|
**A**: (2022) adopt a stochastic
optimization one.
**B**: (2022) assume $l_t = l$ are fixed.
**C**: Second,
we assume that the dynamics ($\mathcal{D}$ and $\rho$) are
known (Assumption A1), whereas Ray et al.
|
ABC
|
ABC
|
ABC
|
CBA
|
Selection 1
|
Boomerang earns its name from its principal mechanism: adding noise of a certain variance to push data away from the image manifold, and then using a diffusion model to pull the noised data back onto the manifold. The variance of the noise is the only parameter in the algorithm, and governs how similar the new image is to the original image, as reported by Ho et al.
Finally, we show that Boomerang can be used for perceptual image enhancement. <|MaskedSetence|> In Section 3 we introduce our proposed local sampling method—Boomerang—and provide insights on how the amount of added noise affects the locality of the resulting samples. Finally, we describe three applications (Sections 4, 5 and 6) that Boomerang can be used without any modification to the diffusion model pretraining..
|
**A**: (2020).
**B**: We show that the proposed local sampling technique is able to: (1a) anonymize entire datasets to varying degrees; (1b) trick facial recognition algorithms; and (1c) anonymize datasets while maintaining better classification accuracy when compared with SOTA synthetic datasets.
**C**: The images generated via local sampling: 3a) have better perceptual quality than those generated with competing methods; 3b) are generated faster than other deep-learning methods trained methods such as the Deep Image Prior (Ulyanov et al., 2020); and 3c) can be used for any desired upsampling factor without needing to train or fine-tune the network.
In Section 2 we discuss the training framework of diffusion models, introducing the forward and reverse processes.
|
ABC
|
ABC
|
ABC
|
CAB
|
Selection 2
|
The rest of the paper is organized as follows. Section 2 introduces the model. Section 3 introduces the algorithm. <|MaskedSetence|> Section 5 introduces the strategy to generate missing edges. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: Section 4 shows the consistency of the algorithm and provides some examples for different distributions.
**B**: Section 7 concludes.
2 The Bipartite Mixed Membership Distribution-Free model.
**C**: Section 6 conducts extensive experiments.
|
ACB
|
ACB
|
ACB
|
ACB
|
Selection 2
|
<|MaskedSetence|> Earlier proposals were based on evaluating the nuisance models associated with the estimators, and the utility of decision policy (Zhao et al., 2017) based on the heterogeneous treatment effects of the estimator. Recently, the focus has shifted towards designing surrogate metrics that approximate the true effect and compute its deviation from the estimator’s treatment effect (Nie & Wager, 2021; Saito & Yasui, 2020), and they have also been shown to be more effective than other metrics (Schuler et al., 2018; Alaa & Van Der Schaar, 2019). <|MaskedSetence|> <|MaskedSetence|> Hence, we have a poor understanding of which surrogate criteria should be used for model selection.
Contributions.
.
|
**A**: Also, there is often a lack of fair comparison between the various metrics as some of them are excluded from the baselines when authors evaluate their proposed metrics.
**B**: Towards this, surrogate metrics have been proposed that perform model selection using only observational data.
**C**: However, most of these evaluation studies have been performed only on a few synthetic datasets, therefore, the trend in such studies could be questionable.
|
BAC
|
BCA
|
BCA
|
BCA
|
Selection 4
|
RFFNet’s objective function (14) is highly non-convex due to the oscillatory behavior of the random features map and the parity symmetry with respect to $\theta$. Even if the objective might exhibit a favorable optimization landscape with “natural” input distributions, as discussed in G, the landscape with real-world data, notably in the small sample setting, is affected by these random features’ oscillations.
Nevertheless, for the aforementioned loss functions, the objective has Lipschitz continuous gradients in each block ($\beta$ and $\theta$) of coordinates. <|MaskedSetence|> <|MaskedSetence|> Subsequently, we describe the default Sample, Initialize, and Optimize steps. <|MaskedSetence|>
|
**A**: We give a meta-algorithm describing how RFFNet is trained in Algorithm 1.
**B**: This regularity suggests we solve (14) using carefully initialized first-order optimization methods.
**C**: Importantly, these defaults were only established after analyzing the ablation studies in D.
.
|
BAC
|
CBA
|
BAC
|
BAC
|
Selection 1
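
RFFNet, as described above, combines a random Fourier features map with relevance parameters θ and linear weights β. The sketch below shows only the feature map with per-dimension rescaling, so that a near-zero relevance parameter effectively removes a feature; the dimensions, feature count, and θ values are invented.

```python
import numpy as np

def random_fourier_features(X, theta, n_features=200, seed=12):
    """RFF map for a Gaussian kernel with per-dimension relevances theta."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(size=(d, n_features))               # spectral frequencies
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    Z = np.cos((X * theta) @ W + b)                     # rescale inputs by theta first
    return np.sqrt(2.0 / n_features) * Z

rng = np.random.default_rng(13)
X = rng.normal(size=(100, 5))
theta = np.array([1.0, 1.0, 0.0, 0.0, 0.0])   # only the first two inputs are relevant
Phi = random_fourier_features(X, theta)
print(Phi.shape)   # (100, 200): features ready for a linear model in beta
```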
|
Predicting power flows on the electricity transmission network is a key motivating application for probabilistic, spatially coherent modelling in energy forecasting. <|MaskedSetence|> <|MaskedSetence|> They are also constrained by the physics of the network and must be forecasted to identify and mitigate any risk of exceeding thermal or stability limits. Therefore, spatial probabilistic forecasts of supply and demand are required to forecast power flows, and quantify uncertainty and risk associated with these constraints. <|MaskedSetence|>
|
**A**: This is important for both network operators, who are responsible for system security, and traders who must be aware of spatial variation in prices.
**B**: Further, as the configuration of the network may change, any forecasting system must be flexible enough to allow the aggregation of supply and demand on the fly to calculate flows across relevant boundaries (tuinema2020probabilistic).
.
**C**: Power flows are influenced by the injection and offtake of power from the network, as well as network configuration.
|
ACB
|
ACB
|
ACB
|
ACB
|
Selection 3
|
<|MaskedSetence|> For a broad overview, we refer the reader to [FVRS22]. <|MaskedSetence|> By taking advantage of this characterization, AMP methods have been used to derive exact high-dimensional asymptotics for convex penalized estimators such as LASSO [BM12], M-estimators [DM16], logistic regression [SC19], and SLOPE [BKRS20]. <|MaskedSetence|> Furthermore, they have been used – in a non-mixed setting – to combine linear and spectral estimators [MTV22].
.
|
**A**: AMP algorithms have been initialized via spectral methods in the context of low-rank matrix estimation [MV21c] and generalized linear models [MV21a].
**B**: Approximate message passing (AMP) algorithms.
AMP is a family of iterative algorithms that has been applied to several problems in high-dimensional statistics, including estimation in linear models [DMM09, BM11, KMS+12], generalized linear models [Ran11, SR14, SC19], and low-rank matrix estimation [DM14, RFG09, LKZ17].
**C**: A key feature of AMP algorithms is that under suitable model assumptions, the empirical joint distribution of their iterates can be exactly characterized in the high-dimensional limit, in terms of a simple scalar recursion called state evolution.
|
BCA
|
BAC
|
BCA
|
BCA
|
Selection 4
|
<|MaskedSetence|> <|MaskedSetence|> In contrast, the DPBM only requires 9 parameters for this task (4 parameters for the numerator of (12) and 5 parameters for the denominator of (12)), offering a more compact representation of the density function.
Additionally, considering the time consumption, the PF outperforms the DPBM in the execution time. In this example, each filtering iteration takes an average execution time of 1.08 seconds on a 2.5 GHz Intel Core i7 CPU. While this may be relatively long compared to the PF execution time, it remains manageable for applications with less sensitivity to the execution time. For example, intraday, daily, weekly, monthly and annual data are widely used in analyzing the futures market [38], where filters serve as approaches for this task [39, 40]. <|MaskedSetence|> Moreover, the optimization in each filtering step is convex with the solution proved to exist and be unique. It makes the execution for each filtering step to be predictable, which is a clear advantage of DPBM..
|
**A**:
From an RMSE standpoint, the DPBM does not outperform the PF, but a notable drawback of the particle filter is its requirement to store a massive amount of data.
**B**: Our proposed filtering scheme is very suitable for this task.
**C**: For instance, in this simulation, the state of each particle consists of two parameters, namely its position and weight, resulting in a need for 10,000 parameters to characterize the system state density.
|
ACB
|
ACB
|
CAB
|
ACB
|
Selection 4
|
<|MaskedSetence|> Overcoming these hurdles is critical to establishing trust in the development of DT frameworks. <|MaskedSetence|> (2021): (i) Modeling & Simulation (M&S), which includes uncertainty quantification (UQ) and data analytics Kumar et al. (2019, 2022) through trustworthy Artificial Intelligence/Machine Learning (AI/ML) Kobayashi et al. (2024); Kobayashi and Alam (2024a, b), physics-based models, and data-informed modeling; (ii) advanced sensors/instrumentation with real-time signal processing (Kabir, 2010a, b; Kabir et al., 2010b, a) ; and (iii) data & information management. Among these, UQ is a crucial aspect of DT-enabling technologies that is vital for ensuring trustworthiness, a component that BISON is required to integrate to fulfill DT prerequisites. This study explores the integration of polynomial chaos expansion (PCE)-based UQ employing a non-intrusive approach within BISON to address the M&S requirements of the Accelerated Fuel Qualification (AFQ) for ATF. The innovation of this research lies in its application of a non-intrusive, computationally efficient polynomial chaos-based UQ methodology within the BISON code for sophisticated ATF concepts. Moreover, this research marks the inaugural effort to implement and analyze uncertainty estimations for DT-enabling technology.
It should be noted that a book chapter Kobayashi et al. (2023c) has been exclusively dedicated to the UO2+SiC/SiC system case study (A preprint is also available online). <|MaskedSetence|>
|
**A**: It’s essential to note that DT-enabling technologies encompass three main domains as identified by Yadav et al.
**B**:
From a DT perspective, the development of novel Accident-Tolerant Fuel (ATF) technology encounters several challenges, including (i) data unavailability, (ii) lack of data, missing data, and inconsistencies in data, and (iii) model uncertainty.
**C**: Conversely, this article provides a comprehensive elaboration on the methodologies, the entire process framework for the UQ (Uncertainty Quantification/Sensitivity Analysis) approach, and the relevant outcomes for both U3Si2+SiC/SiC and UO2+SiC/SiC systems, in the context of facilitating digital twin technology..
| A: ACB | B: BAC | C: BAC | D: BAC | label: Selection 2 |
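Since the row above centers on non-intrusive polynomial chaos expansion, a minimal sketch of the workflow may help: sample the uncertain input, run a black-box model, fit Hermite coefficients by least squares, and read the output moments off the coefficients. The simulator, polynomial degree, and sample size below are assumptions for illustration; this is not BISON or the authors' pipeline.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

# Minimal sketch of non-intrusive polynomial chaos expansion (PCE) based UQ.
# The "simulator" is a stand-in black box, not BISON; the quantity of
# interest, degree, and sample size are illustrative assumptions.

rng = np.random.default_rng(2)

def simulator(xi):
    """Hypothetical quantity of interest driven by a standard-normal germ xi
    (e.g., a normalized material-property perturbation)."""
    return 1200.0 + 35.0 * xi + 4.0 * xi**2

degree, n_train = 4, 50
xi_train = rng.standard_normal(n_train)
y_train = simulator(xi_train)

# Probabilists' Hermite polynomials are orthogonal under the standard normal,
# so the PCE coefficients follow from an ordinary least-squares fit.
Psi = hermevander(xi_train, degree)          # design matrix, shape (n, degree+1)
coeffs, *_ = np.linalg.lstsq(Psi, y_train, rcond=None)

# Output moments follow directly from the coefficients:
# mean = c_0, variance = sum_{k>=1} c_k^2 * k!  (norm of He_k under N(0,1)).
norms = np.array([math.factorial(k) for k in range(degree + 1)], dtype=float)
mean_pce = coeffs[0]
std_pce = np.sqrt(np.sum(coeffs[1:] ** 2 * norms[1:]))
print(f"PCE mean ~ {mean_pce:.2f}, PCE std ~ {std_pce:.2f}")
```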
Gradients in Active and Curriculum Learning. Gradients have been successfully used as a criterion to select data to process in previous work. Settles et al. <|MaskedSetence|> A batch active learning method introduced in Ash et al. <|MaskedSetence|> In the area of curriculum learning, Graves et al. <|MaskedSetence|> We take inspiration from those approaches to propose a novel usage of the gradient criterion in the field of causal discovery.
.
|
**A**: (2007) introduce Expected Gradient Length (EGL), computed under the current belief, as a criterion for active learning.
**B**: (2017) considers Gradient Prediction Gain (GPG), which is defined as the gradient’s magnitude and is meant to be a proxy for expected learning progress.
**C**: (2020) also targets data points with high gradient magnitude, including uncertainty and diversity in the decision.
| A: ACB | B: ACB | C: CAB | D: ACB | label: Selection 2 |
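The gradient-based selection criteria surveyed in the row above share one computational core: score each candidate by the (expected) norm of the per-sample loss gradient under the current model. The sketch below illustrates that idea for a toy logistic-regression pool; the model, data, and scoring details are assumptions, not the exact EGL or GPG formulations.

```python
import numpy as np

# Sketch of a gradient-magnitude acquisition criterion in the spirit of
# EGL / gradient-norm selection: score each candidate point by the norm of
# the per-sample loss gradient under the current model, and pick the largest.
# The logistic-regression model and data below are illustrative assumptions.

rng = np.random.default_rng(3)
w = rng.normal(size=5)                       # current model parameters
X_pool = rng.normal(size=(100, 5))           # unlabeled candidate pool

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def expected_gradient_length(x, w):
    """Expected norm of the logistic-loss gradient at x, where the unknown
    label is averaged under the model's own predictive distribution."""
    p = sigmoid(x @ w)
    grad_if_1 = (p - 1.0) * x                # d/dw of -log p(y=1|x)
    grad_if_0 = p * x                        # d/dw of -log p(y=0|x)
    return p * np.linalg.norm(grad_if_1) + (1 - p) * np.linalg.norm(grad_if_0)

scores = np.array([expected_gradient_length(x, w) for x in X_pool])
print("indices of the 5 highest-scoring candidates:", np.argsort(scores)[-5:])
```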
<|MaskedSetence|> First, in Section 2 we describe a variety of deep ReLU neural network constructions which will be used to prove Theorem 1. Many of these constructions are trivial or well-known, but we collect them for use in the following Sections. Then, in Section 3 we prove Theorem 4 which gives an optimal representation of sparse vectors using deep ReLU networks and will be key to proving superconvergence in the non-linear regime $p>q$. In Section 4 we give the proof of the upper bounds in Theorems 1 and 2. <|MaskedSetence|> We remark that throughout the paper, unless otherwise specified, $C$ will represent a constant which may change from line to line, as is standard in analysis. <|MaskedSetence|>
|
**A**:
The rest of the paper is organized as follows.
**B**: The constant $C$ may depend upon some parameters and this dependence will be made clear in the presentation.
.
**C**: Finally, in Section 5 we prove the lower bound Theorem 3 and also prove the optimality of Theorem 4.
| A: ACB | B: ACB | C: ABC | D: ACB | label: Selection 4 |
However, the constant-width property of standard conformal prediction intervals can be overly restrictive. <|MaskedSetence|> For instance, time series data is often heteroskedastic with the variance increasing along with the horizon due to the accumulation of uncertainty over time. <|MaskedSetence|> CQR borrows techniques from both quantile regression and conformal prediction by applying a conformalized “correction” to the standard quantile regression interval. <|MaskedSetence|>
|
**A**: Constant-width conditional prediction intervals computed for data with heterogeneous variance tend to be inefficient, meaning that they are wider than necessary.
In an effort to achieve more efficient intervals while retaining marginal validity, Romano et al. introduced an elegant method known as Conformalized Quantile Regression (CQR; Romano et al., 2019).
**B**: The resulting prediction is the corrected interval,
.
**C**: The variance of a conditional random variable $Y$ is often heterogeneous (i.e., dependent on the value of the conditioning variable $\mathbf{x}$).
| A: CAB | B: CAB | C: ABC | D: CAB | label: Selection 4 |
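A compact sketch of the conformalized correction described in the row above: compute conformity scores on a calibration set from pre-fitted lower/upper quantile estimates, then shift both quantiles by the finite-sample-corrected score quantile. The stand-in quantile model and data are assumptions; any quantile regressor could be plugged in.

```python
import numpy as np

# Sketch of the conformalized correction used by CQR (Romano et al., 2019):
# given lower/upper conditional-quantile predictions and a calibration set,
# widen (or shrink) the interval by a conformity-score quantile so that the
# result has marginal coverage 1 - alpha. The "quantile model" is a stand-in.

rng = np.random.default_rng(4)
alpha = 0.1

def quantile_model(x):
    """Hypothetical pre-fitted conditional quantile estimates (q_lo, q_hi)."""
    return x - 1.0, x + 1.0

# Calibration data with heteroskedastic noise.
x_cal = rng.uniform(0, 4, 500)
y_cal = x_cal + rng.normal(scale=0.3 + 0.4 * x_cal)

q_lo, q_hi = quantile_model(x_cal)
scores = np.maximum(q_lo - y_cal, y_cal - q_hi)      # CQR conformity scores

# Finite-sample-corrected empirical quantile of the scores.
n = len(scores)
k = int(np.ceil((n + 1) * (1 - alpha)))
q_hat = np.sort(scores)[min(k, n) - 1]

# Corrected interval for a new point.
x_new = 3.5
lo, hi = quantile_model(x_new)
print(f"CQR interval at x={x_new}: [{lo - q_hat:.2f}, {hi + q_hat:.2f}]")
```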
In this paper, we focus on designing Universal Perturbations for Interpretation (UPI) as universal attacks aimed to change the saliency maps of neural nets over a significant fraction of input data. <|MaskedSetence|> <|MaskedSetence|> We demonstrate that the spectral UPI-PCA scheme yields the first-order approximation of the solution to the UPI-Grad optimization problem.
To implement the UPI-PCA scheme for generating universal perturbations, we propose a stochastic optimization method which can efficiently converge to the top singular vector of first-order interpretation-targeting perturbations. Finally, we demonstrate our numerical results of applying the UPI-Grad and UPI-PCA methods to standard image recognition datasets and neural network architectures. Our numerical results reveal the vulnerability of commonly-used gradient-based feature maps to universal perturbations which can significantly alter the interpretation of neural networks. The empirical results show the satisfactory convergence of the proposed stochastic optimization method to the top singular vector of the attack scheme, and further indicate the proper generalization of the designed attack vector to test samples unseen during the optimization of the universal perturbation. <|MaskedSetence|>
|
**A**: To achieve this goal, we formulate an optimization problem to find a UPI perturbation with the maximum impact on the total change in the gradient-based feature maps over the training samples.
**B**: We can summarize the contributions of this work as follows:.
**C**: We propose a projected gradient method called UPI-Grad for solving the formulated optimization problem.
Furthermore, in order to handle the difficult non-convex nature of the formulated optimization problem, we develop a principal component analysis (PCA)-based approach called UPI-PCA to approximate the solution to this problem using the top singular vector of fast gradient method (FGM) perturbations to the interpretation vectors.
| A: ACB | B: ACB | C: BCA | D: ACB | label: Selection 2 |
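The spectral idea in the row above, approximating a universal perturbation by the top singular vector of per-sample first-order perturbations, can be sketched as follows. The toy saliency model and the sign-based per-sample directions are illustrative stand-ins, not the authors' UPI-Grad or UPI-PCA implementation.

```python
import numpy as np

# Sketch of the PCA-style aggregation behind a "universal" perturbation:
# stack per-sample first-order perturbation directions into a matrix and take
# its top right singular vector as the shared direction. The per-sample
# directions come from a toy quadratic "interpretation" model and are only
# stand-ins for FGM-style perturbations of real saliency maps.

rng = np.random.default_rng(5)
n_samples, dim = 200, 50

A = rng.normal(size=(dim, dim))
A = 0.5 * (A + A.T)                          # toy model: saliency(x) = A @ x
X = rng.normal(size=(n_samples, dim))

# First-order, sign-based (FGM-like) per-sample directions; each row is one
# sample's perturbation of the toy saliency map.
per_sample_dirs = np.sign(X @ A.T)

# Top right singular vector = direction maximizing the summed squared
# alignment with all per-sample perturbations (the PCA/spectral surrogate).
_, _, vt = np.linalg.svd(per_sample_dirs, full_matrices=False)
universal_dir = vt[0]

eps = 0.05                                   # perturbation budget (arbitrary)
universal_perturbation = eps * universal_dir / np.linalg.norm(universal_dir)
print("universal perturbation norm:", np.linalg.norm(universal_perturbation))
```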
<|MaskedSetence|> View (a) presents the performance of the best-performing metamodel for each cluster according to the seven validation metrics and confidence. The UMAP visible in (b) gathers base models and metamodels predicting similarly the same test instances in groups (Gs) such as G1– G4. On the other hand, (c) visualizes cluster_2, with G1 showcasing that most of the metamodels perform identically, G2 solely with tree-based ML algorithms, and G3 with the two most unconfident metamodels. The unification of predictions from pairs of diverse metamodels is also possible as seen in (d), leading to two promising combinations.
The stacked bar chart in the MetaStackVis teaser figure (b) presents the best-performing metamodel in each cluster, including all base models and the group of outliers for seven different validation metrics also supported by StackGenVis. This visualization provides an overview of performance (in percentage format) for the best candidate from the 11 metamodels created in every cluster, using the following metrics: Accuracy, Precision, Recall, ROC AUC, Geometric Mean, Matthews Correlation Coefficient (CorrCoeff), F1 Score, and Confidence. <|MaskedSetence|> Additionally, we convert Matthews CorrCoeff to an absolute value ranging from 0 to 100%. The average of all seven validation metrics plus the confidence is then divided by 2 in order to compute the Overall Performance that defines the ranking of the clusters from top to bottom in this visualization (i.e., from best to worst). Therefore, Confidence is multiplied seven times to capture the same space as all validation metrics because users should be able to compare the two main components of overall performance globally. <|MaskedSetence|> If a user deems a metric useless for the given problem, they can deselect this metric and temporarily hide it. If we compare the total length of the stacked bars in the MetaStackVis teaser figure (b), cluster_0 contains only 10 instead of 55 base models and reaches the highest overall performance with Linear Discriminant Analysis as the metamodel..
|
**A**:
Figure 1: The investigation of all and cluster_2 comprising 12 base models.
**B**: The last metric is the average predicted probability for all test instances.
**C**: The legend for this view maps the metrics to the different color encodings.
| A: ABC | B: ABC | C: ABC | D: CAB | label: Selection 3 |
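One reading of the Overall Performance aggregation described in the row above is a simple two-step average, sketched below with made-up metric values: take the absolute Matthews coefficient as a percentage, average the seven validation metrics, then average that with the confidence (equivalently, divide the sum of the two components by 2).

```python
# Worked example of one reading of the "Overall Performance" aggregation
# described above. All metric values are made up for illustration.

metrics = {
    "Accuracy": 84.0, "Precision": 81.0, "Recall": 79.0, "ROC AUC": 88.0,
    "Geometric Mean": 80.0, "Matthews CorrCoeff": -0.62, "F1 Score": 80.0,
}
confidence = 76.0  # average predicted probability over the test set (in %)

# Matthews correlation coefficient mapped to an absolute 0-100% scale.
metrics["Matthews CorrCoeff"] = abs(metrics["Matthews CorrCoeff"]) * 100.0

validation_avg = sum(metrics.values()) / len(metrics)
overall = (validation_avg + confidence) / 2.0
print(f"validation average = {validation_avg:.1f}%, overall = {overall:.1f}%")
```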
2.4 Proof of the main result
In [13], the coincidence of posterior modes with minimizers of the Onsager–Machlup functional is stated in a separable Banach space setting as Theorem 3.5 and Corollary 3.10. However, the proof given in [13] is incomplete. <|MaskedSetence|> On the other hand, even in the Hilbert space case the proof contains gaps that are closed in this work. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: Our proof follows the fundamental approach of [13], incorporating corrections where necessary.
**B**: On the one hand, parts of the proof only hold for separable Hilbert spaces, as pointed out in section 1.1 of [22].
**C**: Most of these corrections have been introduced in [24], while some were just recently found necessary in [22].
The outline of the proof of Theorem 2.10 can be described as follows..
| A: BAC | B: BAC | C: BAC | D: ACB | label: Selection 1 |
The rest of this paper is organized as follows. In Section 2, we first establish the strong duality with respect to $y$ under some feasibility assumption for nonconvex-concave minimax problem (P) with linearly coupled equality or inequality constraints. Then, we propose a primal-dual alternating proximal gradient (PDAPG) algorithm for nonsmooth nonconvex-(strongly) concave minimax problem with coupled linear constraints, and then prove its iteration complexity. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
|
**A**: Numerical results in Section 4 show the efficiency of the two proposed algorithms.
**B**: In Section 3, we propose another primal-dual proximal gradient (PDPG-L) algorithm for nonsmooth nonconvex-linear minimax problem with coupled linear constraints, and also establish its iteration complexity.
**C**: Some conclusions are made in the last section.
.
| A: BAC | B: BAC | C: BAC | D: BAC | label: Selection 4 |
6.8 Target Group
In most cases, the visualization tools cover at least the target group of domain experts/practitioners [EGG∗12, FMH16, FCS∗20, GNRM08, HNH∗12, KPN16]. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Providing different prediction capabilities allows for assessing the predictions during the model selection process via an interactive visual environment. Biologists and doctors, for instance, are interested in being able to compare data structures and receive guidance on where to focus. Ma et al. [MXLM20] employ a multi-faceted visualization schema intended to aid the analysis of ML experts for the domain of adversarial attacks.
.
|
**A**: Beginners/novice users [JSO19, MXQR19, SRM∗15, TLRB18] are rarely considered.
**B**: Then, other target groups such as ML experts [JC17b, KJR∗18, SSK10, WLN∗17] and developers are in the focus of the authors [KFC16, Mad19, RL15b, YZR∗18] (commonly together).
**C**: To give two examples, Bögl et al. [BAF∗14] support with TiMoVA-Predict several types of predictions with a holistic VA approach that focuses on domain experts.
| A: BCA | B: BAC | C: BAC | D: BAC | label: Selection 4 |
We apply BR-DTRs to analyze the data from the DURABLE study (Fahrbach et al., 2008). The DURABLE study is a two-phase trial designed to compare the safety and efficacy of insulin glargine versus insulin lispro mix in addition to oral antihyperglycemic agents in T2D patients. During the first phase, patients were randomly assigned to the daily insulin glargine group or the twice-daily insulin lispro mix 75/25 (LMx2) group for 24 weeks. By the end of 24 weeks, patients who failed to reach an HbA1c level lower than 7.0% would enter the second-phase intensification study and be randomly reassigned to either basal-bolus therapy (BBT) or LMx2 (for the insulin glargine group), or to either BBT or three-times-daily insulin lispro mix 50/50 (MMx3) therapy (for the LMx2 group). <|MaskedSetence|> A flowchart of the study design of the DURABLE trial is provided in Appendix D for reference.
In the DURABLE study, the major objective is lowering patients’ endpoint blood glucose level, measured by HbA1c, and in this analysis, we use the reduction in HbA1c at 24 weeks relative to baseline as the reward outcome for the first stage and the reduction in HbA1c at 48 weeks relative to week 24 as the reward outcome for the second stage. The risk outcome is set to be the hypoglycemia frequency encountered by patients, which reflects the potential risk induced by the assigned treatment. <|MaskedSetence|> To accommodate these patients in our proposed framework, we make the additional assumption that for patients in the maintenance study, their first-stage treatment is already optimal and should not be adjusted. This assumption is consistent with the general guidance of treating T2D patients suggested by ADA where the patient’s treatment should be unchanged if the patient’s HbA1c level can be maintained lower than 7% (American Diabetes Association, 2022). <|MaskedSetence|> Consequently, the second stage analysis will only involve patients who entered the intensification study, and only in the first stage will all patients be included in the analysis. In the first stage estimation, for patients in the maintenance study, their future reward outcome (reduction of HbA1c) is assumed to be maintained. That is, in Stage I, the reward outcome becomes.
|
**A**: Under this assumption, in the second stage, patients in the maintenance study are already receiving optimal treatment so it is not necessary to estimate their optimal decision rules.
**B**: Patients who achieved an HbA1c level lower than 7% and entered the maintenance study would not be re-randomized with new treatments during the second stage.
**C**: Any other patients who reached HbA1c 7.0% or lower would enter the maintenance study and keep the initial therapy for another 2 years.
| A: CBA | B: CBA | C: CBA | D: ACB | label: Selection 2 |
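As a hedged illustration of how the stagewise reward and risk outcomes described above could be assembled from longitudinal records, consider the toy snippet below. The patient records, threshold, and handling of maintenance patients are hypothetical and do not reproduce the DURABLE data or the authors' exact construction.

```python
# Small worked example of the stagewise outcomes described above. The records,
# the 7% threshold handling, and the treatment of maintenance patients are
# hypothetical illustrations only.

patients = [
    # HbA1c at baseline, week 24 and week 48, plus hypoglycemia counts per stage
    {"id": 1, "hba1c_0": 9.1, "hba1c_24": 7.6, "hba1c_48": 7.0, "hypo": (2, 4)},
    {"id": 2, "hba1c_0": 8.4, "hba1c_24": 6.8, "hba1c_48": 6.8, "hypo": (1, 0)},
]

for p in patients:
    maintenance = p["hba1c_24"] < 7.0             # reached target at week 24
    reward_stage1 = p["hba1c_0"] - p["hba1c_24"]  # reduction since baseline
    risk_stage1 = p["hypo"][0]
    if maintenance:
        # Maintenance patients keep their therapy; only stage I is analyzed,
        # and their later reduction is assumed to be maintained.
        print(p["id"], "stage I only:", reward_stage1, risk_stage1)
    else:
        reward_stage2 = p["hba1c_24"] - p["hba1c_48"]  # reduction since week 24
        risk_stage2 = p["hypo"][1]
        print(p["id"], "stages I+II:", reward_stage1, reward_stage2,
              risk_stage1, risk_stage2)
```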
<|MaskedSetence|> We commence by examining scenarios in Sec. V.1 under the premise that the function $\pmb{\mathcal{F}}$ is accessible for analytical estimation. In this section, we also employ training data drawn from a uniform distribution $\mathcal{U}(0,1)$, ensuring uniform coverage of the state space. This approach minimizes model bias towards any specific regions within the state space. In Sec. V.2, we relax the assumption of i.i.d. training data to explore the models’ ability to predict and to forecast dynamics given more realistic training conditions. Additionally, in this section we employ statistical significance testing to quantify the models’ confidence in the prediction. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: We end this section with a discussion on considerations regarding training with noisy and irregularly sampled time series data.
**B**:
V Approximation, prediction and long-range forecasting of dynamics
In this section, we detail the primary findings of this paper, focusing on neural approximation, prediction and forecasting of complex network dynamics.
**C**: The training details are listed in Sec. III of SI..
| A: CBA | B: BAC | C: BAC | D: BAC | label: Selection 4 |
Several architectures using the Cox partial log-likelihood have been proposed for conventional tabular datasets. DeepSurv is a neural network-based proportional hazards model (Katzman et al., 2018), which has been used to develop prediction models for several clinical scenarios (Yu et al., 2022) (Bice et al., 2020) (Kim et al., 2019) (She et al., 2020). <|MaskedSetence|> (2018) use gene expression data as input to a single hidden layer neural network to model cancer survival; an approach which is outperformed by an autoencoder with Cox regression model (Yin et al., 2022) and a convolutional model with residual networks (Huang et al., 2020). In parallel, Hao et al. <|MaskedSetence|> As a more complex model for gene expression, Meng et al. (2022) propose to first fit a generative adversarial network (GAN) to the expression data before transferring the weights to a Cox neural network. These methods provide data-agnostic architectures in clinical settings that do not take advantage of known relationships between features.
Other approaches use the prior knowledge of existing structure between features to improve predictive performance. For example,
Chen et al. (2023) transfer the embeddings of pre-trained convolutional layers on histopathological images to an autoencoder, the central layer of which is passed into a linear Cox regression to predict cervical cancer survival. Histopathological images were also used to predict lung cancer survival, using either convolutional layers (Zhu et al., 2017) or a convolutional transformer and Siamese network model with a modified Cox partial log-likelihood loss function that accounts for aleatoric uncertainty (Tang et al., 2023). <|MaskedSetence|> (2023) propose a pathway-informed model integrating metabolomic data and known metabolic pathways to predict survival in patients with glioma. To assess cardiovascular risk, Barbieri et al. (2022) propose a sex-specific neural network model using bidirectional gated recurrent units to represent recent clinical history. These architectures focus on structure within unimodal datasets and extend the concepts proposed by data-agnostic architectures..
|
**A**: (2021) assess simple Cox neural networks in a high dimensional setting against random forest and penalized regression approaches.
**B**: Kaynar et al.
**C**: Ching et al.
| A: CAB | B: CAB | C: CAB | D: CAB | label: Selection 4 |
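Because the row above revolves around architectures trained with the Cox partial log-likelihood, a minimal sketch of that loss may be useful. It is written for untied event times and plain risk scores; DeepSurv-style models simply produce the risk score with a neural network. The data below are made up.

```python
import numpy as np

# Sketch of the negative Cox partial log-likelihood used by DeepSurv-style
# models, written for plain risk scores (no ties handling / Breslow correction).
# The scores and survival data below are made up for illustration.

def neg_cox_partial_log_likelihood(risk, time, event):
    """risk: model output per subject; time: follow-up time; event: 1 if the
    event was observed, 0 if censored. Lower is better for a fitted model."""
    order = np.argsort(-time)                   # sort by descending time
    risk, event = risk[order], event[order]
    log_cumsum = np.logaddexp.accumulate(risk)  # log sum over the risk set
    return -np.sum((risk - log_cumsum)[event == 1])

risk = np.array([0.2, -0.4, 1.1, 0.3])
time = np.array([5.0, 8.0, 2.0, 6.0])
event = np.array([1, 0, 1, 1])
print(neg_cox_partial_log_likelihood(risk, time, event))
```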
<|MaskedSetence|> Local interpretability generally refers to the ability to explain the ML predictions for a specific data point rather than the whole model. <|MaskedSetence|> However, the weight and effect of a feature are specific to the individual data point being explained and may not be generalizable to other data points or the model as a whole. Figure 14a shows the local exact interpretability in terms of weight and effect of ReLU-DNN. It shows that the cycle significantly impacts the RUL prediction. <|MaskedSetence|> Both the local feature and effect confirm the impact of the cycle on RUL prediction. Figure 14c,d show the local interpretability of FIGS and tree. Both algorithms confirm the relationship between the cycle and RUL.
.
|
**A**:
5.2.1 Local Interpretability
Figure 14 shows the local exact interpretability in terms of weight and effect.
**B**: The weight and effect of a feature can provide insight into the most important prediction factors.
**C**: Figure 14b shows the local interpretability of EBM for local feature importance (left in Figure 14b) and local effect importance (right in Figure 14b).
| A: ABC | B: ABC | C: CBA | D: ABC | label: Selection 2 |
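The weight-versus-effect distinction discussed in the row above can be shown with a tiny example: for a locally linear model (such as a ReLU network within one activation region), the local effect of a feature is its local weight times the feature value at the explained point. The feature names, weights, and values below are invented for illustration.

```python
import numpy as np

# Sketch of "local weight" versus "local effect" for a single prediction.
# For a piecewise-linear model such as a ReLU network, the prediction is
# locally f(x) = w(x) . x + b(x), so exact local weights exist; a plain
# linear model stands in here, and all numbers are made up.

feature_names = ["cycle", "sensor_4", "sensor_11"]
weights = np.array([-1.8, 0.6, -0.3])     # local weights at this data point
x = np.array([210.0, 0.42, 1.3])          # the data point being explained
bias = 380.0

effects = weights * x                     # local effect = weight * feature value
prediction = bias + effects.sum()

for name, w, e in zip(feature_names, weights, effects):
    print(f"{name:10s}  weight={w:6.2f}  effect={e:8.2f}")
print("predicted RUL:", prediction)
```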
The architecture of DeepONet is defined as follows: both the branch and trunk networks are fully connected neural networks. The branch network has a size of [100, 40, 40], while the trunk network has a size of [1, 40, 40]. The activation functions utilized in both networks are Rectified Linear Units (ReLU). Weight initialization is performed using the Glorot initialization method, which ensures effective initialization of the network parameters. It is important to note that the same activation functions and initialization methods are employed in the subsequent problems discussed in Sections 5.2 and 5.3.
The Adam optimization algorithm is chosen as the optimization method during the training process. <|MaskedSetence|> The model is trained for 10,000 iterations. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: The learning rate for training is set to 0.001, which controls the step size during gradient descent and affects the convergence speed and accuracy of the training process.
**B**: These choices of optimization algorithm, evaluation metric, and learning rate are consistent across the problems discussed in this paper, ensuring a fair and comparable evaluation of the models.
.
**C**: The mean $L^{2}$ relative error is used as the evaluation metric to measure the performance of the model.
| A: CBA | B: CAB | C: CAB | D: CAB | label: Selection 4 |
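A hedged PyTorch sketch of the configuration described in the row above: branch network of sizes [100, 40, 40], trunk network of sizes [1, 40, 40], ReLU activations, Glorot initialization, Adam with learning rate 0.001, and a mean relative L2 error metric. The data are random stand-ins and the loop runs only a few steps; the paper's problems and full 10,000-iteration training are not reproduced.

```python
import torch
import torch.nn as nn

# Sketch of a DeepONet with the configuration described above. Data loading
# and the actual operator-learning problems are omitted; inputs are random.

def mlp(sizes):
    layers = []
    for i in range(len(sizes) - 1):
        linear = nn.Linear(sizes[i], sizes[i + 1])
        nn.init.xavier_uniform_(linear.weight)      # Glorot initialization
        nn.init.zeros_(linear.bias)
        layers.append(linear)
        if i < len(sizes) - 2:
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

class DeepONet(nn.Module):
    def __init__(self):
        super().__init__()
        self.branch = mlp([100, 40, 40])   # input function sampled at 100 sensors
        self.trunk = mlp([1, 40, 40])      # evaluation coordinate

    def forward(self, u, y):
        # Dot product of branch and trunk embeddings gives the operator output.
        return (self.branch(u) * self.trunk(y)).sum(dim=-1, keepdim=True)

def mean_relative_l2(pred, target):
    return torch.norm(pred - target) / torch.norm(target)

model = DeepONet()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

u = torch.randn(64, 100)       # 64 input functions sampled at 100 points
y = torch.rand(64, 1)          # evaluation locations
target = torch.randn(64, 1)    # stand-in operator outputs
for _ in range(3):             # the paper trains for 10,000 iterations
    optimizer.zero_grad()
    loss = torch.mean((model(u, y) - target) ** 2)
    loss.backward()
    optimizer.step()
print("relative L2 error on this batch:",
      mean_relative_l2(model(u, y), target).item())
```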
<|MaskedSetence|> <|MaskedSetence|> The new divergence can be applied to a variety of time series and sequential decision making applications in a versatile way. With regard to time series clustering, it demonstrated obvious performance gain for multivariate or high-dimensional time series. <|MaskedSetence|> We additionally analyzed two special cases of conditional CS divergence and illustrated their implications in other challenging areas such as time series causal discovery and the loss function design of deep regression models.
.
|
**A**: With regard to reinforcement learning without explicit rewards, it outperforms the popular maximum entropy strategy and encourages significantly more exploration of states that have not been visited often enough for the agent to become familiar with them.
**B**: 6 Conclusions and Implications for Future Work
We developed the conditional Cauchy-Schwarz (CS) divergence to quantify the closeness between two conditional distributions from samples, which can be elegantly evaluated with kernel density estimator (KDE).
**C**: Our conditional CS divergence enjoys simultaneously relatively lower computational complexity, differentiability, and faithfulness guarantee.
| A: BCA | B: BCA | C: BCA | D: BCA | label: Selection 3 |
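Finally, the kernel-based evaluation highlighted in the last row can be illustrated with the unconditional Cauchy-Schwarz divergence, whose KDE estimator needs only three averaged Gram matrices; the conditional variant discussed above builds on the same ingredients. The bandwidth and toy data below are assumptions for illustration.

```python
import numpy as np

# Sketch of the (unconditional) Cauchy-Schwarz divergence between two sample
# sets, estimated with Gaussian kernel density estimators:
#   D_CS(p, q) = -2 log \int p q + log \int p^2 + log \int q^2.
# The bandwidth choice and the toy data are illustrative assumptions.

rng = np.random.default_rng(7)

def gaussian_gram(a, b, sigma):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def cs_divergence(x, y, sigma=1.0):
    cross = gaussian_gram(x, y, sigma).mean()
    self_x = gaussian_gram(x, x, sigma).mean()
    self_y = gaussian_gram(y, y, sigma).mean()
    return -2.0 * np.log(cross) + np.log(self_x) + np.log(self_y)

x = rng.normal(0.0, 1.0, size=(500, 2))
y_same = rng.normal(0.0, 1.0, size=(500, 2))
y_shifted = rng.normal(1.5, 1.0, size=(500, 2))
print("similar distributions:  ", cs_divergence(x, y_same))
print("different distributions:", cs_divergence(x, y_shifted))
```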