robench-2024b · Collection · 48 items
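Each row below pairs a shuffled passage (`shuffled_text`, with segments tagged **A**/**B**/**C**) with four candidate orderings in columns A–D and a `label` naming one of those columns. A minimal sketch of decoding one row, assuming the data is hosted on the Hugging Face Hub (the dataset id below is hypothetical) and that `label` ("Selection 1" through "Selection 4") indexes columns A–D:

```python
# Minimal sketch: rebuild the unshuffled passage from one row.
# Assumptions (not confirmed by this page): the dataset id is hypothetical,
# and "Selection k" picks the k-th of the option columns A-D.
import re

from datasets import load_dataset


def reorder(row: dict) -> str:
    """Return the passage with segments placed in the selected order."""
    # Split "**A**: ... **B**: ... **C**: ..." into {"A": ..., "B": ..., "C": ...}.
    parts = re.split(r"\*\*([ABC])\*\*:", row["shuffled_text"])
    segments = {tag: text.strip() for tag, text in zip(parts[1::2], parts[2::2])}
    # "Selection 3" -> third option column, i.e. "C".
    column = "ABCD"[int(row["label"].split()[-1]) - 1]
    # The chosen column holds an ordering string such as "BCA".
    return " ".join(segments[tag] for tag in row[column])


ds = load_dataset("example-org/robench-2024b", split="train")  # hypothetical id
print(reorder(ds[0]))
```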
| shuffled_text (string, 267–5.79k chars) | A (6 classes) | B (6 classes) | C (6 classes) | D (6 classes) | label (4 classes) |
---|---|---|---|---|---|
**A**: (2005, Eq. 28) provide a closed form approximation of quantile $q>0.95$ for the sum of Pareto-distributed variables.³The Zaliapin et al**B**: (2005) preprint has a typo, which in the published version has been fixed by redefining the meaning of $q$ just for this equation, but we stick with the more natural definition of $q$ and rewrite the equation. To get the corresponding approximation for the mean, we simply divide by $S$ to get
**C**: Zaliapin et al
| BCA | ACB | BAC | CBA | Selection 1 |
**A**: To enhance the visualization, we exclude street segments with wealth estimates greater than the 95th percentile ($6,272,010)**B**: **C**:
Figure 2: Counts of residential burglary (slightly jittered) versus wealth estimate by neighborhood for each street segment; illustrates the nonconstant effects of wealth on crime
| BCA | BAC | CBA | ACB | Selection 1 |
**A**: In this paper, we extend the recent solutions of sparse linear mixed model [8, 9] that can correct confounding factors and perform variable selection simultaneously further to account for the relatedness between different responses**B**: We propose the tree-guided sparse linear mixed model, namely TgSLMM, to correct the confounder and incorporate the relatedness among response variables simultaneously. With TgSLMM, we are capable of improving the performance of the variable selection when considering the statistical criterion, incorporating the complex tree-based correlation structure in the traits under our consideration. Eventually, we examine our model through plenty of repeated experiments and show that our method is superior to other existing approaches and able to discover the real genome association in the real data set.
**C**: Thus, to improve the performance of the variable selection, incorporating the complex correlation structure in the responses is under our consideration
| CBA | BCA | ACB | ABC | Selection 2 |
**A**: The insulin intakes tend to be higher in the evening, when basal insulin is used by most of the patients**B**: 3 times the average insulin dose of others in the morning.**C**: The only difference happens to patients 10 and 12, whose intakes are earlier in the day.
Further, patient 12 takes approx
| ACB | CAB | BCA | BAC | Selection 1 |
**A**: Model Implied Instrumental Variable SEM (MIIVSEM; Bollen (\APACyear1996)) uses the structural information from the model to identify variables within the system that can act as instruments for other variables, rather than recruit additional auxiliary variables from outside of the system. We illustrate this approach by way of example.
**B**: In a structural equation modelling framework however, we have multiple sets of equations that together describe the relations between all the variables in the system**C**: Up till now, we have been dealing with a single equation system, which necessitates the selection of auxiliary instrumental variables
| BCA | BCA | CBA | BCA | Selection 3 |
**A**: This can be improved with better dynamics models and, while generally common with model-based RL algorithms, suggests an important direction for future work. Another, less obvious limitation is that the performance of our method generally varied substantially between different runs on the same game**B**:
While SimPLe is able to learn more quickly than model-free methods, it does have limitations. First, the final scores are on the whole lower than the best state-of-the-art model-free methods**C**: The complex interactions between the model, policy, and data collection were likely responsible for this. In future work, models that capture uncertainty via Bayesian parameter posteriors or ensembles (Kurutach et al., 2018; Chua et al., 2018) may improve robustness.
| CBA | CAB | ACB | BAC | Selection 4 |
**A**: They compared their results with those of Bracken and Fricker, and the results were found to be different. They concluded that the logarithmic and linear-logarithmic forms fit more appropriately as compared to the linear form found by Bracken**B**: They also concluded that the Bayesian approach is more appropriate to make inferences for battles in progress as it uses the prior information from experts or previous battles. They have applied the Gibbs sampling approach along with Monte Carlo simulation for deriving the distribution patterns of the parameters involved.
**C**: Wiper, Pettit and Young [44] applied Bayesian computational techniques to fit the Ardennes Campaign data. They studied stochastic form of Lanchester model and enquired whether there is role of any attacking and defending army on the number of casualties of the battle
| ACB | BAC | CAB | BCA | Selection 4 |
**A**: For example, we can put the momentum term on the server**B**: However, these ways lead to worse performance than the way adopted in this paper. More discussions can be found in Appendix A.
**C**: There are some other ways to combine momentum and error feedback
| BCA | ABC | CBA | ACB | Selection 1 |
**A**:
, where $*$ is the convolution³We use convolution instead of cross-correlation only as a matter of compatibility with previous literature and computational frameworks**B**: Using cross-correlation would produce the same results and would not require flipping the kernels during visualization**C**: operation.
| CAB | ACB | ABC | BAC | Selection 3 |
**A**: In this context, we study randomization tests on regression residuals, which we will refer to as residual randomization tests**B**: In Section 4, we show how Condition (C1) can be simplified when testing the significance of coefficients in linear regression**C**: These procedures bear strong similarities to permutation tests of significance (Janssen, 1997; DiCiccio and Romano, 2017) and several bootstrap variants (Freedman and Lane, 1983a; Wu, 1986; Davidson and Flachaire, 2008).
Our theory provides simple conditions for the asymptotic validity of residual randomization tests under the invariant hypothesis, while still being able to leverage existing theoretical results under the limit hypothesis. As a useful byproduct, our analysis clarifies the trade-offs between data invariance assumptions of the form described by Equation (1) and i.i.d. assumptions underlying classical bootstrap theory.
| CAB | ABC | ACB | BAC | Selection 4 |
**A**: This paper also contributes to our knowledge of the blood donations market (see Slonim et al., (2014) for an introduction to this market). This market is ideal for studying charitable giving**B**: For instance, Lacetera et al., (2012, 2013) and Goette and Stutzer, (2020) explore the effect of incentives on blood donations. Craig et al., (2017) argue that the mechanism that causes the delay in return is due to donors adjusting their expectations of the cost to make a donation, with longer current wait times causing them to expect longer future wait times**C**: A similar mechanism could also provide an explanation for the current results. Specifically, a deferral, ceteris paribus, could cause donors to update their expectations of the probability of them being able to make a successful future donation, thus reducing their willingness to come back again.
| ACB | BAC | ABC | CBA | Selection 3 |
**A**: In this paper, we introduce and conduct an empirical analysis of an alternative approach to mitigate variance and overestimation phenomena using Dropout techniques**B**: Our main contribution is an extension to the DQN algorithm that incorporates Dropout methods to stabilize training and enhance performance**C**: The effectiveness of our solution is demonstrated through computer simulations in a classic control environment.
| BCA | ABC | BCA | BCA | Selection 2 |
**A**: NRFI with and without the original data is shown for different network architectures**B**: The smallest architecture has 2 neurons in both hidden layers and the largest 128**C**: For NRFI (gen-ori), we can see that a network with 16 neurons in both hidden layers (NN-16-16) is already sufficient to learn the decision boundaries of the random forest and achieve the same accuracy. When fewer training samples are available, NN-8-8 already has the required capacity.
In the following, we will further analyze the accuracy and number of network parameters.
| ABC | CAB | CBA | BCA | Selection 1 |
**A**: We refer to the introduction of the latter article for further**B**: SBM and OBM and their local time have been recently investigated in the context of option pricing, as for instance in [20] and [16].
In [37] it is shown that a time series of threshold diffusion type captures leverage and mean-reverting effects**C**: Some models in financial mathematics and econometrics are threshold diffusions, for instance continuous-time versions of SETAR (self-exciting threshold auto-regressive) models, see e.g. [15, 41]
| BAC | CAB | BAC | CBA | Selection 4 |
**A**: However, such computational efficiency guarantees rely on the regularity condition that the state space is already well explored**B**:
A line of recent work (Fazel et al., 2018; Yang et al., 2019a; Abbasi-Yadkori et al., 2019a, b; Bhandari and Russo, 2019; Liu et al., 2019; Agarwal et al., 2019; Wang et al., 2019) answers the computational question affirmatively by proving that a wide variety of policy optimization algorithms, such as policy gradient (PG) (Williams, 1992; Baxter and Bartlett, 2000; Sutton et al., 2000), natural policy gradient (NPG) (Kakade, 2002), trust-region policy optimization (TRPO) (Schulman et al., 2015), proximal policy optimization (PPO) (Schulman et al., 2017), and actor-critic (AC) (Konda and Tsitsiklis, 2000), converge to the globally optimal policy at sublinear rates of convergence, even when they are coupled with neural networks (Liu et al., 2019; Wang et al., 2019)**C**: Such a condition is often implied by assuming either the access to a “simulator” (also known as the generative model) (Koenig and Simmons, 1993; Azar et al., 2011, 2012a, 2012b; Sidford et al., 2018a, b; Wainwright, 2019) or finite concentratability coefficients (Munos and Szepesvári, 2008; Antos et al., 2008; Farahmand et al., 2010; Tosatto et al., 2017; Yang et al., 2019b; Chen and Jiang, 2019), both of which are often unavailable in practice.
| ABC | CBA | BAC | BCA | Selection 3 |
**A**: In Wu et al**B**: (2018b), weights, activations, weight gradients, and activation gradients are subject to customized quantization schemes that allow for variable bit widths and facilitate integer arithmetic during training and testing.
In contrast to Zhou et al**C**: (2016), the work of Wu et al. (2018b) accumulates weight changes to low-precision weights instead of full-precision weights.
| BCA | CAB | ABC | BAC | Selection 3 |
**A**: Each axis maps the entire range of each dimension, from bottom to top. A simple example is given in Figure 4(b), where we can see that the dimensions of the selected points roughly appear at the intersection between two species, versicolor (brown) and virginica (orange).
**B**: The colors reflect the labels of the data with the same colors as in the overview (Subsection 4.2), when available, and the rest of the instances of the data—which are not selected—are shown with high transparency**C**: Apart from the adaptive filtering and re-ordering of the axes, we maintained a rather standard visual presentation of the PCP plot, to make sure it is as easy and natural as possible for users to inspect it
| BAC | BCA | CBA | BCA | Selection 3 |
**A**: Roughly speaking, the network embedding approaches can be classified into 2 categories: generative models [13, 14] and discriminative models [15, 16]**B**: The former tries to model a connectivity distribution for each node while the latter learns to distinguish whether an edge exists between two nodes directly.
In recent years, graph neural networks (GNN) [17], especially graph convolution neural networks (GCN) [18, 19], have attracted a mass of attention due to the success made in the neural networks area**C**: GNNs extend classical neural networks into irregular data so that the deep information hidden in graphs is exploited sufficiently. In this paper, we only focus on GCNs and its variants.
| ABC | BAC | ACB | CAB | Selection 1 |
**A**: In Section 4, the main result is provided. Section 5 presents a simulation study, highlighting the small sample properties and implementation of our proposed method. Section 6 provides the proof of the main theorem**B**:
The paper is organized as follows. Section 2 introduces and motivates the main regression problem in a high-dimensional additive model. Section 3 presents the estimation method**C**: The Appendix includes additional technical material. Appendix A presents a general result for uniform inference on a high-dimensional linear functional. Appendix B provides results in terms of uniform lasso estimation rates in high-dimensions which might be of independent interest. Computational details and additional simulation results are presented in Appendix C and Appendix D.
| BAC | CAB | ABC | BCA | Selection 1 |
**A**: To assist the knowledge generation, a comparison between the currently active stack against previously stored versions is important**B**: In general, this includes monitoring the historical process of the stacking ensemble, facilitating interaction and guidance (G4).**C**:
T4: Compare the results of two stages and receive feedback to guide interaction
| CAB | BCA | CAB | CBA | Selection 2 |
**A**: In §6.1, we introduce Q-learning and its mean-field limit**B**: In §6.2, we establish the global optimality and convergence of Q-learning. In §6.3, we further extend our analysis to soft Q-learning, which is equivalent to policy gradient.
**C**: In this section, we extend our analysis of TD to Q-learning and policy gradient
| ABC | CAB | BAC | BCA | Selection 4 |
**A**: Predicting a quantity for the long time scales which matter for the climate is a hard task, with a great degree of uncertainty involved**B**: Many efforts have been undertaken to model and control this and other uncertainties, such as the development of standardized scenarios of future development, called Shared Socio-economic Pathways (SSPs) [22, 30] or the use of model ensembles to tackle the issue of model uncertainty**C**: Given also the relative opaqueness and the complexity of IAMs, post-hoc diagnostic methods have been used, for instance with the purpose of performing Global Sensitivity Analysis. In fact, GSA methods can provide fundamental information to policymakers in terms of the relevance of specific factors over model outputs [17]. Moreover, the specific methodology employed in the paper [4] is able to detect both main and interaction effects with a very parsimonious experimental design, and to do so in the case of finite changes for the input variables.
| ABC | ACB | BAC | CAB | Selection 1 |
**A**: (2019); Chen et al. (2019, 2020b) consider the matrix factor model which is a special case of (1) with $M=2$ and propose estimation procedures based on the second moments.**B**: (2020) and the references therein provide a thorough review of recent advances and applications of multivariate factor models.
For 2nd-order tensor (or matrix) data, Wang et al**C**: Chapter 11 of Fan et al
| BAC | CAB | CBA | CAB | Selection 3 |
**A**: We train the model for 90 epochs**B**: As recommended in [32], we use a warm-up and polynomial learning rate strategy.**C**:
To further verify the superiority of SNGM with respect to LARS, we also evaluate them on a larger dataset ImageNet [2] and a larger model ResNet50 [10]
| BCA | ACB | ACB | CAB | Selection 1 |
**A**: Despite this mild difference in parameter identification, similar assumptions can be found in Zhou et al**B**: (2010).
Recall that we recommend the choice of commonly used cubic splines (i.e., $\zeta=4$) in Section 3 to implement our method when prior information about the Hölder smoothness condition of the broadcasted functions is unavailable.**C**: (1998) and Huang et al
| ABC | CBA | ACB | BCA | Selection 3 |
**A**: In addition, Ada-LSVI-UCB-Restart has a huge gain compared to LSVI-UCB-Unknown, which agrees with our theoretical analysis. This suggests that Ada-LSVI-UCB-Restart works well when the knowledge of global variation is unavailable. Our proposed algorithms not only perform systemic exploration, but also adapt to the environment change.
**B**: Ada-LSVI-UCB-Restart also outperforms the baselines because it also takes the nonstationarity into account by periodically updating the epoch size for restart**C**: From Figure 1, we see LSVI-UCB-Restart with the knowledge of global variation drastically outperforms all other methods designed for stationary environments, in both abruptly-changing and gradually-changing environments, since it restarts the estimation of the $Q$ function with knowledge of the total variations
| BCA | CBA | ACB | BCA | Selection 2 |
**A**: The framework is general and can utilize any DGM**B**: The key observation that we make is that the DR learning problem can be cast as a style transfer task [DBLP:conf/cvpr/GatysEB16], thus allowing us to borrow techniques from this extensively explored area.
**C**: Furthermore, even though it involves two stages, the end result is a single model which does not rely on any auxiliary models, additional hyper-parameters, or hand-crafted loss functions, as opposed to previous works addressing the problem (see Section LABEL:sec:related for a survey of related work)
| BAC | CAB | BAC | ACB | Selection 4 |
**A**: The results for the breast cancer data can be observed in Table 3**B**: However, the interpolating predictor selects over 80 times as many views as the lasso, and is less stable. Again, the interpolating predictor and NNFS do not align with the pattern that less sparsity is associated with higher stability.
**C**: The interpolating predictor and the lasso are the best performing meta-learners in terms of all three classification measures, with the interpolating predictor having higher test accuracy and H, and the lasso having higher AUC
| ABC | CBA | ACB | CAB | Selection 3 |
**A**: Though they establish a regret bound that does not depend on the aforementioned parameter $\kappa$, they work with an inaccurate version of the MNL model. More specifically, in the MNL model, the probability of a consumer preferring an item is proportional to the exponential of the utility parameter and is not linear in the utility parameter as assumed in Ou et al. [2018].
**B**: We note that Ou et al**C**: [2018] also consider a similar problem of developing an online algorithm for the MNL model with linear utility parameters
| CBA | CAB | BAC | ACB | Selection 2 |
**A**: Evolutionary optimization, however, has not experienced similar consideration by the InfoVis and VA communities, with the exception of more general visualization approaches such as EAVis [KE05, Ker06] and interactive evolutionary computation (IEC) [Tak01]**B**: Visualization tools have been implemented for sequential-based, bandit-based, and population-based approaches [PNKC21], and for more straightforward techniques such as grid and random search [LCW∗18]**C**: To the best of our knowledge, there is no literature describing the use of VA in hyperparameter tuning of evolutionary optimization (as defined in Section 1) with the improvement of performance based on majority-voting ensembles.
In this section, we review prior work on automatic approaches, visual hyperparameter search, and tools with which users may tune ML ensembles. Finally, we discuss the differences of such systems when compared to VisEvol in order to clarify the novelty of our tool.
| BCA | ACB | BAC | ACB | Selection 3 |
**A**:
Dolphins: this network consists of frequent associations between 62 dolphins in a community living off Doubtful Sound**B**: The network splits naturally into two large groups, females and males dolphins1 ; dolphinnewman, which are seen as the ground truth in our analysis.**C**: In the Dolphins network, a node denotes a dolphin, and an edge stands for companionship dolphins0 ; dolphins1 ; dolphins2
| ACB | CBA | CBA | CBA | Selection 1 |
**A**: Our Contribution**B**: Our contribution is twofold**C**: First, utilizing the optimal transport framework and the variational form of the objective functional, we propose a novel variational transport algorithmic framework for solving the distributional optimization problem via particle approximation.
In each iteration, variational transport first solves the variational problem associated with the objective to obtain an estimator of the Wasserstein gradient and then approximately implements Wasserstein gradient descent by pushing the particles.
| ACB | CAB | ABC | BAC | Selection 3 |
**A**: A use case present in a visual diagnosis tool revealed that feature generation involving the combination of two features is capable of a slight increase in performance [30]. The authors tested the same mathematical operations as in our system (i.e., addition, subtraction, multiplication, and division), but the generation was performed manually by the analysts**B**: Also, the decision for this action was based solely upon the similarity in those features’ distributions [30].
In FeatureEnVi, determining which features to match during feature generation is achieved by analyzing linear and nonlinear relations present in the data. For the former, one of the most well-known approaches is Pearson’s correlation coefficient between features and with the target variable [32, 33]**C**: For the latter, mutual information is used in our VA system (also used by May et al. [26], for instance). Features are added to capture the missing information and improve the classifier’s performance [34]. The magnitude of correlation with the dependent variable and in-between features is key to such decisions [35, 36]. However, the aforementioned VA tools work with regression problems and only support feature selection.
| BCA | CAB | ABC | ACB | Selection 3 |
**A**: Systems can exploit correlated variables even if they are not directly a part of the input e.g., through inferred zip codes [21], failing to work effectively on minority groups.
**B**: Systems designed to aid human resources, help with medical diagnosis, determine probation, or loan qualification could be biased against minority groups based on age, gender, religion, sexual orientation, ethnicity, or race [54, 8, 16, 13, 48]**C**: While this is a toy problem, in the real world, hidden minority patterns are common and failing on them can have dire consequences
| CBA | BCA | ABC | CAB | Selection 1 |
**A**: The idea is that GP emulators model the underlying function (in this case, the flow map) as a probabilistic distribution, and their sample paths provide a characterisation of the function throughout its entire domain. These sample paths extend the notion of merely being a distribution over individual function values at specific points, such as those generated from a multivariate normal distribution. The model output time series is then predicted relying on the Markov assumption; a sample path from the emulated flow map is drawn and employed in an iterative manner to perform one-step ahead predictions**B**: By repeating this procedure with multiple draws, we acquire a distribution over the time series whose mean and variance at a specific time point serve as the model output prediction and the associated uncertainty, respectively. However, obtaining a GP sample path, evaluable at any location in the domain for use in one-step ahead predictions, is infeasible. To address this challenge, we employ RFF [42], as described in Section 3. RFF is a technique for approximating the GP kernel using a finite number of its Fourier features. The resulting approximate GP samples, generated with RFF, are analytically tractable, providing both theoretical guarantees and computational efficiency.**C**:
This paper presents a novel data-driven approach for emulating complex dynamical simulators relying on emulating the numerical flow map over a short period of time. The flow map is a function that maps an initial condition to the solution of the system at a future time $t$. We emulate the numerical flow map of the system over the initial (short) time step via GPs
| BAC | BAC | ACB | BCA | Selection 4 |
**A**: [Pfister et al**B**: The traditional approach for testing independence is based on Pearson’s correlation coefficient; for instance, refer to Binet and Vaschide (1897), Pearson (1920), Spearman (1904), Kendall (1938). However, its lack of robustness to outliers and departures from normality eventually led researchers to consider alternative nonparametric procedures.
To overcome such a problem, a natural approach is to consider the functional difference between**C**: (2018)], [Chakraborty and Zhang (2019)]), graphical modeling ([Lauritzen (1996)], [Gan, Narisetty and Liang (2019)]), linguistics ([Nguyen and Eisenstein (2017)]), clustering (Székely and Rizzo, 2005), dimension reduction (Fukumizu, Bach and Jordan, 2004; Sheng and Yin, 2016)
| BCA | ABC | ABC | ACB | Selection 4 |
**A**: The ZOO oracle is often implicitly assumed to be included with the FOO oracle; we make this explicit here for clarity**B**:
The FOO and LMO oracles are standard in the FW literature**C**: Finally, the DO oracle is motivated by the properties of generalized self-concordant functions. It is reasonable to assume the availability of the DO oracle: following the definition of the function codomain, one could simply evaluate $f$ at $\mathbf{x}$ and assert $f(\mathbf{x})<+\infty$, thereby combining the DO and ZOO oracles into one oracle.
| ACB | BAC | ACB | BCA | Selection 2 |
**A**:
We prove these theorems via a new notion, pairwise concentration (PC) (Definition 4.2), which captures the extent to which replacing one dataset by another would be “noticeable,” given a particular query-response sequence**B**: This is thus a function of particular differing datasets (instead of worst-case over elements), and it also depends on the actual issued queries**C**: We then build a composition toolkit (Theorem 4.4) that allows us to track PC losses over multiple computations.
| BAC | BAC | CBA | ABC | Selection 4 |
**A**: The dropout probability was optimized over the interval $[0.05, 0.5]$ with steps of $0.05$.
**B**: An early stopping criterion, based on the minimum of the loss function, was employed to avoid overfitting**C**: Dropout ensemble: An ensemble average of 50 MC samples was used
| CBA | CAB | ACB | CAB | Selection 1 |
**A**:
Consider, for example, the time allocation problem faced by a researcher involved in multiple projects with different sets of coauthors. The researcher has a limited amount of time and concentration power to dedicate towards coauthored projects and her own research activity. Allocating attention to coauthored projects benefits the coauthors, but effort is a scarce resource, and spreading effective contributions across multiple projects reduces the impact on each project individually**B**: By creating efficiency gains, this type of collaboration can play a valuable role in the economic systems in which it is embedded. Apart from scientific collaboration, a number of relevant real-world examples fit into the basic framework of collaboration with congestion. On the internet, web pages make linking decisions; they are paid advertising revenue for traffic, and they are indexed by a search engine that favors well-linked pages, such as Google’s PageRank algorithm (Page et al., 1999)**C**: Linking to another site diverts a fraction of traffic and amplifies search appearances for the linked site. Another example concerns the networks of firms who form links by collaborating on R&D projects (see, e.g. Dasaratha, 2023), and determine how much intellectual property they will share with other firms through this process.
| ACB | ABC | ACB | BAC | Selection 2 |
**A**: Thus, we will formulate and consider requirements on the future crowdedness values up to $h$ steps ahead of a given time $t$**B**: Each requirement that we formulate can be checked for every areal unit $i=1,\dots,I$.**C**: While the framework is rather general, we are primarily interested in
the properties in a predictive context
| BCA | BAC | CAB | CAB | Selection 1 |
**A**: This process is somewhat elaborate and the reader is referred to [31] and [32] for all of the details**B**: However, for the exposition in this section it is sufficient to know what the properties of the operators $\mathbf{L}$ and $\mathbf{W}$ are.
**C**: The operators $\mathbf{L}$ and $\mathbf{W}$ are constructed from a multilevel decomposition of the location of predictors
| BAC | CAB | BCA | ABC | Selection 3 |
**A**: This notion is more general than the one initially introduced in Vapnik and**B**: While different definitions of VC classes exist, here we rely on the definition used in the previous references which is based on the covering numbers**C**: Guillou, 2002, 2001) or for multiple ordinary least-squares procedures (Plassier
et al., 2023)
| BCA | CAB | ABC | CBA | Selection 4 |
**A**: On the other hand, the volume of Factor 2 is typically small in the winter time**B**: The volumes of the night-life pattern in Factor 1 remain volatile**C**: It has many small-value outliers, mostly on the day before a business day (Sundays or the end of a holiday.) These can be seen more
clearly in the more detailed Figure 12, which shows the estimated factors of all the non-business days in Year 2011 (year 3), with vertical lines indicating the day before a business day (dashed lines for Sundays and solid lines for Mondays of long weekend when Tuesday is the start of business week.) This is again intuitively understandable, because people tend not to stay out too late if they need to work the next day.
| BAC | CAB | ABC | CAB | Selection 3 |
**A**: A threat in this case is the overconfidence effect and overinterpretation of the models’ capabilities by both domain-specific and ML experts, especially in noisy data scenarios. Despite that, we believe our first user study was an appropriate choice of method to understand preliminarily if VisRuler is usable and effective. In the future, we could further evaluate the particular designs of this multi-component system with both ML and domain experts.
**B**: However, as illustrated in Figure 3, our VA system is designed to be operated with a single workflow for two experts that most of the time are set apart and work independently. The prior knowledge and expertise of each group of experts is useful in specific steps of the collaboration schema, especially since they meet only in step 4, related to the decisions space exploration**C**: Evaluation. While we already conducted a task-based user study with 12 participants that tested the applicability and effectiveness of VisRuler, additional review sessions with experts could help us to validate our tool further
| CBA | BCA | BCA | BAC | Selection 1 |
**A**: The first approach leverages global parametrizations to represent surfaces, employing either an $L^2$ metric \parencitechung2008encoding, epifanio2014hippocampal, ferrando2020detecting or a non-Euclidean metric \parencitejermyn2012elastic, jermyn2017elastic, kurtek2015comprehensive, zhang2022lesa. The second approach, which is more closely related to the one adopted in this work, uses diffeomorphic deformation functions of the surfaces' embedding space \parencitevaillant2004statistics, younes2019shapes, arguillere2016diffeomorphic, allowing for the inclusion of topological constraints**B**: There has also been considerable work on the simpler setting of random surfaces that are not coupled with functional data. These efforts can be broadly grouped into three main approaches**C**: The third approach, prevalently used in neuroimaging studies, employs pre-specified or spectrum-based descriptors of shape \parencitereuter2006laplacebeltrami, im2008brain, wachinger2015brainprint, hazlett2017early, wang2017holistic, dong2019applying, dong2020applying. A critical drawback of the latter approach is the inability to uniquely map the discrete representations back to the original space of random surfaces.
| CBA | ABC | BCA | BAC | Selection 4 |
**A**:
In Table 1, for networks with known memberships or $K$, their ground truth and $K$ are suggested by the original authors or data curators. For the Gahuku-Gama subtribes network, it can be downloaded from http://konect.cc/networks/ucidata-gama/ and its node labels are shown in Figure 9 (b) [29]. For the Karate-club-weighted network, it can be downloaded from http://vlado.fmf.uni-lj.si/pub/networks/data/ucinet/ucidata.htm#kazalo and its true node labels can be downloaded from http://websites.umich.edu/~mejn/netdata/. For the Slovene Parliamentary Party network, it can be downloaded from http://vlado.fmf.uni-lj.si/pub/networks/data/soc/Samo/Stranke94.htm**B**: For US Top-500 Airport Network, it can be downloaded from https://toreopsahl.com/datasets/#online_social_network. For Political blogs, its adjacency matrix and true node labels can be downloaded from http://zke.fas.harvard.edu/software/SCOREplus/Matlab/datasets/. For the Condensed matter collaborations 1999 (Cond-mat-1999 for short) data, it can be downloaded from http://websites.umich.edu/~mejn/netdata/. Cond-mat-1999 has 16726 nodes and only 13861 nodes fall in the largest connected component which is the one we focus on in this paper.**C**: For Train bombing, Les Misérables, and US airports, they can be downloaded from http://konect.cc/networks/ (see also [33]). The original US airports network has 1574 nodes and it is directed. We make it undirected by letting the weight of an edge be the summation of the number of flights between two airports. We then remove two airports that have no connections with any other airport
| CAB | BCA | ACB | CAB | Selection 3 |
**A**: In Section 4 we illustrate the use of our model for semiparametric CCA on simulated datasets and apply the model to two real datasets: one containing measurements of climate variables in Brazil, and one containing monthly stock returns from the materials and communications market sectors. We conclude with a discussion of possible extensions to this work in Section 5. By default, roman characters referring to mathematical objects in this article are italicized. However, where necessary, we use italicized and un-italicized roman characters to distinguish between random variables and elements of their sample spaces.
**B**: In Section 3, we define the multirank likelihood and use it to develop a Bayesian inference strategy for obtaining estimates and confidence regions for the CCA parameters. We then discuss the details of the MCMC algorithm allowing us to simulate from the posterior distribution of the CCA parameters**C**: In the first part of Section 2 of this article, we describe a CCA parameterization of the multivariate normal model for variable sets, which separates the parameters describing between-set dependence from those determining the multivariate marginal distributions of the variable sets. We then introduce our model for semiparametric CCA, a Gaussian transformation model whose multivariate margins are parameterized by cyclically monotone functions
| CBA | ABC | ACB | ACB | Selection 1 |
**A**: The authors also extend their sincere appreciation to the anonymous reviewers for their insightful comments, which have contributed to the enhancement of the quality of this paper.**B**:
The first author was partially supported by JSPS KAKENHI Grant Numbers JP21K03358 and JST CREST JPMJCR14D7, Japan**C**: The second author was supported by JSPS KAKENHI Grant Number JP16K00036
| CAB | ABC | ACB | CBA | Selection 1 |
**A**: However, their methods cannot cover the regime when the expected degree is $\Omega(1)$ due to the lack of concentration**B**: Additionally, [72] proposed Projected Tensor Power Method as the refinement stage to achieve strong consistency, as long as the first stage partition is partially correct, as is ours.
**C**: In subsequent works [25, 71] we proposed algorithms to achieve weak consistency
| CAB | BCA | BAC | ABC | Selection 2 |
**A**: Let $n\geq 2$ and assume that the result holds for all inputs of length $n-1$. We shall consider two cases.
**B**: We shall use induction on $n$**C**: The result is trivial if $n=1$, since both sides of (3.3) are 0
| BCA | BCA | CAB | CBA | Selection 3 |
**A**: From the other sets of controls emerges that specific features of studies included in the MRA differently explain the diversity in the results within clusters. The positive coefficients of controls for corridors such as Internal and Urbanization state that people respond to adverse climatic change with increased internal migration**B**: The only exception is for studies included in Cluster 3, this is the most heterogeneous cluster of most recent papers, where heterogeneous approaches (micro-and macro-level and type of migration) lead to a large heterogeneity in outcomes, varying according to different channels explored. Findings obtained when mobility is measured by Flows seem to be lower in the overall sample**C**: In macroeconomic literature, usually, the measurement of migration is a stock variable, since it is generally easier to find and measure the number of foreign citizens born or resident in a country at any given time. Data on flow variables and migration rates, or the number of people who have moved from an origin to a destination in a specific period, are less available, and analyses often rely on estimates and computations of this data. Therefore, the opposite sign of the coefficient of the variable Flows in Cluster 1 is not surprising since this cluster collects all micro-level studies (where the migration variable refers to the movements of individuals as a unit, based on surveys).
| ACB | ABC | CAB | ACB | Selection 2 |
**A**: (2017)**B**: More precisely,
these authors established the third term on the right-hand side in**C**: The result in Theorem 4 for $s\geq 1/2$ (that is, $2k+2\geq d$) was already derived in Sadhanala et al
| ABC | BCA | BAC | ABC | Selection 2 |
**A**:
In many cases, the degree sequence is the only information available and many other important properties are constrained by it**B**: However, the degree may carry confidential and sensitive information, such as sexually transmitted diseases [Helleringer and Kohler (2007)]**C**: To address this, we can add noise to the degrees. For example, Hay et al. (2009) proposed efficient algorithms for releasing and
| ACB | CBA | CBA | ABC | Selection 4 |
**A**:
We have applied our methodologies using practical cross-covariance choices such as models of coregionalization built on independent stationary covariances**B**: Recent work (Jin et al., 2021) highlights that DAG choice must be made carefully when considering explicit models of nonstationarity, as spatial process models based on sparse DAGs induce nonstationarities even when using stationary covariances.**C**: However, nonstationary models are desirable in many applied settings
| BAC | BAC | ACB | BCA | Selection 3 |
**A**: While most practical inference needs additional modelling assumptions, the data example of section 7 allowed for non-parametric estimation**B**: In addressing identifiability, we have chosen the re-weighting route which appears natural in view of the simplicity of Proposition 1 and corresponds to a change of measure technique. In discrete-time settings, g-computation is an alternative, or doubly-robust and machine-learning extensions thereof (Kallus and Uehara, 2022; Luckett et al., 2020; Nie et al., 2020; Zhang et al., 2013). However, g-computation seems hard in entirely general continuous-time settings, as discussed by Gill (2001) (see also Gill and Robins (2001)), but fully parametric versions exist (Gran et al., 2015).**C**: Our results apply to general multivariate counting processes, which include, e.g., multi-state processes.
In particular, they do not rely on any particular (semi)-parametric class of models
| CBA | ACB | BCA | ACB | Selection 3 |
**A**: A Bayesian algorithm is initialized with a prior belief, and the forecaster learns from each sample to build a posterior belief. Given that, we know the dynamics of the posterior belief as a distribution, exact minimization of the Bayes risk can be formulated as dynamic programming. Nonetheless, exact dynamic programming solutions are computationally intractable because of the curse of dimensionality (Russo, 2020; Powell and Ryzhov, 2013). Consequently, KG and EI are designed to provide the one-step lookahead approximations of an exactly optimal dynamic programming solution.
**B**: Popular algorithms for fixed-budget identification include successive rejects (SR; Audibert et al. 2010) and successive halving (Karnin et al., 2013)**C**: There are also several Bayesian algorithms that utilize a prior, such as top-two Thompson sampling (Russo, 2020), knowledge gradient (KG; Gupta and Miescke 1994), and expected improvement (EI; Jones et al. 1998)
| ACB | ABC | CAB | BAC | Selection 3 |
**A**: Training binary latent VAEs with $K=2,3$ (except for RELAX which uses 3 evaluations) on MNIST, Fashion-MNIST, and Omniglot**B**: We report the average ELBO ($\pm 1$ standard error) on the training set after 1M steps over 5 independent runs**C**: Test data bounds are reported in Table 4.
| CBA | CAB | BCA | ABC | Selection 4 |
**A**: See also Dolera and Favaro [2020a, b] and references therein.**B**: See Charalambides [2005, Chapter 7] for an account on compound Poisson sampling models and their distributional properties, and Dolera and Favaro [2020c] for a comprehensive treatment of the large $n$ asymptotic behaviour of $\mathbf{M}(n,z)$ under the negative Binomial compound Poisson sampling model**C**:
The distribution (S2.2) is referred to as the negative Binomial compound Poisson sampling formula
| BAC | ACB | CBA | CAB | Selection 3 |
**A**: As shown in example 3, the Lorenz map reduces to a simple function of the marginal Lorenz curves in case marginal attribute allocations are independent**B**: This feature is shared with the multivariate Lorenz proposal in Arnold (1983,2012) but not the alternative proposals.
**C**: Decomposition under independent attributes
| CBA | CAB | BCA | BAC | Selection 3 |
**A**: Regarding decision boundaries and borderline examples, Melnik [Mel02] analyzes their structure using connectivity graphs [MS94]. And finally, Ramamurthy et al. [RVM19] utilize persistent homology inference to describe the ambiguity (or even lack) of decision boundaries. All described methods, while being valuable, do not focus on the problem of undersampling or oversampling at all, as it happens with our system.**B**:
Density-based algorithms [HHHM11, HLL08] also work well with the detection of rare categories by discovering substantial changes in data densities using a KNN search in the high-dimensional space. But how to choose the best k-value for a given data set? While it is possible to estimate the best k-value automatically by using the local outlier factor [BKNS00], the balance of the distribution of safe and unsafe instances could be off when focusing merely on rare cases and outliers. Huang et al. [HCG∗14] proposed a method for automatically selecting k-values**C**: However, their algorithm starts with a seed depending on the target category, which is often difficult to set. iFRED and vFRED [LCH∗14] are two approaches for identifying rare categories based on wavelet transformation without the necessity of any predefined seed. Nevertheless, these methods are robust in low-dimensional data only but fail to discover the remaining types of data introduced in Section 1, which are important for HardVis
| CBA | CAB | ACB | BAC | Selection 2 |
**A**: In Section 3, we give conditions on our model that guarantee existence and uniqueness of equilibria in the mean-field regime, the limiting regime where at each time step, an infinite number of agents are considered for the treatment**B**: In Section 4, we translate these results to the finite regime, where a finite number of agents, sampled i.i.d. at each time step, are considered for treatment. We show that as the number of agents grows large, the system converges to the equilibrium of the mean-field model in a stochastic version of fixed-point iteration.
**C**: Furthermore, we show that under additional conditions, the mean-field equilibrium arises via fixed-point iteration
| BCA | BAC | ACB | BAC | Selection 3 |
**A**: We conclude this section with a brief review of the theory of density estimation in Section 2.3.**B**: In Section 2.2, we discuss the distance filtration, which is the backbone of the traditional TDA approach, as well as various alternatives to the distance filtration, and we will explain their relevance to the present work**C**:
We first review the theory of persistent homology in Section 2.1
| CBA | BCA | BAC | ABC | Selection 1 |
**A**: We could in principle modify our framework so that the distribution of cluster sizes is allowed to depend on the number of clusters $G$**B**: Such a modification, however, would complicate the exposition and the resulting procedures would ultimately be the same. We therefore see no apparent benefit and do not pursue it further in this paper.
**C**: By doing so, we would be able to weaken Assumption 2.2.(e) at the cost of strengthening Assumption 2.2.(f) to require, for example, uniformly bounded $2+\delta$ moments for some $\delta>0$
| BAC | ABC | ACB | CBA | Selection 3 |
**A**: In the context of reinforcement learning with function approximations, our work is related to a vast body of recent progress (Yang and Wang, 2020; Jin et al., 2020b; Cai et al., 2020; Du et al., 2021; Kakade et al., 2020; Agarwal et al., 2020; Zhou et al., 2021; Ayoub et al., 2020) on the sample efficiency of reinforcement learning for MDPs with linear function approximations**B**: These works characterize the uncertainty in the regression for estimating either the model or value function of an MDP and use the uncertainty as a bonus on the rewards to encourage exploration**C**: However, none of these approaches directly apply to POMDPs due to the latency of the states.
| ACB | ABC | CBA | BCA | Selection 2 |
**A**: These initial successes notwithstanding, the development of e-values is, of course, still in its infancy, competing with almost a century of p-value development**B**: As such, many challenges remain**C**: To appreciate these, we first note that the aforementioned GRO-type approaches can in principle be made competitive, in terms of sample sizes needed to draw a conclusion, with classical ones that rely on BIND — see below; sometimes they even significantly beat such classical methods (e.g. [50, 57]). Also, [13]
shows that GRO e-values exist and can be calculated for very general testing problems.
| CBA | CAB | CBA | ABC | Selection 4 |
**A**: Importantly, inference must be done in a way that takes into account the incentives of the strategic agents. Our work most closely relates to that of Tetenov (2016), who considers setting the type-I error level of a hypothesis test to account for an agent's payoffs. That work establishes a minimax protocol similar to our work for the case where the principal's action space involves setting the $p$-value threshold for approval, and it also analyzes the incentive structure of Phase III clinical trials for drug approval. See also Viviano**B**: This makes statistical inference challenging. Our focus is on designing contracts that allow the principal to carry out statistical inference in order to properly assess the hidden types of the agents**C**:
Situating our work within contract theory, we study an adverse selection model with a common value structure in the principal’s utility function. Our key departure from the usual adverse selection setup is that we do not assume that the principal has a prior distribution about the agents’ hidden types
| CAB | CBA | BCA | ABC | Selection 2 |
**A**: We now state the conditions we will work with**B**: Throughout this work, we will assume the stable unit treatment value assumption, commonly abbreviated as SUTVA, in Assumption 1.3**C**: We will also assume consistency of the observed outcome throughout as given in Assumption 1.4. Moreover, our inference is valid under conditional ignorability conditions in Assumption 1.5.
| CAB | BCA | CBA | ABC | Selection 4 |
**A**: The purpose of defining different types of knowledge is to efficiently extract the underlying representation learned by the teacher model from the large-scale data. If we consider a network as a mapping function of input distribution to output, then different knowledge types help to approximate such a function.
Based on the type of knowledge transferred, KD can be divided into response-based, feature-based, and relation-based [15].**B**: [19] propose an original teacher-student architecture that uses the logits of the teacher model as the knowledge. Since then, some KD methods regard knowledge as final responses to input samples [3, 31, 58], some regard knowledge as features extracted from different layers of neural networks [24, 23, 41], and some regard knowledge as relations between such layers [57, 40, 9]**C**: Knowledge Distillation (KD). Hinton et al
| ABC | BAC | CBA | CAB | Selection 3 |
**A**: To learn a sufficient embedding for control, we utilize the low-rank transition of POMDPs**B**: In particular, the state transition of a low-rank MDP aligns with that in our low-rank POMDP model. Nevertheless, we remark that such states are observable in a low-rank MDP but are unobservable in POMDPs with the low-rank transition. Such unobservability makes solving a low-rank POMDP much more challenging than solving a low-rank MDP.
**C**: Our idea is motivated by the previous analysis of low-rank MDPs (Cai et al., 2020; Jin et al., 2020b; Ayoub et al., 2020; Agarwal et al., 2020; Modi et al., 2021; Uehara et al., 2021)
| ABC | CAB | BAC | ACB | Selection 4 |
**A**: Table 1: We compare with most related representative works in closely related lines of research**B**: The second line of research studies online RL in POMDPs where the actions are specified by history-dependent policies.
Thus, the actions do not directly depend on the latent states, and these works do not involve the challenge due to confounded data. The third line of research studies OPE in POMDPs where the goal is to learn the value of the target policy as opposed to learning the optimal policy. As a result, these works do not need to handle the challenge of distributional shift via pessimism.**C**: The first line of research studies offline RL in standard MDPs without any partial observability
| CAB | CBA | ACB | BAC | Selection 3 |
**A**: Without constraints, one can apply stochastic gradient descent (SGD) and its many variates, whose statistical properties (e.g., asymptotic normality) have been comprehensively studied from different aspects (Robbins1951stochastic; Kiefer1952Stochastic; Polyak1992Acceleration; Ruppert1988Efficient). However, unlike solving unconstrained stochastic programs, there are limited methods proposed for constrained stochastic programs (1.1) that enable online statistical inference. We refer to Section 2.2 for a detailed literature review. One potential exception is the projection-based SGD recently studied in Duchi2021Asymptotic; Davis2023Asymptotic.
Although the literature has shown that projected methods also exhibit asymptotic normality, there are two major concerns when applying these methods for practical statistical inference.**B**: Given the prevalence of streaming datasets in modern problems, offline methods that require dealing with a large batch set in each step are less attractive**C**: It is desirable to design fully online methods, where only a single sample is used in each step, and to perform online statistical inference by leveraging those methods
| ACB | CBA | CBA | CAB | Selection 4 |
**A**: The two networks share block 1 (for instance basal species) but the remaining nodes of each network cannot be considered as equivalent in terms of connectivity**B**: One may think of species belonging to trophic chains with different connectivity patterns.**C**:
Finally, let us consider two networks with partially overlapping structures
| ACB | BCA | CAB | ABC | Selection 2 |
**A**: This could be improved if structural information on the covariates were a priori known. Indeed, in this article, the i.i.d. Bernoulli prior on the indicators $\boldsymbol{\delta}$ (4e) entails the assumption that each covariate has the same probability a priori of being included in the model.**B**: However, when the level of correlation becomes high, the performance decreases**C**:
Moreover, it was observed that a reasonable correlation between covariates has little effect on the selection performance of the proposed procedure
| BCA | CBA | BCA | BAC | Selection 2 |
**A**: Figure (a) illustrates a setting with 3 subpopulations and 2 learners**B**: The solid lines correspond to the risk trajectory for the unstable balanced equilibrium at initialization**C**: Dotted and dashed lines illustrate risk trajectories under three different slight perturbations from the initialization.
In Figure (b), the left plot illustrates the reduction in total risk over time. The dashed blue lines indicate when a new learner joins. The right plot shows the equilibrium-risk for a subset of the subpopulations as the number of learners increases.
| BAC | ABC | ACB | ACB | Selection 2 |
**A**: Finally, given the LP approximation detailed above, the algorithm for solving Eq. (29) follows the same lines as Alg. 1.**B**: This condition is trivially incorporated in the box constraints of Eq. (32)**C**:
where $\|\cdot\|_{\infty}$ is the maximum norm
| CBA | ABC | BAC | BAC | Selection 1 |
**A**:
Based on the advantages of the decentralized information structure, the online algorithm and the regularization method, we propose a decentralized online regularized algorithm for the linear regression problem over random time-varying graphs**B**: In each iteration, the innovation term is used to update the node’s own estimation, the consensus term is the weighted sum of estimations of its own and its neighbors with additive and multiplicative communication noises, and the regularization term is helpful for constraining the norm of the estimation in the algorithm and preventing the unknown true parameter vector from being overfitted.**C**: The algorithm of each node contains an innovation term, a consensus term and a regularization term
| BAC | ABC | ACB | BAC | Selection 3 |
**A**:
The above results clarify the fundamental relation among the limiting behavior of abc posteriors and the learning properties of the chosen discrepancy, when measured via Rademacher complexity. Moreover, the bounds derived clarify that a sufficient condition to recover a limiting pseudo-posterior with the same threshold-control on the discrepancy among the truths as the one enforced on the corresponding empirical distributions, is that the selected discrepancy has a Rademacher complexity vanishing to zero in the large-data limit. As proved in Section 3.2, this setting also allows constructive derivations of novel, informative and uniform concentration bounds for discrepancy-based abc posteriors in the challenging regime where the threshold shrinks towards zero as both $m$ and $n$ diverge**B**: The former advantage is illustrated within Section 4 through a specific focus on mmd with routinely-implemented bounded and unbounded kernels, whereas the latter is clarified in Section 6, where we extend the theory from Section 3 to non-i.i.d. settings, leveraging results in Mohri & Rostamizadeh (2008) on Rademacher complexity under $\beta$-mixing processes (e.g., Doukhan, 1994).**C**: This is facilitated by the existence of meaningful upper bounds for the Rademacher complexity of popular abc discrepancies, along with the availability of constructive conditions for the derivation of these bounds (e.g., Sriperumbudur et al., 2012) which leverage fundamental connections among such a complexity measure and other key quantities in statistical learning theory, such as the Vapnik–Chervonenkis (vc) dimension and the notion of uniform Glivenko–Cantelli classes (see e.g., Wainwright, 2019, Chapter 4). This yields an improved understanding of the factors that govern the concentration of discrepancy-based abc posteriors under a unified perspective that further allows to (i) quantify rates of concentration and (ii) directly translate any advancement on Rademacher complexity into novel abc theory
|
ACB
|
BCA
|
ABC
|
CBA
|
Selection 1
|
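As a concrete instance of the discrepancies discussed above, a minimal unbiased estimator of the squared mmd with a bounded Gaussian kernel might look as follows; the bandwidth and sample sizes are illustrative assumptions.

```python
# Minimal sketch of the squared-MMD U-statistic with a bounded Gaussian kernel.
import numpy as np

def mmd2_unbiased(x, y, bw=1.0):
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bw ** 2))
    kxx, kyy, kxy = k(x, x), k(y, y), k(x, y)
    m, n = len(x), len(y)
    # drop the diagonals so the within-sample terms are unbiased
    return (kxx.sum() - np.trace(kxx)) / (m * (m - 1)) \
         + (kyy.sum() - np.trace(kyy)) / (n * (n - 1)) \
         - 2 * kxy.mean()

rng = np.random.default_rng(1)
print(mmd2_unbiased(rng.normal(size=(200, 2)), rng.normal(1.0, 1.0, size=(200, 2))))
```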
**A**:
We thank Arkadev Chattopadhyay for helpful feedback and Todd Millstein for discussing [42] with us, which led us to think about monotone neural networks**B**: Finally, we thank Bruno Pasqualotto Cavalar for bringing to our attention the work done in [9, 17].**C**: We are grateful to David Kim for implementing our construction of a monotone neural network and testing it over several monotone data sets
|
BCA
|
CAB
|
BCA
|
ACB
|
Selection 4
|
**A**: A further extension might consider weighting the classification probabilities by a utility function accounting for the degree of the error (e.g., mistakenly refusing an applicant who is right above the classification threshold is less costly than refusing an applicant much higher on the latent ability scale; see, e.g., [71] for different utility models applied in the context of validity).**B**: Under the specified measurement model, most erroneous classifications happen close to the selection boundary**C**:
Second, we focused on classification probabilities and the resulting binary classification metrics themselves
|
ABC
|
CBA
|
CAB
|
ACB
|
Selection 2
|
**A**: A typical paradigm for such contrastive RL is to construct an auxiliary contrastive loss for representation learning, add it to the RL loss function, and deploy an RL algorithm with the learned representations serving as the state and action inputs (a minimal such loss is sketched after this item). However, the theoretical underpinnings of such an enterprise remain elusive. To summarize, we raise the following question:**B**: Among the recent breakthroughs in representation learning for RL, contrastive self-supervised learning has gained popularity for its superior empirical performance (Oord et al., 2018b; Sermanet et al., 2018; Dwibedi et al., 2018; Anand et al., 2019; Schwarzer et al., 2020; Srinivas et al., 2020; Liu et al., 2021)**C**:
To improve the sample efficiency of RL algorithms, recent works propose to learn low-dimensional representations of the states via solving auxiliary problems (Jaderberg et al., 2016; Hafner et al., 2019a, b; Gelada et al., 2019; François-Lavet et al., 2019; Bellemare et al., 2019; Srinivas et al., 2020; Zhang et al., 2020; Liu et al., 2021; Yang & Nachum, 2021; Stooke et al., 2021)
|
BCA
|
BAC
|
CBA
|
BAC
|
Selection 3
|
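A minimal version of the auxiliary contrastive loss mentioned above is the InfoNCE-style objective sketched below; the embeddings, temperature, and the way positives are formed are illustrative assumptions, not the construction of any specific paper cited in the item.

```python
# InfoNCE-style auxiliary loss of the kind added to an RL objective.
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # row i: similarity to candidates
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # positives sit on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(32, 16))                    # anchor embeddings (e.g., states)
z_pos = z + 0.1 * rng.normal(size=z.shape)       # augmented / next-step embeddings
print(info_nce(z, z_pos))
```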
**A**: The decoupling approach was developed for importance sampling in MV-SDEs (dos Reis et al., 2023; Ben Rached et al., 2023), where the idea is to approximate the MV-SDE law empirically as in (4), use the approximation as input to define a decoupled MV-SDE, and apply a change of measure to it (a toy sketch of the decoupling step follows this item)**B**: First, we introduce the general decoupling approach.
**C**: We decouple the computation of the MV-SDE law from the change of probability measure required for importance sampling
|
ACB
|
CBA
|
BCA
|
CAB
|
Selection 1
|
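The decoupling idea can be illustrated on a toy mean-field SDE whose drift depends on the law only through the mean: first approximate the law with a particle system (playing the role of the empirical approximation in (4)), then freeze that input to obtain a decoupled SDE. The dynamics and parameters below are assumptions for illustration, and the subsequent change of measure is omitted.

```python
# Toy decoupling for dX = (m(t) - X) dt + sigma dW with m(t) = E[X_t].
import numpy as np

rng = np.random.default_rng(0)
N, T, n_steps, sigma = 2000, 1.0, 200, 0.5
dt = T / n_steps

# Step 1: interacting particle system -> empirical approximation of the mean path
x = np.ones(N)
mean_path = np.empty(n_steps)
for k in range(n_steps):
    mean_path[k] = x.mean()
    x += (mean_path[k] - x) * dt + sigma * np.sqrt(dt) * rng.normal(size=N)

# Step 2: decoupled SDE -- same dynamics with the frozen empirical input, so its
# paths are ordinary SDE paths, amenable to a change of measure (omitted here).
y = 1.0
for k in range(n_steps):
    y += (mean_path[k] - y) * dt + sigma * np.sqrt(dt) * rng.normal()
print(mean_path[-1], y)
```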
**A**: Second, we iteratively predict West Germany’s GDP from 1963 to 1989.**B**: First, we iteratively predict the 1990 GDP of each country in the control group**C**:
Again, we perform two exercises to assess the accuracy of alternative causal inference methods in this setting
|
CBA
|
BAC
|
BAC
|
BCA
|
Selection 1
|
**A**: Informative sampling occurs when there is a discrepancy between design variables and auxiliary variables used for regression analysis, notably even in widely utilized methods such as Poisson sampling and probability proportional to size sampling [29].**B**: In probability sampling, first-order inclusion probabilities are known.
If the sampling weights are correlated with the study outcome variables even after adjusting for the covariates, the sampling design is called informative (a toy numerical illustration follows this item)**C**: Survey sample data enable inferences about superpopulation models without the need to observe every element in the finite population
|
CAB
|
BCA
|
ACB
|
CBA
|
Selection 4
|
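A toy simulation makes the informativeness point tangible: under Poisson sampling with inclusion probabilities correlated with the outcome, the unweighted sample mean is biased, while the inverse-probability (Horvitz-Thompson) estimator is not. The population model and probabilities below are illustrative assumptions.

```python
# Informative Poisson sampling: naive mean vs. Horvitz-Thompson estimator.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
y = rng.gamma(2.0, 1.0, size=N)          # study variable in the finite population
pi = np.clip(0.01 * y, 0.001, 1.0)       # inclusion prob. correlated with y
sampled = rng.random(N) < pi             # Poisson sampling

naive = y[sampled].mean()                        # biased upward
ht = (y[sampled] / pi[sampled]).sum() / N        # Horvitz-Thompson mean estimate
print(f"population {y.mean():.3f}  naive {naive:.3f}  HT {ht:.3f}")
```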
**A**: (2007). **B**:
The above definition could be extended to less regular paths, namely to paths of finite $p$-variation with $p<2$. In this case, the integrals can be defined in the sense of Young (1936) (the corresponding definition is recalled after this item)**C**: However, if $p\geq 2$, it is no longer possible to define the iterated integrals. Still, it is possible to make sense of the signature, but the definition is much more involved and relies on rough path theory, so we refer the interested reader to Lyons et al
|
BCA
|
CAB
|
ABC
|
BCA
|
Selection 2
|
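For reference, the standard definition being alluded to (stated here from the general literature, not from this excerpt) writes the signature of a path $X$ on $[s,t]$ as the sequence of iterated integrals, each well defined in the Young sense when $X$ has finite $p$-variation with $p<2$:

```latex
% Standard definition from the general rough-paths literature:
% the signature of X over [s,t] as the collection of iterated integrals,
% each a Young integral when X has finite p-variation with p < 2.
\[
  S(X)_{s,t} = \bigl(1,\, X^{(1)}_{s,t},\, X^{(2)}_{s,t},\, \dots\bigr),
  \qquad
  X^{(k)}_{s,t} = \int_{s<u_1<\cdots<u_k<t}
      \mathrm{d}X_{u_1}\otimes\cdots\otimes\mathrm{d}X_{u_k}.
\]
```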
**A**: In this respect we note that the clipping operator, being a projection onto a ball, is not a compressor; moreover, it is invoked dynamically with time-varying radii (a small sketch follows this item).**B**: Finally we note that stochastic gradient methods have also been
studied in conjunction with biased compressor (nonlinear) operators**C**: See, e.g., [21] and references therein
|
ACB
|
BAC
|
CAB
|
ABC
|
Selection 3
|
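To see why clipping is a projection rather than a compressor, note that it acts as the identity inside the ball and rescales only outside it, with a factor depending on both the input and the current radius. A minimal sketch, with an assumed radius schedule:

```python
# Clipping as a projection onto the Euclidean ball of (time-varying) radius r_k.
import numpy as np

def clip(x, r):
    """Project x onto the ball {v : ||v||_2 <= r} (identity inside the ball)."""
    nrm = np.linalg.norm(x)
    return x if nrm <= r else (r / nrm) * x

g = np.array([3.0, 4.0])                     # e.g., a stochastic gradient, ||g|| = 5
for k, r_k in enumerate([10.0, 2.0, 0.5]):   # dynamically shrinking radii
    print(k, clip(g, r_k))
# Unlike a (scaled) compressor, clip(x, r) = x whenever ||x|| <= r, and the
# scaling factor min(1, r/||x||) depends on both x and the current radius.
```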
**A**: Based on the Petrov–Galerkin method, the hp-variational PINNs (hp-VPINNs) [17] allow for localized parameter estimation with given test functions via domain decomposition. The hp-VPINNs generate a global approximation to the weak solution of the PDE with a local learning algorithm that uses a manually preselected domain decomposition.**B**: In reinforcement learning, [16] proposes a Bayesian neural network as the prior for PDEs and uses Hamiltonian Monte Carlo and variational inference as the estimator of the posterior, resulting in more accurate predictions and less overfitting**C**:
There are many variations of PINNs, e.g., physics-informed generative adversarial networks [14], whose generators are induced by stochastic differential equations to tackle very high-dimensional problems; [15] rewrites PDEs as backward stochastic differential equations and designs the gradient of the solution as the policy function, which is approximated by deep neural networks
|
ABC
|
CAB
|
CAB
|
CBA
|
Selection 4
|
**A**: Finally, the analysis in the case of the projected quantum kernel is slightly more complicated, as estimating the kernel requires us to first obtain, from quantum computers, statistical estimates of the 2-norms between the reduced data-encoding states on all individual qubits**B**: In Appendix C.2, we again use a hypothesis testing framework to analyze the effect of exponential concentration on the projected kernel for these strategies. Similarly to the fidelity kernel, we find that the final trained model is in effect independent of the training data.
**C**: Two common strategies to do so include (i) full tomography of the single-qubit reduced density matrices and (ii) local SWAP tests
|
BAC
|
ACB
|
BAC
|
ABC
|
Selection 2
|
**A**: The results of the different methods on the resampled PACS are presented in Figure 5**B**: We can observe that as the KL divergence of the label distributions increases, the performance of MCDA, M3DA, LtC-MSDA and T-SVDNet, which are based on learning invariant representations, gradually degrades. In the case where the KL divergence is about 0.7, these methods perform worse than traditional ERM. Compared with IRM, IWCDAN and LaCIM, which are specifically designed for label distribution shifts, the proposed iLCC-LCS obtains the best performance, owing to our theoretical guarantee for identifying the latent causal content variable, which yields a principled way to adapt to the target domain.
**C**: Detailed results can be found in Tables I-III
|
CBA
|
BCA
|
ACB
|
CAB
|
Selection 3
|
**A**:
We also evaluated our estimators using a large-scale ride-sharing simulator adapted from Farias et al. (2022)**B**: The simulator generates drivers and riders based on data from the NYC taxi trip records dataset (Commission, N.D.). In this simulator, drivers enter the system continuously, each with a fixed capacity of 3 riders**C**: Their initial positions are randomly selected from the trip records dataset, and the durations of their shifts follow an exponential distribution. Once a driver completes their shift, they go offline (a schematic of this driver process follows this item).
|
CAB
|
ACB
|
CAB
|
ABC
|
Selection 4
|
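A schematic of the driver process described above might look as follows; the arrival rate, mean shift length, horizon, and the uniform stand-in for positions sampled from the trip records are all illustrative assumptions rather than the simulator's actual inputs.

```python
# Schematic driver process: continuous arrivals, capacity 3, exponential shifts.
import numpy as np

rng = np.random.default_rng(0)
arrival_rate, mean_shift, horizon = 30.0, 4.0, 8.0   # drivers/hour, hours, hours

drivers, t = [], 0.0
while True:
    t += rng.exponential(1.0 / arrival_rate)          # next driver enters
    if t > horizon:
        break
    drivers.append({
        "enter": t,
        "exit": t + rng.exponential(mean_shift),      # goes offline after shift
        "pos": rng.uniform([-74.02, 40.70], [-73.93, 40.80]),  # stand-in origin
        "capacity": 3,
    })
print(len(drivers), "drivers;", sum(d["exit"] > horizon for d in drivers), "still online")
```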
**A**:
Previous work went beyond this paper in other aspects, however**B**: Given a concrete model such as Gaussian VAEs, convergence to entropies was also numerically investigated**C**: It was asked, for instance, how close the original ELBO is to the sum-of-entropies result in practice, i.e., when stochastic ELBO optimization only reaches the vicinity of a stationary point.
|
CAB
|
CAB
|
CAB
|
ABC
|
Selection 4
|
**A**: Specifically, we first extend the work of Bernanke et al**B**: (2005) on macroeconomic responses to a shock in monetary policy to an HDLP setting.
We consider two canonical macroeconomic applications and demonstrate the performance of the proposed desparsified-lasso based estimator for HDLPs in recovering structural impulse responses
|
BAC
|
ACB
|
CBA
|
BCA
|
Selection 4
|
**A**: Note that a remedial intervention is defined to allow multiple root causes to be corrected simultaneously**B**: Such an intervention is of course always non-singular within the CEG representation.**C**:
Assume that the root causes of a specific defect or failure could be multiple and are well-defined
|
BAC
|
BCA
|
ABC
|
BAC
|
Selection 2
|
**A**:
While the outcome and the inference models are ideally the same, in practice our model will at best approximate the “true” predictor-response relationships. In the code below, we show how to use BKMR as the inference model**B**: Additional arguments specific to the inference model (e.g., iter, varsel) can be passed as done below:**C**: The model argument also accepts one of the following values: ‘glm’, ‘mixselect’, ‘qgcomp’, ‘bws’, ‘fin’, and ‘bma’. Advanced users may also define custom inference models (see first example in Section 4)
|
ACB
|
ABC
|
ABC
|
ABC
|
Selection 1
|
**A**:
The distribution of eigenvalues plays a crucial role in statistical learning and is of significant interest in the high-dimensional setting**B**: Random matrix theory provides a systematic tool for deriving the distribution of the eigenvalues of a square matrix (Anderson, Guionnet and Zeitouni, 2010; Pastur and Shcherbina, 2011), and has been successfully applied in various statistical problems, such as signal detection (Nadler, Penna and Garello, 2011; Onatski, 2009; Bianchi et al., 2011), spiked covariance models (Johnstone, 2001; Paul, 2007; El Karoui, 2007; Ding and Yang, 2021; Bao et al., 2022), and hypothesis testing (Bai et al., 2009; Chen and Qin, 2010; Zheng, 2012) (a small numerical illustration follows this item)**C**: For a comprehensive treatment of random matrix theory in statistics, we recommend the monograph by Bai and Silverstein (2010) and the review paper by Paul and Aue (2014).
|
CAB
|
BCA
|
ABC
|
CAB
|
Selection 3
|
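A quick numerical illustration of the kind of result random matrix theory delivers: in the high-dimensional regime, the eigenvalues of a sample covariance matrix spread over the Marchenko-Pastur support rather than concentrating near 1. The dimensions below are illustrative.

```python
# Eigenvalues of a sample covariance vs. the Marchenko-Pastur support
# [(1 - sqrt(gamma))^2, (1 + sqrt(gamma))^2] with gamma = p/n.
import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 500                       # gamma = p/n = 0.25
X = rng.normal(size=(n, p))
evals = np.linalg.eigvalsh(X.T @ X / n)
gamma = p / n
print(evals.min(), (1 - np.sqrt(gamma)) ** 2)   # both close to 0.25
print(evals.max(), (1 + np.sqrt(gamma)) ** 2)   # both close to 2.25
```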
**A**: Fig. 9 shows three annual temperature anomaly series from distinct regions: the Northern Hemisphere, the Southern Hemisphere and the Tropics from 1850 to 2021, which are described in detail in [19]**B**: Global warming has attracted significant attention in recent research, as demonstrated by studies such as [8], [12], and [11]**C**: The data are temperature anomalies relative to a reference period of 1961-1990 [19]. Each series consists of 172 yearly observations.
|
BCA
|
CBA
|
BCA
|
BAC
|
Selection 4
|
**A**: This scenario – which also links to out-of-distribution generalization – has attracted various contributions in recent years, such as [22, 23, 24]**B**: Generalization error bounds have also been developed to address scenarios where the training data distribution differs from the test data distribution, known as Distribution Mismatch**C**: In particular, Masiha et al. [21] provides information-theoretic generalization error upper bounds in the presence of training/test data distribution mismatch, using rate-distortion theory.
|
ACB
|
ACB
|
BCA
|
BAC
|
Selection 4
|
**A**: Hence, the current methodologies amount to testing the null hypothesis of equal means in all the populations; see, e.g., [16] for an early contribution and [52] for a broader perspective. Our proposal is therefore quite related to more general approaches that do not require any homoscedasticity assumption and remain valid in an FDA framework. Examples of such similar tests are [29] and [34], as well as the random projections-based methodology in [15].**B**: This is interesting since in FDA there are only a few homogeneity tests in the literature. Some of them have been developed in the setting of ANOVA models (involving several samples) under homoscedasticity (equal covariance operators of the involved processes) and Gaussian assumptions**C**:
The supremum kernel distance (4) entails several advantages and some mathematical challenges: First, the kernel selection problem is considerably simplified and solved in a natural way. Additionally, the approach is general enough to be applied in infinite-dimensional settings such as FDA
|
CBA
|
ACB
|
ACB
|
ACB
|
Selection 1
|
**A**: We verify that all calibrated parameters fall within the ranges recommended in [38]**B**: The calibration results in Table I demonstrate the reliability of our methods in calibrating plausible IDM parameters (the IDM acceleration rule is sketched after this item)**C**: In the following, we make some comparisons among several pairs of calibration results.
|
CAB
|
ACB
|
ACB
|
ABC
|
Selection 4
|
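For context, the acceleration rule of the IDM being calibrated takes the standard form $a\,[1-(v/v_0)^4-(s^*/s)^2]$ with desired gap $s^*=s_0+\max(0,\,vT+v\Delta v/(2\sqrt{ab}))$; the parameter values in this sketch are generic textbook choices within commonly recommended ranges, not the calibrated values from Table I.

```python
# Standard Intelligent Driver Model (IDM) acceleration; parameters illustrative.
import numpy as np

def idm_accel(v, dv, s, v0=30.0, T=1.5, a=1.0, b=2.0, s0=2.0):
    """v: ego speed (m/s), dv: approach rate v - v_lead (m/s), s: gap (m)."""
    s_star = s0 + max(0.0, v * T + v * dv / (2 * np.sqrt(a * b)))
    return a * (1 - (v / v0) ** 4 - (s_star / s) ** 2)

print(idm_accel(v=25.0, dv=0.0, s=40.0))   # gentle deceleration in steady following
print(idm_accel(v=25.0, dv=5.0, s=15.0))   # strong braking when closing fast
```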
**A**: In [19], Hachem et al. derived the CLT for the MI of correlated Gaussian MIMO channels and gave the closed-form mean and variance. Hachem et al. extended the CLT to the non-Gaussian MIMO channel with a given variance profile and the non-centered MIMO channel in [20] and [21], respectively, which shows that the pseudo-variance and the non-zero fourth-order cumulant of the random fading affect the asymptotic variance**B**:
The MI of the full-rank MIMO channels has been characterized by setting up its CLT using RMT (a Monte Carlo illustration follows this item). In [24], Kamath et al. derived the closed-form expressions for the mean and variance of the MI over the i.i.d. MIMO fading channel**C**: In [22], Bao et al. derived the CLT for the MI of independent and identically distributed (i.i.d.) MIMO channels with non-zero pseudo-variance and fourth-order cumulant. In [23], Hu et al. set up the CLT for the MI of elliptically correlated (EC) MIMO channels and validated the effect of the non-linear correlation. Considering non-centered MIMO with a non-separable correlation structure, the authors of [25] set up the CLT for the MI of holographic MIMO channels.
|
ACB
|
ACB
|
BCA
|
BAC
|
Selection 4
|
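The flavor of these CLTs is easy to reproduce numerically: for an i.i.d. complex Gaussian channel, the MI $\log\det(I+(\rho/n)HH^{*})$ has a mean that grows with the number of antennas while its variance stays O(1). The dimensions, SNR, and trial count below are illustrative.

```python
# Monte Carlo sketch: MI of an i.i.d. MIMO channel fluctuates with O(1) variance.
import numpy as np

rng = np.random.default_rng(0)
n, rho, trials = 64, 10.0, 1000
mi = np.empty(trials)
for t in range(trials):
    H = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    _, logdet = np.linalg.slogdet(np.eye(n) + (rho / n) * H @ H.conj().T)
    mi[t] = logdet
print(mi.mean(), mi.var())   # variance stays O(1) while the mean grows with n
```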