| Column | Type |
| --- | --- |
| text_with_holes | string (lengths 390 to 2.35k) |
| text_candidates | string (lengths 81 to 848) |
| A | string (6 classes) |
| B | string (6 classes) |
| C | string (6 classes) |
| D | string (6 classes) |
| label | string (4 classes) |
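A minimal sketch of loading and inspecting the dataset with the Hugging Face `datasets` library; the repository path below is a placeholder, since the actual dataset identifier is not shown on this page.

```python
from datasets import load_dataset

# NOTE: placeholder repository path -- substitute the actual identifier of this dataset.
ds = load_dataset("user/masked-sentence-ordering", split="train")

row = ds[0]
print(row["text_with_holes"])    # passage containing <|MaskedSetence|> markers
print(row["text_candidates"])    # the masked sentences, labelled **A**, **B**, **C**
print(row["A"], row["B"], row["C"], row["D"])  # candidate orderings, e.g. "CBA"
print(row["label"])              # e.g. "Selection 1", pointing at the correct ordering column
```

The example rows below are shown in the same layout: the passage with holes, the candidate sentences, the four ordering columns, and the selected ordering.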
For the Heterogeneous Stock Mice data set, ground truth is also available, so we can evaluate the methods based on the area under their ROC curves, as shown in Figure 6. <|MaskedSetence|> The second-best model is MCP, with an area of 0.604. <|MaskedSetence|> The areas of the remaining models are all around 0.5, showing little ability to process such complex data sets. On the traits Glucose_75, Glucose_30, Glucose.DeadFromAnesthetic, Insulin.AUC, Insulin.Delta and FN.postWeight, our method TgSLMM performs best. <|MaskedSetence|> This raises the intriguing question of whether immune levels in stock mice are largely independent of family origin.
**A**: The results are interesting: the left side of the figure mostly consists of traits regarding glucose and insulin in the mice, while the right side of the figure consists of traits related to immunity. **B**: The areas under ROC of Tree-Lasso, Lasso and SCAD are 0.582, 0.591 and 0.590 respectively. **C**: TgSLMM behaves as the best one on 22.2% of the traits and achieves the highest ROC area for the whole data set as 0.627.
CBA
CBA
ACB
CBA
Selection 1
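From the row above, the record format can be read off: the label (`Selection 1` here) picks one of the ordering columns A to D, and that column's value (`CBA` here) gives the order in which the candidate sentences fill the `<|MaskedSetence|>` slots. A minimal reconstruction sketch, assuming this interpretation of the columns:

```python
import re

def reconstruct(row):
    """Fill the <|MaskedSetence|> slots of a row, assuming that 'Selection k'
    selects the k-th ordering column (A..D) and that an ordering such as 'CBA'
    lists which candidate goes into each slot, in order of appearance."""
    # Parse candidates of the form "**A**: ... **B**: ... **C**: ..."
    parts = re.split(r"\*\*([A-D])\*\*:", row["text_candidates"])
    candidates = {parts[i]: parts[i + 1].strip() for i in range(1, len(parts) - 1, 2)}

    order_col = "ABCD"[int(row["label"].split()[-1]) - 1]   # "Selection 1" -> column "A"
    text = row["text_with_holes"]
    for letter in row[order_col]:                           # e.g. "CBA"
        text = text.replace("<|MaskedSetence|>", candidates[letter], 1)
    return text
```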
The MIIV-2SLS estimator would be greatly strengthened if we knew which specific MIIVs in those equations are invalid or weak. The current manuscript’s unique contribution to the SEM estimation literature is to provide a Bayesian Model Averaging variant of Model Implied Instrumental Variable Two Stage Least Squares (MIIV-2SLS) estimation that allows researchers to identify specific sources of model misspecification at the level of the instrument rather than the equation, while simultaneously accounting for weak instruments. This estimator, which we term MIIV Two Stage Bayesian Model Averaging (MIIV-2SBMA), modifies the procedure developed by Lenkoski et al. (2014), using Bayesian model averaging to combine estimates from all possible subsets of MIIVs. Using this approach, we propose Bayesian variants of Sargan’s $\chi^2$ test (Sargan, 1958) for detecting invalid instruments at the level of the instrument itself, rather than the equation. <|MaskedSetence|> one endogenous variable predicting the outcome within an equation) and multivariate settings (i.e. more than one endogenous variable predicting the outcome within an equation). Additionally, we demonstrate the use of inclusion probabilities to detect weak instruments. <|MaskedSetence|> <|MaskedSetence|>
**A**: Finally, we present an empirical example demonstrating the use of MIIV-2SBMA for estimating a two factor CFA and determining which error covariances need to be included. . **B**: We conduct a series of Monte Carlo experiments to evaluate the performance of MIIV-2SBMA and our misspecification tests, and demonstrate that our approach shows increased power to detect model misspecification and weak instruments without a corresponding increase in the bias or variance of the model estimates, and allows us to identify specific sources of model misspecification. **C**: We propose this test for both univariate (i.e.
CAB
CBA
CBA
CBA
Selection 3
A crucial decision in the design of world models is the inclusion of stochasticity. Although Atari is known to be a deterministic environment, it is stochastic given only a limited horizon of past observed frames (in our case 4 frames). <|MaskedSetence|> The game dispatches diverse sets of new opponents, which cannot be inferred from the visual observation alone (without access to the game’s internal state) and thus cannot be predicted by a deterministic model. Similar issues have been reported in Babaeizadeh et al. <|MaskedSetence|> <|MaskedSetence|>
**A**: The level of stochasticity is game dependent; however, it can be observed in many Atari games. An example of such behavior can be observed in the game Kung Fu Master – after eliminating the current set of opponents, the game screen always looks the same (it contains only player’s character and the background). **B**: As can be seen in Figure 11 in the Appendix, the stochastic model learns a reasonable behavior – samples potential opponents and renders them sharply. . **C**: (2017a), where the output of their baseline deterministic model was a blurred superposition of possible random object movements.
ACB
BAC
ACB
ACB
Selection 1
2 HISTORY OF THE BATTLE OF KURSK After suffering a terrible defeat at Stalingrad in the winter of 1943, the Germans desperately wanted to regain the initiative. <|MaskedSetence|> The Germans planned, in a classic pincer operation named Operation Citadel, to eliminate the salient and destroy the Soviet forces in it. <|MaskedSetence|> It must secure for us the initiative…. <|MaskedSetence|>
**A**: In the spring of 1943, the Eastern front was dominated by a salient, 200 km wide and 150 km deep, centred on the city of Kursk. **B**: The victory of Kursk must be a blazing torch to the world" [22, 24, 42]. **C**: On 2 July 1943, Hitler declared, "This attack is of decisive importance and it must succeed, and it must do so rapidly and convincingly.
ACB
ACB
ABC
ACB
Selection 1
The applied literature suggests manipulation in RD designs occurs frequently (e.g. Angrist et al., 2019; Davis et al., 2013; Dee et al., 2019). Yet, there is comparatively little clarity on how to proceed when these tests indicate manipulation. One method used in empirical work is the donut hole RD design (such as in Almond and Doyle, 2011; Bajari et al., 2017; Castleman and Goodman, 2018; Kırdar et al., 2018). <|MaskedSetence|> Then RD estimates may be obtained from a parametric model fit on data outside the window and extrapolated to the cutoff. As yet, donut hole designs do not have a solid theoretical foundation and are therefore subject to ad-hoc estimation specifications. More fundamentally, by deleting all the data around the cutoff, they weaken the RD design itself, which relies on exactly the data around the cutoff the most. One paper that does not resort to donut-hole style deletion is Diamond and Persson (2016). <|MaskedSetence|> They develop an estimator to determine the causal effect of the score manipulation on future educational attainment and earnings. <|MaskedSetence|> We pursue the “partial identification” approach, popularized by Manski and later by Tamer (see e.g., Manski, 1990; Manski and Tamer, 2002; Haile and Tamer, 2003). The core idea is that, in scenarios in which a treatment effect cannot be point-identified (even with an infinite sample size), it can sometimes still be bounded. These bounds might be very informative in practice—for example, allowing us to rule out negative or positive treatment effects. Davis et al. (2013) show one way such a bound may be derived. However, they only provide a one-sided bound.
**A**: The donut RD design deletes data within a window around the cutoff, with the goal being to remove all manipulated observations. **B**: They consider Swedish math test data in which there is evidence of teachers inflating students’ grades. **C**: While their focus is on a different causal effect than the one we consider, the paper develops several useful methods that we will incorporate here.
ABC
ACB
ABC
ABC
Selection 3
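The donut-hole procedure described in this excerpt (drop observations in a window around the cutoff, fit a parametric model outside it, and extrapolate back to the cutoff) can be sketched as follows; this is a minimal illustration with assumed polynomial fits, not the estimator of any of the cited papers.

```python
import numpy as np

def donut_rd(score, outcome, cutoff=0.0, donut=0.05, degree=2):
    """Donut-hole RD sketch: delete data near the cutoff, fit a polynomial on
    each side of it, and extrapolate both fits back to the cutoff."""
    keep = np.abs(score - cutoff) > donut          # remove (potentially manipulated) observations
    s, y = score[keep] - cutoff, outcome[keep]     # center the running variable

    left = np.polyfit(s[s < 0], y[s < 0], degree)  # control side
    right = np.polyfit(s[s > 0], y[s > 0], degree) # treated side

    # RD estimate: difference of the two extrapolations at the cutoff (s = 0)
    return np.polyval(right, 0.0) - np.polyval(left, 0.0)
```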
II-A Dropout Deep neural networks are the state-of-the-art learning models used in artificial intelligence. The large number of parameters in neural networks makes them very good at modelling and approximating any arbitrary function. <|MaskedSetence|> Dropout was first introduced in 2012 as a regularization technique to avoid over-fitting [12], and was applied in the winning submission for the Large Scale Visual Recognition Challenge that revolutionized deep learning research [13]. Over the course of time, a wide range of Dropout techniques inspired by the original method have been proposed. <|MaskedSetence|> <|MaskedSetence|>
**A**: The term Dropout methods was used to refer to them in general[14]. **B**: They include variational Dropout[15], Max-pooling Dropout[16], fast Dropout[17], Cutout[18], Monte Carlo Dropout[19], Concrete Dropout[20] and many others.. **C**: However the larger number of parameters also make them particularly prone to over-fitting, requiring regularization methods to combat this problem.
CAB
CAB
CAB
CBA
Selection 1
Following Fernández-Delgado et al. <|MaskedSetence|> Afterward, the number of training examples is limited to $n_{\text{limit}}$ examples per class. We evaluate the training with 5, 10, 20, and 50 examples per class. In contrast to Fernández-Delgado et al. <|MaskedSetence|> This ensures that the training and validation data are not mixed with the test data. <|MaskedSetence|> The methods are additionally repeated four times with different seeds on each split.
**A**: (2014), each dataset is split into a training and a test set using a 50/50 split while maintaining the label distribution. **B**: (2014), we extract validation sets from the training set (e.g., for hyperparameter tuning). **C**: For some datasets which provide a separate test set, the test accuracy is evaluated on the respective set. Missing values are set to the mean value of the feature. All experiments are repeated ten times with randomly sampled splits.
ABC
ABC
ABC
ABC
Selection 2
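The evaluation protocol in this excerpt (stratified 50/50 split, at most $n_{\text{limit}}$ training examples per class, mean imputation of missing values, repeated runs with different seeds) can be sketched as below; details such as computing the imputation means on the training half only are our assumptions, not stated in the excerpt.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def make_split(X, y, n_limit, seed):
    """Sketch of the protocol described above: stratified 50/50 split, then at
    most n_limit training examples per class, with missing values mean-imputed."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.5, stratify=y, random_state=seed)

    # Keep at most n_limit examples of each class in the training set.
    rng = np.random.default_rng(seed)
    keep = np.hstack([
        rng.choice(np.where(y_train == c)[0],
                   size=min(n_limit, np.sum(y_train == c)), replace=False)
        for c in np.unique(y_train)])
    X_train, y_train = X_train[keep], y_train[keep]

    # Mean imputation (here computed on the training data only -- an assumption).
    col_means = np.nanmean(X_train, axis=0)
    X_train = np.where(np.isnan(X_train), col_means, X_train)
    X_test = np.where(np.isnan(X_test), col_means, X_test)
    return X_train, X_test, y_train, y_test
```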
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019; Ayoub et al., 2020; Zhou et al., 2020), as well as nonlinear settings involving general function approximators (Wen and Van Roy, 2017; Jiang et al., 2017; Du et al., 2019b; Dong et al., 2019). In particular, our setting is the same as the linear setting studied by Ayoub et al. (2020); Zhou et al. (2020), which generalizes the one proposed by Yang and Wang (2019a). We remark that our setting differs from the linear setting studied by Yang and Wang (2019b); Jin et al. (2019). <|MaskedSetence|> Also, our setting is related to the low-Bellman-rank setting studied by Jiang et al. (2017); Dong et al. (2019). <|MaskedSetence|> In particular, compared with the work of Yang and Wang (2019b, a); Jin et al. (2019); Ayoub et al. (2020); Zhou et al. (2020), which focuses on value-based reinforcement learning, OPPO attains the same $\sqrt{T}$-regret even in the presence of adversarially chosen reward functions. Compared with optimism-led iterative value-function elimination (OLIVE) (Jiang et al., 2017; Dong et al., 2019), which handles the more general low-Bellman-rank setting but is only sample-efficient, OPPO simultaneously attains computational efficiency and sample efficiency in the linear setting. <|MaskedSetence|>
**A**: In comparison, we focus on policy-based reinforcement learning, which is significantly less studied in theory. **B**: Despite the differences between policy-based and value-based reinforcement learning, our work shows that the general principle of “optimism in the face of uncertainty” (Auer et al., 2002; Bubeck and Cesa-Bianchi, 2012) can be carried over from existing algorithms based on value iteration, e.g., optimistic LSVI, into policy optimization algorithms, e.g., NPG, TRPO, and PPO, to make them sample-efficient, which further leads to a new general principle of “conservative optimism in the face of uncertainty and adversary” that additionally allows adversarially chosen reward functions. . **C**: It can be shown that the two settings are incomparable in the sense that one does not imply the other (Zhou et al., 2020).
CAB
CAB
CAB
ABC
Selection 2
Other recent approaches include DimReader [45], where the authors create so-called generalized axes for non-linear DR methods, but besides explaining a single dimension at a time, it is currently unclear how exactly it can be used in an interactive exploration scenario; and Praxis [46], with two methods—backward and forward projection—but it requires fast out-of-sample extensions which are not available for the original t-SNE. Most similarly to one of our proposed interactions (the Dimension Correlation, Subsection 4.4), in AxiSketcher [47] (and its prior version InterAxis [48]) the user can draw a polyline in the scatterplot to identify a shape, which results in new non-linear high-dimensional axes to match the user’s intentions. Since the resulting dimension contributions to the axes are not uniform, it is not possible to represent them using simple means such as bar charts. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Since t-viSNE adopts an approach of combining many different coordinated views, it is important for the Dimension Correlation to maintain—as much as possible—the users’ mental map of the projection, and to give simple and straightforward interpretations of the patterns they see..
**A**: In summary, although there is a superficial similarity between the two techniques regarding how the user interacts with the scatterplot, their goals and their inner workings are quite different. **B**: In our Dimension Correlation tool, the user also draws a polyline to identify a shape, but our intention is exactly the opposite of AxiSketcher: we want to capture dimension contributions in an easy and accessible way. **C**: For this, we project low-dimensional points into the line (not high-dimensional ones, as in AxiSketcher), and we compute the dimension contributions in a different way, using Spearman’s rank correlation.
BCA
BCA
BCA
ACB
Selection 1
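A minimal sketch of the Dimension Correlation idea described above: project the embedded points onto a user-drawn line and rank-correlate the position along that line with each original dimension. For brevity the sketch uses a straight line rather than a general polyline, and the function name is ours.

```python
import numpy as np
from scipy.stats import spearmanr

def dimension_correlation(embedding, data, p0, p1):
    """Project 2-D embedded points onto the line from p0 to p1, then compute
    Spearman's rank correlation between the position along that line and every
    original (high-dimensional) feature."""
    d = np.asarray(p1, dtype=float) - np.asarray(p0, dtype=float)
    d /= np.linalg.norm(d)
    t = (embedding - p0) @ d            # position of each point along the line

    rhos = []
    for j in range(data.shape[1]):
        rho, _ = spearmanr(t, data[:, j])
        rhos.append(rho)
    return np.array(rhos)               # one contribution value per dimension
```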
However, the existing methods are limited to graph-type data, while no graph is provided for general data clustering. Since a large proportion of clustering methods are based on the graph, it is reasonable to consider how to employ GCN to promote the performance of graph-based clustering methods. In this paper, we propose an Adaptive Graph Auto-Encoder (AdaGAE) to extend graph auto-encoders into common scenarios. <|MaskedSetence|> The connectivity distributions given by the generative perspective also inspire us to devise a novel architecture for decoders. (2) As we utilize GAE to exploit the high-level information to construct a desirable graph, we find that the model suffers from a severe collapse due to the simple update of the graph. <|MaskedSetence|> We further propose a simple but effective strategy to avoid it. (3) AdaGAE is a scalable clustering model that works stably on datasets of different scales and types, while other deep clustering models usually fail when the training set is not large enough. <|MaskedSetence|>
**A**: We analyze the degeneration theoretically and experimentally to understand the phenomenon. **B**: The main contributions are listed as follows: (1) Via extending the generative graph models into general type data, GAE is naturally employed as the basic representation learning model and weighted graphs can be further applied to GAE as well. **C**: Besides, it is insensitive to different initialization of parameters and needs no pretraining..
BAC
BAC
CBA
BAC
Selection 1
Traditionally, the literature has concentrated on estimation and inference in low-dimensional settings where $p$ is fixed. Recent years, however, have witnessed considerable progress in the understanding and analysis of high-dimensional additive models that allow the number of components to grow with the sample size. For example, the theoretical literature has provided insights into estimation rates in high-dimensional additive models, as in the work of Sardy and Tseng (2004), Lin and Zhang (2006) and many others (Ravikumar et al., 2009; Meier et al., 2009; Huang et al., 2010; Koltchinskii and Yuan, 2010; Kato, 2012; Petersen et al., 2016; Lou et al., 2016). The theoretical results in high-dimensional settings rely on a sparsity assumption, which requires that only a small number $s$ of coefficients or components are relevant, meaning they are non-zero. <|MaskedSetence|> However, it may still increase as the sample size grows, thus introducing additional structure. While high-dimensional additive models have been studied extensively, there has been limited focus on valid inference within them, particularly in terms of constructing valid hypothesis tests or confidence regions. <|MaskedSetence|> Only recently have new results been derived regarding valid inference in high-dimensional additive models. <|MaskedSetence|>
**A**: Approaches to construct confidence bands have been proposed by Härdle (1989), Sun and Loader (1994), Fan and Zhang (2000), Claeskens and Keilegom (2003) and Zhang and Peng (2010), but only in the widely studied fixed-dimensional setting. **B**: While the number of parameters or covariates exceed the sample size, or the number of parameters can increase along with the sample size, sparsity requires that the number of relevant parameters remains smaller than the sample size. **C**: In the remainder of the introduction section, we will review these recent advancements and highlight our contribution to this emerging literature..
BAC
BAC
BAC
BCA
Selection 2
<|MaskedSetence|> Finally, E1 suggested that the circular barcharts could only show the positive or negative difference compared to the first stored stack. To avoid an asymmetric design and retain a lower complexity level for StackGenVis, we omitted his proposal for the time being, but we consider implementing both methods in the future. Limitations. <|MaskedSetence|> The inherent computational burden of stacking multiple models still remains, as such complex ensemble learning methods need sufficient resources. <|MaskedSetence|>
**A**: Efficiency and scalability were the major concerns raised by all the experts. **B**: Also, the use of VA in between levels makes this even worse. We believe that, with the rapid development of high-performance hardware and support for parallelism, these challenges are due to diminish in the near future.. **C**: E3 also mentioned that supporting feature generation in the feature selection phase might be helpful.
BAC
CAB
CAB
CAB
Selection 4
Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. <|MaskedSetence|> Also, when the value function approximator is linear, Melo et al. (2008); Zou et al. <|MaskedSetence|> (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, TD possibly diverges (Baird, 1995; Boyan and Moore, 1995; Tsitsiklis and Van Roy, 1997). Bhatnagar et al. (2009) propose nonlinear gradient TD, which converges but only to a locally optimal solution. See Geist and Pietquin (2013); Bertsekas (2019) for a detailed survey. When the value function approximator is an overparameterized multi-layer neural network, Cai et al. (2019) prove that TD converges to the globally optimal solution in the NTK regime. See also the independent work of Brandfonbrener and Bruna (2019a, b); Agazzi and Lu (2019); Sirignano and Spiliopoulos (2019), where the state space is required to be finite. In contrast to the previous analysis in the NTK regime, our analysis allows TD to attain a data-dependent feature representation that is globally optimal. Meanwhile, our analysis is related to the recent breakthrough in the mean-field analysis of stochastic gradient descent (SGD) for the supervised learning of an overparameterized two-layer neural network (Chizat and Bach, 2018b; Mei et al., 2018, 2019; Javanmard et al., 2019; Wei et al., 2019; Fang et al., 2019a, b; Chen et al., 2020). See also the previous analysis in the NTK regime (Daniely, 2017; Chizat and Bach, 2018a; Jacot et al., 2018; Li and Liang, 2018; Allen-Zhu et al., 2018a, b; Du et al., 2018a, b; Zou et al., 2018; Arora et al., 2019a, b; Lee et al., 2019; Cao and Gu, 2019; Chen et al., 2019a; Zou and Gu, 2019; Ji and Telgarsky, 2019; Bai and Lee, 2019). <|MaskedSetence|> In contrast, TD follows the stochastic semigradient of the MSPBE (Sutton and Barto, 2018), which is biased. As a result, there does not exist an energy functional for casting TD as its Wasserstein gradient flow. Instead, our analysis combines a generalized notion of one-point monotonicity (Harker and Pang, 1990) and the first variation formula in the Wasserstein space (Ambrosio et al., 2008), which is of independent interest..
**A**: (2019); Chen et al. **B**: (2014) for a detailed survey. **C**: Specifically, the previous mean-field analysis casts SGD as the Wasserstein gradient flow of an energy functional, which corresponds to the objective function in supervised learning.
BAC
BAC
BAC
BAC
Selection 4
The structure of the work is the following: in Sec. <|MaskedSetence|> 3 we then proceed to present and develop the methodology to assess the uncertainty associated with these FCSIs. Finally, in Sec. 4 we tackle the motivating problem: moving from [17], we extend their results by providing, using the previously developed theory, an analysis of the time variability of sensitivities in time, as well as a quantification of the statistical significance and an analysis of its sparsity. <|MaskedSetence|> <|MaskedSetence|>
**A**: 2 we provide an extension to the theory and we define a new set of Finite Change Sensitivity Indices (FCSIs) for functional-valued responses, while in Sec. **B**: Sec. **C**: 5 concludes and devises additional research directions. In the Supplementary Material to this paper the interested reader can find an extensive simulation study that puts the proposed indices, estimation and inference technique to the test..
ABC
BAC
ABC
ABC
Selection 4
In this case, the sparsity assumption Lin and Zhang (2006); Meier et al. <|MaskedSetence|> <|MaskedSetence|> (2010); Raskutti et al. <|MaskedSetence|> (2011); Chen et al. (2018) may enable consistent estimation of the regression function. Nevertheless, general sparse estimators, when applied to a vectorized tensor covariate, ignore the potential tensor structure and may produce a large bias, especially when the sample size $n$ is much smaller than $s$.
**A**: (2009); Ravikumar et al. **B**: (2009); Huang et al. **C**: (2012); Fan et al.
CBA
ABC
ABC
ABC
Selection 4
Nonstationary bandits Bandit problems can be viewed as a special case of MDP problems with unit planning horizon. It is the simplest model that captures the exploration-exploitation tradeoff, a unique feature of sequential decision-making problems. There are several ways to define nonstationarity in the bandit literature. The first one is piecewise-stationary (Garivier & Moulines, 2011), which assumes the expected rewards of arms change in a piecewise manner, i.e., stay fixed for a time period and abruptly change at unknown time steps. The second one is to quantify the total variations of expected rewards of arms (Besbes et al., 2014). <|MaskedSetence|> <|MaskedSetence|> Other nonstationary bandit models include the nonstationary rested bandit, where the reward of each arm changes only when that arm is pulled (Cortes et al., 2017), and online learning with expert advice (Mohri & Yang, 2017a; b), where the qualities of experts are time-varying. However, reinforcement learning is much more intricate than bandits. <|MaskedSetence|>
**A**: Note that naïvely adapting existing nonstationary bandit algorithms to nonstationary RL leads to regret bounds with exponential dependence on the planning horizon $H$. **B**: The general strategy to adapt to nonstationarity for bandit problems is the forgetting principle: run the algorithm designed for stationary bandits either on a sliding window or in small epochs. **C**: This seemingly simple strategy is successful in developing near-optimal algorithms for many variants of nonstationary bandits, such as cascading bandits (Wang et al., 2019), combinatorial semi-bandits (Zhou et al., 2020) and linear contextual bandits (Cheung et al., 2019; Zhao et al., 2020; Russac et al., 2019).
BCA
BCA
BCA
CBA
Selection 3
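The forgetting principle mentioned in this excerpt (run a stationary algorithm on a sliding window) can be illustrated with a sliding-window UCB sketch; the window length and exploration constant are arbitrary choices, not values from the cited papers.

```python
import numpy as np
from collections import deque

class SlidingWindowUCB:
    """Sketch of the 'forgetting principle': a UCB index computed only from the
    last `window` observations, so that old (possibly stale) rewards are dropped."""

    def __init__(self, n_arms, window=200, c=2.0):
        self.history = deque(maxlen=window)   # (arm, reward) pairs inside the window
        self.n_arms, self.c, self.t = n_arms, c, 0

    def select(self):
        self.t += 1
        counts = np.zeros(self.n_arms)
        sums = np.zeros(self.n_arms)
        for arm, r in self.history:
            counts[arm] += 1
            sums[arm] += r
        if np.any(counts == 0):               # play unseen arms first
            return int(np.argmin(counts))
        ucb = sums / counts + np.sqrt(self.c * np.log(self.t) / counts)
        return int(np.argmax(ucb))

    def update(self, arm, reward):
        self.history.append((arm, reward))    # the deque discards the oldest entry
```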
<|MaskedSetence|> They aren’t really separating into nuisance and independent only.. they are also throwing away nuisance. While the aforementioned models made significant progress on the problem, they suffer from an inherent trade-off between learning DR and reconstruction quality. If the latent space is heavily regularized, not allowing enough capacity for the nuisance variables, reconstruction quality is diminished. <|MaskedSetence|> <|MaskedSetence|>
**A**: On the other hand, if the unconstrained nuisance variables have enough capacity, the model can use them to achieve a high quality reconstruction while ignoring the latent variables related to the disentangled factors. **B**: This phenomena is sometimes called the "shortcut problem" and has been discussed in previous works [DBLP:conf/iclr/SzaboHPZF18]. . **C**: I think I would make what these methods doing clearer.
CAB
CAB
ACB
CAB
Selection 2
Another relevant factor is interpretability of the set of selected views. Although sparser models are typically considered more interpretable, a researcher may be interested in interpreting not only the model and its coefficients, but also the set of selected views. For example, one may wish to make decisions on which views to measure in the future based on the set of views selected using the current data. For this purpose, one would ideally like to use an algorithm that provides sparsity, but also algorithmic stability in the sense that given two very similar data sets, the set of selected views should vary little. <|MaskedSetence|> <|MaskedSetence|> But if there is also a desire to interpret the relationships between the views and the outcome, it may be more desirable to identify all of these combinations, even if this includes some redundant information. If one wants to go even further and perform formal statistical inference on the set of selected views, one may additionally be interested in theoretically controlling, say, the family-wise error rate (FWER) or false discovery rate (FDR) of the set of selected views. <|MaskedSetence|>
**A**: However, strict control of such an error rate could end up harming the predictive performance of the model, thus leading to a trade-off between the interpretability of the set of selected views and classification accuracy. **B**: If the primary concern is sparsity, a researcher may be satisfied with just one of these combinations being selected, preferably the smallest set which contains the relevant information. **C**: However, sparse algorithms are generally not stable, and vice versa (Xu et al., 2012). An example of the trade-off between sparsity and interpretability of the set of selected views occurs when different views, or combinations of views, contain the same information.
CBA
ACB
CBA
CBA
Selection 4
CB-MNL enforces optimism via an optimistic parameter search (e.g. in Abbasi-Yadkori et al. [2011]), which is in contrast to the use of an exploration bonus as seen in Faury et al. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> In non-linear reward models, both approaches may not follow similar trajectory but may have overlapping analysis styles (see Filippi et al. [2010] for a short discussion). .
**A**: Optimistic parameter search provides a cleaner description of the learning strategy. **B**: [2010]. **C**: [2020], Filippi et al.
CBA
CBA
CAB
CBA
Selection 2
With crossover, random pairs of underperforming models (originating from the same algorithm) are picked and their hyperparameters are fused with the goal of creating a better model. <|MaskedSetence|> It facilitates scanning for external regions of the solution space to discover additional local optima. These unexplored areas of the hyperparameter space may offer a fresh start to the search for hyperparameters. <|MaskedSetence|> Hence, the problem of getting stuck in local optima of the hyperparameter space is addressed. <|MaskedSetence|> However, their output is usually a single model, which is frequently underpowered when compared to an ensemble of ML models [SR18]..
**A**: However, one question that emerges is: (RQ1) how to choose which models (and algorithms) should crossover and/or mutate, and to what extent, considering we have limited computational resources? Various automatic ML methods [FH19] and practical frameworks [Com, NNI] have been proposed to deal with the challenge of hyperparameter search. **B**: The synergy of combining both techniques can be beneficial in finding distinctive local optima that generalize to a better result in the end. **C**: As a result, internal regions of the solution space are further explored, and better local optima are investigated. On the other hand, mutation randomly generates new values for the hyperparameters to substitute old values.
CBA
CBA
CAB
CBA
Selection 4
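A minimal sketch of the crossover and mutation steps described above, operating on hyperparameter dictionaries; the hyperparameter names and search space are illustrative only.

```python
import random

def crossover(hp_a, hp_b):
    """Fuse the hyperparameters of two underperforming models of the same
    algorithm: each setting is inherited from one of the two parents."""
    return {k: random.choice([hp_a[k], hp_b[k]]) for k in hp_a}

def mutate(hp, search_space, rate=0.2):
    """Randomly resample a fraction of the hyperparameters from the search
    space, so the search can jump to unexplored regions."""
    return {k: (random.choice(search_space[k]) if random.random() < rate else v)
            for k, v in hp.items()}

# Example: two random-forest-style configurations (names are illustrative only).
space = {"n_estimators": [50, 100, 200, 400], "max_depth": [3, 5, 10, None]}
child = mutate(crossover({"n_estimators": 100, "max_depth": 5},
                         {"n_estimators": 400, "max_depth": 3}), space)
```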
In this paper, we extend the symmetric Laplacian inverse matrix (SLIM) method (SLIM, ) to mixed membership networks and call the proposed method mixed-SLIM. <|MaskedSetence|> SLIM combined the symmetric Laplacian inverse matrix with the spectral method based on DCSBM for community detection, and the SLIM method outperforms state-of-the-art methods on many real and simulated datasets. <|MaskedSetence|> <|MaskedSetence|>
**A**: Numerical results of simulations and substantial empirical datasets in Section 5 show that our proposed Mixed-SLIM indeed enjoys satisfactory performances when compared to the benchmark methods for both community detection problem and mixed membership community detection problem. 2 Degree-corrected mixed membership model. **B**: As mentioned in SLIM , the idea of using the symmetric Laplacian inverse matrix to measure the closeness of nodes comes from the first hitting time in a random walk. **C**: Therefore, it is worth modifying this method to mixed membership networks.
BCA
BCA
BCA
ACB
Selection 3
In addition to gradient-based MCMC, variational transport also shares similarity with Stein variational gradient descent (SVGD) (Liu and Wang, 2016), which is a more recent particle-based algorithm for Bayesian inference. Variants of SVGD have been subsequently proposed. See, e.g., Detommaso et al. (2018); Han and Liu (2018); Chen et al. <|MaskedSetence|> <|MaskedSetence|> (2019); Wang et al. (2019); Zhang et al. <|MaskedSetence|> (2020) and the references therein. Departing from MCMC where independent stochastic particles are used, it leverages interacting deterministic particles to approximate the probability measure of interest. In the mean-field limit where the number of particles go to infinity, it can be viewed as the gradient flow of the KL-divergence with respect to a modified Wasserstein metric (Liu, 2017)..
**A**: (2019); Gong et al. **B**: (2020); Ye et al. **C**: (2018); Liu et al.
CBA
CAB
CAB
CAB
Selection 2
There are several different techniques for computing feature importance that produce diverse outcomes per feature. <|MaskedSetence|> Another key point is that users should have the ability to include and exclude features during the entire exploration phase. G3: Application of alternative feature transformations according to feedback received from statistical measures. In continuation of the preceding goal, the tool should provide sufficient visual guidance to users to choose between diverse feature transformations (T3). <|MaskedSetence|> <|MaskedSetence|> When checking how to modify features, users should be able to estimate the impact of such transformations..
**A**: The tool should facilitate the visual comparison of alternative feature selection techniques for each feature (T2). **B**: Statistical measures such as target correlation and mutual information shared between features, along with per-class correlation, are necessary to evaluate the features’ influences in the result. **C**: Also, the tool should use the variance inflation factor and in-between features’ correlation for identifying collinearity issues.
CBA
ABC
ABC
ABC
Selection 4
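A small sketch of the per-feature measures mentioned above (target correlation, mutual information, and a variance inflation factor for flagging collinearity); the exact measures and thresholds used by the tool are not specified in the excerpt, so this is only one plausible realization.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def feature_diagnostics(X, y):
    """Per-feature diagnostics: correlation with the target, mutual information
    with the target, and a simple variance inflation factor (VIF)."""
    corr = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    mi = mutual_info_classif(X, y)

    # VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing feature j on the others.
    vif = []
    for j in range(X.shape[1]):
        others = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ beta
        r2 = 1 - resid.var() / X[:, j].var()
        vif.append(1.0 / max(1 - r2, 1e-12))
    return corr, mi, np.array(vif)
```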
We have pointed to issues with the existing bias mitigation approaches, which alter the loss or use resampling. <|MaskedSetence|> <|MaskedSetence|> Causality is another relevant line of research, where the goal is to uncover the underlying causal mechanisms [49, 45, 9, 2]. Discovery and usage of causal concepts is a promising direction for building robust systems. <|MaskedSetence|>
**A**: These areas have not been explicitly studied for their ability to overcome dataset bias. . **B**: An orthogonal avenue for attacking bias mitigation is to use alternative architectures. **C**: Neuro-symbolic and graph-based systems could be created that focus on learning and grounding predictions on structured concepts, which have shown promising generalization capabilities [68, 44, 34, 24, 60].
BCA
BCA
BAC
BCA
Selection 4
In the one-step-ahead prediction paradigm, uncertainties in emulation will propagate over time. It should be noted that the numerical simulation of a set of ODEs (e.g., the numerical simulation of the Lorenz system) also propagates errors which depend upon the numerical scheme employed, as well as properties of the underlying vector field. Even higher-order numerical methods will deviate from the underlying function with time, especially in a chaotic regime such as that we study in the Lorenz system. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> As an example, we have applied our method to a six-dimensional model whose equations are given below.
**A**: After this point the emulator is not able to predict the simulator accurately; however, the uncertainty of the prediction is captured in the proposed method. The dimensionality of the dynamical systems we considered in this work is two or three. **B**: What we have shown for the systems studied is that the prediction uncertainty increases from step to step up to a predictability horizon, defined as the time at which a change point occurs in the SD of the prediction (see Figure 4). **C**: The applicability of our proposed methodology to higher dimensional problems needs more investigation, though.
BAC
ACB
BAC
BAC
Selection 3
<|MaskedSetence|> <|MaskedSetence|> (2014)], [Pfister et al. (2018)], [Chakraborty and Zhang (2019)]), graphical modeling ([Lauritzen (1996)], [Gan, Narisetty and Liang (2019)]), linguistics ([Nguyen and Eisenstein (2017)]), clustering (Székely and Rizzo, 2005), dimension reduction (Fukumizu, Bach and Jordan, 2004; Sheng and Yin, 2016). <|MaskedSetence|> However, its lack of robustness to outliers and departures from normality eventually led researchers to consider alternative nonparametric procedures. To overcome such a problem, a natural approach is to consider the functional difference between the empirical joint distribution and the product of the empirical marginal distributions, see Hoeffding (1948), Blum, Kiefer and Rosenblatt (1961) and Bouzebda (2011). This approach can also use characteristic empirical functions; see Csörgő (1985). Inspired by the work of Blum, Kiefer and Rosenblatt (1961) and Dugué (1975), Deheuvels (1981) studied a test of multivariate independence based on the Möbius decomposition, generalized in Bouzebda (2014)..
**A**: [Bach and Jordan (2003)], [Chen and Bickel (2006)], [Samworth and Yuan (2012)] and [Matteson and Tsay (2017)]. **B**: Testing independence also has many applications, including causal inference ([Pearl (2009)], [Peters et al. **C**: The traditional approach for testing independence is based on Pearson’s correlation coefficient; for instance, refer to Binet and Vaschide (1897), Pearson (1920), Spearman (1904), Kendall (1938).
ABC
ABC
BAC
ABC
Selection 4
Self-concordant functions have received strong interest in recent years due to the attractive properties that they allow to prove for many statistical estimation settings [Marteau-Ferey et al., 2019, Ostrovskii & Bach, 2021]. The original definition of self-concordance has been expanded and generalized since its inception, as many objective functions of interest have self-concordant-like properties without satisfying the strict definition of self-concordance. <|MaskedSetence|> This was also the case in Ostrovskii & Bach [2021] and Tran-Dinh et al. <|MaskedSetence|> <|MaskedSetence|>
**A**: [2015], in which more general properties of these pseudo-self-concordant functions were established. **B**: This was fully formalized in Sun & Tran-Dinh [2019], in which the concept of generalized self-concordant functions was introduced, along with key bounds, properties, and variants of Newton methods for the unconstrained setting which make use of this property. . **C**: For example, the logistic loss function used in logistic regression is not strictly self-concordant, but it fits into a class of pseudo-self-concordant functions, which allows one to obtain similar properties and bounds as those obtained for self-concordant functions [Bach, 2010].
CAB
ACB
CAB
CAB
Selection 4
We measure the harm that past adaptivity causes to a future query by considering the query as evaluated on a posterior data distribution and comparing this with its value on a prior. The prior is the true data distribution, and the posterior is induced by observing the responses to past queries and updating the prior. <|MaskedSetence|> If, furthermore, the response given by the mechanism is close to the query result on the posterior, then by a triangle inequality argument, that mechanism is distribution accurate. <|MaskedSetence|> <|MaskedSetence|>
**A**: This type of triangle inequality first appeared as an analysis technique in Jung et al. **B**: If the new query behaves similarly on the prior distribution as it does on this posterior (a guarantee we call Bayes stability; Definition 3.3), adaptivity has not led us too far astray. This can be viewed as a generalization of the Hypothesis Stability notion of Bousquet and Elisseeff (2002)—which was proven to guarantee on-average generalization (Shalev-Shwartz et al., 2010)—where the hypothesis is a post-processing of the responses to past queries, and the future query is the loss function estimation. **C**: (2020).
BAC
BAC
ABC
BAC
Selection 1
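The triangle-inequality argument sketched in this excerpt can be written out as follows; the notation is ours, not the paper's.

```latex
% q  -- the new query,  a -- the mechanism's response,
% P  -- the prior (true) data distribution,  P' -- the posterior after past answers.
\[
  |q(\mathcal{P}) - a|
  \;\le\; \underbrace{|q(\mathcal{P}) - q(\mathcal{P}')|}_{\text{Bayes stability}}
  \;+\; \underbrace{|q(\mathcal{P}') - a|}_{\text{accuracy on the posterior}}
\]
% If both terms on the right are small, the response is distribution accurate.
```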
4.2 Data Most of the data sets were obtained from the UCI repository (Dua and Graff, 2019). Specific references are given in Table 2. This table also shows the number of data points and (used) features and the skewness and (Pearson) kurtosis of the response variable. All data sets were standardized (both features and target variables) before training. <|MaskedSetence|> This strongly improved the $R^2$-coefficient of the various models, but did not improve the prediction intervals, and therefore, these results are not included. The crime data set comes in two versions: the original data set consists of integer-valued data (count data), while the version used here was preprocessed using an unsupervised standardization algorithm (Redmond and Baveja, 2002). <|MaskedSetence|> The traffic data set, aside from being very small, is also extremely sparse (on average 14 features are zero). It should be noted that all of the data sets used in this study were considered as ordinary (static) data sets. <|MaskedSetence|> The main reason to exclude autoregressive features is that most, if not all, methods considered in this study assume the data to be i.i.d. (or exchangeable), a property that is generically not valid for autoregressive data.
**A**: Even though some of them could be considered in a time series context, no autoregressive features were additionally extracted. **B**: The data sets blog and fb1 were also analysed after first taking a log transform of the response variable because these data sets are extremely skewed, which is reflected in the high skewness and kurtosis, as shown in the fourth column of Table 2, and are believed to follow a power law distribution. **C**: Although standardized, the data set retains (some of) its count data properties.
BCA
CAB
BCA
BCA
Selection 1
<|MaskedSetence|> <|MaskedSetence|> Observing a panel of linking decisions by a subset of nodes, set in small networks, allows us to directly (and tractably) estimate utility parameters from the evolution of gameplay. <|MaskedSetence|> Specifically, we no longer need to make assumptions regarding the meeting process for individual revisions. At time $t$, some (known) subset of nodes is selected to update their linking strategy. We place no restrictions on who or how many are chosen for revision. The nodes that are selected have the opportunity to change their link selection and contribution. They select the combination of link set and contribution that maximizes their expected utility, including the preference shock, which is subject to the following assumption:
**A**: We can also relax the usual assumption that players move individually, which is common in the cross-sectional and large network estimators (Mele, 2017; Badev, 2021). **B**: This assumption is typically used to ensure the convergence of logit-response dynamics to a steady state distribution (Foster and Young, 1990; Alós-Ferrer and Netzer, 2010) from which parameters are then estimated. **C**: While the assumption of players using logit best-response is somewhat strict, it is a substantial relaxation of the assumptions used in cross-sectional estimators.
ABC
ABC
ABC
CAB
Selection 2
<|MaskedSetence|> However, it is interesting to see that, while it is never very easy to get to hospitals in busy times (like at 18:20), the 2 Central Station is still in a good spot (as it is not too far, and not too crowded); conversely, the 1 Garibaldi Station is in a less favorable location, as it becomes practically inaccessible in crowded times. Even worse is the 3 Duomo area, which, despite being quite close to a hospital, experiences such high levels of crowdedness that reaching the hospital is almost impossible in crowded times (in terms of the requirement we have defined), while it is relatively easier in medium-crowded times. <|MaskedSetence|> <|MaskedSetence|>
**A**: This can be surprising at first, but a look at the broader map of the city clarifies that they are closer to hospitals that are not in our grid and, therefore, cannot be fully analyzed by our model. **B**: Lastly, the always failing areas at the corners of the grid, and at the bottom-center, tell us something different: for the spatial configuration we are considering, they always violate the requirement to reach a hospital in the city center in $d_{\text{P.}4}$. **C**: By looking at the picture, an immediate observation is that areas at the corners are simply too far from any of the city center hospitals, meaning that going towards the center from there would be impractical.
ACB
CBA
CBA
CBA
Selection 2
<|MaskedSetence|> Determining the number of factors in a data-driven way has been an important research topic in the factor model literature. <|MaskedSetence|> Lam and Yao (2012) and Ahn and Horenstein (2013) developed an alternative approach to study the ratio of each pair of adjacent eigenvalues. Recently, Han et al. (2022) established a class of rank determination approaches for the factor models with Tucker low-rank structure, based on both the information criterion and the eigen-ratio criterion. <|MaskedSetence|>
**A**: Bai and Ng (2002, 2007), Hallin and Liška (2007) proposed consistent estimators in the vector factor models based on the information criteria approach. **B**: Those procedures can be extended to TFM-cp. **C**: Here the estimators are constructed with given rank $r$, though in the theoretical analysis it is allowed to diverge.
CAB
CAB
BCA
CAB
Selection 2
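A minimal sketch of the eigenvalue-ratio idea mentioned above (in the spirit of Lam and Yao, 2012, and Ahn and Horenstein, 2013): pick the number of factors where the ratio of adjacent eigenvalues of the sample covariance is largest. The implementation details are ours.

```python
import numpy as np

def eigen_ratio_rank(X, r_max=10):
    """Estimate the number of factors by the eigenvalue-ratio criterion.

    X has shape (n_samples, n_features); r_max + 1 must not exceed n_features."""
    X = X - X.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(X, rowvar=False))[::-1]  # descending order
    ratios = eigvals[:r_max] / eigvals[1:r_max + 1]
    return int(np.argmax(ratios)) + 1  # ratios[0] compares the 1st and 2nd eigenvalues
```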
<|MaskedSetence|> However, iForest can be used only for binary classification, while VisRuler can be used with multi-class data sets (as in the use case of Section System Overview and Use Case). Also, the feature flow, a node-link diagram, suffers from scalability issues (a challenge only partially overcome with aggregation). Our tool employs dimensionality reduction for clustering all decisions extracted by multiple models, thus enabling users to gain insights into the patterns inside a large quantities of rules. <|MaskedSetence|> <|MaskedSetence|> While the scalability is good, it does not cover the task of finding similarities between decisions from diverse models and algorithms. In conclusion, none of the above works have experimented with the fusion of bagged and boosted decision trees, and in particular, with visualizing both tree types in a joint decisions space to observe their dissimilarity, which can result in unique and undiscovered decisions. RfX Eirich2022RfX supports the comparison of several decision trees originating from a RF model with a dissimilarity projection and icicle plots, allowing electrical engineers to browse a single decision tree by using a node-link diagram. In contrast, VisRuler does not concentrate on a specific domain and gives attention to unique decision paths instead of trees with more scalable visual representations. Colorful trees Nsch2019Colorful follows a botanical metaphor and demonstrates many core parameters essential to comprehend how a RF model operates. This method allows customized mappings of RF components to visual attributes, thus enabling users to determine the performance, analyze the behavior of individual trees, and understand how to tune the hyperparameters to improve performance or efficiency. However, this work is targeted toward hyperparameter tuning and does not focus on concurrently extracting and analyzing the decisions from each RF and AB model. Additionally, it is impossible to accomplish case-based reasoning with the proposed visual representation. Finally, Neto and Paulovich Neto2021Multivariate describe the extraction and explanation of patterns in high-dimensional data sets from random decision trees, but model interpretation through the exploration of alternative decisions remains uncovered by this work (when compared to VisRuler). .
**A**: Therefore, VisRuler allows users to mine rules for both a particular class outcome and in connection to a specific case. **B**: ExMatrix Neto2021Explainable is another VA tool for RF interpretation that operates using a matrix-like visual representation, facilitating the analysis of a model and connecting rules to classification results. **C**: 2.1 Interpretation of Bagged Decision Trees As in VisRuler, relevant works that utilize bagging methods use the RF algorithm to produce decision trees. Zhao2019iForest ; Neto2021Explainable ; Eirich2022RfX ; Nsch2019Colorful ; Neto2021Multivariate iForest Zhao2019iForest provides users with tree-related information and an overview of the involved decision paths for case-based reasoning, with the goal of revealing the model’s working internals.
ACB
CAB
CAB
CAB
Selection 2
Downloads last month: 37