Dataset Viewer (auto-converted to Parquet). Columns: context (string, 100–12k chars), A (string, 100–5.1k chars), B (string, 100–6.02k chars), C (string, 100–4.6k chars), D (string, 100–4.68k chars), label (string, 4 classes).
(14), empirical PSIS result (blue dots) for the average sample size to obtain fixed L1 deviation, minimum sample size (yellow line) required
(10), empirical IS result (green dots) for the average sample size to obtain fixed L1 deviation (from 10 000 repeated simulations). The required sample size grows more quickly for IS than PSIS, and for PSIS quickly grows infeasibly large when $k>0.7$.
(14), empirical PSIS result (blue dots) for the average sample size to obtain fixed L1 deviation, minimum sample size (yellow line) required
(11), and empirical PSIS result (blue dots) for the average sample size to obtain fixed RMSE (from 10 000 repeated simulations). The required sample size quickly grows infeasibly large when $k>0.7$.
Figure 4: Convergence rate as a function of $k$ and $S$. Red dashed line shows the theoretical convergence rate based on the CLT and generalized CLT. Blue dots show the empirical convergence rate from the simulation with Pareto distributed ratios (from 10 000 repeated simulations). The empirical convergence rate is estimated by how much the error decreases when the sample size is doubled ($5\,000 \rightarrow 10\,000$). For a finite sample size $S$ the transition from the CLT convergence rate to the GCLT convergence rate is smooth, and there is no sudden change at $k=0.5$.
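The doubling heuristic in the caption can be reproduced with a short simulation. The sketch below is an illustrative reconstruction, not the paper's code: the Pareto parameterization, sample sizes, and repetition count are assumptions. It estimates the empirical convergence rate of plain Monte Carlo averaging of Pareto-tailed ratios by comparing the RMSE at $S$ and $2S$ samples.

```python
import numpy as np

def empirical_rate(k, S, reps=2000, seed=0):
    """Estimate the empirical convergence rate alpha (error ~ S^-alpha)
    for averaging Pareto(k)-tailed ratios, measured as in the caption by
    how much the RMSE shrinks when the sample size is doubled."""
    rng = np.random.default_rng(seed)
    true_mean = 1.0 / (1.0 - k)  # mean of Pareto(x_m=1, shape 1/k), k < 1

    def rmse(n):
        # rng.pareto draws Lomax(1/k); adding 1 gives classical Pareto
        # with minimum 1 and tail index k
        r = rng.pareto(1.0 / k, size=(reps, n)) + 1.0
        est = r.mean(axis=1)
        return np.sqrt(np.mean((est - true_mean) ** 2))

    # alpha = log2( error(S) / error(2S) )
    return np.log2(rmse(S) / rmse(2 * S))
```

For light-tailed cases ($k<0.5$) the rate is close to the CLT value $1/2$, and it degrades smoothly as $k$ grows, matching the caption's observation.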
A
$\beta\sim N(0,\lambda^{-1}(M_{w}^{\top}M_{w})^{-})$.
$Y_{v}\mid\beta\overset{\text{ind}}{\sim}\mathrm{Po}[\exp(\beta(v))]$.
$Y_{v}\overset{\text{ind}}{\sim}\mathrm{Po}[\exp(x_{v}^{T}\beta)]$.
The maximum a posteriori (MAP) estimate $\widehat{\beta}$ for $\beta$ is
$r_{v}=(Y_{v}-\widehat{\mu}_{v})/\sqrt{V(\widehat{\mu}_{v})}$ is the $v$-th Pearson residual
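Since the Pearson residual is fully specified by the formula above, a minimal helper is easy to state (a sketch, not the authors' code; for the Poisson model in this excerpt the variance function is $V(\mu)=\mu$):

```python
import numpy as np

def pearson_residuals(y, mu_hat):
    """Pearson residuals r_v = (Y_v - mu_hat_v) / sqrt(V(mu_hat_v)).
    For a Poisson response the variance function is V(mu) = mu."""
    y = np.asarray(y, dtype=float)
    mu_hat = np.asarray(mu_hat, dtype=float)
    return (y - mu_hat) / np.sqrt(mu_hat)
```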
C
Aside from the challenges above, many real-world biological and medical data sets are collected with multiple response variables. These responses are often closely related, are more likely to share relevant covariates with each other than with others, and thus form trees or other kinds of structures [15, 16, 17, 18]. For instance, in genetic association analysis, which aims to select the single-nucleotide polymorphisms (explanatory variables) that could affect a phenotype (response variable), genes in the same pathway tend to share a common set of relevant explanatory variables.
Thus, to improve the performance of variable selection, we consider incorporating the complex correlation structure of the responses. In this paper, we extend recent sparse linear mixed models [8, 9], which correct for confounding factors and perform variable selection simultaneously, to further account for the relatedness between different responses. We propose the tree-guided sparse linear mixed model, TgSLMM, which corrects for confounders and incorporates the relatedness among response variables simultaneously. With TgSLMM, we improve the performance of variable selection under standard statistical criteria by incorporating the tree-based correlation structure of the traits. Finally, we examine our model through extensive repeated experiments and show that our method is superior to existing approaches and able to discover real genome associations in a real data set.
Based on the sparsity of $\beta$, it is reasonable to assume that $\beta$ follows a Laplace shrinkage prior. This assumption leads to the sparse linear mixed model. However, the sparse LMM fails to consider the relatedness among response variables. This defect motivates the tree-guided sparse linear mixed model.
To address these problems, we propose the tree-guided sparse linear mixed model for sparse variable selection. Apart from extending recent LMM-based methods that correct for confounding factors, we perform variable selection while simultaneously accounting for the relatedness between different responses. Through extensive experiments, we compare our method with state-of-the-art methods and analyze in depth how confounding factors in high-dimensional heterogeneous data sets influence a model's ability to identify active variables. We show that traditional methods easily fall into the trap of utilizing false information, whereas our proposed model outperforms existing methods on both synthetic and real genome data sets. Our source code is available at https://github.com/lebronlambert/TgSLMM.
The linear mixed model (LMM) is an extension of the standard linear regression model that explicitly describes the relationship between response variables and explanatory variables, incorporating an extra random term to account for confounding factors. To introduce the sparse linear mixed model, we briefly revisit the classical linear mixed model as Equation 1:
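Equation 1 itself is not reproduced in this excerpt. For context, the classical LMM in genetic association settings is typically written in the following standard form (our notation, not necessarily the authors' exact formulation):

\[
\mathbf{y} = X\beta + Z\mathbf{u} + \boldsymbol{\epsilon},\qquad
\mathbf{u}\sim N(0,\sigma_{g}^{2}K),\qquad
\boldsymbol{\epsilon}\sim N(0,\sigma_{e}^{2}I),
\]

where $X$ holds the explanatory variables, the random effect $\mathbf{u}$ with covariance built from a relatedness matrix $K$ absorbs confounding such as population structure, and $\boldsymbol{\epsilon}$ is independent noise.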
A
(d) Cumulative regret for SMC-based Bayesian policies in scenario F: known and unknown dynamic parameters.
Mean regret (standard deviation shown as shaded region) in contextual, linear Gaussian bandit Scenarios A and B
Mean regret (standard deviation shown as shaded region) in contextual, linear Gaussian bandit Scenarios A and B
Mean regret (standard deviation shown as shaded region) in contextual, non-stationary categorical bandit Scenarios E and F
Mean regret (standard deviation shown as shaded region) in contextual linear logistic dynamic bandit Scenarios C and D
C
The median number of blood glucose measurements per day varies between 2 and 7. Similarly, insulin is used on average between 3 and 6 times per day.
The data collection study was conducted from the end of February to the beginning of April 2017 by Emperra and includes 10 patients who were given specially prepared smartphones. Measurements of carbohydrate consumption, blood glucose levels, and insulin intake were made with Emperra's Esysta system. Measurements of physical activity were obtained using the Google Fit app.
In terms of physical activity, we count the 10-minute intervals with at least 10 steps tracked by the Google Fit app.
Patient 10, on the other hand, has a surprisingly low median of 0 active 10-minute intervals per day, indicating missing values due to, for instance, not carrying the smartphone at all times.
Table 2: Descriptive statistics for the number of patient data entries per day. Active intervals are 10-minute intervals with at least 10 steps taken.
B
We also assess bias and absolute bias of the outcomes of interest (for Simulations 1 and 2, $\hat{\lambda}_{2}$; for Simulation 3, $\hat{\beta}_{1}$ and $\hat{\beta}_{2}$) as well as the standard error of the estimate, $SE(\hat{\theta})$. Absolute and relative bias were comparable between the Invalid MIIVs estimator and the MIIV-2SBMA estimator, while the bias of the Correct MIIVs estimator is, as expected, less than that of the other two estimators. We found that all three estimators had similar standard errors, with MIIV-2SBMA having very slightly increased standard errors at $N=100$. The bias and standard error results are in the Supplementary Materials.
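The three reported quantities are simple functionals of the simulation draws. A minimal helper (illustrative only, not the authors' code) could look like:

```python
import numpy as np

def sim_summary(estimates, true_value):
    """Bias, absolute bias, and empirical standard error of an estimator
    across repeated simulation draws."""
    est = np.asarray(estimates, dtype=float)
    return {
        "bias": np.mean(est - true_value),          # mean signed error
        "abs_bias": np.mean(np.abs(est - true_value)),
        "se": np.std(est, ddof=1),                   # sample SD of estimates
    }
```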
Table 7 shows the power of the traditional Sargan’s Test and the BMA Sargan’s test to detect model misspecification.
Table 4: Simulation 2 Results: Sargan’s Test Power. For Invalid and Correct MIIVs, Sargan’s Test Power is for the traditional test. For MIIV-2SBMA, power is for the BMA Sargan’s Test.
Table 7: Simulation 3 Results: Sargan’s Test Power. For Invalid and Correct MIIVs, Sargan’s Test Power is for the traditional test. For MIIV-2SBMA, power is for the BMA Sargan’s Test.
Table 1: Simulation 1 Results: Sargan’s Test Power. For Invalid and Correct MIIVs, Sargan’s Test Power is for the traditional test. For MIIV-2SBMA, power is for the BMA Sargan’s Test.
D
Our results are poor with 20K interactions. For 50K they are already almost as good as with 100K interactions. From there the results improve until 500K samples, which is also the point at which they are on par with model-free PPO. Detailed per-game results can be found in Appendix F.
Since the publication of the first preprint of this work, it has been shown in van Hasselt et al. (2019); Kielak (2020) that Rainbow can be tuned to have better results in low data regime. The results are on a par with SimPLe – both of the model-free methods are better in 13 games, while SimPLe is better in the other 13 out of the total 26 games tested (note that in Section 4.2 van Hasselt et al. (2019) compares with the results of our first preprint, later improved).
Such behavior, with fast growth at the beginning of training but lower asymptotic performance, is commonly observed when comparing model-based and model-free methods (Wang et al. (2019)). As observed in Section 6.4, assigning a bigger computational budget helps in the 100K setting. We suspect that the gains would be even bigger for settings with more samples.
This demonstrates that SimPLe excels in a low data regime, but its advantage disappears with a bigger amount of data.
In our empirical evaluation, we find that SimPLe is significantly more sample-efficient than a highly tuned version of the state-of-the-art Rainbow algorithm (Hessel et al., 2018) on almost all games. In particular, in the low-data regime of 100k samples, on more than half of the games our method achieves a score that Rainbow requires at least twice as many samples to match. In the best case, Freeway, our method is more than 10x more sample-efficient; see Figure 3.
C
\[
\begin{split}
\log CP_{2}(l,t) &= \log(n_{2}(l,t)) + \log(1/d) + \log(OLI_{2}) + \log(m_{2}(l,t)) \\
&\quad + \log(v_{2}(l,t)) + \log(2_{1}) + \log(t_{2}) + \log(mo_{2}) + \log(po_{2}(l,t)) \\
&\quad + \log(W_{def}(l,t))
\end{split}
\]
$\dot{x}_{1}=(1.46){x_{1}}^{.129}{y_{1}}^{.404}+(.906){x_{1}}^{.138}{y_{2}}^{.136}$
$\dot{y}_{1}=(.704){y_{1}}^{.129}{x_{1}}^{.404}+(.953){y_{1}}^{.138}{x_{2}}^{.136}$
$CP(x^{1},y^{1},t),CP(x^{1},y^{1},t),\dots,CP(x^{n},y^{n},t^{n})\in F_{\phi}$
$KS=\max_{1\leq t^{*}\leq 30}\left[F(\hat{\dot{e}_{t^{*}}})-\frac{t^{*}-1}{30},\;\frac{t^{*}}{30}-F(\hat{\dot{e}_{t^{*}}})\right]$
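The KS statistic above is a direct maximization over the 30 grid points. A sketch (variable names are illustrative, following the formula rather than any released code):

```python
import numpy as np

def ks_statistic(f_vals):
    """Discrete Kolmogorov-Smirnov statistic over n ordered points:
    KS = max_t [ F(e_t) - (t-1)/n , t/n - F(e_t) ],
    where f_vals[t-1] = F(e_t) are the fitted CDF values at the ordered
    residuals (n = 30 in the formula above)."""
    f = np.asarray(f_vals, dtype=float)
    n = len(f)
    t = np.arange(1, n + 1)
    return max(np.max(f - (t - 1) / n), np.max(t / n - f))
```

When the fitted CDF values sit exactly on the uniform grid $t/n$, the statistic attains its minimum value $1/n$.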
C
In each iteration, SGD calculates a (mini-batch) stochastic gradient and uses it to update the model parameters. Inspired by momentum and Nesterov’s accelerated gradient descent, momentum SGD (MSGD) (Polyak, 1964; Tseng, 1998; Lan, 2012; Kingma and Ba, 2015) has been proposed and widely used in machine learning. In practice, MSGD often outperforms SGD (Krizhevsky et al., 2012; Sutskever et al., 2013). Many machine learning platforms, such as TensorFlow, PyTorch and MXNet, include MSGD as one of their optimization methods.
In each iteration, SGD calculates a (mini-batch) stochastic gradient and uses it to update the model parameters. Inspired by momentum and Nesterov’s accelerated gradient descent, momentum SGD (MSGD) (Polyak, 1964; Tseng, 1998; Lan, 2012; Kingma and Ba, 2015) has been proposed and widely used in machine learning. In practice, MSGD often outperforms SGD (Krizhevsky et al., 2012; Sutskever et al., 2013). Many machine learning platforms, such as TensorFlow, PyTorch and MXNet, include MSGD as one of their optimization methods.
With the rapid growth of data, distributed SGD (DSGD) and its variant distributed MSGD (DMSGD) have garnered much attention. They distribute the stochastic gradient computation across multiple workers to expedite the model training.
Assume we have $K$ workers. The training data are distributed or partitioned across the $K$ workers. Let $\mathcal{D}_{k}$ denote the training data stored on worker $k$, and $F_{k}(\mathbf{w})=\frac{1}{|\mathcal{D}_{k}|}\sum_{\xi\in\mathcal{D}_{k}}f(\mathbf{w};\xi)$ the corresponding local objective function.
Furthermore, when we distribute the training across multiple workers, the local objective functions may differ from each other due to the heterogeneous training data distribution. In Section 5, we will demonstrate that the global momentum method outperforms its local momentum counterparts in distributed deep model training.
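As a building block for the momentum methods discussed here, a single-worker momentum-SGD step in its common heavy-ball form can be sketched as follows (a generic formulation, not the paper's specific distributed variant):

```python
def msgd_step(w, v, grad, lr=0.1, beta=0.9):
    """One momentum-SGD (MSGD) update: the velocity v accumulates past
    stochastic gradients, and the parameters move along the velocity."""
    v = beta * v + grad(w)  # heavy-ball momentum accumulation
    w = w - lr * v          # parameter update
    return w, v
```

In the distributed setting, local momentum keeps a velocity per worker, while global momentum maintains a single shared velocity updated from the aggregated gradient.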
B
For this case $d^{(i)}$ does not apply.
Although ReLU creates exact zeros (unlike its predecessors sigmoid and $\tanh$), its activation map consists of sparsely separated but still dense areas (Fig. 1, ReLU subfigure) instead of sparse spikes.
Sparsely Activated Networks (SANs) (Fig. 2) enforce spike-like sparsity in the activation map (Fig. 1, extrema-pool indices and extrema subfigures) through the use of a sparse activation function.
The ReLU activation function produces sparsely disconnected but internally dense areas, as shown in Fig. 1 (ReLU subfigure), instead of sparse spikes.
Recently, in $k$-Sparse Autoencoders [21] the authors used an activation function that applies thresholding until the $k$ most active activations remain; however, this non-linearity covers a limited area of the activation map by creating sparsely disconnected dense areas (Fig. 1, top-$k$ absolutes subfigure), similar to the ReLU case.
C
$\mathbb{E}(\phi_{n})\leq\alpha+O(1/m)+O(m\gamma_{0n})+O(m^{2}/|\mathcal{G}_{n}|)+A(\gamma,m)\cdot O(n^{-1/3}).$
Indeed, Theorem 2 of this paper shows that the rate of convergence of Condition (C1) determines a finite-sample bound between the Type I error rates of $\phi_{n}$ and $\phi_{n}^{*}$. This bound also depends on the smoothness constant of the cdf of the ‘spacings variable’ in the denominator of (C1). Intuitively, the approximate randomization test performs as well as the true test unless the multiplicity of values in the randomization distribution is too erratic. For example, if the test statistic is asymptotically normal, the bound is of order $O(n^{-1/3})$, suggesting a robustness-efficiency trade-off that we discuss throughout the paper. These results are especially valuable under the invariant hypothesis (1), as prior randomization literature has not studied the performance of approximate randomization tests under the invariant regime.
The condition only stipulates that the variation in the error from the approximate randomization test using the proxy variables (numerator) is dominated by the variation in the spacings of the statistic values in the original randomization test using the true variables (denominator).
The key implication of this result is that the approximate randomization test ‘inherits’ the asymptotic properties of the original randomization test as long as
Next, we use Theorem 2 to establish a finite-sample bound for the Type II error of the approximate randomization test. This shows that the approximate test is consistent as long as the “signal” dominates the natural variation in the true randomization test.
D
Given the challenge to identification, empirical evidence on the effect of turning away volunteers on future behavior is scant. One context that has received some attention for identifying the effect of temporary rejections on future volunteering is blood donation (e.g., Custer et al., (2007); Bruhin et al., (2020)). Understanding how a temporary rejection (henceforth a deferral) affects future donation in the context of blood donation is crucial. The costs of collecting a unit of blood are non-trivial: the process typically requires more than an hour of a donor's time (including about 12 uncomfortable minutes of 'needle-in' time), plus the marginal costs of staff time, equipment, needles, bags, and storage. If the collection is not used, there are also additional disposal costs.
Not all attempted donations are successful. (In our data, for unsuccessful donations, we do not know what type of donation was attempted. We can safely assume that it was a whole blood donation for women. For men, we do not know whether the attempted donation was for whole blood, plasma, or red-cell apheresis.) An attempted donation can be unsuccessful for several reasons. First, donors can start the process of donating (i.e., registering and health checks: a medical screening that involves a questionnaire as well as checks such as measuring blood pressure and hemoglobin level; the process is detailed at www.seha.ae/bloodBank#donate-carousel) and change their mind in the middle of it. In that case, they can come back on the same day. Second, it could be a failed phlebotomy, an unsuccessful attempt at drawing blood, often due to difficulties locating or accessing a vein, or the inability to collect sufficient blood for the donation. In case of a failed phlebotomy, the donor can return the next day.
Despite these costs, in cases where there is already excess supply, the prevailing view across blood banks is that the risk of deferrals reducing future donations is too high, and thus donors will not be deferred unless there is a medical concern for the donor or the donor is unable to provide a safe blood donation. Therefore, blood banks usually accept donations even when they know they do not need them. Such a policy led to wastage in periods of excess supply, since a whole blood donation can only be stored for a limited time (six weeks or less). For example, after the 9/11 attacks, the American Red Cross allowed donors across the USA to donate in large numbers even though only 260 additional units of whole blood were needed to treat victims. Blood banks accepted those donations, fearing that deferring donors would adversely affect future donations. This led to 200,000 units of blood being wasted (Korcok, 2002).
Men with a reported h-level between 13 and 13.5, allowing a plasma donation but not a whole blood donation, are very different from other men. They are more experienced, with a lower propensity to be first-time donors and a higher number of previous donations. They are heavier and taller. They are also less likely to have the O-negative blood type, the universal whole blood donor type, and more likely to be of type AB, the universal plasma donor type. From Figures 1, 10 and 11 it is evident that nurses strategically manipulate the reported h-level. They mostly report an h-level between 13 and 13.5 for experienced donors of blood type AB, who are well suited to give plasma. Most other donors are reported with an h-level above 13.5 to allow a whole blood donation.
The Abu Dhabi Blood Bank collects different types of blood donations: whole blood, plasma, and red-cell apheresis. (They also collect (i) samples for medical tests, which are not meant to be used for donations, and (ii) autologous donations, where a person donates blood for their own future use, typically before a scheduled surgery or medical procedure. We do not use these observations in our analyses since they are not voluntary donations in the sense of being directly intended to help a patient; N=4,327 and N=5, respectively.) Whole blood donation, the most common type, involves giving all blood components: red cells, white cells, platelets, and plasma. In contrast, other donation types selectively extract one component, such as plasma, platelets, or red cells. Blood is drawn and processed through a machine that separates the desired component(s), returning the rest to the donor.
B
In this paper, we introduce and conduct an empirical analysis of an alternative approach to mitigate variance and overestimation phenomena using Dropout techniques. Our main contribution is an extension to the DQN algorithm that incorporates Dropout methods to stabilize training and enhance performance. The effectiveness of our solution is demonstrated through computer simulations in a classic control environment.
Standard Dropout is the original Dropout method, introduced in 2012; it provides a simple technique for avoiding over-fitting in fully connected neural networks [12]. During each training phase, each neuron is excluded from the network with a probability p. Once trained, in the testing phase the full network is used, but each neuron's output is multiplied by the probability p with which the neuron was excluded. This gives approximately the same result as averaging the outcomes of a great number of different networks, which would be very expensive to evaluate; in this way Dropout achieves a cheap form of model averaging. The probability can vary for each layer; the original paper recommends p=0.2 for the input layer and p=0.5 for hidden layers. Neurons in the output layer are not dropped. This method proved effective for regularizing neural networks, enabling them to be trained for longer periods without over-fitting and resulting in improved performance, and since then many Dropout techniques have been proposed for different types of neural network architectures (Figure 1).
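The train/test behavior of standard (non-inverted) Dropout can be sketched in a few lines. This is a generic formulation, not the paper's implementation; here test-time activations are scaled by the keep probability, the usual convention.

```python
import numpy as np

def dropout_forward(x, p_drop, train, rng):
    """Standard Dropout sketch: at training time each unit is zeroed with
    probability p_drop; at test time all units are kept and activations
    are scaled by the keep probability (1 - p_drop)."""
    if train:
        mask = rng.random(x.shape) >= p_drop  # keep with prob 1 - p_drop
        return x * mask
    return x * (1.0 - p_drop)
```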
Reinforcement Learning (RL) is a learning paradigm that solves the problem of learning through interaction with environments; this is a fundamentally different approach from the other learning paradigms studied in the field of Machine Learning, namely supervised and unsupervised learning. Reinforcement Learning is concerned with finding a sequence of actions an agent can follow to solve a task in the environment [1][2][3]. Most Reinforcement Learning techniques estimate the consequences of actions in order to find an optimal policy, in the form of a sequence of actions the agent can follow to solve the task. Choosing the optimal policy is based on selecting actions that maximize the future payoff of an action. Finding an optimal policy is the main concern of Reinforcement Learning, and for that reason many algorithms have been introduced over time, e.g., Q-learning [4], SARSA [5], and policy gradient methods [6]. These methods use linear function approximation techniques to estimate action values, where convergence is guaranteed [7]. However, as challenges in modeling complex patterns increase, the need for expressive and flexible non-linear function approximators becomes clear. Recent advances in deep neural networks enabled the development of an artificial agent named the deep Q-network (DQN) [8] that can learn successful policies directly from high-dimensional features. Despite the remarkable flexibility and huge representative capability of DQN, some issues emerge from the combination of Q-learning and neural networks. One of these issues, known as the "overestimation phenomenon," was first explored by [9]. They noted that the expansion of the action space in the Q-learning algorithm, along with generalization errors in neural networks, often results in an overestimation and increased variance of state-action values.
They suggested that to counter these issues, further modifications and enhancements to the standard algorithm would be necessary to boost training stability and diminish overestimation. In response, [10] introduced Double-DQN, an improvement that incorporates the double Q-learning estimator [11], aiming to address the challenges of variance and overestimation. Additionally, [31] developed the Averaged-DQN algorithm, a significant improvement over the standard DQN. By averaging previously learned Q-values, Averaged-DQN effectively lowers the variance in target value estimates, thus enhancing training stability and overall performance.
Deep neural networks are the state-of-the-art learning models used in artificial intelligence. The large number of parameters in neural networks makes them very good at modelling and approximating arbitrary functions. However, the large number of parameters also makes them particularly prone to over-fitting, requiring regularization methods to combat this problem. Dropout was first introduced in 2012 as a regularization technique to avoid over-fitting [12], and was applied in the winning submission for the Large Scale Visual Recognition Challenge that revolutionized deep learning research [13]. Over time, a wide range of Dropout techniques inspired by the original method have been proposed; the term Dropout methods is used to refer to them in general [14]. They include variational Dropout [15], Max-pooling Dropout [16], fast Dropout [17], Cutout [18], Monte Carlo Dropout [19], Concrete Dropout [20] and many others.
The results in Figure 3 show that using DQN with different Dropout methods results in better-performing policies and less variability, as indicated by the reduced standard deviation across the variants. In Table 1, the Wilcoxon signed-rank test was used to analyze the effect on variance before applying Dropout (DQN) and after applying Dropout (Dropout-methods DQN). There was a statistically significant decrease in variance (14.72% between Gaussian Dropout and DQN, 48.89% between Variational Dropout and DQN). Furthermore, one of the Dropout methods outperformed DQN in score.
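A paired signed-rank comparison of variances like the one described can be reproduced in outline with SciPy. The numbers below are made-up illustrative pairs, not the paper's measurements.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-seed variance measurements (illustrative only):
# variance of returns for plain DQN vs. a Dropout-DQN variant.
dqn_var     = np.array([12.1, 9.8, 11.4, 10.9, 13.2, 12.7, 9.5, 11.1])
dropout_var = np.array([ 8.3, 7.9,  9.1,  8.8, 10.2,  9.6, 7.5,  8.9])

# One-sided paired test: is variance lower after applying Dropout?
stat, p_value = wilcoxon(dqn_var, dropout_var, alternative="greater")
```

With all eight paired differences positive, the exact one-sided p-value is small, matching the kind of significant decrease reported in the text.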
C
As baselines, we consider the network used to generate the word embeddings (Dense) and two more advanced architectures.
Interestingly, the GNNs configured with GRACLUS and NDP always achieve better results than the Dense network, even though the latter generates the word embeddings used to build the graph on which the GNN operates. This can be explained by the fact that the Dense network immediately overfits the dataset, whereas the graph structure provides a strong regularization, as the GNN combines only words that are neighboring on the vocabulary graph.
Then, we train a simple classifier consisting of a word embedding layer [53] of size 200, followed by a dense layer with a ReLU activation, a dropout layer [54] with probability 0.5, and a dense layer with sigmoid activation.
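The described classifier can be sketched as a plain forward pass. This is a minimal numpy rendition: the hidden width, vocabulary size, and the handling of a token sequence by averaging its embeddings are assumptions; only the embedding size of 200 and the layer sequence come from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, EMB, HID = 1000, 200, 64  # EMB=200 from the text; others assumed

# Parameters: embedding -> dense ReLU -> dropout(0.5) -> dense sigmoid
E  = rng.normal(0, 0.1, (VOCAB, EMB))
W1 = rng.normal(0, 0.1, (EMB, HID)); b1 = np.zeros(HID)
W2 = rng.normal(0, 0.1, (HID, 1));   b2 = np.zeros(1)

def forward(token_ids, train=False):
    h = E[token_ids].mean(axis=0)         # average the word embeddings
    h = np.maximum(0.0, h @ W1 + b1)      # dense layer with ReLU
    if train:                             # dropout only at training time
        h = h * (rng.random(HID) >= 0.5)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output
```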
The first (LSTM), is a network where the dense hidden layer is replaced by an LSTM layer [55], which allows capturing the temporal dependencies in the sequence of words in the review.
The LSTM baseline generally achieves a better accuracy than Dense, since it captures the sequential ordering of the words in the reviews, which also helps to prevent overfitting on training data.
C
Sethi, Welbl (ind-full), and Welbl (joint-full) generate networks with around 980 000 parameters on average.
In the first hidden layer, the number of neurons equals the number of split nodes in the decision tree. Each of these neurons implements the decision function of the split nodes and determines the routing to the left or right child node.
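The split-to-neuron mapping can be made concrete: each split node, given by a feature index and threshold, becomes one first-layer neuron whose steep sigmoid approximates the hard left/right routing. This is an illustrative sketch; the exact weight scheme of the cited transformations may differ.

```python
import numpy as np

def split_neuron(n_features, feat, thresh, strength=10.0):
    """One first-hidden-layer neuron encoding the split x[feat] <= thresh.
    'strength' controls how sharply the sigmoid approximates the hard rule."""
    w = np.zeros(n_features)
    w[feat] = -strength       # negative weight: activate when x[feat] <= thresh
    b = strength * thresh
    return w, b

def route(x, w, b):
    """Route a sample left or right based on the neuron's activation."""
    a = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))
    return "left" if a > 0.5 else "right"
```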
Compared to state-of-the-art methods, the presented implicit transformation significantly reduces the number of parameters of the networks while achieving the same or even slightly improved accuracy due to better generalization.
Welbl (2014) and Biau et al. (2019) follow a similar strategy. The authors propose a method that maps random forests into neural networks as a smart initialization and then fine-tunes the networks by backpropagation. Two training modes are introduced: independent and joint. Independent training fits all networks one after the other and creates an ensemble of networks as a final classifier. Joint training concatenates all tree networks into one single network so that the output layer is connected to all leaf neurons in the second hidden layer from all decision trees and all parameters are optimized together. Additionally, the authors evaluate sparse and full connectivity.
Of the four variants proposed by Welbl, joint training has a slightly smaller number of parameters compared to independent training because of shared neurons in the output layer.
D
And if $X_t$ satisfies an SDE, then by the Itô-Tanaka formula $\max\{X_t,0\}$ and $\min\{X_t,0\}$ satisfy an SDE involving the local time of the process $X_t$. (See e.g. (4.5) for the Itô-Tanaka formula.)
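For reference, the Tanaka formula invoked here reads, for a continuous semimartingale $X$ (standard form; the paper's (4.5) may differ in notation):

\[
\max\{X_t,0\}=\max\{X_0,0\}+\int_0^t \mathbf{1}_{\{X_s>0\}}\,dX_s+\tfrac{1}{2}L_t^{0}(X),
\]

where $L_t^{0}(X)$ denotes the local time of $X$ at zero; the statement for $\min\{X_t,0\}$ follows by applying the formula to $-X$.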
Indeed, study of (quasi) maximum likelihood estimators (MLE) of drift coefficients from high frequency observations
Nevertheless, in Theorem 2, we prove the analogue of Theorem 1 for the well-known estimator of the normalized number of crossings when the process is a more general threshold diffusion.
Theorem 1 has been applied, in [42], to exhibit the asymptotic behavior in high frequency of (quasi) MLE of the drift parameters of a threshold diffusion which is a continuous-time SETAR model: a threshold Ornstein-Uhlenbeck process which follows two different Ornstein-Uhlenbeck dynamics above and below a fixed threshold. Similar applications are possible for other econometric models.
Some models in financial mathematics and econometrics are threshold diffusions, for instance continuous-time versions of SETAR (self-exciting threshold auto-regressive) models, see e.g. [15, 41]. SBM and OBM and their local time have been recently investigated in the context of option pricing, as for instance in [20] and [16].
C
where $\Lambda^{k}_{h}=\sum_{\tau=1}^{k-1}\phi^{\tau}_{h}(x^{\tau}_{h},a^{\tau}_{h})\,\phi^{\tau}_{h}(x^{\tau}_{h},a^{\tau}_{h})^{\top}+\lambda\cdot I$.
Here $\beta>0$ scales with $d$, $H$, and $K$, which is specified in Theorem 3.1.
with high probability, which is subsequently characterized in Lemma 4.3. Here the inequality holds uniformly over all $(h,k)\in[H]\times[K]$ and $(x,a)\in\mathcal{S}\times\mathcal{A}$. Since $r^{k}_{h}\in[0,1]$ for any $h\in[H]$ implies $Q^{\pi^{k},k}_{h}\in[0,H-h+1]$, we truncate $Q^{k}_{h}$ to the range $[0,H-h+1]$ in (3.1), which is correspondingly used in Line 17 of Algorithm 1.
We establish an upper bound on the regret of OPPO (Algorithm 1) in the following theorem. Recall that the regret is defined in (2.1) and $T=HK$ is the total number of steps taken by the agent, where $H$ is the length of each episode and $K$ is the total number of episodes. Also, $|\mathcal{A}|$ is the cardinality of $\mathcal{A}$ and $d$ is the dimension of the feature map $\psi$.
in the order $h=H,H-1,\ldots,1$. Here $\lambda>0$ is the regularization parameter, which is specified in Theorem 3.1. Also, $\Gamma^{k}_{h}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}^{+}$ is a bonus function, which quantifies the uncertainty in estimating the Q-function $Q^{\pi^{k},k}_{h}$ based on only finite historical data. In particular, the weight vector $w^{k}_{h}$ obtained in (3.1) and the bonus function $\Gamma^{k}_{h}$ take the following closed forms,
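A minimal sketch of the standard elliptical-potential bonus of this kind, $\Gamma(x,a)=\beta\sqrt{\phi(x,a)^{\top}\Lambda^{-1}\phi(x,a)}$ with $\Lambda=\sum_{\tau}\phi^{\tau}(\phi^{\tau})^{\top}+\lambda I$. The function name and the toy features are ours, not the paper's code:

```python
import numpy as np

def ucb_bonus(past_phis, phi, beta=1.0, lam=1.0):
    """Elliptical bonus beta * sqrt(phi^T Lambda^{-1} phi), where
    Lambda = sum of outer products of past feature vectors + lam * I."""
    d = phi.shape[0]
    Lam = lam * np.eye(d)
    for p in past_phis:
        Lam += np.outer(p, p)
    # solve instead of explicit inverse for numerical stability
    return beta * float(np.sqrt(phi @ np.linalg.solve(Lam, phi)))

e1 = np.eye(3)[0]
b_new = ucb_bonus([], e1, beta=2.0)          # unexplored direction: bonus = beta / sqrt(lam)
b_seen = ucb_bonus([e1] * 99, e1, beta=2.0)  # frequently observed direction: bonus shrinks
```

The bonus is large for state-action pairs whose features lie in directions rarely seen in the historical data, which is exactly the uncertainty quantification the text describes.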
A
This results in similar activation statistics throughout the network which facilitates gradient flow during backpropagation.
Since each batch normalization parameter γ𝛾\gammaitalic_γ corresponds to a particular channel in the network, this results in channel pruning with minimal changes to existing training pipelines.
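A minimal sketch of this $\gamma$-based channel selection in the network-slimming style; the helper name and the choice of a single global percentile threshold are our assumptions:

```python
import numpy as np

def select_channels(gammas, prune_ratio=0.5):
    """Given per-layer batch-norm scale vectors, keep channels whose |gamma|
    exceeds the global threshold set by prune_ratio (fraction pruned)."""
    all_g = np.concatenate([np.abs(g) for g in gammas])
    thresh = np.quantile(all_g, prune_ratio)
    # one boolean keep-mask per layer
    return [np.abs(g) > thresh for g in gammas]
```

Channels whose mask entry is `False` can then be removed from the convolution weights and the following layer's input channels, yielding the channel pruning described above.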
Quantization approaches reduce the number of bits used to store the weights and the activations of DNNs.
Quantization in DNNs is concerned with reducing the number of bits used for the representation of the weights and the activations.
The linear transformation of the normalized activations with the parameters β𝛽\betaitalic_β and γ𝛾\gammaitalic_γ is mainly used to recover the DNN's ability to approximate any desired function—a feature that would be lost if only the normalization step were performed.
D
The analysis of density, however, is one example of an inherent characteristic of t-SNE, since it comes directly from its algorithm. A limitation that arises from building a tool that is tuned to tackle problems concerning a particular algorithm is the possibility of the algorithm becoming obsolete or being replaced by a newer, better alternative. We argue, though, that more than a decade after its proposal, it has now become quite clear that t-SNE is not going away anytime soon. Papers are still regularly coming out proving its stability [76, 77, 78], and high-impact applications and publications in many different domains geared towards non-visualization and non-ML experts are based on it [79, 80].
Although our proposed solution is inspired by the work of Wattenberg et al. [14] and touches on most of the points raised by the authors, not all of them are fully covered by t-viSNE. More specifically, t-viSNE addresses points (ii), (iii), (v), and (vi) described previously, partially covers (i), and leaves point (iv) for future work, i.e., we only omit the investigation of how the formation of clusters might erroneously convey messages to the users even when the input is random. Thus, we intend this work to be a comprehensive proposal of possible solutions to the problem of opening t-SNE’s black box, and to provide very important and relevant steps towards that final goal.
Even in the improbable scenario that t-SNE becomes obsolete soon, the fact that most of our proposed views can be re-used or adapted to different DR methods means that our work is still relevant and largely future-proof.
Most of the related works described in Section 2 deal with the problem of assessing and interpreting DR in general, and aim to be applicable to a wide range of different scenarios, providing solutions that overlook the specific shortcomings of each DR method. While this approach has its merits, a gap remains regarding the treatment of method-specific problems that might lead to more directly-applicable results. However, very few single DR methods have enough widespread acceptance to warrant customized treatments (with the exception of PCA and MDS, for example). Nowadays, arguably, the situation has changed: t-SNE is almost a standard DR method for both analysts and researchers. Due to this, it is our understanding that a set of methods specifically designed to meet t-SNE’s shortcomings deserves its place among the current body of work on the interpretation and assessment of DR methods, and its potential is large enough to merit dedicated treatment.
Although our main design goal was to support the investigation of t-SNE projections, most of our views and interaction techniques are not strictly confined to the t-SNE algorithm. For example, the Dimension Correlation view could, in theory, be applied to any projection generated by any other algorithm. Its motivation, however, came from the fact that t-SNE is especially known to generate hard-to-interpret shapes in its output [14], so the necessity of exploring and investigating such shapes became more apparent than with other DR methods. The same goes for other views, such as Neighborhood Preservation or Adaptive PCP: the inspiration and the design constraints came from known shortcomings and characteristics of t-SNE, such as its focus on optimizing neighborhoods of points to the detriment of global distances, but the implementation could be re-used in different scenarios.
B
On text datasets (Text and 20news), most graph-based methods produce a trivial result, grouping all samples into the same cluster so that NMI approaches 0. Only $k$-means, MGAE, and AdaGAE obtain non-trivial assignments.
(3) AdaGAE is a scalable clustering model that works stably on datasets of different scales and types, while the other deep clustering models usually fail when the training set is not large enough. Besides, it is insensitive to the initialization of its parameters and needs no pretraining.
Classical clustering models work poorly on large-scale datasets, whereas DEC and SpectralNet work better on them. Although GAE-based models (GAE, MGAE, and GALA) achieve impressive results on graph-type datasets, they fail on general datasets, probably because the graph is constructed by an algorithm rather than from prior information. If the graph is not updated, the information it contains remains low-level; adaptive learning induces the model to exploit high-level information. In particular, AdaGAE is stable on all datasets.
From the comparison of the 3 extra experiments, we confirm that the adaptive graph update plays a positive role. Besides, the novel architecture with a weighted graph improves the performance on most datasets.
Graph-based clustering methods can capture manifold information, making them applicable to non-Euclidean data that $k$-means cannot handle. Therefore, they are widely used in practice. Owing to the success of deep learning, how to combine neural networks with traditional clustering models has been studied extensively [7, 8, 9]. In particular, CNN-based clustering models have been investigated at length [10, 11, 12]. However, the convolution operation may be unavailable for other kinds of data, e.g., text, social networks, signals, data mining, etc.
B
=:T1+T2+T3.\displaystyle=:T_{1}+T_{2}+T_{3}.= : italic_T start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT + italic_T start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT + italic_T start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT .
𝔼⁢[G~1q]1q𝔼superscriptdelimited-[]superscriptsubscript~𝐺1𝑞1𝑞\displaystyle\mathbb{E}[\tilde{G}_{1}^{q}]^{\frac{1}{q}}blackboard_E [ over~ start_ARG italic_G end_ARG start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_q end_POSTSUPERSCRIPT ] start_POSTSUPERSCRIPT divide start_ARG 1 end_ARG start_ARG italic_q end_ARG end_POSTSUPERSCRIPT
For any q~≤q/2~𝑞𝑞2\tilde{q}\leq q/2over~ start_ARG italic_q end_ARG ≤ italic_q / 2 in Assumption A.2, it holds
For t~=2~𝑡2\tilde{t}=2over~ start_ARG italic_t end_ARG = 2 and s~=O⁢(1)~𝑠𝑂1\tilde{s}=O(1)over~ start_ARG italic_s end_ARG = italic_O ( 1 ), we have to ensure that
\cdot\|_{Q,2})roman_sup start_POSTSUBSCRIPT italic_Q end_POSTSUBSCRIPT roman_log italic_N ( italic_ε ∥ italic_F start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT ∥ start_POSTSUBSCRIPT italic_Q , 2 end_POSTSUBSCRIPT , caligraphic_F start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT , ∥ ⋅ ∥ start_POSTSUBSCRIPT italic_Q , 2 end_POSTSUBSCRIPT )
B
Models’ Space. For the visual exploration of the models shown in Figure 5, we use MDS projections (t-SNE or UMAP are also available).
A summary of the performance of each model according to all selected and user-weighted metrics is color-encoded using the Viridis colormap [26]. The boxplots below the projection show the performance of the models per metric.
There is a large solution space of different learning methods and concrete models that can be combined in a stack. Hence, the identification and selection of particular algorithms and instantiations over the course of exploration is crucial for the user. One way to manage this is to keep track of the history of each model.
As in the data space, each point of the projection is an instance of the data set. However, instead of its original features, each instance is characterized as a high-dimensional vector where each dimension represents the prediction of one model. Thus, since there are currently 174 models in S6⃝, each instance is a 174-dimensional vector, projected into 2D. Groups of points represent instances that were consistently predicted to be in the same class. In the overview figure (f), for example, the points in the two clusters at both extremes of the projection (left and right sides, unselected) are well-classified, since they were consistently assigned to the same class by most models of S6⃝. The instances in between these clusters, however, do not have a well-defined profile, since different models classified them differently. After selecting these instances with the lasso tool, the two histograms below the projection in the overview figure (f) show a comparison of the performance of the available models on the selected points (gray, upside down) vs. all points (black). The x-axis represents the performance according to the user-weighted metrics (in bins of 5%), and the y-axis shows the number of models in each bin. Our goal here is to look for models in the current stack S6⃝ that could improve the performance for the selected points. However, judging by the histograms, it does not look like we can achieve that this time, since all models perform worse on the selected points than on all points.
Each point is one model from the stack, projected from an 8-dimensional space where each dimension of each model is the value of a user-weighted metric. Thus, groups of points represent clusters of models that perform similarly according to all the metrics.
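A minimal sketch of such a model projection, assuming scikit-learn is available; the metric values below are random placeholders standing in for the 8 user-weighted metric scores per model, not StackGenVis data:

```python
import numpy as np
from sklearn.manifold import MDS  # t-SNE or UMAP would be drop-in alternatives

rng = np.random.default_rng(0)
# 174 models x 8 user-weighted metric values in [0, 1] (illustrative random data)
model_metrics = rng.random((174, 8))

# project the 8-dimensional metric vectors into 2D; nearby points are
# models that perform similarly across all metrics
coords = MDS(n_components=2, random_state=0).fit_transform(model_metrics)
```

Coloring each projected point by its weighted average metric (e.g., with the Viridis colormap) then reproduces the kind of overview described above.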
D
However, $\mathcal{T}^{\pi}Q$ may not be representable by a given function class $\mathcal{F}$.
Hence, we turn to minimizing a surrogate of the MSBE over $Q\in\mathcal{F}$, namely the mean-squared projected Bellman error (MSPBE), which is defined as follows,
Contribution. Going beyond the NTK regime, we prove that, when the value function approximator is an overparameterized two-layer neural network, TD and Q-learning globally minimize the mean-squared projected Bellman error (MSPBE) at a sublinear rate. Moreover, in contrast to the NTK regime, the induced feature representation is able to deviate from the initial one and subsequently evolve into the globally optimal one, which corresponds to the global minimizer of the MSPBE. We further extend our analysis to soft Q-learning, which is connected to policy gradient.
Here $\mathcal{T}^{*}$ is the Bellman optimality operator, which is defined as follows,
We learn the Q-function by minimizing the mean-squared Bellman error (MSBE), which is defined as follows,
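For concreteness, the two objectives can be written as follows. These are the standard definitions; the sampling distribution $\mu$ over state-action pairs and the projection $\Pi_{\mathcal{F}}$ onto the function class $\mathcal{F}$ are notation we introduce here, not taken from the excerpt:

```latex
\mathrm{MSBE}(Q)
  = \mathbb{E}_{(s,a)\sim\mu}\Bigl[\bigl(Q(s,a)-(\mathcal{T}^{\pi}Q)(s,a)\bigr)^{2}\Bigr],
\qquad
\mathrm{MSPBE}(Q)
  = \mathbb{E}_{(s,a)\sim\mu}\Bigl[\bigl(Q(s,a)-\Pi_{\mathcal{F}}(\mathcal{T}^{\pi}Q)(s,a)\bigr)^{2}\Bigr].
```

The MSPBE replaces the possibly unrepresentable target $\mathcal{T}^{\pi}Q$ by its projection onto $\mathcal{F}$, so its minimum over $Q\in\mathcal{F}$ is always well-defined.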
A
These interesting findings were made possible by an original extension of the current state of the art in the GSA literature, namely toward defining sensitivity indices for complex data and statistically assessing the uncertainty of GSA indices. We prove the mathematical properties of this method and, by exploiting the similarities between the proposed output decomposition and Functional Linear Models, we propose a novel way to perform testing on (functional) sensitivity indices.
A fundamental tool to understand and explore the complex dynamics that regulate this phenomenon is the use of computer models. In particular, the scientific community has oriented itself towards the use of coupled climate-energy-economy models, also known as Integrated Assessment Models (IAM). These are pieces of software that integrate climate, energy, land, and economic modules to generate predictions about decision variables for a given period (usually, the next century). They belong to two very different paradigms [see e.g. 38]: detailed process models, which have provided major input to climate policy making and to assessment reviews such as those of the IPCC; and benefit-cost models such as the Dynamic Integrated Climate-Economy (DICE) model [20], for which the economics Nobel prize was awarded in 2018. A classic variable of interest in this kind of analysis is the level of future $CO_2$ emissions, since these directly affect climatic variables such as global average temperature.
The testing effort provides even more interesting results, showing differences between the two contrasts analyzed in this paper and, in general, revealing sparsity in the effects: the only significant factors in determining $CO_2$ emissions appear to be GDP per capita and energy intensity improvements, with fossil fuel availability being significant only in the contrast between the middle-of-the-road scenario and $SSP3$. There is no statistical evidence that the interaction terms are significant, with the only notable “near-miss” being the interactions that involve GDP per capita. This is probably due to the pervasiveness and centrality of gross domestic product as the main economic variable inside climate-economy models. However, the importance of these drivers in determining the future climate varies over time; this allows analysts to identify future periods when certain factors will be more or less relevant.
Apart from greatly simplifying estimation, restricting ourselves to discrete variations allows us, from a modelling perspective, to deal with scenarios that we want to explore, which may be represented by a multitude of different modelling choices and settings in a model, such as the different Shared Socio-Economic Pathways in [17]. Moreover, such a setting extends to any situation where a modeller would like to analyse the impact of a discrete variation in the level of a continuous parameter, or where a categorical set of parameters is used.
Our findings send a very strong signal to the climate-energy-economy modelling community: either the Shared Socio-economic Pathways are too refined to be actually significant inside a representative ensemble of models, or IAMs need to converge towards more homogeneous predictions while preserving their individuality and the peculiarities of their modelling approaches.
D
We impose two assumptions, respectively, on the smoothness of the loading functions and on tail behavior of the noise.
Given the identification condition (Assumption 2), we start with estimating the non-parametric component $\mathbf{G}_m(\mathbf{X}_m)$.
The logarithmic factors in Lemma 2 emerge from the sub-exponential tail of the noise distribution, which has not been studied in the existing literature.
The smoothness assumption is standard in the non-parametric literature, while the tail condition is weaker than what is usually assumed in the tensor decomposition literature.
We impose two assumptions, respectively, on the smoothness of the loading functions and on tail behavior of the noise.
C
When $\beta=0$, SNGM degenerates to stochastic normalized gradient descent (SNGD) [9, 39].
LARS also adopts a normalized gradient for large-batch training. Following the analysis in [34], we set $\beta=0$. (Footnote 1: We find that there are two different versions of LARS. The first one [32] …)
In the following content, we will compare SNGM with MSGD and LARS [34], the two most related works in the literature on large-batch training.
We compare SNGM with four baselines: MSGD, ADAM [14], LARS [34] and LAMB [34]. LAMB is a layer-wise adaptive large-batch optimization method based on ADAM, while LARS is based on MSGD.
Figure 3 shows the validation perplexity of the three methods with a small batch size of 20 and a large batch size of 2000. In small-batch training, SNGM and LARS achieve validation perplexity comparable to that of MSGD. Meanwhile, in large-batch training, SNGM achieves better performance than MSGD and LARS.
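One plausible form of a normalized-gradient-with-momentum update, sketched to illustrate the role of $\beta$ rather than to reproduce the paper's exact algorithm (the function name and parameterization are ours); with $\beta=0$ it reduces to plain SNGD:

```python
import numpy as np

def sngm_step(w, grad, m, lr=0.1, beta=0.9, eps=1e-12):
    """One normalized-gradient step with momentum buffer m.
    The gradient is normalized to unit length before being accumulated,
    so the step size is insensitive to the raw gradient magnitude."""
    m = beta * m + grad / (np.linalg.norm(grad) + eps)
    w = w - lr * m
    return w, m
```

Because the gradient is rescaled to unit norm, a large batch producing a large gradient does not blow up the step, which is the property large-batch methods like LARS also exploit.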
B
$LG(\tilde{\nu},\bm{\theta},\tilde{\mathbf{B}}_{D+1})=L(\tilde{\nu},\bm{\theta},\tilde{\mathbf{B}}_{D+1})+G(\bm{\theta})$.
Before proposing the algorithm, we first rearrange $\langle\cdot,\cdot\rangle$ in $LG$ by using the Khatri-Rao product and mode-$d$ matricization.
The Khatri-Rao product is defined as a columnwise Kronecker product for two matrices with the same number of columns (Smilde et al., 2005). More precisely, letting $\mathbf{B}=(\mathbf{b}_{1},\dots,\mathbf{b}_{L})\in\mathbb{R}^{I\times L}$ and $\mathbf{B}^{\prime}=(\mathbf{b}_{1}^{\prime},\dots,\mathbf{b}_{L}^{\prime})\in\mathbb{R}^{J\times L}$
where $\nu\in\mathbb{R}$, $\bm{\gamma}\in\mathbb{R}^{p_{0}}$, and $\mathbf{B}\in\mathbb{R}^{p_{1}\times p_{2}\times\cdots\times p_{D}}$ are unknown parameters, and $\langle\cdot,\cdot\rangle$ denotes the componentwise inner product, i.e., $\langle\mathbf{B},\mathbf{X}\rangle=\sum_{i_{1}=1}^{p_{1}}\sum_{i_{2}=1}^{p_{2}}\cdots\sum_{i_{D}=1}^{p_{D}}B_{i_{1},\dots,i_{D}}X_{i_{1},\dots,i_{D}}$
$\tilde{\Phi}(\mathbf{X})_{(d)}$ is the mode-$d$ matricization of $\tilde{\Phi}(\mathbf{X})$.
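A minimal sketch of these two operations in NumPy (the helper names are ours; the unfolding's column ordering may differ from the Kolda–Bader convention):

```python
import numpy as np

def khatri_rao(B, C):
    """Columnwise Kronecker product of B (I x L) and C (J x L) -> (I*J) x L."""
    assert B.shape[1] == C.shape[1], "same number of columns required"
    return np.column_stack([np.kron(B[:, l], C[:, l]) for l in range(B.shape[1])])

def mode_d_matricization(X, d):
    """Unfold tensor X along mode d (0-indexed): result has X.shape[d] rows."""
    return np.moveaxis(X, d, 0).reshape(X.shape[d], -1)
```

Column $l$ of the Khatri-Rao product is exactly $\mathbf{b}_{l}\otimes\mathbf{b}^{\prime}_{l}$, matching the columnwise-Kronecker definition above.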
A
In this paper, we studied nonstationary RL with time-varying reward and transition functions. We focused on the class of nonstationary linear MDPs, for which linear function approximation suffices to realize any value function. We first incorporated the epoch restart strategy into the LSVI-UCB algorithm (Jin et al., 2020) to propose the LSVI-UCB-Restart algorithm, which has low dynamic regret when the total variations are known. We then designed a parameter-free algorithm, Ada-LSVI-UCB-Restart, that enjoys a slightly worse dynamic regret bound without knowing the total variations. We derived a minimax regret lower bound for nonstationary linear MDPs to demonstrate that our proposed algorithms are near-optimal. Specifically, when the local variations are known, LSVI-UCB-Restart is near order-optimal except for the dependency on the feature dimension $d$, the planning horizon $H$, and some poly-logarithmic factors. Numerical experiments demonstrate the effectiveness of our algorithms.
A number of future directions are of interest. An immediate step is to investigate whether the dependence on the dimension d𝑑ditalic_d and planning horizon H𝐻Hitalic_H in our bounds can be improved, and whether the minimax regret lower bound can also be improved. It would also be interesting to investigate the setting of nonstationary RL under general function approximation (Wang et al., 2020; Du et al., 2021; Jin et al., 2021), which is closer to modern RL algorithms in practice. Recall that our algorithm is more computationally efficient than other works. Another closely related and interesting direction is to study the low-switching cost (Gao et al., 2021) or deployment efficient (Huang et al., 2021) algorithm in the nonstationary RL setting. Finally, our algorithm is based on the Optimism in Face of Uncertainty. There is another broad category of algorithms called Thompson Sampling (TS) (Agrawal & Jia, 2017; Russo, 2019; Agrawal et al., 2021; Xiong et al., 2021; Ishfaq et al., 2021; Dann et al., 2021; Zhang, 2022). It would be an interesting avenue to see whether empirically appealing TS algorithms are also suitable in nonstationary RL settings.
The last relevant line of work is on dynamic regret analysis of nonstationary MDPs, mostly without function approximation (Auer et al., 2010; Ortner et al., 2020; Cheung et al., 2019; Fei et al., 2020; Cheung et al., 2020). The work of Auer et al. (2010) considers the setting in which the MDP is piecewise-stationary and the reward and transition functions are allowed to change $l$ times. They show that UCRL2 with restart achieves $\tilde{O}(l^{1/3}T^{2/3})$ dynamic regret, where $T$ is the time horizon. Later works (Ortner et al., 2020; Cheung et al., 2020; Fei et al., 2020) generalize the nonstationary setting to allow the reward and transition functions to vary at any number of time steps, as long as the total variation is bounded. Specifically, Ortner et al. (2020) prove that UCRL with restart achieves $\tilde{O}((B_{r}+B_{p})^{1/3}T^{2/3})$ dynamic regret (when the variation in each epoch is known), where $B_{r}$ and $B_{p}$ denote the total variation of the reward and transition functions over all time steps. Cheung et al. (2020) propose an algorithm based on UCRL2 that combines sliding windows with a confidence widening technique.
Their algorithm has a slightly worse dynamic regret bound of $\tilde{O}((B_{r}+B_{p})^{1/4}T^{3/4})$ without knowing the local variations. Further, Fei et al. (2020) develop an algorithm that directly optimizes the policy and enjoys near-optimal regret in the low-variation regime. A different model of nonstationary MDPs is proposed by Lykouris et al. (2021), which smoothly interpolates between stationary and adversarial environments by assuming that most episodes are stationary except for a small number of adversarial episodes. Note that Lykouris et al. (2021) consider linear function approximation, but their nonstationarity assumption is different from ours. In this paper, we assume the variation budget for the reward and transition functions is bounded, similar to the settings in Ortner et al. (2020); Cheung et al. (2020); Mao et al. (2021). Concurrently to our work, Touati & Vincent (2020) propose an algorithm combining weighted least-squares value iteration with the optimistic principle, achieving the same $\tilde{O}(B^{1/4}d^{5/4}H^{5/4}T^{3/4})$ regret as we do with knowledge of the total variation $B$. They do not have a dynamic regret bound when knowledge of the local variations is available.
Their proposed algorithm uses exponential weights to smoothly forget data that are far in the past. By contrast, our algorithm periodically restarts the LSVI-UCB algorithm from scratch to handle the nonstationarity and is much more computationally efficient. Another concurrent work by Wei & Luo (2021) follows a substantially different approach to achieve the optimal $T^{2/3}$ regret. The key idea of their algorithm is to run multiple base algorithms for stationary instances with different durations simultaneously, under a carefully designed random schedule. Compared with them, our algorithm has a slightly worse rate but a much better computational complexity, since we only need to maintain one instance of the base algorithm. Neither of these two concurrent works has empirical results, and we are also the first to conduct numerical experiments on online exploration for nonstationary MDPs (Section 6). Other related and concurrent works investigate online exploration in different classes of nonstationary MDPs, including linear kernel MDPs (Zhong et al., 2021), constrained tabular MDPs (Ding & Lavaei, 2022), and the stochastic shortest path problem (Chen & Luo, 2022).
Reinforcement learning (RL) is a core control problem in which an agent sequentially interacts with an unknown environment to maximize its cumulative reward (Sutton & Barto, 2018). RL finds enormous applications in real-time bidding in advertisement auctions (Cai et al., 2017), autonomous driving (Shalev-Shwartz et al., 2016), gaming-AI (Silver et al., 2018), and inventory control (Agrawal & Jia, 2019), among others. Due to the large dimension of sequential decision-making problems that are of growing interest, classical RL algorithms designed for finite state space such as tabular Q-learning (Watkins & Dayan, 1992) no longer yield satisfactory performance. Recent advances in RL rely on function approximators such as deep neural nets to overcome the curse of dimensionality, i.e., the value function is approximated by a function which is able to predict the value function for unseen state-action pairs given a few training samples. This function approximation technique has achieved remarkable success in various large-scale decision-making problems such as playing video games (Mnih et al., 2015), the game of Go (Silver et al., 2017), and robot control (Akkaya et al., 2019). Motivated by the empirical success of RL algorithms with function approximation, there is growing interest in developing RL algorithms with function approximation that are statistically efficient (Yang & Wang, 2019; Cai et al., 2020; Jin et al., 2020; Modi et al., 2020; Wang et al., 2020; Wei et al., 2021; Neu & Olkhovskaya, 2021; Jiang et al., 2017; Wang et al., 2020; Jin et al., 2021; Du et al., 2021). The focus of this line of work is to develop statistically efficient algorithms with function approximation for RL in terms of either regret or sample complexity. Such efficiency is especially crucial in data-sparse applications such as medical trials (Zhao et al., 2009).
Motivated by empirical success of deep RL, there is a recent line of work analyzing the theoretical performance of RL algorithms with function approximation (Yang & Wang, 2019; Cai et al., 2020; Jin et al., 2020; Modi et al., 2020; Ayoub et al., 2020; Wang et al., 2020; Zhou et al., 2021; Wei et al., 2021; Neu & Olkhovskaya, 2021; Huang et al., 2021; Modi et al., 2021; Jiang et al., 2017; Agarwal et al., 2020; Dong et al., 2020; Jin et al., 2021; Du et al., 2021; Foster et al., 2021a; Chen et al., 2022). Recent work also studies the instance-dependent sample complexity bound for RL with function approximation, which adapts to the complexity of the specific MDP instance (Foster et al., 2021b; Dong & Ma, 2022). All of these works assume that the learner is interacting with a stationary environment. In sharp contrast, this paper considers learning in a nonstationary environment. As we will show later, if we do not properly adapt to the nonstationarity, linear regret is incurred.
A
A two-sample test is performed to decide whether to accept the null hypothesis $H_0:\mu=\nu$ or the general alternative hypothesis $H_1:\mu\neq\nu$.
Under $H_1$, we set the distribution $\mu$ to be the uniform distribution on $[-1,1]^d$, and $\nu$ to be the Gaussian distribution $\mathcal{N}(0,\sigma^{2}I_d)$ truncated to the interval $[-1,1]^d$ with $\sigma=\frac{1}{1.96}$.
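A small sketch of this simulation setup for $H_1$ (the function names are ours; the truncated Gaussian is drawn by rejection sampling, one of several valid ways to sample it):

```python
import numpy as np

def sample_mu(n, d, rng):
    """Sample n points from the uniform distribution on [-1, 1]^d."""
    return rng.uniform(-1.0, 1.0, size=(n, d))

def sample_nu(n, d, rng, sigma=1.0 / 1.96):
    """Sample n points from N(0, sigma^2 I_d) truncated to [-1, 1]^d,
    via rejection sampling."""
    out = np.empty((0, d))
    while out.shape[0] < n:
        cand = rng.normal(0.0, sigma, size=(n, d))
        keep = np.all(np.abs(cand) <= 1.0, axis=1)
        out = np.vstack([out, cand[keep]])
    return out[:n]
```

With $\sigma = 1/1.96$, roughly 95% of each Gaussian coordinate already falls inside $[-1,1]$, so the rejection step is cheap.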
Similarly, in change-point detection [4, 5, 6], the post-change observations follow a different distribution from the pre-change one.
Given the function space $\mathcal{F}$ and a distribution $\mu$, define the Rademacher complexity as
For instance, in anomaly detection [1, 2, 3], the abnormal observations follow a different distribution from the typical distribution.
D
Specifically, we apply a DGM to learn the nuisance variables $Z$, conditioned on the output image of the first part, and use $Z$ in the normalization layers of the decoder network to shift and scale the features extracted from the input image. This process adds the detail information captured in $Z$ while maintaining the semantic information captured in $C$ to obtain the final reconstruction (Image 1d in our example).
In this paper we propose a principled framework, DS-VAE, for correctly realizing the data generation hypothesis while avoiding the disentangled representation vs. reconstruction trade-off.
While the aforementioned models made significant progress on the problem, they suffer from an inherent trade-off between learning DR and reconstruction quality. If the latent space is heavily regularized, not allowing enough capacity for the nuisance variables, reconstruction quality is diminished. On the other hand, if the unconstrained nuisance variables have enough capacity, the model can use them to achieve a high-quality reconstruction while ignoring the latent variables related to the disentangled factors. This phenomenon is sometimes called the "shortcut problem" and has been discussed in previous works [DBLP:conf/iclr/SzaboHPZF18].
We introduce the DS-VAE framework for learning DR without compromising on the reconstruction quality. DS-VAE can be seamlessly applied to existing DGM-based DR learning methods, therefore, allowing them to learn a complete representation of the data.
The model has two parts. First, we apply a DGM to learn only the disentangled part, $C$, of the latent space. We do that by applying any of the above-mentioned VAEs (in this exposition we use unsupervised trained VAEs as our base models, but the framework also works with GAN-based or flow-based DGMs, supervised, semi-supervised or unsupervised; in the Appendix we present such implementations), where we significantly constrain the capacity of the learned representation and heavily regularize the model to produce independent factors. As we explained above, such a model will likely learn a good disentangled representation; however, its reconstruction will be of low quality, as it will only be able to generate the information captured by the disentangled factors while averaging the details. For example, in Figure 1, the model uses $\beta$-TCVAE [mig] to retrieve the pose of the model as a latent factor. In the reconstruction, the rest of the details are averaged, resulting in a blurry image (1b). The goal of the second part of the model is to add the details while maintaining the semantic information retrieved in the first stage. In Figure 1, that means transforming Image 1b (the output of the first stage) to be as similar as possible to Image 1a (the target observation). We can view this as a style transfer task and use a technique from [adaIN] to achieve our goal.
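The style-transfer technique referenced here, adaptive instance normalization, can be sketched as follows: the content features are re-normalized to carry the per-channel statistics of the style features. This is a minimal numpy version on $(C, H, W)$ feature maps (the function name and shapes are ours, not the paper's implementation):

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization: standardize the content features
    per channel, then shift/scale them to match the per-channel mean and
    std of the style features. Both inputs have shape (C, H, W)."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    return s_std * (content - c_mean) / (c_std + eps) + s_mean
```

In the framework above, the "style" statistics would come from the nuisance variables, so the decoder keeps the semantic content while the details are injected through the normalization layers.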
C
For this purpose, one would ideally like to use an algorithm that provides sparsity, but also algorithmic stability in the sense that, given two very similar data sets, the set of selected views should vary little. However, sparse algorithms are generally not stable, and vice versa (Xu et al., 2012).
An example of the trade-off between sparsity and interpretability of the set of selected views occurs when different views, or combinations of views, contain the same information. If the primary concern is sparsity, a researcher may be satisfied with just one of these combinations being selected, preferably the smallest set which contains the relevant information. But if there is also a desire to interpret the relationships between the views and the outcome, it may be more desirable to identify all of these combinations, even if this includes some redundant information. If one wants to go even further and perform formal statistical inference on the set of selected views, one may additionally be interested in theoretically controlling, say, the family-wise error rate (FWER) or false discovery rate (FDR) of the set of selected views. However, strict control of such an error rate could end up harming the predictive performance of the model, thus leading to a trade-off between the interpretability of the set of selected views and classification accuracy.
Another relevant factor is interpretability of the set of selected views. Although sparser models are typically considered more interpretable, a researcher may be interested in interpreting not only the model and its coefficients, but also the set of selected views. For example, one may wish to make decisions on which views to measure in the future based on the set of views selected using the current data.
Excluding the interpolating predictor, stability selection produced the sparsest models in our simulations. However, this led to a reduction in accuracy whenever the correlation within features from the same view was of a similar magnitude as the correlations between features from different views. In both gene expression data sets stability selection likewise produced the sparsest models, but it had the worst classification accuracy of all meta-learners. In applying stability selection, one has to specify several parameters. We calculated the values of these parameters in part by specifying a desired bound on the PFER (in our case 1.5). This kind of error control is much less strict than the typical family-wise error rate (FWER) or FDR control one would apply when doing statistical inference. In fact, one can observe in Figures 3 and 4 that although stability selection has a low FPR, for a sample size of 200 its FDR is still much higher than one would typically consider acceptable when doing inference (common FDR control levels are 0.05 or 0.1). Additionally, we gave the meta-learner information about the number of views containing signal in the data (parameter $q$), which the other meta-learners did not have access to. It is also worth noting that the sets of views selected by stability selection in both gene expression data sets had low view selection stability. Ideally, selecting views based on their stability would lead to a set of selected views that is itself highly stable, but evidently this is not the case. It follows then that stability selection may produce a set of selected views which is neither particularly useful for prediction, nor for inference. One could add additional assumptions (Shah & Samworth, 2013), which may increase predictive performance, but may also increase FDR. Or one could opt for stricter error control, but this would likely reduce classification performance even further.
This implies that performing view selection for both the aims of prediction and inference using a single procedure may produce poor results, since the resulting set of selected views may not be suitable for either purpose.
We apply multi-view stacking to each simulated training set, using logistic ridge regression as the base-learner. Once we obtain the matrix of cross-validated predictions $\bm{Z}$, we apply the seven different meta-learners. To assess classification performance, we generate a matching test set of 1000 observations for each training set, and calculate the classification accuracy of the stacked classifiers on this test set. To assess view selection performance we calculate three different measures: (1) the true positive rate (TPR), i.e. the average proportion of views truly related to the outcome that were correctly selected by the meta-learner; (2) the false positive rate (FPR), i.e. the average proportion of views not related to the outcome that were incorrectly selected by the meta-learner; and (3) the false discovery rate (FDR), i.e. the average proportion of the selected views that are not related to the outcome.
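The three view selection measures can be computed per simulated data set as in this small sketch (the function name and boolean-indicator encoding are ours):

```python
import numpy as np

def view_selection_rates(selected, truth):
    """Compute (TPR, FPR, FDR) for one set of selected views.
    selected[j] is True if view j was selected by the meta-learner;
    truth[j] is True if view j is truly related to the outcome."""
    selected = np.asarray(selected, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.sum(selected & truth)        # correctly selected views
    fp = np.sum(selected & ~truth)       # incorrectly selected views
    tpr = tp / max(truth.sum(), 1)       # proportion of true views found
    fpr = fp / max((~truth).sum(), 1)    # proportion of null views selected
    fdr = fp / max(selected.sum(), 1)    # proportion of selections that are null
    return tpr, fpr, fdr
```

Averaging these quantities over the repeated simulated training sets gives the reported TPR, FPR, and FDR.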
A
In this paper, we build on recent developments for generalized linear bandits (Faury et al. [2020]) to propose a new optimistic algorithm, CB-MNL, for the problem of contextual multinomial logit bandits. CB-MNL follows the standard template of optimistic parameter search strategies (also known as optimism in the face of uncertainty approaches) [Abbasi-Yadkori et al., 2011, Abeille et al., 2021]. We use Bernstein-style concentration for self-normalized martingales, previously proposed in the context of scalar logistic bandits in Faury et al. [2020], to define our confidence set over the true parameter, taking into account the effects of the local curvature of the reward function. We show that the performance of CB-MNL (as measured by regret) is bounded as $\tilde{\mathrm{O}}(d\sqrt{T}+\kappa)$, significantly improving the theoretical performance over existing algorithms where $\kappa$ appears as a multiplicative factor in the leading term. We also leverage a self-concordance [Bach, 2010] like relation for the multinomial logit reward function [Zhang & Lin, 2015], which helps us limit the effect of $\kappa$ on the final regret upper bound to only the higher-order terms. Finally, we propose a different convex confidence set for the optimization problem in the decision set of CB-MNL, which reduces the optimization problem to a constrained convex problem.
choice model for capturing consumer purchase behavior in assortment selection models (see Flores et al. [2019] and Avadhanula [2019]). Recently, large-scale field experiments at Alibaba [Feldman et al., 2018] have demonstrated the efficacy of the MNL model in boosting revenues. Rusmevichientong et al. [2010] and Sauré & Zeevi [2013] were a couple of early works that studied explore-then-commit strategies for the dynamic assortment selection problem under the MNL model when there are no contexts/product features. The works of Agrawal et al. [2019] and Agrawal et al. [2017] revisited this problem and presented adaptive online learning algorithms based on the Upper Confidence Bounds (UCB) and Thompson Sampling (TS) ideas. These approaches, unlike earlier ideas, did not require prior information about the problem parameters and had near-optimal regret bounds. Following these developments, the contextual variant of the problem has received considerable attention. Cheung & Simchi-Levi [2017] and Oh & Iyengar [2019] propose TS-based approaches and establish Bayesian regret bounds on their performance (our results give worst-case regret bounds, which are strictly stronger: worst-case regret bounds directly imply Bayesian regret bounds with the same order dependence). Chen et al. [2020] present a UCB-based algorithm and establish min-max regret bounds. However, these contextual MNL algorithms and their performance bounds depend on a problem parameter $\kappa$ that can be prohibitively large, even for simple real-life examples. See Figure 1 for an illustration and Section 1.2 for a detailed discussion.
Our result is still $\mathrm{O}(\sqrt{d})$ away from the minimax lower bound of Chu et al. [2011] known for the linear contextual bandit. In the case of logistic bandits, Li et al. [2017] make an i.i.d. assumption on the contexts to bridge the gap (however, they still retain the $\kappa$ factor). Improving the worst-case regret bound by $\mathrm{O}(\sqrt{d})$ while keeping $\kappa$ as an additive term is an open problem. It may be possible to improve the dependence on $\kappa$ by using a higher-order approximation for the estimation error. Finding a lower bound on the dependence on $\kappa$ is an interesting open problem and may require newer techniques than presented in this work.
where pessimism is the additive inverse of the optimism (the difference between the payoffs under the true parameters and those estimated by CB-MNL). Due to optimistic decision-making and the fact that $\theta_{*}\in C_{t}(\delta)$ (see Eq (12)), pessimism is non-positive for all rounds. Thus, the regret is upper bounded by the sum of the prediction error over $T$ rounds. In Section 4.1 we derive the expression for the prediction error upper bound for a single round $t$. We also contrast with the previous works Filippi et al. [2010], Li et al. [2017], Oh & Iyengar [2021] and point out specific technical differences which allow us to use a Bernstein-like tail concentration inequality and, therefore, achieve stronger regret guarantees. In Section 4.2, we describe the additional steps leading to the statement of Theorem 1. The style of the arguments is simpler and shorter than that in Faury et al. [2020]. Finally, in Section 4.3, we discuss the relationship between the two confidence sets $C_{t}(\delta)$ and $E_{t}(\delta)$ and show that even using $E_{t}(\delta)$ in place of $C_{t}(\delta)$, we obtain regret upper bounds with the same parameter dependence as in Corollary 2.
In summary, our work establishes strong worst-case regret guarantees by carefully accounting for local gradient information and using second-order function approximation for the estimation error.
D
G1: Analysis of predictions and validation metrics for the identification of effective hyperparameters.
The aforementioned works that make use of genetic algorithms contain similar mechanisms as in VisEvol, but without VA support for (1) the exploration of the interconnected hyperparameters, and (2) the selection of the proper number of models that should crossover and mutate.
In this paper, we presented VisEvol, a VA tool that supports hyperparameter search through evolutionary optimization. With multiple coordinated views, we allow users to generate new hyperparameter sets and store the already robust hyperparameters in a majority-voting ensemble. Exploring the impact of adding and removing algorithms and models in a majority-voting ensemble from different perspectives, and tracking the crossover and mutation process, enables users to be confident in how to proceed with the selection of hyperparameters for a single model or for complex ensembles that require a combination of the most performant and diverse models. The effectiveness of VisEvol was examined with use cases on real-world data that demonstrated how the methods support achieving performance improvements. Our tool's workflow and visual metaphors received positive feedback from three ML experts, who also identified limitations of VisEvol. These limitations pose future research directions for us.
The study of the impact of particular hyperparameters is considered a future direction for VisEvol. Also, E3 stated that we could allow the user to specify the hyperparameter ranges at every stage and test alternative mutation strategies [CK05]. E1 expressed his interest in checking combinations of evolutionary optimization with the crossover and mutation process applied to the best-performing models (e.g., [YRK∗15]). However, as the user usually adds as few models as possible to the ensembles, the hyperparameters' evolution for the excluded algorithms will be infeasible. We plan to overcome such limitations.
We aim to support the exploration of algorithms and models with various hyperparameters (R1) as follows:
D
Mixed-$\mathrm{SLIM}_{\tau\mathrm{appro}}$
In this section, four real-world network datasets with known label information are analyzed to test the performances of our Mixed-SLIM methods for community detection. The four datasets can be downloaded from
In this paper, we extend the symmetric Laplacian inverse matrix (SLIM) method (SLIM, ) to mixed membership networks and call the proposed method Mixed-SLIM. As mentioned in SLIM, the idea of using the symmetric Laplacian inverse matrix to measure the closeness of nodes comes from the first hitting time in a random walk. SLIM combined the symmetric Laplacian inverse matrix with the spectral method based on DCSBM for community detection, and it outperforms state-of-the-art methods on many real and simulated datasets. Therefore, it is worth extending this method to mixed membership networks. Numerical results on simulations and substantial empirical datasets in Section 5 show that our proposed Mixed-SLIM indeed enjoys satisfactory performance compared to the benchmark methods for both the community detection problem and the mixed membership community detection problem.
Table 2 records the error rates on the four real-world networks. The numerical results suggest that Mixed-SLIM methods enjoy satisfactory performance compared with SCORE, SLIM, OCCAM, Mixed-SCORE, and GeoNMF when detecting communities in the four empirical datasets. In particular, the number of errors for Mixed-SLIM on the Polblogs network is 49, which is the smallest reported in the literature for this dataset as far as we know.
We report the averaged mixed Hamming error rates for our methods and the other three competitors in Table 4. Mixed-$\mathrm{SLIM}_{\tau\mathrm{appro}}$ outperforms the other three Mixed-SLIM methods on all SNAP ego-networks, and it significantly outperforms Mixed-SCORE, OCCAM, and GeoNMF on the GooglePlus and Twitter networks. Mixed-SLIM methods have smaller averaged mixed Hamming error rates than Mixed-SCORE, OCCAM, and GeoNMF on the GooglePlus and Twitter networks, while they perform slightly worse than Mixed-SCORE on the Facebook networks. Meanwhile, we also find that OCCAM and GeoNMF share similar performance on the ego-networks. It is interesting that the error rates on the Twitter and GooglePlus networks are higher than those on Facebook, which may be because the Twitter and GooglePlus networks have a higher proportion of overlapping nodes than Facebook.
C
That is, when the target functions under Assumption 4.7 belong to a smoother RKHS class, variational transport attains a smaller statistical error.
we first solve the inner variational problem associated with the objective functional using the particles.
We study the distributional optimization problem where the objective functional admits a variational form.
Our Contribution. Our contribution is twofold. First, utilizing the optimal transport framework and the variational form of the objective functional, we propose a novel variational transport algorithmic framework for solving the distributional optimization problem via particle approximation.
The following assumption characterizes the regularity of the solution to the inner optimization problem associated with the variational representation of the objective functional.
B
The name of the data column containing observation times is supplied to the times argument; the name of the column containing the unit names is supplied to the units argument.
The prediction step advances the Monte Carlo ensemble to the next observation time by using simulations from the postulated model
The t0 argument supplies the initial time from which the dynamic system is modeled, which should be no greater than the first observation time.
The neighborhood is supplied via the nbhd argument to abf as a function which takes a point in space-time, $(u,n)$, and returns a list of points in space-time which correspond to $B_{u,n}$.
The name of the data column containing observation times is supplied to the times argument; the name of the column containing the unit names is supplied to the units argument.
B
Teal encodes the current action's score, and brown the best result reached so far. These colors were chosen deliberately: they complement each other, and the brighter teal denotes the current action.
The size of the circle encodes the order of the main actions, with larger radii for recent steps. The brown color is used only if the overall performance increases.
The brown circles in the punchcard in Fig. 1(e) enable us to acknowledge that the feature generation boosted the overall performance of the classifier.
To verify each of our interactions, we continuously monitor the process through the punchcard, as shown in Fig. 6(c). From this visualization, we acknowledge that when F16 was excluded, we reached a better result. The feature generation process (described previously) led to the best predictive result we managed to accomplish. The new feature is appended at the end of the list in the punchcard. From the grouped bar chart in Fig. 6(c), the improvement is prominent for all validation metrics because the brown-colored bars are at the same level as the teal bars. To summarize, FeatureEnVi supported the exploration of valuable features and offered transparency to the process of feature engineering. The following case study is another proof of this concept.
(a) presents another transformation of the second most impactful feature (according to Fig. 5(b)). F4_p4///F15///F18_l1p is the most important combination (see the darker green color in (b)). The punchcard visualization in (c) indicates that the performance increased when we removed F16, and that the new feature further boosted the predictive results. For all metrics in the grouped bar chart, the best values are equal to the current results.
B
These techniques impair the ability of the representation learner to encode biases [69, 1, 52, 25]. Like ensembling methods, they also employ a two-branch setup, with the representation encoder in the main branch being penalized if the bias-only branch $f_b(\cdot)$ is successful at predicting biases from the representations [69]. Alternately, $f_b(\cdot)$ may be trained to predict the class label from the biased features [52, 25], but in either case, the gradient from $f_b(\cdot)$ is reversed during backpropagation for debiasing.
IRMv1 [5] is an efficient approximation of an otherwise computationally expensive bi-level IRM objective. It consists of a regularization constraint on the gradient norm with respect to a fixed scalar $\theta_c = 1.0$:
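A minimal numpy sketch of this gradient-norm penalty for a single environment (our illustration, not the paper's code: we assume a logistic risk on labels in $\{-1,+1\}$ and compute the squared gradient of the risk of $\theta_c \cdot \text{logits}$ at $\theta_c = 1$ analytically):

```python
import numpy as np

def irmv1_penalty(logits, labels):
    """Squared gradient of the logistic risk R(theta_c * logits) with
    respect to the fixed scalar classifier theta_c = 1.0, for one
    environment. labels must be in {-1, +1}."""
    theta_c = 1.0
    margins = labels * theta_c * logits
    # d/d theta_c of mean(log(1 + exp(-labels * theta_c * logits)))
    grad = np.mean(-labels * logits / (1.0 + np.exp(margins)))
    return grad ** 2
```

In the full IRMv1 objective this penalty is averaged over environments and added to the empirical risk with a weighting coefficient.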
Learning Not to Learn (LNL) [37] uses an adversarial setup derived from minimization of mutual information between representation and bias. In addition to the gradient reversal, the mutual information formulation introduces an entropy regularization on the bias predictions.
It is unknown how well the methods scale up to multiple sources of biases and a large number of groups, even when they are explicitly annotated. To study this, we train the explicit methods with multiple explicit variables for Biased MNISTv1 and with individual variables that lead to hundreds and thousands of groups for GQA, and compare them with the implicit methods. For Biased MNISTv1, we first sort the seven total variables in descending order of MMD (obtained by StdM) and then conduct a series of experiments. In the first experiment, the most exploited variable, distractor shape, is used as the explicit bias. In the second experiment, the two most exploited variables, distractor shape and texture, are used as explicit biases. This is repeated until all seven variables are used (the exact order is given in the Appendix). Note that conducting the seventh experiment entails annotating each instance with every possible source of bias. While this may not be realistic in practice, such a controlled setup will reveal whether the explicit methods can generalize when they have complete information about every bias source.
An interesting observation was that a weaker architecture, CNNs, was able to ignore position bias, whereas a more powerful architecture, CoordConv, resorted to exploiting this bias, resulting in worse performance. While the community has largely focused on training procedures for bias mitigation, an exciting avenue for future work is to incorporate appropriate inductive biases into the architectures, perhaps endowing them with the ability to choose the minimal computational power needed for a task so that they are less sensitive to unwanted biases. This would essentially enable the algorithms to use Occam's razor to determine the minimal capabilities required to do a task, reducing their ability to exploit biases.
B
It is worth mentioning that generating $\hat{Y}^{(s)}(\mathbf{x})$ incurs a constant cost of $\mathcal{O}(M^{3})$. However, sampling from a GP posterior distribution on a discrete domain $\mathcal{X}$ has a computational complexity of $\mathcal{O}(|\mathcal{X}|^{3})$ due to a required Cholesky decomposition of the covariance matrix. The computational burden of this sampling strategy becomes prohibitive as $|\mathcal{X}|$ grows exponentially with the dimension [39].
The procedure to generate a sample of the GP posterior is outlined in Algorithm 1. One can then generate multiple such GP samples by drawing different realisations $\mathbf{w}^{(s)}$. This idea is used to emulate dynamical simulators, where draws from the emulated flow map are employed to perform one-step-ahead predictions. With this, we can quantify the uncertainty of the time series prediction, as described in the next section.
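A minimal weight-space sketch of an RFF-based posterior draw for a squared-exponential kernel (our illustration of the general technique, not the paper's Algorithm 1; all names and default values are assumptions):

```python
import numpy as np

def rff_gp_posterior_sample(x_train, y_train, num_features=300,
                            lengthscale=0.3, noise=1e-6, seed=0):
    """Draw an approximate GP posterior sample path via random Fourier
    features. The returned closure can be evaluated at any new input,
    which is exactly what the one-step-ahead prediction loop needs."""
    rng = np.random.default_rng(seed)
    d = x_train.shape[1]
    # Spectral frequencies and phases approximating the SE kernel
    omega = rng.normal(0.0, 1.0 / lengthscale, size=(d, num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)

    def phi(x):
        return np.sqrt(2.0 / num_features) * np.cos(x @ omega + b)

    # Bayesian linear model in feature space: y = phi(x) w + eps,
    # prior w ~ N(0, I). Posterior: N(A^{-1} P^T y, noise * A^{-1}).
    P = phi(x_train)
    A = P.T @ P + noise * np.eye(num_features)
    mean = np.linalg.solve(A, P.T @ y_train)
    # Sample w from the posterior using the Cholesky factor of A
    L = np.linalg.cholesky(A)
    z = rng.standard_normal(num_features)
    w = mean + np.sqrt(noise) * np.linalg.solve(L.T, z)
    return lambda x_new: phi(x_new) @ w
```

Each call with a different seed yields a different realisation of the weights, i.e. a different analytically tractable sample path of the emulated function.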
This paper presents a novel data-driven approach for emulating complex dynamical simulators that relies on emulating the numerical flow map over a short period of time. The flow map is a function that maps an initial condition to the solution of the system at a future time $t$. We emulate the numerical flow map of the system over the initial (short) time step via GPs. The idea is that GP emulators model the underlying function (in this case, the flow map) as a probabilistic distribution, and their sample paths provide a characterisation of the function throughout its entire domain. These sample paths extend the notion of merely being a distribution over individual function values at specific points, such as those generated from a multivariate normal distribution. The model output time series is then predicted relying on the Markov assumption; a sample path from the emulated flow map is drawn and employed in an iterative manner to perform one-step-ahead predictions. By repeating this procedure with multiple draws, we acquire a distribution over the time series whose mean and variance at a specific time point serve as the model output prediction and the associated uncertainty, respectively. However, obtaining a GP sample path, evaluable at any location in the domain for use in one-step-ahead predictions, is infeasible. To address this challenge, we employ RFF [42], as described in Section 3. RFF is a technique for approximating the GP kernel using a finite number of its Fourier features. The resulting approximate GP samples, generated with RFF, are analytically tractable, providing both theoretical guarantees and computational efficiency.
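The iterative one-step-ahead prediction described above amounts to repeatedly feeding a single flow-map draw its own output (a sketch; the function name is ours, and `flow_map_draw` stands for any draw evaluable at arbitrary states):

```python
import numpy as np

def rollout(flow_map_draw, x0, n_steps):
    """Iterate one draw of the emulated flow map to predict a trajectory
    one step at a time, under the Markov assumption. Repeating this with
    many independent draws yields a distribution over trajectories."""
    traj = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        traj.append(flow_map_draw(traj[-1]))
    return np.array(traj)
```

The pointwise mean and variance across many such rollouts then serve as the time series prediction and its uncertainty.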
This work presents a novel approach for emulating dynamical simulators, where samples from the posterior GP are defined analytically. To do this, we approximate the kernel with RFF, given that there is no known method to draw exact GP samples. The approximate sample paths are then employed to perform one-step-ahead prediction, as explained in Section 4. We found that the new method performs adequately and can capture a significant portion of the required uncertainty quantification.
We proposed a novel data-driven approach for emulating deterministic complex dynamical systems implemented as computer codes. The output of such models is a time series and presents the evolving state of a physical phenomenon over time. Our method is based on emulating the short-time numerical flow map of the system and using draws of the emulated flow map in an iterative manner to perform one-step ahead predictions. The flow map is a function that returns the solution of a dynamic system at a certain time point, given initial conditions. In this paper, the numerical flow map is emulated via a GP and its approximate sample paths are generated with random Fourier features. The approximate GP draws are employed in the one-step ahead prediction paradigm which results in a distribution over the time series. The mean and variance of that distribution serve as the time series prediction and the associated uncertainty, respectively. The proposed method is tested on several nonlinear dynamic simulators such as the Lorenz, van der Pol, and Hindmarsh-Rose models. The results suggest that our approach can emulate those systems accurately and the prediction uncertainty can capture the true trajectory with a good accuracy. A future work direction is to conduct quantitative studies such as uncertainty quantification and sensitivity analysis on computationally expensive dynamical simulators emulated by the method suggested in this paper.
A
While the plug-in procedure admits an analytical solution, which depends on unknown quantities that need to be estimated, the double kernel method is performed empirically. Notice also that one may use the maximum likelihood cross-validation method to determine the smoothing parameter; however, this procedure performs very poorly, as indicated in Devroye (1997). The double kernel method uses a pair of kernels, $K(\cdot)$ and $L(\cdot)$, and picks $H=\arg\min_{h}\int\left|f_{n,h}-g_{n,h}\right|\,dx$, where $f_{n,h}(\cdot)$ and $g_{n,h}(\cdot)$ are the kernel estimates with kernels $K(\cdot)$ and $L(\cdot)$, respectively. Assume that $d=1$. If the characteristic functions of $K(\cdot)$ and $L(\cdot)$ do not coincide on an open interval about the origin, then the choice $H$ is consistent; refer to Devroye (1989).
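A sketch of the double kernel selection in Python, under the assumption of a Gaussian/Epanechnikov kernel pair and a grid search over $h$ (the actual procedure in Devroye (1989) includes further refinements):

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.standard_normal(300)        # sample from the unknown density

gauss = lambda u: np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
epan = lambda u: 0.75 * np.clip(1.0 - u**2, 0.0, None)   # Epanechnikov

def kde(x_grid, data, h, kernel):
    # Kernel density estimate f_{n,h} evaluated on a grid.
    u = (x_grid[:, None] - data[None, :]) / h
    return kernel(u).mean(axis=1) / h

x_grid = np.linspace(-5.0, 5.0, 400)
dx = x_grid[1] - x_grid[0]

def l1_gap(h):
    # Approximate int |f_{n,h} - g_{n,h}| dx by a Riemann sum on the grid.
    return np.abs(kde(x_grid, data, h, gauss) - kde(x_grid, data, h, epan)).sum() * dx

hs = np.linspace(0.05, 1.5, 60)
H = hs[np.argmin([l1_gap(h) for h in hs])]
```

The grid for $h$ is restricted to a plausible range; the characteristic functions of the two kernels differ near the origin, as the consistency condition requires.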
The ideal bandwidth selection for nonparametric testing differs from that for nonparametric estimation because we must balance the test's size and power rather than the estimator's bias and variance. There are no methods for calculating the appropriate bandwidth for our test, and it is difficult to formulate a theory that provides the solution. The choice of bandwidth determines the sensitivity with which specific types of dependence can be identified and, thus, affects the practical performance of the test. Ideally, we should select a bandwidth $h$ that provides the best power for a given sample size, but deriving such a procedure is intricate enough to require a separate study.
For testing a parametric model for conditional mean function against a nonparametric alternative, Horowitz and Spokoiny (2001) proposed an adaptive-rate-optimal rule. Gao and Gijbels (2008) proposed, utilizing the Edgeworth expansion of the asymptotic distribution of the test, to select the bandwidth such that the power function of the test is maximized while the size function is controlled.
To the best of our knowledge, this is the first time that the general $L_1$-norm context for testing independence has appeared in the literature, and it gives the main motivation of the present work by responding to the open problems mentioned in Gretton and Györfi (2010). The main contribution of this paper is to establish the asymptotic distribution of the proposed test statistic under the null hypothesis and under local alternatives that converge to the null at the rate of $n^{-1/2}h_{n}^{-d/4}$. As an important feature, the $L_1$-based
tests are all model-free. Furthermore, no regularity requirements on the densities are necessary to demonstrate the asymptotic normality of our statistic, a desirable attribute. We conduct simulations to determine the size and power of the test. We illustrate that the proposed test has superior power characteristics compared to existing tests in the various situations analyzed. The proposed test encompasses all dependency types, including complicated dependence structures, in particular sinusoidal dependence.
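A histogram-based sketch of an $L_1$-type independence statistic (a simplified illustration with fixed bins, not the paper's kernel-based statistic):

```python
import numpy as np

rng = np.random.default_rng(5)

def l1_independence_stat(x, y, n_bins=8):
    # Sum over cells of |P_hat(X, Y) - P_hat(X) P_hat(Y)|.
    joint, _, _ = np.histogram2d(x, y, bins=n_bins)
    joint = joint / joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    return np.abs(joint - px * py).sum()

x = rng.standard_normal(2000)
stat_indep = l1_independence_stat(x, rng.standard_normal(2000))
stat_dep = l1_independence_stat(x, np.sin(4.0 * x) + 0.1 * rng.standard_normal(2000))
```

The sinusoidal case illustrates the kind of complicated dependence structure the test targets: the statistic is markedly larger under dependence than under independence.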
A
We also show that the Away-step Frank-Wolfe [Wolfe, 1970, Lacoste-Julien & Jaggi, 2015] and the Blended Pairwise Conditional Gradient [Tsuji et al., 2022] can use the aforementioned line search to achieve linear rates over polytopes.
We also show that the Away-step Frank-Wolfe [Wolfe, 1970, Lacoste-Julien & Jaggi, 2015] and the Blended Pairwise Conditional Gradient [Tsuji et al., 2022] can use the aforementioned line search to achieve linear rates over polytopes.
We also show improved convergence rates for several variants in various cases of interest and prove that the AFW [Wolfe, 1970, Lacoste-Julien & Jaggi, 2015] and BPCG Tsuji et al. [2022] algorithms coupled with the backtracking line search of Pedregosa et al. [2020] can achieve linear convergence rates over polytopes when minimizing generalized self-concordant functions.
for $\mathcal{X}$, to obtain a linear convergence rate in primal gap over polytopes given in inequality description. The authors in Dvurechensky et al. [2022] also present an
For clarity we want to stress that any linear rate over polytopes also has to depend on the ambient dimension of the polytope; this applies to our linear rates and to those in Table 1 established elsewhere (see Diakonikolas et al. [2020]).
D
We prove these theorems via a new notion, pairwise concentration (PC) (Definition 4.2), which captures the extent to which replacing one dataset by another would be “noticeable,” given a particular query-response sequence. This is thus a function of particular differing datasets (instead of worst-case over elements), and it also depends on the actual issued queries. We then build a composition toolkit (Theorem 4.4) that allows us to track PC losses over multiple computations.
We measure the harm that past adaptivity causes to a future query by considering the query as evaluated on a posterior data distribution and comparing this with its value on a prior. The prior is the true data distribution, and the posterior is induced by observing the responses to past queries and updating the prior. If the new query behaves similarly on the prior distribution as it does on this posterior (a guarantee we call Bayes stability; Definition 3.3), adaptivity has not led us too far astray. (This can be viewed as a generalization of the Hypothesis Stability notion of Bousquet and Elisseeff (2002), which was proven to guarantee on-average generalization (Shalev-Shwartz et al., 2010), where the hypothesis is a post-processing of the responses to past queries, and the future query is the loss function estimation.) If, furthermore, the response given by the mechanism is close to the query result on the posterior, then by a triangle inequality argument, that mechanism is distribution accurate. This type of triangle inequality first appeared as an analysis technique in Jung et al. (2020).
In order to leverage this more careful analysis of the information encoded in query-response sequences, we rely on a simple new characterization (Lemma 3.5) that
The PC notion allows for more careful analysis of the information encoded by the query-response sequence than differential privacy does.
These results extend to the case where the variance (or variance proxy) of each query $q_i$ is bounded by a unique value $\sigma_i^2$, by simply passing this value to the mechanism as auxiliary information and scaling the added noise $\eta_i$ accordingly. Furthermore, using this approach we can quantify the extent to which incorrect bounds affect the accuracy guarantee. Overestimating the bound on a query's variance would increase the error of the response to this query by a factor of the square root of the ratio between the assumed and the correct variance, while the error of the other responses would only decrease. On the other hand, underestimating the bound on a query's variance would only decrease the error of the response to this query, while increasing the error of each subsequent query by a factor of the square root of the ratio between the assumed and the correct variance, divided by the number of subsequent queries. A formal version of this claim can be found in Section E.3.
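As an illustration of the variance-scaled noise idea (a hypothetical mechanism sketch, not the paper's exact construction), each bound $\sigma_i$ can be passed as auxiliary information and used to scale the Gaussian noise added to that query's empirical answer:

```python
import numpy as np

rng = np.random.default_rng(2)

def answer_queries(data, queries, sigmas, noise_scale=0.5):
    # Hypothetical Gaussian-noise mechanism: the noise added to each
    # empirical mean is scaled by that query's variance-proxy bound sigma_i.
    responses = []
    for q, sigma in zip(queries, sigmas):
        empirical = np.mean([q(x) for x in data])
        responses.append(empirical + rng.normal(0.0, noise_scale * sigma / np.sqrt(len(data))))
    return responses
```

With this scaling, an overestimated $\sigma_i$ inflates only that query's noise, matching the trade-off discussed above.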
C
Construct a model with architecture $\mathcal{A}$, where the parameters are sampled from $p(\theta)$
Output: Predictive distribution $p(Y\mid X,\mathcal{D})$
In Bayesian inference one tries to model the distribution of interest by updating a prior estimate using a collection of observed data. The conditional distribution $p(Y\mid X,\mathcal{D})$ is inferred from a given parametric model or likelihood function $p(Y\mid X,\theta)$, a prior distribution $p(\theta)$ over the model parameters, and a data set $\mathcal{D}\equiv(\mathbf{X},\mathbf{y})$. The first step is to update the prior belief based on the data set using Bayes' rule:
return $p(Y\mid X,\mathcal{D})$
Infer the predictive distribution $p(Y\mid X,\mathcal{D})$ using Eq. (5)
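For a concrete instance of this pipeline, conjugate Bayesian linear regression makes every step analytic (a standard textbook example, not the paper's Eq. (5); the prior and noise precisions `alpha` and `beta` are illustrative):

```python
import numpy as np

def posterior(X, y, alpha=1.0, beta=25.0):
    # Posterior p(theta | D) for Bayesian linear regression with prior
    # theta ~ N(0, alpha^{-1} I) and Gaussian likelihood with precision beta.
    S_inv = alpha * np.eye(X.shape[1]) + beta * X.T @ X
    S = np.linalg.inv(S_inv)
    m = beta * S @ X.T @ y
    return m, S

def predictive(x, m, S, beta=25.0):
    # Predictive p(y | x, D) is Gaussian with mean m^T x
    # and variance 1/beta + x^T S x.
    return x @ m, 1.0 / beta + x @ S @ x
```

For a neural architecture $\mathcal{A}$ the same two steps are typically approximated by sampling parameters from $p(\theta)$ rather than computed in closed form.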
D
We focus on the estimates of five primary parameters. (Both of the inertia terms are significantly negative, indicating a tendency of players to bias their actions toward those that they took in the previous round.) The first parameter, $\theta_1$, captures how players weigh the cost of sharing. Not surprisingly, all estimates of this parameter are positive. This means that players follow their incentives and that, holding all else constant, they are more likely to choose actions that are less costly.
The coefficients $\theta_3$ and $\theta_4$ are related to generalized reciprocity. The interpretation of generalized reciprocity is that it measures the tendency of players to share more when they receive more overall benefits as a result of their group's sharing in the previous round, without regard to who exactly shared with them. The estimate of $\theta_3$ is positive, indicating that in the baseline sessions, generalized reciprocity drives some of the observed sharing behavior. The negative coefficient on the treatment effect ($\theta_4$), however, drives behavior in the opposite direction. Because the sum of effects is significantly negative, this suggests a strict tradeoff between generalized and direct reciprocity, and highlights how the information provided in the treatment helps to focus reciprocity toward active collaborators.
The other four main coefficients concern the behavioral component of payoffs. The positive sign of $\theta_2$, which is the coefficient on the interaction between contribution cost and the treatment, indicates that having access to this new information makes players more careful about who they share with. We can interpret this coefficient as a reduction in altruism due to the treatment, since it indicates a tendency of players to focus more on their individual costs of sharing.
Because there are only three trust questions, the first principal component summarizes most of the information from the trust questionnaire. It places positive weight on the question that involves trust and negative weights on two questions that suggest mistrust. Perhaps surprisingly, this measure of trust is associated with a positive interaction on contribution costs in the baseline, which indicates that individuals who score highly on trust are less altruistic and more careful about where they direct effort in the baseline. This agrees with the results of Glaeser et al. (2000), which suggest that such trust questionnaires predict trustworthy behavior but do not necessarily predict trusting behavior. Further in line with these results is a strong positive interaction of the trust characteristic with generalized reciprocity in the baseline. This suggests that these individuals are trustworthy in that they respond to sharing by others by increasing their own contribution. However, they are less likely to share blindly and trust that others will reciprocate. In the treatment, estimates of the effect of trust are less precise but suggest a reversal of this phenomenon; they trust that others will reciprocate when they know that others will be aware of their sharing behavior. This is captured by the negative estimate of the interaction between trust, the treatment indicator, and contribution costs, together with the positive estimate of the coefficient for the interaction between trust, the treatment indicator, and direct reciprocity. This sheds more light on information as a mechanism driving the mixed results regarding trust and sharing behavior in public goods games, observed in previous work (Anderson et al., 2004).
On the other hand, the second component of reciprocity places positive weight on questions involving positive reciprocity and negative weight on questions involving negative reciprocity or punishment. Individuals who align with this characteristic place much lower weight on the actual cost of contributing, suggesting some altruism. While there is some tradeoff in the treatment, the sign of the aggregate interaction term remains negative in the treatment suggesting that these players are still behaving more altruistically than average. Perhaps surprisingly, there is a strong negative coefficient on the interaction between positive reciprocity and generalized reciprocity in the baseline. These together suggest that their increased sharing is not conditional on having received more benefits from their group, possibly representing a tendency to share in anticipation that others will behave reciprocally. This interpretation is reinforced by a large positive effect of the treatment on generalized reciprocity for this group, offset by a small decrease in direct reciprocity. In other words, these individuals reciprocate by sharing with the entire group, and trusting in the reciprocity of others, rather than by using new information as a tool for punishment.
B
On a larger scale, by exploiting the synergy between Bayesian modeling and formal verification methods, we also advocate for the development and use of explainable algorithms where properties relevant to decision-making are incorporated into the data analytic process flow.
We demonstrate our novel approach with spatio-temporal areal data, where measurements are collected over time at various areal units, and a neighboring matrix allows calculating the distance between the different units. In particular, we consider an urban mobility application, given that urban population density dynamics are highly variable both in space and time. For such applications, building a Bayesian spatio-temporal model that accurately predicts future population dynamics, is of paramount importance to decision-makers in the context of urban planning (e.g., who must plan for resource allocation, divert traffic and increase mobile network capabilities temporarily) but has far-reaching implications related to the environment, economy, and health (Gariazzo et al., 2019). In particular, the latter link became even more evident in the context of the COVID-19 pandemic.
In this paper, we propose a Bayesian Machine Learning approach that naturally deals with uncertainty propagation, while simultaneously allowing the values of the parameters to be learned from the data. Our proposed approach extends the classical approach to SMC to a Bayesian framework by performing verification and monitoring on trajectories of the Bayesian predictive distribution
In this paper, we propose a framework for predictive model checking and comparison, where in addition to usual approaches, we advocate for the specification of concrete (spatio-temporal) properties that the predictions from a model should satisfy. Given trajectories from the Bayesian predictive distribution, the posterior predictive probability of satisfaction and the posterior predictive robustness of these properties can be approximated by verifying the properties on each of the trajectories efficiently using techniques from formal verification methods. Finally, we can evaluate ex post the model by comparing the resulting measures with the values in the observed data.
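A minimal sketch of this verification step (the trajectories and the temporal property below are synthetic stand-ins, not the urban-mobility model): draw posterior predictive trajectories, check the property on each, and average to estimate the posterior predictive probability of satisfaction.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in posterior predictive: 500 draws of a 24-step trajectory.
trajectories = 50.0 + rng.standard_normal((500, 24)).cumsum(axis=1) * 3.0

def always_below(traj, threshold):
    # A simple temporal property: "globally x_t < threshold" over the horizon.
    return np.all(traj < threshold)

def satisfaction_probability(trajs, threshold=80.0):
    # Fraction of predictive trajectories on which the property holds.
    return float(np.mean([always_below(t, threshold) for t in trajs]))

p_sat = satisfaction_probability(trajectories)
```

In practice the per-trajectory check would be done with a formal-verification monitor (e.g. an STL monitor yielding robustness values) rather than a hand-coded predicate.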
Therefore, the proposed approach has a clear potential in the area of sustainable cities and urban mobility, as these applications deal with complex systems, with a multitude of stakeholders and with a pressing need for transparency in the decision-making process. We hope for the illustration in the current paper to open the way to further applications.
D
More modern state-of-the-art methods such as Discriminative Deep Learning (DDL) [11] produce excellent results. This has been quantified in the recent benchmark paper [17].
Figure 4: Performance comparison for imputation of the total charge variable among Kriging/BLUP, KNN-Reg, KNN, GLS and DDL for different training/validation proportions of the data. On the horizontal axis we have the percentage proportion for the training dataset. The vertical axis corresponds to the rMSE, MAPE and mean lnQ metrics. As observed for all the metrics rMSE, MAPE and
Estimation: The coefficients $\hat{\bm{\theta}}$ of the covariance function
In Figure 4, we can observe a comparison of performance among different methods for the totchg (total charge) variable: Kriging/BLUP, KNN-Reg, KNN, GLS, and DDL. This comparison is conducted across varying training/validation proportions of the dataset (from 10%/90% to 90%/10%). The horizontal axis depicts the percentage of the dataset used for training, while the vertical axis represents metrics such as rMSE, MAPE, and mean lnQ. Notably, the Kriging/BLUP method consistently outperforms the other methods across all metrics (rMSE, MAPE, and mean lnQ), with its superiority evident in nearly all scenarios.
Unbiased Predictor (BLUP) [27]. We note that we refer to Kriging as both the estimation of the coefficients of the covariance function and BLUP, although we mostly use Kriging/BLUP for clarification
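A minimal zero-mean kriging/BLUP sketch in one dimension, assuming an exponential covariance with fixed (rather than estimated) parameters; all names and defaults are illustrative:

```python
import numpy as np

def kriging_predict(X_train, y_train, X_new, sigma2=1.0, ell=1.0, nugget=1e-6):
    # Zero-mean kriging/BLUP: predict k(x)^T K^{-1} y. In practice the
    # covariance coefficients (sigma2, ell) are estimated first.
    def cov(a, b):
        return sigma2 * np.exp(-np.abs(a[:, None] - b[None, :]) / ell)
    K = cov(X_train, X_train) + nugget * np.eye(len(X_train))
    k = cov(X_train, X_new)
    return k.T @ np.linalg.solve(K, y_train)
```

With a tiny nugget the predictor interpolates the training data, which is the behavior expected of BLUP at observed locations.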
D
It follows that $([\underline{\epsilon}_{k},\overline{\epsilon}_{k}])_{k=1,\ldots,M_{\epsilon}}$ is a collection of $(2\epsilon\|G\|_{L_{2}(\mu_{n,k,x})},L_{2}(\mu_{n,k,x}))$-brackets covering $\mathcal{E}$ and satisfying $-2G\leq\underline{\epsilon}_{k}\leq\overline{\epsilon}_{k}\leq 2G$. As a consequence, $\mathcal{E}$ satisfies (7) with envelope $2G$.
The first two steps consist of independent intermediate results. Their proofs are given in the Appendix. They are put together in the third and last step of the proof.
Step (iii): End of the proof. Define the process, for any function $f$ and $u\in[1/2,3/2]$
Another way to obtain (1) is given in the next proposition. It requires the existence of a dominating measure for which a standard bracketing entropy condition is satisfied. The proof of the next proposition is deferred to the end of the Appendix, Section 10.
The weak convergence property of the $k$-NN process is obtained under the following metric entropy condition. For any $u>0$, define the probability measure
B
$\hat{\bm{a}}_{ik}^{\mathrm{cOALS}}=\hat{\bm{a}}_{ik}^{(m)},\quad i=1,\ldots,r,\ \ k=1,\ldots,K.$
The rest of the paper is organized as follows. After a brief introduction of the basic notations and preliminaries of tensor analysis in Section 1.1, we introduce a tensor factor model with CP low-rank structure in Section 2. The estimation procedures of the factors and the loading vectors are presented in Section 3. Section 4 investigates the theoretical properties of the proposed methods. Section 5 develops some alternative algorithms to tensor factor models, which extend existing popular CP methods to the auto-covariance tensors with cPCA as initialization, and provides some simulation studies to demonstrate the numerical performance of all the estimation procedures. Section 6 illustrates the model and its interpretations in real data applications. Section 7 provides a short concluding remark. All technical details and more simulation results are relegated to the supplementary materials.
In this section, we compare the empirical performance of different procedures for estimating the loading vectors of TFM-cp under various simulation setups. We consider the cPCA initialization (Algorithm 1) alone, the iterative procedure HOPE, and the intermediate output of the iterative procedure after a single iteration following initialization. This one-step procedure will be denoted as 1HOPE. We also check the performance of the alternative algorithms ALS, OALS, cALS, and cOALS as described above.
In this section, we focus on the estimation of the factors and loading vectors of model (1). The proposed procedure includes two steps: an initialization step using a new composite PCA (cPCA) procedure, presented in Algorithm 1, and an iterative refinement step using a new iterative simultaneous orthogonalization (ISO) procedure, presented in Algorithm 2. We call this two-step procedure HOPE (High-Order Projection Estimators) as it repeatedly performs high-order projections on high-order moments of the tensor observations. It utilizes the special structure of the model and leads to higher statistical and computational efficiency, which will be demonstrated later.
In addition, ALS and cALS are always the worst in the cases $\delta\geq 0.1$. The hybrid methods cALS and cOALS improve significantly on the randomly initialized ALS and OALS, showing the advantages of the cPCA initialization. It is worth noting that cOALS has comparable performance with 1HOPE and HOPE when $\delta$ is small.
B
CB estimator as $B\to\infty$ and $\alpha\to 0$, and prove that under the
$\alpha$, the CB estimator is unbiased for $\mathrm{Risk}_{\alpha}(g)$.
original risk $\mathrm{Risk}(g)$. For any estimator $\hat{R}(g)$ of
CB estimator when it is viewed as an estimator of $\mathrm{Risk}(g)$, the original
estimator is unbiased for $\mathrm{Risk}_{\alpha}(g)$, the risk of the given function $g$,
C
(2) discretize the two distributions of each feature into bins based on the Local Feature Ranking - Bins value set by the user (default is 10);
(1) break each feature into two disjoint distributions: the values inside the selected group vs. all the rest of the points;
Examining the Global Contribution of Features. After this new selection of models, Amy observes in Figure 1(b) that most features (except for the last two) are more important now than in the initial state. Ins_perc and Val_sa_st importances drop only by 0.01, implying these features are stable. She suggests Joe to keep all features for now and explore the differences through the decision rules later on. Another interesting insight is that A_bal is the most important feature for the RF models, while the AB models prefer D_cred (see Figure 1(b)). This could indicate that mixing models’ decisions from different algorithms is beneficial.
(3) compute the cross-entropy [Mannor2005The] between the two distributions of each feature: higher values of cross-entropy suggest more unique features (i.e. the within-selection distribution is very different from the rest), while lower values suggest more common, shared features; and
The order of the features is initially the global one, as described in Section Global Feature Ranking. When a group of points is selected using the lasso tool in the decisions space (DS) view, a contrastive analysis [Zou2013Contrastive] is used to rank the features and highlight unique features that explain a cluster's separation from the rest. The computation works as follows:
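This three-step ranking can be sketched as follows (the bin count of 10 matches the tool's default; the smoothing constant `eps` is an illustrative choice to keep the cross-entropy finite on empty bins):

```python
import numpy as np

def contrastive_feature_ranking(X, selected_mask, n_bins=10, eps=1e-9):
    # (1) split each feature into inside-selection vs rest, (2) bin both on a
    # shared grid, (3) score by cross-entropy H(p, q) = -sum p log q.
    scores = []
    for j in range(X.shape[1]):
        col = X[:, j]
        edges = np.histogram_bin_edges(col, bins=n_bins)
        p, _ = np.histogram(col[selected_mask], bins=edges)
        q, _ = np.histogram(col[~selected_mask], bins=edges)
        p = p / max(p.sum(), 1)
        q = q / max(q.sum(), 1)
        scores.append(-np.sum(p * np.log(q + eps)))
    return np.argsort(scores)[::-1]   # most unique feature first
```

A feature whose within-selection values fall in bins that are empty for the rest of the points gets a large cross-entropy and rises to the top of the ranking.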
C
Note that for $n\ll s$, which is the setting of our application, the matrix $A$ is sparse. Therefore, the minimization problem (21) can be efficiently solved by conjugate gradients or its variations, e.g. LSQR \parencite{paige1982algorithm}, without requiring the explicit computation of the high-dimensional normal matrix $A^{T}A$ – a quantity related to the covariance structure of the functional predictors.
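A small illustration with SciPy's LSQR on a synthetic sparse system (the dimensions and density are placeholders): the solver touches $A$ only through matrix-vector products, so the $s\times s$ normal matrix $A^{T}A$ is never formed.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

n, s = 50, 2000                        # n << s, as in our application
A = sparse_random(n, s, density=0.05, format="csr", random_state=4)
x_true = np.zeros(s)
x_true[:100] = 1.0
b = A @ x_true

# LSQR solves min_x ||Ax - b||_2 iteratively via A @ v and A.T @ u products.
x_hat = lsqr(A, b)[0]
residual = np.linalg.norm(A @ x_hat - b)
```

For a consistent system like this one, LSQR drives the residual essentially to zero while storing only a few length-$s$ work vectors.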
As a result, the coefficients of the approximate solution of the model in equation (15) are given by
An approximate solution to the univariate model in equation (10) follows as a special case of the multivariate case considered here.
If the covariance structures of the two classes are believed to be different, the proposed functional linear discriminant model can be generalized to an approximate functional quadratic discriminant model, following the approach proposed by \textcite{gaynanova2019sparse}, as follows. We estimate the discriminant rule by minimizing the following objective function with respect to $\beta_{1},\beta_{2}\in\mathcal{L}^{2}(\mathcal{M})$:
Similar to the univariate case, we assume that the population quantity $\bm{\beta}^{0}\in\mathcal{H}$ is well-defined and satisfies the equation
B
(3) We provide fuzzy weighted modularity to evaluate the quality of mixed membership community detection for overlapping weighted networks. We then provide a method to determine the number of communities for overlapping weighted networks by increasing the number of communities until the fuzzy weighted modularity does not increase.
In this paper, we have proposed a general, flexible, and identifiable mixed membership distribution-free (MMDF) model to capture community structures of overlapping weighted networks. An efficient spectral algorithm, DFSP, was used to conduct mixed membership community detection and shown to be consistent under mild conditions in the MMDF framework. We have also proposed the fuzzy weighted modularity for overlapping weighted networks; by maximizing it, we can obtain an efficient estimate of the number of communities. The advantages of MMDF and fuzzy weighted modularity are validated on both computer-generated and real-world weighted networks. Experimental results demonstrated that DFSP outperforms its competitors in community detection and KDFSP outperforms its competitors in inferring the number of communities.
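The modularity-based choice of the number of communities can be sketched generically (the fuzzy weighted modularity itself is abstracted into a callable; the cap `k_max` is an illustrative safeguard):

```python
def choose_k(modularity_fn, k_max=10):
    # Increase the candidate number of communities K until the
    # (fuzzy weighted) modularity stops increasing, then stop.
    best_k, best_q = 1, modularity_fn(1)
    for k in range(2, k_max + 1):
        q = modularity_fn(k)
        if q <= best_q:
            break
        best_k, best_q = k, q
    return best_k
```

Here `modularity_fn(k)` stands for running the community-detection algorithm with $K=k$ and evaluating the fuzzy weighted modularity of the result.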
(4) We conduct extensive experiments to illustrate the advantages of MMDF and fuzzy weighted modularity.
This section conducts extensive experiments to demonstrate that DFSP is effective for mixed membership community detection and that our fuzzy weighted modularity is capable of estimating the number of communities for mixed membership weighted networks generated from our MMDF model. We conducted all experiments on a standard personal computer (Thinkpad X1 Carbon Gen 8) using MATLAB R2021b. First, we introduce comparison algorithms for each task. Next, evaluation metrics are introduced. Finally, we compare DFSP and our method for determining $K$ with their respective comparison algorithms on synthetic and real-world networks.
(3) We provide fuzzy weighted modularity to evaluate the quality of mixed membership community detection for overlapping weighted networks. We then provide a method to determine the number of communities for overlapping weighted networks by increasing the number of communities until the fuzzy weighted modularity does not increase.
B
Another extension of this work might obtain theoretical guarantees about the identifiability of the CCA parameters in the submodel of our model for semiparametric CCA where the multivariate marginals $P_{1},P_{2}$ are discrete. Like many recent results concerning ranks defined using cyclical monotonicity, our result on the identifiability of the CCA parameters assumes that the cyclically monotone transformations $G_{1},G_{2}$ are absolutely continuous, and our argument following the statement of Proposition 2 assumes margins with at least some continuous part. Intuition suggests that weak identifiability conditions may also be possible in the case of entirely discrete margins. For instance, discrete margins may imply that $\boldsymbol{Q}_{1},\boldsymbol{Q}_{2}$ are identifiable up to some rotational ambiguity, with the degree of that ambiguity depending on the support of the discrete marginal. However, obtaining a precise statement related to this issue remains a subject of ongoing investigation. Finally, although this article has focused on semiparametric CCA, inference approaches using the multirank likelihood could be useful in other semiparametric inference problems such as semiparametric regression or hierarchical models involving cyclically monotone transformations.
Fig. 4: Results of simulation study for $p_{1}=p_{2}=3$. Sum of squares error for three simulation scenarios and four estimation methods: traditional CCA (CCA), Gaussian copula-based CCA (GCCCA), semiparametric CCA using the pseudolikelihood strategy of Section 3.1 (CMCCA plugin), and semiparametric CCA using the algorithm of Section 3.2 (CMCCA MCMC). (a) Estimation improves with sample size for all methods. (b) Estimation with traditional CCA stops improving as $n$ increases. (c) The estimates derived from our model for semiparametric CCA show the best improvement for $n\geq 250$.
In the first part of Section 2 of this article, we describe a CCA parameterization of the multivariate normal model for variable sets, which separates the parameters describing between-set dependence from those determining the multivariate marginal distributions of the variable sets. We then introduce our model for semiparametric CCA, a Gaussian transformation model whose multivariate margins are parameterized by cyclically monotone functions. In Section 3, we define the multirank likelihood and use it to develop a Bayesian inference strategy for obtaining estimates and confidence regions for the CCA parameters. We then discuss the details of the MCMC algorithm allowing us to simulate from the posterior distribution of the CCA parameters. In Section 4 we illustrate the use of our model for semiparametric CCA on simulated datasets and apply the model to two real datasets: one containing measurements of climate variables in Brazil, and one containing monthly stock returns from the materials and communications market sectors. We conclude with a discussion of possible extensions to this work in Section 5. By default, roman characters referring to mathematical objects in this article are italicized. However, where necessary, we use italicized and un-italicized roman characters to distinguish between random variables and elements of their sample spaces.
Code to reproduce the figures and tables in this article, as well as software for inference with the semiparametric CCA model are available at https://github.com/j-g-b/cmcca.
Fig. 1: Sum of squares error for three simulation scenarios and four estimation methods: traditional CCA (CCA), Gaussian copula-based CCA (GCCCA), and our methods for semiparametric CCA using the pseudolikelihood strategy of Section 3.1 (CMCCA plugin) and the algorithm of Section 3.2 (CMCCA MCMC). (a) Estimation improves with sample size for all methods. (b) Estimation with traditional CCA stops improving as $n$ increases. (c) The estimates derived from our model for semiparametric CCA improve as $n$ increases while the others either do not improve or lag behind. (d) Per-iteration run time of our CMCCA MCMC algorithm for several sample sizes and variable set dimensions.
C
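As a point of reference for the "traditional CCA" baseline compared in the figures above, the first canonical correlation can be computed from the SVD of the whitened cross-covariance matrix. The sketch below is a minimal classical-CCA implementation, not the CMCCA estimators of the article; the function name and interface are our own:

```python
import numpy as np

def cca_first_correlation(X, Y):
    """First canonical correlation between variable sets X (n x p1) and Y (n x p2).

    Classical sample CCA: whiten each block, then take the largest singular
    value of the whitened cross-covariance."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = Xc.T @ Xc / n
    Syy = Yc.T @ Yc / n
    Sxy = Xc.T @ Yc / n

    def inv_sqrt(S):
        # inverse symmetric square root of a positive definite matrix
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T

    M = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(M, compute_uv=False)[0]
```

Because the criterion is invariant to invertible linear maps of either block, a set paired with any nonsingular linear transform of itself attains canonical correlation 1.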
$$Ph_{\vartheta}=\alpha\int_{0}^{\infty}e^{-rt}\,\mathbb{P}(X_{t}\geq\vartheta,\,t\leq\tau^{\vartheta})\,\mathrm{d}t.$$
where $f:\Theta\to\mathbb{R}$ and $\vartheta$ is the parameter that controls the 'loss'; see Feng [4] and Feng and Shimizu [5]. Hereafter, $\vartheta_0$ given by (1.2) is an optimal parameter for minimizing the expected discounted 'risk' for the surplus $X$. For instance, consider a dividend strategy that pays a ratio $\alpha\in(0,1)$ when the insurance surplus is over a threshold $\vartheta>0$. The expected total dividends up to ruin are given by
is interpreted as the aggregate dividends paid up to ruin $x_t<\xi$ or maturity $g(\vartheta)$, depending on the parameter $\vartheta$, where the dividend $\alpha$ is paid when the surplus $x_t$ is over the threshold $\vartheta$, and the maturity depends on the threshold. That is, when the threshold level is high (the dividends are hard to pay), the maturity for dividends will be longer, but the maturity will be shorter when the threshold level is low (the dividends are easy to pay).
In the dividends problem, we can consider a case where $U_{\vartheta}$ in (5.2) is of the form
In this quantity, the probability of paying the dividends is small when $\vartheta$ is large, although large dividends are paid, and vice versa. Therefore, the expectation can be optimized to a suitable level.
D
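The expected discounted dividends $\alpha\int_0^\infty e^{-rt}\,\mathbb{P}(X_t\geq\vartheta,\,t\leq\tau^{\vartheta})\,\mathrm{d}t$ discussed above can be approximated by simple Monte Carlo. The sketch below assumes an illustrative Brownian surplus with drift, treats ruin as the first time the surplus goes below 0, truncates at a finite horizon, and ignores the feedback of paid dividends on the surplus; all parameter values are hypothetical:

```python
import numpy as np

def discounted_dividends(threshold, alpha=0.5, r=0.05, x0=5.0, drift=1.0,
                         sigma=2.0, horizon=50.0, dt=0.02, n_paths=500, seed=0):
    """Monte Carlo estimate of alpha * E[ int_0^T e^{-rt} 1{X_t >= threshold,
    t <= ruin time} dt ] for a Brownian surplus with drift (illustrative model)."""
    rng = np.random.default_rng(seed)
    n_steps = int(horizon / dt)
    t = dt * np.arange(1, n_steps + 1)
    disc = np.exp(-r * t)
    increments = drift * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    X = x0 + np.cumsum(increments, axis=1)
    alive = np.cumprod(X >= 0.0, axis=1).astype(bool)   # True until first ruin
    paying = (X >= threshold) & alive                   # dividend paid this step
    return float(np.mean((paying * disc).sum(axis=1)) * alpha * dt)
```

With a common random seed the estimate is pathwise monotone: raising the threshold can only shrink the set of paying times, matching the trade-off described in the text.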
3: Run Algorithm 1 (Pre-processing) to obtain the subset $\mathcal{M}$ which achieves the maximal $\mathrm{SNR}$.
$\mathbb{E}S_{11}^{\prime}(u)$
In Algorithm 2, we first randomly partition the vertex set $V$ into two disjoint subsets $Z$ and $Y$ by assigning $+1$ and $-1$ to each vertex independently with equal probability. Let $\mathbf{B}\in\mathbb{R}^{|Z|\times|Y|}$ denote the submatrix of $\mathbf{A}$, where $\mathbf{A}$ was defined in (2.2), and the rows and columns of $\mathbf{B}$ correspond to vertices in $Z$ and $Y$ respectively. Let $n_i$ denote the number of vertices in $Z\cap V_i$, where $V_i$ denotes the true partition with $|V_i|=\frac{n}{k}$ for all $i\in[k]$; then $n_i$ can be written as a sum of independent Bernoulli random variables, i.e.,
1: Randomly label vertices in $Y$ with $+1$ and $-1$ signs with equal probability, and partition $Y$ into $2$ disjoint subsets $Y_1$ and $Y_2$.
5: Randomly partition $V$ into $2$ disjoint subsets $Y$ and $Z$ by assigning $+1$ or $-1$ to each vertex with equal probability.
D
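The random $\pm 1$ splitting step that both algorithm listings above rely on can be sketched directly; the function name and interface below are ours, not from the paper:

```python
import numpy as np

def random_split(vertices, seed=0):
    """Assign +1 / -1 to each vertex independently with equal probability and
    return the two resulting disjoint subsets Z (label +1) and Y (label -1)."""
    rng = np.random.default_rng(seed)
    signs = rng.choice([1, -1], size=len(vertices))
    Z = [v for v, s in zip(vertices, signs) if s == 1]
    Y = [v for v, s in zip(vertices, signs) if s == -1]
    return Z, Y
```

Given an adjacency-type matrix `A`, the submatrix $\mathbf{B}$ with rows in $Z$ and columns in $Y$ is then `A[np.ix_(Z, Y)]`.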
$$\mathbb{P}(q^{*}(H_{p})>q^{*}(H)-\varepsilon/4)>1-\varepsilon/4\,,$$
$$\mathbb{E}[q^{*}(H_{Z})\mathbf{1}_{X=H}]=\mathbb{E}[q^{*}(H_{p})]\,\mathbb{P}(X=H)$$
$$\mathbb{P}(q^{*}(H_{p})>q^{*}(H)-\varepsilon/4)>1-\varepsilon/4\,,$$
$$\mathbb{E}[q^{*}(H_{p})]\geq(1-\varepsilon/4)(q^{*}(H)-\varepsilon/4)>q^{*}(H)-\varepsilon/2.$$
$$\mathbb{P}(H_{p}\in A)\geq\mathbb{P}(H_{p_{0}}\in A)>1-\varepsilon/2.$$
C
Moreover, our overview highlights the role of the interconnectivity of studies in driving some main findings of the environmental migration literature.
This first step provides the most comprehensive sample of economic contributions on the relationship between climatic variations (and natural hazards) and human mobility, in all its different forms. We implement a systematic review aimed at mapping the body of literature and defining the boundaries of our focus. Systematic reviews have become highly recommended for conducting bibliographic overviews of a specific literature because they provide a tool to report a synthesis of the state of the art of a field through a structured and transparent methodology (Page et al., 2021b). To allow for comparability with previous MA and reviews, we also add to our sample all articles included in two recently published MA, Hoffmann et al. (2020) and Beine and Jeusette (2021); a detailed table of the studies featured in other meta-analyses, along with their citations, that have been reviewed in our study is provided in the Supplementary Material, Section A. We begin with the definition of the research question and the main keywords, to gather and collect data in a sample of contributions. After the definition of inclusion and exclusion conditions, we proceed with a screening by title to exclude off-topic contributions and then with a screening of the text to ensure the uniformity of contributions. The resulting sample is then the object of a preliminary bibliometric analysis.
The paper also offers an encompassing methodology for the empirical analysis of very heterogeneous outcomes of a research field. The sample collected through a systematic review of the literature, the bibliometric analysis, the construction of a co-citation network, and the community detection on the structure of the network of essays allow the inspection of a scientific area even in the absence of a uniform and cohesive literature. In the case of environmental migration, the many different characteristics in terms of object of analysis, empirical strategy, and mediating covariates render the meta-analytic average effect estimates just a first approximation of the quantitative evidence of the literature.
The PRISMA flow diagram (Moher et al., 2009) in Figure 1 shows the process of identification, screening, eligibility, and inclusion of contributions in the final sample. It is important to note that there are two levels of inclusion: the first level identifies the sample of contributions included in our network analysis, while the second level is restricted to quantitative analyses suitable for the MA. To conduct a MA it is crucial to select only comparable papers that provide complete information (mainly on estimated coefficients and standard errors) that can then be used to recover the average effect size. Our inclusion criteria prioritize studies reporting outcomes in an appropriate and consistent manner; in particular, we have excluded studies that do not rely on a complete set of objective measures, for instance studies that only present estimated coefficients with their significance level, without reporting standard errors or $t$-ratios, because they do not allow for the calculation of a meta-synthesis. This implies the exclusion of papers that do not comply with the requirements of a MA. However, those excluded papers can be of interest in building the taxonomy of the whole concerned literature, as they may play a role in building links between different contributions (see Section 3). Similarly, non-quantitative (policy, qualitative or theoretical) papers may participate as well in the development of research fronts or give a direction to a certain thread of contributions and incidentally affect the detection of clusters. These reasons led us to build our citation-based network and perform the network analysis and the community detection on the whole sample, while the sample for the MA is restricted to quantitative contributions that meet the coding requirements.
Our final database of point estimates for the MA includes 96 papers released between 2003 and 2020 (published in academic journals, in working paper series, or as unpublished studies), providing 3,904 point estimates of the effect of slow-onset events (provided by 66 studies) and 2,065 point estimates of the effect of fast-onset events (provided by 60 studies). The list of articles is in the Appendix Table 2.
Section 2 offers a systematic review of the literature and gives a detailed description of the data collection process; Section 3 analyses the structural characteristics of the network of the bibliographically coupled papers; Section 4 summarizes and discusses the results of the MA; finally, Section 5 concludes and offers some possible future extensions of the analysis.
D
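A standard step in this kind of MA coding, turning a reported coefficient and its standard error into a comparable effect size, is the partial-correlation transformation $r=t/\sqrt{t^2+\mathrm{df}}$. The sketch below illustrates that generic transformation; the paper's exact coding protocol is not specified here, so treat this as an assumption:

```python
import math

def partial_correlation(coef, se, df):
    """Convert a regression coefficient and its standard error into a
    partial-correlation effect size r = t / sqrt(t^2 + df), a common
    way to make estimates comparable across studies in economics MAs."""
    t = coef / se
    return t / math.sqrt(t * t + df)
```

This is also why studies reporting only significance stars are excluded: without the standard error, $t$ and hence $r$ cannot be recovered.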
$\|\hat{\alpha}\|_{0}$ denotes the number of nonzero elements in
to $z_{j}$, for $j=1,\ldots,d$. A solution $\hat{\alpha}$ in
$\|\hat{\alpha}\|_{0}$ denotes the number of nonzero elements in
$\hat{\alpha}$ (the number of active basis functions), because evaluating
takes $O(\|\hat{\alpha}\|_{0}(k+1)^{d})$ operations, where
C
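The cost claim above, that evaluation scales with $\|\hat{\alpha}\|_0$ (the number of active basis functions) rather than with the full basis size, can be illustrated by touching only the nonzero coefficients. The names and interface below are our own sketch:

```python
import numpy as np

def sparse_eval(alpha, basis_values):
    """Evaluate sum_j alpha_j * B_j(z) using only the active (nonzero)
    coefficients, so the cost scales with ||alpha||_0 rather than with the
    total number of basis functions. `basis_values` holds B_j(z) for all j."""
    active = np.flatnonzero(alpha)
    return float(alpha[active] @ basis_values[active]), active.size
```

In practice one would also evaluate only the active $B_j(z)$ themselves, which is where the $(k+1)^d$ per-function factor enters.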
Karwa and Slavković (2016) derived the differentially private estimators of parameters in the $\beta$-model,
Yan (2021) developed differentially private inferences in the $p_{0}$ model for directed networks with a bi-degree sequence.
In this paper, we aim to establish a unified asymptotic theoretical framework for differentially private analysis in a class of directed networks.
Fan (2023) established a unified theoretical framework for directed graphs with a bi-degree sequence.
We have established the asymptotic theory in a class of directed random graph models parameterized by the differentially private bi-sequence and illustrated its application to the Probit model. The result shows that statistical inference can be made using the noisy bi-sequence. We assume that the edges are mutually independent in this work. We should be able to obtain a consistent conclusion if the edges are dependent, provided that the conditions stated in Theorem 1 are met. However, the asymptotic normality of the estimator is not clear in that case. To avoid this problem, we need to appropriately select a probability distribution for directed random graphs when using the existing method. In the future, we may relax our theoretical conditions to drop the assumption of independent edges.
A
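A common way to release a differentially private bi-degree sequence of the kind discussed above is the Laplace mechanism: adding or removing one directed edge changes one out-degree and one in-degree by 1, so the $L_1$ sensitivity of the full bi-sequence is 2. The sketch below uses continuous Laplace noise as a generic illustration; the cited papers may use a discrete noise mechanism instead:

```python
import numpy as np

def private_bidegrees(out_deg, in_deg, epsilon, seed=0):
    """Release an epsilon-differentially private bi-degree sequence via the
    Laplace mechanism with L1 sensitivity 2 (one edge shifts one out-degree
    and one in-degree by 1 each)."""
    rng = np.random.default_rng(seed)
    scale = 2.0 / epsilon
    noisy_out = np.asarray(out_deg, float) + rng.laplace(0.0, scale, len(out_deg))
    noisy_in = np.asarray(in_deg, float) + rng.laplace(0.0, scale, len(in_deg))
    return noisy_out, noisy_in
```

Estimation then proceeds from the noisy bi-sequence, which is the setting the asymptotic theory above addresses.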
We have introduced Bayesian hierarchical models based on DAG constructions of latent spatial processes for large scale non-Gaussian multivariate multi-type data which may be misaligned, along with computational tools for adaptive posterior sampling. We illustrated our methods using applications with data sizes in the tens to hundreds of thousands, with compute times ranging from a few seconds to under 30 minutes in a single workstation. The compute time for a single SiMPA iteration for a univariate Poisson outcome observed on gridded coordinates with $n=10^{6}$ is under 0.2 seconds after burn-in; in other words, our methods enable running MCMC for hundreds of thousands of iterations on massive spatial data under a total time budget of 12 hours.
We have applied our methodologies using practical cross-covariance choices such as models of coregionalization built on independent stationary covariances. However, nonstationary models are desirable in many applied settings. Recent work (Jin et al., 2021) highlights that DAG choice must be made carefully when considering explicit models of nonstationarity, as spatial process models based on sparse DAGs induce nonstationarities even when using stationary covariances.
Furthermore, our methods can be applied for posterior sampling of Bayesian hierarchies based on more complex conditional independence models of multivariate dependence (Dey et al., 2021).
Our work in this article will enable new research into nonstationary models of large scale non-Gaussian data.
Our methodologies rely on the ability to embed the assumed spatial DAG within the larger Bayesian hierarchy and lead to drastic reductions in wall clock time compared to models based on unrestricted GPs. Nevertheless, high posterior correlations of high dimensional model parameters may still negatively impact overall sampling efficiency in certain cases. Motivated by recent progress in improving sampling efficiency of multivariate Gaussian models (Peruzzi et al., 2021), future research will consider generalized strategies for improving MCMC performance in spatial factor models of highly multivariate non-Gaussian data. Finally, optimizing DAG choice for MCMC performance is another interesting path, and recent work on the theory of Bayesian computation for hierarchical models (Zanella and Roberts, 2021) might motivate further development for spatial process models based on DAGs.
A
$N^{*}\leftrightarrows N^{y}$, $N^{*}\leftarrow N^{y}$ and $N^{*}\rightarrow N^{y}$, respectively (albeit with bi-directed self loops).
In this section we combine and further generalise the previous results. We wish to draw inference on the effect of a hypothetical intervention on a treatment or exposure process $N^{x}$ on one or more outcome processes $\mathcal{N}_{0}$, under a further intervention that prevents censoring. The set $\mathcal{N}_{0}$ could include a survival-type outcome, but can contain much more general event-histories such as recurrent events and multi-state processes. Typically, causal validity will not hold for these sets alone, and adjustment is required for additional covariates, baseline variables or processes, denoted $\mathcal{L}$. Therefore, the set of measured $\mathcal{V}_{0}$ from the previous sections is now extended to $\mathcal{V}_{0}\cup\mathcal{L}$. Here $\mathcal{L}$ is not of substantive interest in the sense that we would like the effect of the treatment intervention on $\mathcal{N}_{0}$ marginally over $\mathcal{L}$.
A graph $G=(\mathcal{V},\mathcal{E})$ is given by a set of vertices (or nodes) $\mathcal{V}$ and directed edges $\mathcal{E}$; the nodes represent variables or processes; there can be up to two edges between nodes representing dynamic relations. The induced subgraph $G_{A}$, $A\subset\mathcal{V}$, has nodes $\mathcal{V}\cap A$ and edges $\mathcal{E}\cap(A\times A)$; a subgraph $G_{A}^{\prime}$ on $A$ is given if the edges are a subset of those of the induced subgraph. Any node $b\in\mathcal{V}\backslash\{a\}$ with an edge $b\longrightarrow a$ is called a parent of $a$, while $a$ is a child of $b$; graphical ancestors or descendants are defined analogously in terms of sequences of directed edges.
However, in general it does not hold that the latent projection over eliminable nodes corresponds to the induced subgraph on the remaining nodes, as bi-directed edges could occur between nodes within $\mathcal{N}_{0}$. Latent projections of causal graphs have been used to identify valid adjustment sets (Witte et al., 2020), to which we return in Section 6. We briefly comment on the projection graphs for the examples (26, 31) in Supplement A.
Our result on eliminability is related to the marginalisation considered by Mogensen and Hansen (2020). The authors propose an extended class of local independence graphs, and corresponding $\mu$-separation, which is closed under marginalisation. These more general graphs include bi-directed edges as a possible result of latent processes not shown as nodes in the graph. In our case, if we consider $\mathcal{U}$ as latent processes and if they satisfy the conditions of eliminability, then they do not induce any bi-directed edges with endpoints between $N^{*}$ and $\mathcal{V}_{0}^{\backslash*}$ in these more general graphs. Moreover, their results can be used to obtain the 'latent projection' graph representing the local independence structure after marginalising over $\mathcal{U}$ (Mogensen and Hansen, 2020, Definition 2.23 and Theorem 2.24).
C
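The parent/child definitions used above translate directly into code on a set of directed edges; a minimal sketch with our own function names:

```python
def parents(node, edges):
    """Parents of `node`: all b != node with a directed edge b -> node."""
    return {b for (b, a) in edges if a == node and b != node}

def children(node, edges):
    """Children of `node`: all a != node with a directed edge node -> a."""
    return {a for (b, a) in edges if b == node and a != node}
```

Ancestors and descendants would follow by iterating these maps along sequences of directed edges, exactly as in the graphical definition.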
$L_{1}(\boldsymbol{H}_{\setminus 3}^{\mathrm{r}}(t-1),t)>L_{2}(\boldsymbol{H}_{\setminus 3}^{\mathrm{r}}(t-1),t)$ if $N_{1}(t)\geq T^{\prime},N_{2}(t)<T^{\prime}$. A Bayes optimal algorithm eventually draws both of the arms $T^{\prime}$ times.
A challenge in analyzing a sequential decision-making algorithm is its flexibility. The decision $I(t)$ at round $t$ depends on the results of the Bellman equation, which is difficult to compute exactly. Accordingly, we have introduced a quantity called the EBI, which represents the improvement of the Bayesian simple regret from a single sample.
Figure 2 compares the regret in SR and ABO. Unlike the successive rejects algorithm, the regret of ABO remains large even for large $T$, suggesting that the simple regret of the Bayes optimal algorithm is polynomial in $T^{-1}$, unlike SR, which has a regret exponentially small in $T$.
We conducted a computer simulation to observe the polynomial rate of the simple regret. (The code that replicates the results is available at https://github.com/jkomiyama/bayesoptimalalg/.)
We show that the ABO algorithm has simple regret polynomial in $T^{-1}$.
C
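For context on the SR baseline compared against ABO above, successive rejects splits the sampling budget into $K-1$ phases and drops the empirically worst arm after each phase. The sketch below is a compact simulation of that algorithm with Gaussian rewards; the reward model and parameter values are illustrative, not from the paper:

```python
import math
import numpy as np

def successive_rejects(means, budget, seed=0):
    """Successive rejects for best-arm identification: K-1 phases, each phase
    samples every surviving arm up to a phase-specific count, then eliminates
    the arm with the lowest empirical mean. Rewards are N(mean, 1)."""
    rng = np.random.default_rng(seed)
    K = len(means)
    log_bar = 0.5 + sum(1.0 / i for i in range(2, K + 1))
    active = list(range(K))
    counts = np.zeros(K)
    sums = np.zeros(K)
    n_prev = 0
    for k in range(1, K):
        n_k = math.ceil((budget - K) / (log_bar * (K + 1 - k)))
        for arm in active:
            for _ in range(n_k - n_prev):
                sums[arm] += rng.normal(means[arm])
                counts[arm] += 1
        n_prev = n_k
        worst = min(active, key=lambda a: sums[a] / counts[a])
        active.remove(worst)
    return active[0]  # recommended arm
```

With well-separated means the misidentification probability decays exponentially in the budget, which is the behavior contrasted with ABO's polynomial simple regret.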
The RELAX [20] estimator generalizes REBAR by noticing that their continuous relaxation can be replaced with a free-form CV.
We then apply it to generalize the linear CVs in Double CV to very flexible ones such as neural networks.
The RELAX [20] estimator generalizes REBAR by noticing that their continuous relaxation can be replaced with a free-form CV.
Although RELAX was often observed to have very strong performance in prior work [14, 60], our results in Figure 1 suggest that, for dynamically binarized datasets, much larger gains can be achieved by using the same number of function evaluations in other estimators.
However, in order to get strong performance, RELAX still includes the continuous relaxation in its CV and only adds a small deviation to it.
D
$(\hat{T}_{n})_{n\geq 1}$ converges in probability to $\theta_{*}\in(-\alpha_{*},\infty)$.
where $Z$ is a r.v. related to $S_{\alpha_{0},\theta_{0}}$ via the relation $S_{\alpha_{0},\theta_{0}}=\exp\{\psi(Z/\alpha_{0}+1)-\alpha_{0}\psi(Z+1)\}$. Here, $\psi$ denotes the digamma function; that is, the function $\psi$ is the derivative of $\log\Gamma$.
Let $\alpha\in(0,1)$ be arbitrary. Recall that $\psi$ is the derivative of
We conclude by presenting a representation of the EPSF in terms of compound Poisson sampling models [Charalambides, 2005, Chapter 7], thus providing an intuitive construction of the EPSF that sheds light on the sampling structure of the PYP prior. We consider a population of individuals with a random number $K$ of distinct types, and we assume that $K$ is distributed as a Poisson distribution with parameter $\lambda=z[1-(1-q)^{\alpha}]$ such that $q\in(0,1)$, $\alpha\in(0,1)$ and $z>0$. For $i\geq 1$ let $N_{i}$ denote the random number of individuals of type $i$ in the population, and assume the $N_{i}$'s to be independent of $K$, independent of each other and such that for $x\in\mathbb{N}$
Here, $\psi$ is the digamma function; that is, $\psi$ is the derivative of $\log\Gamma$.
B
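The transform $S_{\alpha_0,\theta_0}=\exp\{\psi(Z/\alpha_0+1)-\alpha_0\psi(Z+1)\}$ above is straightforward to evaluate once the digamma function is available. The sketch below implements $\psi$ via the recurrence $\psi(x)=\psi(x+1)-1/x$ and a short asymptotic series; this is an illustrative implementation of ours, not code from the paper (a library routine such as SciPy's digamma would serve equally well):

```python
import math

def digamma(x):
    """Digamma psi(x) = d/dx log Gamma(x), for x > 0, via the recurrence
    psi(x) = psi(x+1) - 1/x and the large-argument asymptotic expansion."""
    result = 0.0
    while x < 10.0:          # shift argument up until the series is accurate
        result -= 1.0 / x
        x += 1.0
    inv = 1.0 / x
    inv2 = inv * inv
    # ln x - 1/(2x) - 1/(12x^2) + 1/(120x^4) - 1/(252x^6)
    result += math.log(x) - 0.5 * inv \
        - inv2 * (1.0 / 12 - inv2 * (1.0 / 120 - inv2 / 252))
    return result

def s_transform(z, alpha0):
    """S_{alpha0,theta0} = exp{ psi(z/alpha0 + 1) - alpha0 * psi(z + 1) }."""
    return math.exp(digamma(z / alpha0 + 1.0) - alpha0 * digamma(z + 1.0))
```

Known values such as $\psi(1)=-\gamma$ and $\psi(2)=1-\gamma$ provide quick sanity checks for the series.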
The Lorenz dominance ordering is a partial ordering of multivariate distributions. In many cases, $\alpha$-Lorenz curves may cross. For a complete inequality ordering, we also propose an extension of the classical Gini index to compare inequality in multi-attribute allocations.
To visualize Lorenz dominance, we define an Inverse Lorenz Function at a given vector of resource shares as the fraction of the population that cumulatively holds those shares. It is characterized by the cumulative distribution function of the image of a uniform random vector by the Lorenz map. Hence, it is a cumulative distribution function by construction, like the univariate inverse Lorenz curve. In two dimensions, the $\alpha$-level sets of this cumulative distribution function, which we call $\alpha$-Lorenz curves, are non-crossing downward-sloping curves that shift to the south-west when inequality increases, as defined by the Lorenz ordering. For the cases where allocations are not ranked in the Lorenz inequality dominance ordering, we propose a family of multivariate S-Gini coefficients based on our vector Lorenz map, with the flexibility to entertain different tastes for inequality in different dimensions. Finally, we propose an application to the analysis of income-wealth inequality in the United States between 1989 and 2022.
Figure 7. Top: Gini indices for income and for wealth, multivariate Gini index, and Kendall's $\tau$ (dashed) for US Income-Wealth across 1989-2022.
Gajdos and Weymark (2005) propose a multivariate Gini coefficient based on aggregation across individuals first, then across dimensions, which removes the effect of dependence across attributes. Decancq and Lugo (2012) propose to aggregate across dimensions first, then across individuals, in order to keep track of correlation.
Multi-attribute inequality can vary substantially across population groups, as shown in Maasoumi and Racine (2016) within the information theoretic framework of Maasoumi (1986).
C
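As a concrete special case of the Lorenz/Gini machinery above, the univariate Gini coefficient can be read off the Lorenz ordinates of a sorted sample; the multivariate S-Gini coefficients proposed in the text generalize this construction. A minimal sketch (our own helper, using the trapezoid rule under the Lorenz curve):

```python
import numpy as np

def gini(x):
    """Univariate Gini coefficient of a nonnegative sample:
    G = 1 - 2 * (trapezoid area under the empirical Lorenz curve)."""
    x = np.sort(np.asarray(x, float))
    n = x.size
    lorenz = np.cumsum(x) / x.sum()          # Lorenz ordinates L(i/n)
    return 1.0 - (2.0 * lorenz.sum() - 1.0) / n
```

Equal allocations give $G=0$, while concentrating everything on one of $n$ individuals gives $G=(n-1)/n$, the finite-sample maximum.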
Following the analytical tasks and the resulting design goals, we have developed HardVis, an interactive web-based VA system that allows users to identify areas where instance hardness occurs and to micromanage sampling algorithms. Section 7.2 contains further implementation details.
(i) explore various projections with alternative distributions of data types, leading to the division of training data into SBRO (cf. Figure 3(b));
G2: Application of undersampling and oversampling in specific data types only, with different parameter settings.
G1: Visual examination of several data types’ distributions and projections to choose a generic ‘number of neighbors’ parameter.
The system consists of 8 interactive visualization panels (Figure 1): (a) data types projections (→ G1) incl. data sets and sampling techniques (→ G2), (b) data overview, (c) data types distribution, (d) data details, (e) data space, (f) predicted probabilities (→ G3 and G4), (g) sampling execution tracker, and (h) test set confusion (→ G5).
D
Björkegren et al. (2020) propose a structural model for manipulation and use data from a field experiment to estimate the optimal policy. Frankel and Kartik (2019a) demonstrate that optimal predictors that account for strategic behavior will underweight manipulable data. Munro (2023) studies the optimal unconstrained assignment of binary-valued treatments in the presence of strategic behavior, without parametric assumptions on agent behavior. The main difference between our work and these previous works is that we account for the equilibrium effect of strategic behavior that arises from competition.
Our work is also related to strategic classification (Ahmadi et al., 2022; Brückner et al., 2012; Chen et al., 2020; Dalvi et al., 2004; Dong et al., 2018; Hardt et al., 2016; Jagadeesan et al., 2021; Kleinberg and Raghavan, 2020; Levanon and Rosenfeld, 2022) and performative prediction (Miller et al., 2021; Perdomo et al., 2020). These works model the interaction between a predictor and its environment (strategic agents), and develop methods that are robust to the distribution shift induced by the predictor. However, a key distinction between our work and these references is that we optimize decisions by explicitly considering utility from treatment assignment with strategic agents,
We describe some of the extensions of our model and learning procedure. First, our model assumes that the decision maker’s policy is fixed over time. Dynamic treatment rules, where the policy is time-varying, would extend this work and would likely require new equilibrium definitions. Second, we consider linear policies because they are relevant to many real-world applications, such as the Chilean college admissions system (Santelices et al., 2019), but more flexible policies are possible. In addition, we assume that agents are myopic. Future work may consider agents who respond to a history of thresholds {s_j}_{j ∈ [t−k, t]} instead of just s_t. Many models of agent behavior are possible in this case, e.g., an agent could respond to the mean of the thresholds or use the trend of the history to predict the next threshold and respond to the prediction. These different assumptions may yield different dynamics than the ones that we study. Also, a technical limitation of our model is that it does not permit an agent’s post-effort covariates X_i to exactly equal their raw covariates Z_i. For X_i to be exactly equal to Z_i, the cost of covariate modification must be infinite for any deviation from Z_i.
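The two history-based response rules mentioned (responding to the mean of past thresholds, or extrapolating their trend) can be made concrete with a toy sketch. Both rules below are illustrative assumptions, not part of the paper's model.

```python
import numpy as np

def predicted_threshold(history, rule="mean"):
    """Two toy rules a non-myopic agent might use to summarize a
    threshold history {s_j} before best-responding.

    Both rules are hypothetical illustrations of the extensions
    discussed in the text, not the paper's actual model.
    """
    h = np.asarray(history, dtype=float)
    if rule == "mean":
        return h.mean()
    # "trend": fit a least-squares line and extrapolate one step ahead
    t = np.arange(len(h))
    slope, intercept = np.polyfit(t, h, 1)
    return intercept + slope * len(h)

hist = [1.0, 1.2, 1.4, 1.6]
print(predicted_threshold(hist, "mean"))   # 1.3
print(predicted_threshold(hist, "trend"))  # 1.8 (linear history extrapolates exactly)
```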
For technical convenience, our work only considers cost functions that are twice-differentiable and lie in an L^2-space, but we expect that a more general proof strategy can relax these requirements. Also, our model relies heavily on some form of noise being present in the system. If agents have a noisy understanding of the policy parameters (Jagadeesan et al., 2021), or agents best respond imperfectly (as in our work), or exogenous noise affects how decisions are made (Kleinberg et al., 2018), then best responses will be continuous. We expect our results to hold as long as there is sufficient exogenous noise in the system to guarantee that best responses are continuous; the source of the noise itself is not especially crucial. For technical convenience our work assumes Gaussian noise, but we expect our results to apply to more generic noise distributions, provided they are mean-zero, twice-differentiable, and have bounded second derivative. Possible modifications of our model that could still allow for tractable equilibrium modeling include considering stochastic policies instead of deterministic ones, generic noise distributions, or the noisy response model of Jagadeesan et al. (2021). If there is no noise in the system, then the agents can strategize perfectly, yielding a discontinuous best response function. In some practical scenarios, such discontinuities are unnatural; see Jagadeesan et al. (2021) for a number of examples. Nevertheless, it may still be interesting to develop procedures that perform well despite these discontinuities. Finally, while our policy gradient estimator is consistent in the absence of competition and strategic behavior, it may not be the simplest strategy if competition and strategic behavior are not first-order concerns. The problems of (1) detecting whether agents are strategic and (2) detecting whether interference exists are potential areas for future research.
The goal of maximizing the equilibrium policy value is motivated by prior works that estimate policy effects or treatment effects at equilibrium (Heckman et al., 1998; Munro et al., 2021; Wager and Xu, 2021). Heckman et al. (1998) estimate the effect of a tuition subsidy program on college enrollment by accounting for the program’s impact on the equilibrium college skill price. Munro et al. (2021) estimate the effect of a binary intervention in a marketplace setting by accounting for the impact of the intervention on the resulting supply-demand equilibrium. Wager and Xu (2021) estimate the effect of supply-side payments on a platform’s utility in equilibrium. Johari et al. (2022) use a structural model of a marketplace and its associated mean field limit to analyze how marketplace interference affects the performance of different experimental designs and estimators.
The problem of estimating the effect of an intervention in a marketplace setting is also relevant to our work. Marketplace interventions can impact the resulting supply-demand equilibrium, introducing interference and complicating estimation of the intervention’s effect (Blake and Coey, 2014; Heckman et al., 1998; Johari et al., 2022). To estimate an intervention’s effect without disturbing the market equilibrium, Munro et al. (2021); Wager and Xu (2021) propose a local experimentation scheme, motivated by mean-field modeling. Methodologically, we adapt their mean-field modeling and estimation strategies to estimate the effect of a policy at its equilibrium threshold.
A
In this section, we present certain desirable properties of the proposed filtration and substantiate our claims in the introduction. In Section 4.1, we discuss how the proposed filtration prolongs the persistences of homology classes of high-density regions. Then we discuss, in Section 4.2, the proposed filtration’s scale invariance, which motivates the awkward-looking exponent 1/D in the definition of the RDAD function, and in Section 4.3, its robustness, which is enhanced by the DTM setup. We conclude by giving further mathematical properties of the proposed filtration in Section 4.4. All proofs are deferred to Appendix A.
In this subsection, we illustrate how the proposed filtration prolongs persistences of homology classes of high-density regions with a numerical example, and we formalize the observations from the example with theorems. For the numerical examples in this and subsequent subsections, parameters are summarized in Table 3 in Appendix B, and implementation details are deferred to Section 6.1.
The rest of the paper is organized as follows. After reviewing the mathematical background in Section 2, we define the proposed filtration in Section 3 and discuss its properties in Section 4. We discuss bootstrapping in Section 5 and present numerical simulations in Section 6. A discussion and the conclusion are presented in Sections 7 and 8.
We illustrate the results above with corrupted versions of the “Antman” example in Figure 5. We compare the DAD filtration and the RDAD filtration in Figures 6 and 7. The persistence diagrams of RDAD for the corrupted datasets are affected to a lesser extent by the noise and outliers than those of DAD.
A
From this survey, we find that most papers do not explicitly discuss their parameter of interest, and that as many as a third of the experiments conduct analyses that, when paired with their corresponding sampling design, do not necessarily recover either of the parameters that we consider in this paper.
For each of the two parameters of interest we consider, we propose an estimator and develop the requisite distributional approximations to permit its use for inference about the parameter of interest when treatment is assigned using a covariate-adaptive stratified randomization procedure. In the case of the equally-weighted cluster-level average treatment effect, the estimator we propose takes the form of a difference-in-means of cluster averages. This estimator may equivalently be described as the ordinary least squares estimator of the coefficient on treatment in a regression of the average outcome (within clusters) on a constant and treatment. In the case of the size-weighted cluster-level average treatment effect, the estimator we propose takes the form of a weighted difference-in-means of cluster averages, where the weights are proportional to cluster size. This estimator may equivalently be described as the weighted least squares estimator of the coefficient on treatment in a regression of the individual-level outcomes on a constant and treatment with weights proportional to cluster size. (In Appendix A.4, we also briefly consider versions of both estimators which allow for linear regression adjustment using additional baseline covariates.)
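The two estimators can be computed directly on toy cluster-level data. The numbers below are made up for illustration; the point is that with heterogeneous cluster sizes the equally-weighted and size-weighted estimates differ.

```python
import numpy as np

# Toy cluster-randomized data: per cluster, its size N_g, treatment
# indicator D_g, and cluster-average outcome Ybar_g (illustrative values).
N = np.array([10, 40, 10, 40])
D = np.array([1, 1, 0, 0])
Ybar = np.array([2.0, 4.0, 1.0, 1.0])

# Equally-weighted cluster-level ATE: difference-in-means of cluster averages.
theta1_hat = Ybar[D == 1].mean() - Ybar[D == 0].mean()

# Size-weighted cluster-level ATE: weight each cluster average by its size,
# matching the weighted difference-in-means described in the text.
theta2_hat = (np.average(Ybar[D == 1], weights=N[D == 1])
              - np.average(Ybar[D == 0], weights=N[D == 0]))

print(theta1_hat, theta2_hat)  # 2.0 2.6
```

Here the large treated cluster has a bigger effect, so the size-weighted estimate (2.6) exceeds the equally-weighted one (2.0), illustrating why the choice of estimand matters when cluster size is non-ignorable.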
Klar, 2000). In this paper, we consider the problem of inference about the effect of a binary treatment on an outcome of interest in such experiments in a super-population framework in which cluster sizes are permitted to be random and non-ignorable. By non-ignorable cluster sizes, we refer to the possibility that the treatment effects may depend on the cluster size.
We refer to this quantity as the equally-weighted cluster-level average treatment effect. θ_1(Q_G) can be thought of as the average treatment effect where the clusters themselves are the units of interest. The second parameter of interest corresponds to the choice of ω_g = N_g / E[N_g], thus weighting the average effect of the treatment across clusters in proportion to their size:
et al., 2023) that differ in the way they aggregate, or average, the treatment effect across units. They differ, in particular, according to whether the units of interest are the clusters themselves or the individuals within the cluster. The first of these parameters takes the clusters themselves as the units of interest and identifies an equally-weighted cluster-level average treatment effect. The second of these parameters takes the individuals within the clusters as the units of interest and identifies a size-weighted cluster-level average treatment effect. When individual-level average treatment effects vary with cluster size (i.e., cluster size is non-ignorable) and cluster sizes are heterogeneous, these two parameters are generally different, though, as discussed in Remark 2.3, they coincide in some instances. Importantly, we show that the estimand associated with the standard difference-in-means estimator is a sample-weighted cluster-level average treatment effect, which cannot generally be interpreted as an average treatment effect for either the clusters themselves or the individuals within the clusters. We show, however, in Section 2.2, that this estimand can equal the size-weighted or the equally-weighted cluster-level average treatment effect for some very specific sampling designs. We argue that a clear description of whether the clusters themselves or the individuals within the clusters are of interest should therefore be at the forefront of empirical practice, yet we find that such a description is often absent. Indeed, we surveyed all articles involving a cluster randomized experiment published in the American Economic Journal: Applied Economics from 2018 to 2022. We document our findings in Appendix A.3.
A
1:  Input: number of iterations K ∈ ℕ, confidence level β > 0
and receives the reward r_h = r(o_h, a_h). Any mapping π from the observation history to the action is called a (deterministic) policy. We denote by Π the set of all such mappings. Note that the policy does not use the action history as an input. Such a restriction does not exclude the optimal policy, as the action history can be decoded from the observation history. Subsequently, the agent receives the next state s_{h+1} following s_{h+1} ∼ T_h(· | s_h, a_h). See Figure 1 for an illustration.
π̄_k = mixing{π_0, …, π_{k−1}}.
2:  Initialization: set π_0 as a deterministic policy
15:  Output: policy set {π_1, …, π_K}
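One generic way to execute a mixture of deterministic policies such as mixing{π_0, …, π_{k−1}} is to draw one member uniformly at the start of each episode and follow it throughout. The sketch below is one plausible reading of that construction, not the paper's exact procedure.

```python
import random

def mixing(policies, rng=None):
    """Return a sampler that, at the start of each episode, draws one of
    the given deterministic policies uniformly at random.

    A generic reading of the mixing{π_0, ..., π_{k-1}} construction;
    the paper's algorithm may realize the mixture differently.
    """
    rng = rng or random.Random()

    def episode_policy():
        return rng.choice(policies)

    return episode_policy

# Toy deterministic policies mapping an observation history to an action.
pi0 = lambda obs_hist: 0
pi1 = lambda obs_hist: 1
mixed = mixing([pi0, pi1], rng=random.Random(0))

actions = [mixed()(["o0"]) for _ in range(1000)]
print(sum(actions) / len(actions))  # roughly 0.5: each policy is used about half the time
```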
C
X^n = (X_1, …, X_n) are i.i.d. ∼ N(θ, 1).
This is like the previous example, but rather than always being able to choose one among four actions, the very set of choices that is presented to DM via setting B = b might depend on the data Y or on external situations.
[15] gives various suitable collections, but for simplicity we here stick to a single, simple choice, taken from Example 8 of [15], that, like the standard CI, is
But, assuming our p-value is strict so that it has a uniform distribution under the null, this gives a Type-I risk of
Note that B is allowed to be any function of, hence ‘conditional on’, data; but its performance is evaluated ‘unconditionally’, i.e. by means of (12), which is an unconditional expectation. This quasi-conditional stance, explained further in [15], provides a middle ground between fully Bayesian and traditional Neyman-Pearson-Wald type methods and analysis.
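The claim that a strict p-value (uniform under the null) yields Type-I risk equal to the rejection threshold can be checked by simulation in the N(θ, 1) setting above. The two-sided z-test below is a standard choice for illustration, not necessarily the test used in the paper.

```python
import math
import numpy as np

def p_value(x):
    """Two-sided z-test p-value for H0: theta = 0, given x_i i.i.d. N(theta, 1)."""
    z = math.sqrt(len(x)) * x.mean()
    # Phi(z) via erf; p = 2 * (1 - Phi(|z|))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

rng = np.random.default_rng(0)
alpha = 0.05
n_sims = 20000
rejections = sum(p_value(rng.normal(0.0, 1.0, size=20)) <= alpha
                 for _ in range(n_sims))
print(rejections / n_sims)  # close to alpha = 0.05 under the null
```

Because the p-value is exactly uniform under H0 here, the long-run rejection frequency matches the threshold α, which is the Type-I risk referred to in the text.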
B
Holmström, 1987). As a form of evidence, betting scores avoid some of the pathologies of significance testing, and by incorporating statistical evidence in an economic contract, we can afford a great deal of flexibility to researchers without ignoring their incentives.
In this work, the agent and the principal will enter into a contract that caps the reward the agent can receive as a function of the statistical evidence for the quality of the product. We explore how different contracts change the incentive landscape of the agent, and develop optimal contracts in this setting.
We begin with a stylized example to highlight the interaction between an agent’s incentives and the principal’s statistical protocol. Suppose there are two types of pharmaceutical companies: companies with ineffective drugs (θ = 0) and companies with effective drugs (θ = 1). Further, assume that the company knows its type, while the regulator does not. The company may choose to pay $10 million to run a clinical trial, which results in a statistical test for the null hypothesis that the drug is ineffective. Suppose that the test is carried out so that it has 5% type-I error and 80% power to reject when θ = 1; see below.
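The incentive structure of this example reduces to a small expected-profit calculation. The $50M approval reward below is a hypothetical number chosen for illustration; only the $10M cost and the 5%/80% error rates come from the text.

```python
# Expected profit from running the trial, per company type, under the
# stylized test (5% type-I error, 80% power).
COST = 10.0    # $M, trial cost (from the text)
REWARD = 50.0  # $M, hypothetical reward if the drug is approved (an assumption)

p_approve = {0: 0.05, 1: 0.80}  # theta = 0 (ineffective), theta = 1 (effective)
profit = {theta: p * REWARD - COST for theta, p in p_approve.items()}
print(profit)  # {0: -7.5, 1: 30.0}
```

Under these (assumed) numbers, only the θ = 1 type expects a positive profit from running the trial, so the statistical test screens out companies with ineffective drugs.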
We model this interaction as a game between two players. The first player is known as the principal (e.g., a regulator) and the second is called the agent (e.g., a pharmaceutical company).
The agent’s profit from the contract is L − C (or zero if they opt out), and we model the agent as seeking to maximize this profit; see below.
C
1 − max[ sup_y {F_1(y + δ/2 | x) − F_0(y − δ/2 | x)}, 0 ].
Case 2: Now consider a uniform guarantee on the bound estimators across x ∈ 𝒳.
The contribution of the present work, relative to the contribution of Fan and Park, (2010) who discuss inference for only the randomized experiment setting, is the concentration inequality for the pibt bound estimators. Under regularity conditions, Fan and Park, (2010) show asymptotically that the plug-in bound estimators follow either a normal distribution (centered at the target bound), or a truncated normal distribution, or a point mass. Exactly which distribution this is depends on the supremum difference between the two potential outcomes’ cumulative distribution functions (cdfs), which is unknown. Even if we know that the asymptotic distribution of the estimator is Gaussian, a prospective power analysis further requires an estimator for the standard error to guarantee a target confidence level and margin of error (e.g. a maximum deviation of 0.05), but such an estimator is not discussed in Fan and Park, (2010). This points to a strength of our main concentration result in the randomized experiment setting: despite the possibility that the plug-in estimator can have a non-trivial, possibly biased, sampling distribution in a finite sample, the confidence level we can have for a target margin of error depends only on sample size. The discussion around Proposition 2.2 and Fig. 2 gives more details.
Correspondingly, we can obtain the bound estimators by plugging in the cdf estimators in analogy to Fan and Park, (2010) who consider only the randomized experiment case:
Fay et al., (2018) also discuss the statistical inference technique of Fan and Park, (2010) in conjunction with the quantity 1 − η(δ) in Definition 1.2. Interestingly, it has been established that the Makarov bounds for the marginal cdf of Y_i(1) − Y_i(0) studied in Fan and Park, (2010) are point-wise but not uniformly sharp (Firpo and Ridder,, 2010, 2019). While promising, we consider the estimation of these tightened bounds beyond the scope of this paper as it is not immediately clear that it is amenable to our analysis. For continuous outcomes in a randomized experiment setting, Frandsen and Lefgren, (2021) work under a condition known as mutual stochastic increasingness of the potential outcomes (Y_i(0), Y_i(1)) (Lehmann,, 1966). The plug-in estimation approach we use in the randomized experiment case does not make the assumption of positive correlation: it works for any joint distribution on (Y_i(0), Y_i(1)) (Fan and Park,, 2010), including those with any type of negative association. Also in the context of a randomized experiment, Caughey et al., (2021) study pibt under a randomization inference setup that is traditionally used to test the sharp null hypothesis that all individual treatment effects are constant (Fisher,, 1935).
The approach we take to bound pibt assumes the existence of an infinite super-population from which the subjects in our sample at hand are drawn independently and identically distributed, and for which our plug-in estimators provide inference. Caughey et al., (2021) appears to be a nice alternative under the differing assumption that randomness is solely due to the random assignment of subjects to a treatment.
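A toy version of the plug-in approach in the randomized experiment case can be written down directly: estimate each arm's cdf empirically and plug into the Makarov-style lower bound 1 − max[sup_y {F_1(y + δ/2) − F_0(y − δ/2)}, 0]. Taking the sup over a finite grid of observed points is a simplification assumed here for illustration.

```python
import numpy as np

def ecdf(sample):
    """Right-continuous empirical cdf of a 1-d sample."""
    s = np.sort(sample)
    return lambda y: np.searchsorted(s, y, side="right") / len(s)

def plugin_lower_bound(y1, y0, delta):
    """Plug-in estimate of 1 - max[sup_y {F1(y + d/2) - F0(y - d/2)}, 0],
    with the sup taken over a grid built from observed points (a
    simplification; the paper's estimator may differ in details)."""
    F1, F0 = ecdf(y1), ecdf(y0)
    grid = np.concatenate([y1, y0, y1 - delta / 2, y0 + delta / 2])
    sup = max(F1(y + delta / 2) - F0(y - delta / 2) for y in grid)
    return 1 - max(sup, 0.0)

rng = np.random.default_rng(1)
y0 = rng.normal(0.0, 1.0, 500)   # control-arm outcomes (toy data)
y1 = rng.normal(0.0, 1.0, 500)   # treated-arm outcomes (toy data, no effect)
lb = plugin_lower_bound(y1, y0, delta=0.5)
print(lb)  # a value in [0, 1]
```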
C
Limitation. A well-trained generator is critical in MEKD, and GANs are known to suffer from mode collapse, especially for challenging tasks.
For the training of teacher and student models, we adopt the same setting of hyperparameters, so as to verify the distillation effect of student models trained with different methods compared with the teacher model trained with vanilla supervised learning under the same conditions.
Although the parameter size and structural limitations of the model prevent the student from fully mimicking the function of the teacher, MEKD can still improve distillation performance compared with other B2KD methods.
The first two aim to drive the student to mimic the responses of the output layer or the feature maps of the hidden layers of the teacher, and the last approach uses the relationships between the teacher’s different layers to guide the training of the student model.
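The response-mimicking idea is commonly implemented as a KL divergence between temperature-softened output distributions. The sketch below is that standard formulation, not the paper's specific losses (e.g. its ℒ_IM term differs).

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over a 1-d logit vector."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def response_kd_loss(teacher_logits, student_logits, T=4.0):
    """Response-based distillation loss: KL(teacher || student) between
    softened output distributions (a standard formulation used here for
    illustration; the paper's B2KD losses differ)."""
    p, q = softmax(teacher_logits, T), softmax(student_logits, T)
    return float(np.sum(p * np.log(p / q)))

t = [2.0, 1.0, 0.1]
print(response_kd_loss(t, t))                     # 0.0 when the student matches
print(response_kd_loss(t, [0.0, 0.0, 0.0]) > 0)   # True: positive otherwise
```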
The effect of α is also reported in Tab. 4, which reflects that the utilization of ℒ_IM can improve the performance of model distillation.
B