Dataset Viewer
Auto-converted to Parquet

Columns:
- text_with_holes: string (lengths 166 to 4.13k)
- text_candidates: string (lengths 105 to 1.58k)
- A: string (6 classes)
- B: string (6 classes)
- C: string (6 classes)
- D: string (6 classes)
- label: string (4 classes)
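To make the schema concrete, here is a minimal sketch of loading the auto-converted Parquet split with pandas and inspecting one row; the file name `train.parquet` is a hypothetical local path, since the repository id and split names are not shown on this page.

```python
# Minimal sketch: load the auto-converted Parquet split and inspect one row.
# "train.parquet" is a hypothetical local path (assumption, not shown above).
import pandas as pd

df = pd.read_parquet("train.parquet")

row = df.iloc[0]
print(row["text_with_holes"][:200])  # passage containing <|MaskedSetence|> gaps
print(row["text_candidates"])        # the **A**/**B**/**C** candidate sentences
# Columns A-D each hold one of the six orderings of the three candidates
# (e.g. "CAB"); "label" names the correct column ("Selection 1".."Selection 4").
print(row[["A", "B", "C", "D", "label"]].to_dict())
```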
A procedure explicitly addressing the construction of uniformly valid confidence bands for the components in high-dimensional additive models has been developed by Lu et al. (2020). <|MaskedSetence|> Whereas confidence bands in the low-dimensional case are mostly built using kernel methods, the estimators for high-dimensional sparse additive models typically rely on sieve estimators based on dictionaries. <|MaskedSetence|> <|MaskedSetence|> This is a two-step estimator with tuning parameters for kernel and sieve estimation, such as the bandwidth and penalization levels, which must be chosen by cross-validation. Because of the local structure of the hybrid estimator, the framework of Lu et al. (2020) differs from ours in that they consider an additive local approximation model with sparsity (ATLAS), in which they need to impose a local sparsity structure. The advantage of our proposed estimator is that we do not have to leave the sieve framework while establishing the uniform validity of the resulting confidence bands. Interestingly, Lu et al. (2020) conclude that “it is challenging to study the uniform confidence.
**A**: To derive their results, Lu et al. **B**: (2020) combine both kernel and sieve methods to draw upon the advantages of each, resulting in a kernel-sieve hybrid estimator. **C**: The authors emphasize that achieving uniformly valid inference in these models is challenging due to the difficulty of directly generalizing the ideas from the fixed-dimensional case.
CAB
CAB
CAB
CBA
Selection 3
A very natural framework to tackle this specific issue is Functional Data Analysis (FDA) [29], the branch of statistics that deals with studying data points that come in the shape of continuous functions over some kind of domain. <|MaskedSetence|> This approach is thus not capable of detecting the presence of time variations in impacts, nor does it address the issue of statistical significance of impacts. <|MaskedSetence|> [9] instead use a Bayesian framework, based on adaptive splines, to extract also in this case non-time-varying indices. <|MaskedSetence|> A very sound framework for the GSA of stochastic models with scalar outputs is provided in [2].
**A**: FDA is a niche yet established area in the statistical literature, with many applied and methodological publications in all domains of knowledge, including spatial and space-time FDA [7, 16, 13, 19, 12], coastal engineering [21], environmental studies [3, 18], transportation science [27] and epidemiology [32]. Methodologies for GSA that are able to deal with functional outputs are present in the literature: [14] propose non-time-varying sensitivity indices for models with functional outputs, based on a PCA expansion of the data. **B**: In all the cited works around GSA techniques for functional outputs, uncertainty is not explicitly explored. **C**: [11] proposes a similar approach, without specifying a fixed functional basis, and proposes an innovative functional pick-and-freeze method for estimation.
ACB
ACB
ACB
BCA
Selection 2
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> By contrast, DUB appears to be a new condition on information structures, although Milgrom (1979) utilizes a related property in the context of auction theory. Like SCD, DUB is formulated for a (totally) ordered state space. It requires that for any state $\omega$ and any prior that puts positive probability on $\omega$, there exist both: (i) signals that make one arbitrarily certain that the state is at least $\omega$; and (ii).
**A**: SCD is a familiar property (Milgrom and Shannon, 1994) that is widely assumed in economics: it captures settings in which there are no preference reversals as the state increases. **B**: Our leading application of excludability is to preferences with single-crossing differences (SCD). **C**: Here we show that learning obtains when the information structure satisfies directionally unbounded beliefs (DUB).
BCA
CBA
BCA
BCA
Selection 4
<|MaskedSetence|> The model encompasses two asymmetries that are essential for justifying standard hypothesis testing (e.g., Tetenov, 2016). First, there is asymmetric information: the parameter $\theta$ is known to the researcher but unknown to the planner. Second, there are asymmetric incentives: while the researcher’s payoff depends on the expected private benefits from experimentation, the planner’s objective depends on the welfare effects that such findings generate and on the value of scientific research. Both of these features accord well with the regulatory approval process example, where pharmaceutical companies likely have more information about their products than the regulator and have incentives to find significant effects that support approval which may not fully align with the regulator’s desire to approve only welfare-increasing products. <|MaskedSetence|> In the baseline model, researchers cannot decide (i) the number of hypotheses $J$, (ii) which hypotheses to test, and (iii) whether to selectively withhold test results ($p$-hack). <|MaskedSetence|>
**A**: Asymmetric information and incentives. **B**: Both can also be relaxed somewhat, however: the main results continue to hold when the researcher is imperfectly informed (see Appendix B.1, where we exploit duality properties of the optimization problem) and if only some (but not all) researchers have misaligned preferences (see Appendix B.3). Exogenous designs. **C**: We maintain (iii) throughout, motivated for example by current FDA regulations (Food and Drug.
BAC
ABC
ABC
ABC
Selection 4
As shown above, almost all characterization results heavily rely on strategy-proofness. The primary question that this paper tries to clarify is the Machiavellian frontier of the TTC rule under combinations of axioms such as individual rationality and Pareto efficiency. In other words, we would like to know to what extent we can relax strategy-proofness while either preserving or breaking the uniqueness of the TTC allocation. To achieve these goals, we weaken strategy-proofness based on truncation strategies and use the weaker version to recharacterize the TTC rule. In the literature, Altuntaş et al. <|MaskedSetence|> They also probed the Machiavellian frontier of the TTC rule by weakening strategy-proofness. However, their paper differs from ours in the following aspects. First, Altuntaş et al. (2023) considered a more general model setting (each agent may be endowed with and consume more than one object) but a more restricted preference domain (the lexicographic preference domain). Second, Altuntaş et al. (2023) weakened strategy-proofness by considering several forms of manipulation, but these manipulations differ from the truncation strategies defined in our paper. <|MaskedSetence|> <|MaskedSetence|> Therefore, our paper is novel. Specifically, our contribution lies in three aspects.
**A**: (2023) probed the Machiavellian frontier between impossibility results and uniqueness of the TTC rule while the current paper goes from uniqueness of the TTC rule to non-uniqueness of it. **B**: Third, Altuntaş et al. **C**: (2023) is closely related to our work.
CBA
CBA
CBA
ACB
Selection 3
For other types of outcome variables (continuous outcomes in linear models, binary and multinomial outcomes), results for regression models with fixed effects and lagged dependent variables are already available. Such results are of great importance for applied practice, as they allow researchers to distinguish unobserved heterogeneity from state dependence, and to control for both when estimating the effect of regressors. The demand for such methods is evidenced by the popularity of existing approaches for the linear model, such as those proposed by Arellano and Bond (1991) and Blundell and Bond (1998). In contrast, for ordinal outcomes, almost no results are available. The challenge of accommodating unobserved heterogeneity in nonlinear models is well understood, especially when the researcher also wants to allow for lagged dependent variables. <|MaskedSetence|> Chamberlain 1985 and Honoré and Kyriazidou 2000). <|MaskedSetence|> The reason is that even the static version of the model is not in the exponential family (Hahn 1997). As a result, one cannot directly appeal to a sufficient statistic approach. An alternative approach in the static ordered logit model is to reduce it to a set of binary choice models (cf. Das and van Soest 1999, Johnson 2004b, Baetschmann, Staub, and Winkelmann 2015, Muris 2017, and Botosaru, Muris, and Pendakur 2023). Unfortunately, the dynamic ordered logit model cannot be similarly reduced to a dynamic binary choice model (see Muris, Raposo, and Vandoros 2023). Therefore, a new approach is needed. The contribution of this paper is to develop such an approach. To do this, we follow the functional differencing approach in Bonhomme (2012) to obtain moment conditions for the finite-dimensional parameters in this model, namely the autoregressive parameters (one for each level of the lagged dependent variable), the threshold parameters in the underlying latent variable formulation, and the regression coefficients. <|MaskedSetence|>
**A**: For example, while recent developments (Kitazawa 2021 and Honoré and Weidner 2020) relax these requirements, early work on the dynamic binary logit model with fixed effects either assumed no regressors, or restricted their joint distribution (cf. **B**: The challenge of accommodating unobserved heterogeneity in the ordered logit model seems even greater than in the binary model. **C**: Our approach is closely related to Honoré and Weidner (2020), and can be seen as the extension of their method to the case of an ordered response variable.
CAB
ABC
ABC
ABC
Selection 2
3.1 Testing for Reverse Causality We implement the test as described in Algorithm 1 for the nominal level $\alpha = 0.05$. Figure 2 reports the empirical rejection probabilities of testing conditional independence of covariates and estimated residuals under the reverse model (2). Overall, the empirical rejection rates are close to the nominal level for different choices of $\rho$, $q$, and $\tau$. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
**A**: This is remarkable as the testing problem is complex and builds on nonparametric procedures. **B**: As such, we cannot expect exact control over the significance level; see also the discussion in Section 2.6. **C**: The test shows some degree of oversizing under super-Gaussian error distributions, indicating the complexity of the testing problem.
ABC
ABC
ACB
ABC
Selection 4
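The row above reports empirical rejection rates at nominal level $\alpha = 0.05$. As a generic sketch of how such rejection rates are estimated by Monte Carlo (not the paper's Algorithm 1, whose nonparametric conditional-independence test is not reproduced here), one simulates data under the null many times and counts how often the test rejects:

```python
# Generic Monte Carlo estimate of an empirical rejection rate at a nominal
# level: a stand-in for the paper's Algorithm 1, with a simple correlation
# t-test playing the role of the conditional-independence test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n_reps, n = 0.05, 2000, 200

def pvalue_independence(x, y):
    # Placeholder test (assumption): correlation t-test for independence.
    r = np.corrcoef(x, y)[0, 1]
    t = r * np.sqrt((n - 2) / (1 - r**2))
    return 2 * stats.t.sf(abs(t), df=n - 2)

rejections = sum(
    pvalue_independence(rng.normal(size=n), rng.normal(size=n)) < alpha
    for _ in range(n_reps)  # data drawn under the null of independence
)
print(f"empirical rejection rate: {rejections / n_reps:.3f} (nominal {alpha})")
```

A well-calibrated test should print a rate close to 0.05; oversizing of the kind described in candidate C would show up as a noticeably larger value.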
<|MaskedSetence|> <|MaskedSetence|> It also surpasses Anthony Wrigley’s estimate that matching the annual energy output of Britain’s coal industry circa 1815 would have required that the country magically receive 15,000,000 additional acres of forest. <|MaskedSetence|> (p. 276) Based on this calculation, I set the land supply $Z$ after the relief of land constraints to .
**A**: …[R]aising enough sheep to replace the yarn made with Britain’s New World cotton imports would have required staggering quantities of land: almost 9,000,000 acres in 1815, using ratios from model farms, and over 23,000,000 acres in 1830. **B**: This final figure surpasses Britain’s total crop and pasture land combined. **C**: If we add cotton, sugar, and timber circa 1830, we have somewhere between 25,000,000 and 30,000,000 ghost acres, exceeding even the contribution of coal by a healthy margin.
ABC
ABC
BAC
ABC
Selection 2
<|MaskedSetence|> This includes both questions about positive reciprocity (e.g. <|MaskedSetence|> Estimates of the interaction between this characteristic and the behavioral utility terms suggest that these individuals are more altruistic in the baseline and behave more in line with generalized reciprocity. At the onset of the treatment, they also shift more weight toward direct reciprocity. <|MaskedSetence|> This suggests that individuals who have a high overall reciprocity attribute use new information to discriminate between collaborators as a mechanism for punishment.
**A**: “If someone does me a favor, I am prepared to return it”), as well as negative reciprocity (“If someone puts me in a difficult position, I will do the same to them”). **B**: The characteristic that we describe as overall reciprocity consists of positive weights on the answers to all of the questions in the reciprocity questionnaire. **C**: However, this shift toward direct reciprocity is potentially offset by a decrease in altruism (measured by additional weight placed on the costs of contributing) coupled with a strong decrease in generalized reciprocity.
BAC
BAC
BAC
BAC
Selection 4
Non-business day. Values are in percentages. We remark that this example is just for illustration, showcasing the interpretation of the proposed tensor factor model. Again we note that for the TFM-tucker model, one needs to identify a proper representation of the loading space in order to interpret the model. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
**A**: For TFM-cp, the model is unique, hence interpretation can be made directly. **B**: In Chen et al. (2022), varimax rotation was used to find the sparsest loading-matrix representation for model interpretation. **C**: Interpretation is impossible for the vector factor model in such a high-dimensional case.
BAC
BAC
BAC
BAC
Selection 2
In Section 3, we study how specific properties of choice rules tend to lead to contextual privacy violations. <|MaskedSetence|> <|MaskedSetence|> This result has been used to show that the second-price auction does not permit a decentralized computation protocol (Brandt and Sandholm, 2005) that satisfies unconditional privacy, compare Chor and Kushilevitz (1989) and Milgrom and Segal (2020). Propositions 3-7 show how this conflict between collective and individual pivotality arises in common choice rules in environments with and without transfers. In Section 4, we take the perspective of a privacy-conscious designer who wants to implement a choice rule through a maximally contextually private protocol. Our discussion yields two central insights. First, maximally contextually private protocols involve a deliberate decision about the protection set, i.e. <|MaskedSetence|> Second, given that protection set, maximally contextually private protocols delay asking questions to protected agents as much as possible.
**A**: a set of agents whose privacy ought to be protected if possible. **B**: In our first results, Proposition 1 and Proposition 2, we provide characterizations of choice rules that fully avoid contextual privacy violations under an arbitrary fixed elicitation technology, and under individual elicitation technologies, respectively. **C**: These abstract characterizations lead us to a more intuitive insight, Theorem 1, which says that under the restriction to individual elicitation protocols, any time there is some group of agents who are collectively pivotal but there is no agent who is individually pivotal, there must be a contextual privacy violation and the designer must choose whose privacy to protect. (One can derive as a special case of Theorem 1 a result from the cryptography literature known as the Corners Lemma (Chor and Kushilevitz, 1989; Chor et al., 1994).)
BCA
CAB
BCA
BCA
Selection 3
Gerard Debreu took the latter position. He said that all known conditions for guaranteeing the uniqueness of equilibrium prices are “exceedingly strong” (Debreu, 1972). <|MaskedSetence|> This theorem implies that any compact set in the positive orthant can be included in the set of equilibrium prices when we make no additional assumptions other than the usual, widely accepted assumptions of the economy. Thus, without any assumptions that he claimed to be “exceedingly strong”, we know nothing about the number of equilibria. The number of equilibria may be one, a million, or infinity. <|MaskedSetence|> <|MaskedSetence|>
**A**: To solve this problem, Debreu (1970) treated perturbations of the economy by initial endowment vectors, and showed that in “almost all” economies, the number of equilibria is at least not infinite. **B**: On the other hand, he proved the Sonnenschein–Mantel–Debreu theorem (Debreu, 1974). **C**: Later, this result was refined and developed into the theory of regular economies.
BCA
BAC
BAC
BAC
Selection 3
For a unique derivation of the utility function, however, the range of the demand function must be sufficiently wide, as discussed above. <|MaskedSetence|> <|MaskedSetence|> Namely, the range of the function in the limit must also be sufficiently wide. In addition, when all functions satisfy the “C axiom” introduced by Hosoya (2017, 2020), then the desired continuity proposition can be obtained (Theorem 2). <|MaskedSetence|>
**A**: Therefore, an additional assumption is needed for the continuity result we desire. **B**: We have found an example of a sequence of demand functions that satisfy all the assumptions of Corollary 3 and have a sufficiently wide range, yet the range in the limit is very small (Example 2). **C**: That is, when a sequence of demand functions converges to a demand function with respect to the topology discussed above, then the corresponding sequence of utility functions also converges to the corresponding utility function uniformly on any compact set consisting of strictly positive consumption vectors.
BCA
BAC
BAC
BAC
Selection 2
In this section, we apply our methodology to the analysis of income-wealth inequality in the United States between 1989 and 2022, based on the public version of the triennial Survey of Consumer Finances (SCF). <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> We refer to inequality displayed by our measure as overall inequality, while specific marginal inequality is described as wealth or income inequality. 3.1. Income-wealth $\alpha$-Lorenz curves.
**A**: Wealth refers to all assets, financial and otherwise. **B**: A guide for practical implementation of the computational procedure outlined in section 1.2 is given in appendix A. **C**: Details of the sampling technique and a discussion of specific features and issues with the data set are given in appendix B.
ACB
ACB
ACB
CAB
Selection 2
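The $\alpha$-Lorenz curves in the row above generalize the familiar one-dimensional Lorenz curve to joint income-wealth inequality. As background, here is a minimal sketch of the univariate building block on simulated data (not the SCF); the function name is ours:

```python
# Sketch of the standard one-dimensional Lorenz curve, the building block
# that the bivariate alpha-Lorenz curves generalize. Simulated data, not SCF.
import numpy as np

def lorenz_curve(x):
    x = np.sort(np.asarray(x, dtype=float))
    pop_share = np.arange(1, x.size + 1) / x.size   # cumulative population share
    wealth_share = np.cumsum(x) / x.sum()           # cumulative wealth share
    return np.insert(pop_share, 0, 0.0), np.insert(wealth_share, 0, 0.0)

wealth = np.random.default_rng(1).lognormal(mean=10.0, sigma=1.2, size=10_000)
p, L = lorenz_curve(wealth)
gini = 1.0 - 2.0 * np.trapz(L, p)                   # Gini as twice the area gap
print(f"Gini: {gini:.3f}")
```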
Although many recent works focus on learning in the presence of strategic behavior, learning in the presence of capacity constraints and strategic behavior has not previously been studied in depth. <|MaskedSetence|> Competition for the treatment arises when agents are strategic and the decision maker is capacity-constrained, complicating estimation of the optimal policy. We adopt a flexible model where agents are heterogeneous in their raw covariates and their ability to modify them. Depending on the context, strategic behavior may be harmful, beneficial, or neutral for the decision maker. In some applications, strategic behavior may be a form of “gaming the system,” e.g. <|MaskedSetence|> In other applications, the decision maker may want to accept such agents because the agents who would benefit the most from the treatment are those who can invest effort to make themselves look desirable. Lastly, as demonstrated by Liu et al. (2022), when all agents have identical ability to modify their covariates, the strategic behavior may be neutral for the decision maker because it does not affect which agents are assigned treatment. <|MaskedSetence|>
**A**: cheating on exams in the context of college admissions, and the decision maker may not want to assign treatment to agents who have high ability to modify their covariates. **B**: Our model permits all of these interpretations because we allow for potential outcomes to be flexibly related to the agent’s type. **C**: Many motivating applications for learning with strategic behavior, such as college admissions and hiring, are precisely settings where the decision maker is capacity-constrained.
CAB
CAB
CAB
BAC
Selection 3
The field experiment in Celhay et al. (2019) took place in Misiones, Argentina, one of the poorest provinces in the country, and with relatively high rates of maternal and child mortality. <|MaskedSetence|> The study selected 37 public primary care facilities (accounting for 70% of the prenatal care visits in the beneficiary population), and randomly assigned 18 to treatment and 19 to control. <|MaskedSetence|> The intervention was implemented only for eight months (May 2010 - December 2010), and the clinics were clearly informed of the temporary nature of this intervention. During the intervention period, control group clinics saw no change in their fees for prenatal visits. <|MaskedSetence|> Prenatal visits after week 13 or subsequent prenatal visits experienced no change in fees. Remark 5.1.
**A**: In contrast, clinics in the treatment group received a three-fold increase in payments for any first prenatal visit that occurred before week 13 of pregnancy. **B**: To the best of our understanding, the treatment assignment was balanced across the 37 clinics and was not stratified. **C**: As part of the national Plan Nacer program, the Argentinean government transfers funds to medical care providers in exchange for their patient services.
CBA
CAB
CBA
CBA
Selection 1
When determining order quantities for the replenishment of a local fulfilment centre, the retailer needs to take into account several costs. <|MaskedSetence|> In case of perishable SKUs, units may deteriorate beyond an acceptable level of quality at the end of a period. This is associated with a spoilage cost of $h$ per unit. On the other hand, unfulfilled customer demand leads to lost sales, whose costs are more difficult to determine (Walter and Grabner, 1975; Fisher et al., 1994). These costs comprise short-term lost revenue and consequences of long-term customer churn. Long-run objectives that impact expected future sales strongly affect the strategic service-level selection (Anderson et al., 2006). In e-grocery retailing, there is typically a strongly asymmetric cost structure due to the much increased risk of a complete order cancellation in case of a stock-out: the main convenience of online shopping, namely not having to visit a physical store, may then be outweighed by the inconvenience of the potential necessity of placing a second order, and generally of having to stay at home during the delivery time slot. <|MaskedSetence|> We follow previous literature on e-grocery retailing and derive cost parameters for lost sales from the strategic service-level target of the e-grocery retailer addressing the trade-off between shortage costs and costs incurred by excess inventory (cf. <|MaskedSetence|> These data allow us to explicitly calculate the amount of lost sales for each period. Note that this is not possible in brick-and-mortar retailing as the observable demand in that case is limited to the inventory level.
**A**: If the number of units in the inventory exceeds customer demand, this generates costs for inventory holding denoted by $v$ per unit. **B**: Therefore, e-grocery retailers operate with very high service-level targets of 97-99%. **C**: Ulrich et al., 2021). A particular advantage of the e-grocery retailing business case under consideration is the availability of uncensored demand data.
ABC
ABC
CBA
ABC
Selection 1
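The asymmetric shortage/excess cost structure in the row above maps to a service-level target through the classic newsvendor critical ratio. The sketch below illustrates that link with invented cost numbers and a hypothetical demand distribution, not the retailer's actual parameters or procedure:

```python
# Newsvendor link between asymmetric per-unit costs and a service level:
# with shortage cost b and excess cost h, the optimal in-stock probability
# is b/(b+h), and the order quantity is that quantile of demand.
# All numbers are illustrative assumptions.
from scipy import stats

h = 1.0                                   # cost of one excess unit (holding/spoilage)
b = 40.0                                  # cost of one lost-sale unit (high in e-grocery)
service_level = b / (b + h)               # ~0.976, inside the 97-99% range cited above
demand = stats.norm(loc=100, scale=15)    # hypothetical demand forecast for one SKU
q = demand.ppf(service_level)             # critical-ratio order quantity
print(f"service level {service_level:.3f} -> order quantity {q:.1f}")
```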
For analytical tractability, we focus on the case where researchers use one-sided tests in this section, for a limited number of options for $p$-hacking. Appendix C provides analogous numerical results for two-sided tests. The analytical results provide a clear understanding of the opportunities for tests to have power and what types of situations the tests will have power in. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
**A**: Appendix B presents derivations underlying all analytical results. **B**: We present these results in Appendix A. **C**: In the simulation study in Section 5, we consider generalizations of these analytical examples for two-sided tests, and we also show results for one-sided tests. In addition to analyzing the effects of $p$-hacking on the shape of the $p$-curve, we study its implications for the bias of the estimates and size distortions of the tests reported by researchers engaged in $p$-hacking.
CBA
CBA
ACB
CBA
Selection 4
<|MaskedSetence|> The autoregressive parameter shows a high persistence. We find positive and negative coefficients for the linear and quadratic term of the GDP per capita, which is consistent with the Environmental Kuznets Curve hypothesis. The agriculture share in the value added is negatively correlated with GHG emissions, whereas the industry share is positively and significantly related to GHG emissions. We do not find any significant effect of the human capital index. The share of renewable energy consumption is not significantly associated with GHG emissions, whereas fossil fuel energy consumption and energy use are positively associated with GHG emissions. The forest area is not significantly related to emissions. <|MaskedSetence|> <|MaskedSetence|>
**A**: Credit to the private sector, trade and financial globalization are negatively related to emissions. **B**: The control variables mostly exhibit the expected signs. **C**: Finally, in relation to the additional controls included in the IV estimates, net capital inflows and GDP growth are positively associated with emissions.
BAC
CBA
BAC
BAC
Selection 4
Among the different deep architectures for forecasting models, we focus on the neural basis expansion analysis for time series (N-BEATS) algorithm Oreshkin et al. (2019). N-BEATS is a deep neural architecture designed to predict future values in a time series on the basis of past values. The algorithm has been shown to perform well in a range of forecasting tasks. The key innovation we rely on is a recent adaptation of N-BEATS that incorporates time series other than the one being predicted as additional features Olivares et al. (2021); in the panel data setting, this innovation allows us to use N-BEATS to predict unobserved values of the treated unit on the basis of prior values of the treated unit as well as contemporaneous values of the control units. Because our proposed approach essentially involves using the N-BEATS algorithm to estimate a synthetic (i.e., predicted) untreated outcome for the treated state during the post-treatment period, we refer to it as Synthetic N-BEATS (“SyNBEATS”). Although the N-BEATS algorithm has been shown to excel at a range of forecasting tasks, an important concern is whether its performance will be as strong when applied to the relatively small panel data sets typically employed in social science research. <|MaskedSetence|> To assess the suitability of SyNBEATS for causal inference with panel data, we compare it to existing alternatives across two canonical panel data settings. <|MaskedSetence|> <|MaskedSetence|> (2015). In both of these settings, we find that SyNBEATS outperforms canonical methods such as SC and TWFE estimation. In addition, we compare the performance of these models in estimating the impact of simulated events on abnormal returns in publicly traded firms Baker and Gelbach (2020). In this setting, where historical values would not be expected to provide much information about future outcomes, SyNBEATS only marginally improves performance relative to the other estimators. We also compare SyNBEATS to two recently proposed causal inference methods for panel data settings: matrix completion (MC) Athey et al. (2021) and synthetic difference-in-differences (SDID) Arkhangelsky et al. (2021). In the three settings we consider, we find that SyNBEATS generally achieves comparable or better performance compared to synthetic difference-in-differences, and significantly outperforms matrix completion. We further investigate the factors that shape the relative performance of SDID and SyNBEATS through a range of simulations. Finally, we unpack SyNBEATS’ strong comparative performance, and find it stems from both model architecture and SyNBEATS’ efficient use of time-series data in informing its predictions.
**A**: With limited data, simpler methods like synthetic controls (SC) or two-way fixed effects (TWFE) may yield more reliable causal estimates. **B**: (2010) and the German reunification on the West Germany economy Abadie et al. **C**: Specifically, we contrast performance in data that has been used to estimate the effect of a cigarette sales tax in California Abadie et al.
ACB
ACB
CBA
ACB
Selection 4
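The row above describes SyNBEATS as forecast-based counterfactual estimation: fit a forecaster on pre-treatment data, predict the treated unit's untreated post-treatment path, and difference it from the observed outcome. The sketch below shows that general recipe with a ridge regression standing in for the N-BEATS network; it is not the actual SyNBEATS implementation, and all data are simulated:

```python
# Forecast-as-counterfactual recipe (schematic, not SyNBEATS itself):
# fit on pre-treatment periods using control units as features, predict the
# treated unit's untreated path post-treatment, average the gap.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
T, T0, n_controls = 60, 40, 8              # periods, treatment date, controls
controls = rng.normal(size=(T, n_controls)).cumsum(axis=0)
treated = controls @ rng.uniform(0.05, 0.2, n_controls) + rng.normal(0, 0.3, T)
treated[T0:] += 2.0                        # true treatment effect

model = Ridge(alpha=1.0).fit(controls[:T0], treated[:T0])
synthetic = model.predict(controls[T0:])   # predicted untreated outcomes
effect = (treated[T0:] - synthetic).mean()
print(f"estimated average effect: {effect:.2f} (truth: 2.0)")
```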
Regular estimation approaches, however, become infeasible for high-dimensional settings. <|MaskedSetence|> Recently, sparse shrinkage methods such as the lasso have gained considerable popularity in econometrics for policy evaluation (Belloni et al., 2014) and (macroeconomic) time series analysis (Kock et al., 2020). Their use in impulse response analysis, however, has scarcely been explored. While several methods and theoretical results now exist for estimating sparse VAR models – see e.g., Basu and Michailidis (2015); Kock and Callot (2015); Masini et al. (2022) and the references cited therein – inference on impulse responses is complicated by two issues. <|MaskedSetence|> Second, while several methods such as orthogonalization (Belloni et al., 2014) and debiasing, or desparsifying the lasso (van de Geer et al., 2014; Javanmard and Montanari, 2014) have been proposed to yield uniformly valid (or ‘honest’) post-selection inference, the impulse response parameters are nonlinear functions of all estimated VAR parameters. This severely limits the applicability of existing post-selection inference methods which are typically designed for (relatively) low-dimensional parameters of interest that can be estimated directly. Indeed, to our knowledge, impulse response analysis in sparse HD-SVARs is only considered in Krampe et al. <|MaskedSetence|> Instead, by casting the problem in the LP framework, we reduce the impulse response parameter(s) to a (directly estimable) low-dimensional object in the presence of high-dimensional nuisance parameters, which makes the standard post-selection tools available.
**A**: Traditional approaches to high-dimensionality include modelling commonalities between variables, such as through factor-augmented VARs (FAVAR) (Bernanke et al., 2005), dynamic factor models (DFM) (Forni et al., 2009; Stock and Watson, 2016) or – for panel structures – global VARs (Chudik and Pesaran, 2016); as well as Bayesian shrinkage methods (Bańbura et al., 2010; Chan, 2020). **B**: First, sparse estimation techniques such as the lasso perform model selection, which induces issues with non-uniformity of limit results if this selection is ignored (Leeb and Pötscher, 2005). **C**: (2022), who construct a complex multi-step algorithm to overcome these complications.
ABC
ABC
ABC
ACB
Selection 2
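The row above argues that the local projection (LP) framework reduces the impulse response to a directly estimable low-dimensional coefficient amid high-dimensional nuisance parameters. Below is a stylized sketch of that idea, with naive lasso selection of controls standing in for the paper's uniformly valid post-selection machinery; no honest-inference guarantees attach to this simplification, and the data-generating process is invented:

```python
# Stylized LP-with-lasso sketch: regress the h-period-ahead outcome on the
# shock of interest plus lasso-selected high-dimensional controls.
# Naive post-selection OLS -- illustrative only, not the paper's estimator.
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(0)
T, p, h = 300, 50, 4
W = rng.normal(size=(T, p))                # high-dimensional lagged controls
x = 0.5 * W[:, 0] + rng.normal(size=T)     # shock variable of interest
# outcome h periods ahead responds to today's shock and a few controls
y_lead = 0.8 * x + W[:, :3] @ np.array([0.4, -0.3, 0.2]) + rng.normal(size=T)

keep = np.abs(LassoCV(cv=5).fit(W, y_lead).coef_) > 1e-8   # selected controls
X = np.column_stack([x, W[:, keep]])
beta = LinearRegression().fit(X, y_lead).coef_[0]
print(f"impulse response at horizon {h}: {beta:.2f} (truth: 0.8)")
```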
<|MaskedSetence|> Generating distributional responses from a bot is replete with caveats since LLMs are deterministic functions, meaning that randomization occurs in post-processing by sampling responses according to the softmax probability. Deep Neural Networks are known for yielding poor estimates of the distribution of outcomes, leading to high-confidence prediction errors (Nguyen et al. <|MaskedSetence|> This is manifested in my results by the significantly smaller variance in GPT-3’s responses compared to the outcome of the human poll. Nonetheless, GPT-3 shows similar vulnerability to anchoring effects as humans, albeit with significant qualitative and quantitative differences: an overall downward shift in the responses for realistic anchors and a collapse of the bimodal distribution for unrealistic anchors compared to the more balanced one observed in Prolific workers. This anchoring effect appears even though there are no web pages with the precise answer to the question asked in the polls. Thus, by aggregating information posted on the web by humans, LLMs appear to assimilate the same cognitive biases as human subjects but with significant differences that should be investigated further. My hypothesis in using GPT-3 was to test whether large language models can be used as a proxy for human polls, since they aggregate human-generated data. The results show that while the trends are remarkably similar, including vulnerability to anchoring effects, the numerical values and qualitative responses are not comparable (Fig. 2). I speculate that the numerical discrepancy from human behavior can be ascribed to recent increases in the minimum wage advocated for in mainstream political discourse, recent inflation trends, and increased attention to income inequality, which may have increased people’s judgements of what constitutes a fair wage. <|MaskedSetence|>
**A**: In addition, I use an AI Bot based on a language model as an aggregator for large-scale web data. **B**: (2015)). **C**: Thus, current beliefs regarding the minimum wage may be biased upward compared to archival knowledge in the wider web used for training GPT-3.
ABC
BAC
ABC
ABC
Selection 4
<|MaskedSetence|> We believe that this is more compelling than other solution concepts that assume that one or all players make certain types of mistakes for all other actions including those that have not been observed. <|MaskedSetence|> <|MaskedSetence|> We showed that an OPE can be computed in polynomial time in two-player zero-sum games based on repeatedly solving a linear program formulation. We also argued that computation of OPE is more efficient than computation of the related concept of one-sided quasi-perfect equilibrium, which in turn has been shown to be more efficient than computation of quasi-perfect equilibrium and extensive-form trembling-hand perfect equilibrium. We demonstrated that observable perfect equilibrium leads to a different solution in no-limit poker than EFTHPE, QPE, and OSQPE. While we only considered a simplified game called the no-limit clairvoyance game, this game encodes several elements of the complexity of full no-limit Texas hold ’em, and in fact conclusions from this game have been incorporated into some of the strongest agents for no-limit Texas hold ’em. So we expect our analysis to extend to significantly more complex settings than the example considered.
**A**: We presented a new solution concept for sequential imperfect-information games called observable perfect equilibrium that captures the assumption that all players are playing as rationally as possible given the fact that some players have taken observable suboptimal actions. **B**: We also showed that observable perfect equilibrium is always guaranteed to exist. **C**: We showed that every observable perfect equilibrium is a Nash equilibrium, which implies that observable perfect equilibrium is a refinement of Nash equilibrium.
ACB
ACB
ACB
ABC
Selection 3
This paper shows that there exists a simple institution and a signaling structure that can solve information asymmetries at no cost. <|MaskedSetence|> Subsequent work by Spence (1973) shows that signaling can resolve information asymmetries but does not eliminate inefficiencies, as signals are wasteful and unproductive. This paper departs from the canonical one-sender-one-receiver setting and uses a costly talk signaling structure. When considering multi-sender protocols, the concern for collusion naturally emerges. <|MaskedSetence|> Likewise, there is no need for mediation, arbitration, or other complex arrangements. The main result has potentially significant implications for the understanding of organizational design. It shows that only one minimal protocol can achieve efficiency under the threat of senders’ collusion. <|MaskedSetence|> The proposed arrangement has a plain structure and does not require commitment power, as the organization adheres to the protocol both ex ante and in the interim. Importantly, such an arrangement always yields an efficient outcome for any configuration permitted by the model described in Section 3. This finding provides a rationale for using public advocacy structures.
**A**: Yet, in this setting, it is possible to structure communication in a way that fully restores efficiency without using wasteful signaling expenditures or commitment power. **B**: This protocol prescribes the sequential and public consultation of two informed agents with conflicting interests. **C**: At least since Akerlof (1970), it is well known that the presence of asymmetric information can yield inefficient outcomes.
BAC
CAB
CAB
CAB
Selection 3
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> With the cost information, the government could design an incentive (research) contract (e.g., Laffont and Tirole, 1986) that balances R&D costs and (unobserved) research effort. The second approach involves using advance market commitments (Kremer and Glennerster, 2004; Kremer et al., 2020, 2022). Under this approach, the government commits to buying a certain number of FDA-approved drugs from the firm(s) at a pre-fixed price. If the commitment is sufficiently large, it will incentivize R&D because it can be viewed as a prize for the early developers who can “pull” the R&D efforts.
**A**: The first approach involves all interested firms reporting their R&D costs as the drug progresses through the development process. **B**: While this solution is straightforward, it is potentially subject to moral hazard because firms’ research efforts are non-contractible. **C**: After receiving the payment, the firm may not exert the necessary effort to develop the drug. To overcome this challenge, we consider two approaches.
CAB
BCA
BCA
BCA
Selection 2
<|MaskedSetence|> We will now look at some bifurcation diagrams using the freeness of trade, $\phi$, as the bifurcation parameter. <|MaskedSetence|> <|MaskedSetence|> The illustrations, 8 in total, exhaust all mathematical/numerical possibilities. (This can be shown through the combination of the analysis performed in the previous sections with various simulations under a very wide range of parameter values.) The six scenarios analysed in this Section are as follows:
**A**: Two additional illustrations for different parameter values are provided in Appendix B. **B**: To provide a complete gallery, we depict 6 qualitatively different scenarios, keeping most parameter values constant (except for the sixth scenario) and varying $b$, thus placing emphasis on changes in the value of related variety. **C**: 5 The impact of economic integration. It is common in geographical economics to study the qualitative change of the spatial economy as economic integration increases.
CBA
BCA
CBA
CBA
Selection 4
In economics, Kleiner, Moldovanu and Strack (2021) characterize the extreme points of monotone functions on $[0,1]$ that majorize (or are majorized by) some given monotone function, which is equivalent to the set of probability measures that dominate (or are dominated by) a given probability measure in the convex order. <|MaskedSetence|> <|MaskedSetence|> See, for instance, Bergemann et al. (2015) and Lipnowski and Mathevet (2018). Candogan and Strack (2023) and Nikzad (2023) characterize the extreme points of the same sets subject to finitely many additional linear constraints. In comparison, this paper characterizes the extreme points of monotone functions that are in between two given monotone functions on $\mathbb{R}$ in the pointwise order, which is equivalent to the set of probability measures in between two given probability measures in the stochastic order (Theorem A.1 in the appendix also characterizes the extreme points of this set, subject to finitely many additional linear constraints), <|MaskedSetence|>
**A**: and applies the characterization to voting, quantile-based persuasion, self ranking, and security design. **B**: They then apply this characterization to various economic settings, including mechanism design, two-sided matching, mean-based persuasion, and delegation. (See also Arieli, Babichenko, Smorodinsky and Yamashita (2023).) **C**: Several recent papers in economics also exploit properties of extreme points to derive economic implications.
BCA
BCA
BCA
ACB
Selection 1
Emissions of firms in the EU Emission Trading System (ETS). The Emission Trading System (ETS) is one of the cornerstones of the European Union’s industrial climate policy. It requires industrial firms to acquire emission permits for their plants and installations either through free allocation or through trading extra permits with other firms. The ETS has undergone many design changes since its inception in 2005 which can be split into four phases. Since here we are dealing with data from 2019, phase three (2013 - 2020) is of main interest to us. In this phase, plants and installations from the following sectors are covered by the EU ETS: Power stations and other combustion plants $\geq$ 20 MW; oil refineries; coke ovens; iron and steel plants; cement clinker; glass; lime; bricks; ceramics; pulp; paper and board; aluminum; petrochemicals; aviation; ammonia; nitric, adipic and glyoxylic acid products; CO2 capture, transport pipelines and geological storage of CO2 [37]. The European Union Transaction Log (EUTL) provides public access to data on the compliance of regulated installations, participants in the ETS, and transactions between participants [38]. On top of the officially provided ETS data, a relational database is available to make the data more easily accessible [39]. This database was used in this study to make the necessary queries to access the Hungarian ETS emissions data and to aggregate verified emissions of installations to the Hungarian firms that own them. This resulted in a dataset of 122 firms and their verified emissions for the year 2019. <|MaskedSetence|> <|MaskedSetence|> These 119 industrial firms emitted 21.72 million tonnes of CO2 in the year 2019, which amounts to 33.7% of Hungary’s total emissions of 64.44 million tonnes in 2019 [40]. The emissions of firms outside the Hungarian ETS remain unknown. This leads to an underestimation of saved CO2 emissions for the discussed decarbonization strategies since indirect emission reduction effects cannot be considered. But since these 119 companies account for a third of Hungary’s total emissions, this limited emissions dataset covers the essential CO2 emitting firms of Hungary, which enables the key insights of our study. <|MaskedSetence|>
**A**: . **B**: This dataset was matched to the Hungarian firm-level production network dataset via the unique value-added tax (VAT) number of Hungarian firms. **C**: Three firms could not be matched, which resulted in a final dataset of 119 firms for which both direct CO2 emissions and other firm characteristics such as out-strength and number of employees are known.
BCA
BCA
CBA
BCA
Selection 4
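The matching step described in the row above (ETS emissions joined to the firm-level production network via VAT numbers) is a plain key join. A sketch of how it might be coded; file and column names are hypothetical:

```python
# Sketch of joining firm-level ETS emissions to a firm registry on the VAT
# number. File and column names are hypothetical assumptions.
import pandas as pd

ets = pd.read_csv("hu_ets_2019.csv")        # columns: vat_id, verified_co2_t
firms = pd.read_csv("firm_registry.csv")    # columns: vat_id, out_strength, employees

matched = ets.merge(firms, on="vat_id", how="inner", validate="one_to_one")
print(f"matched {len(matched)} of {len(ets)} ETS firms")   # 119 of 122 in the row above
print(f"total verified emissions: {matched['verified_co2_t'].sum() / 1e6:.2f} Mt CO2")
```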
Regarding the representation of deregulation in the power sector, i.e., decoupling transmission and generation expansion decisions, one can pinpoint two generalised strategies in the literature. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> For example, this coordinated modelling approach has been considered in (Moreira et al., 2017; Tian et al., 2020; Zhang et al., 2020). For the modelling assessment proposed in this paper, we consider a decentralised planning strategy to ensure the representation of the reactive position (i.e., acting as price-takers) of GenCos.
**A**: Nonetheless, while the burden to formulate exhaustive uncertainty sets appears to be challenging on its own, this modelling strategy prevents generation companies (GenCos) from being dynamic market players capable of making reactive decisions regarding generation levels and capacity expansion. Another strategy attempts to develop efficient modelling tools to consider the planning of the transmission and generation infrastructure expansion in a coordinated manner. **B**: Examples of such a strategy can be found in (Sun et al., 2018; Mortaz and Valenzuela, 2019). **C**: The first spans investigations aimed at developing an optimal transmission network expansion strategy that would account for various possible developments of the generation infrastructure.
CBA
CBA
CBA
CAB
Selection 1
<|MaskedSetence|> The strategy-proof rules on the domain of Alcalde-Unzu and Vorsatz (2018) also follow a two-step procedure. In their model, the location of the peak/dip of each agent is known, so the first step of their rules asks which agents have single-peaked preferences. <|MaskedSetence|> In the domain analyzed here, the type of preference of each agent is public information and in the first step we ask agents with single-peaked preferences about their peaks. As a result, the type of preference of each agent and the location of all peaks are known. <|MaskedSetence|> If two alternatives are preselected, the second step of Alcalde-Unzu and.
**A**: Note that even though the social planner in our domain has less information after the first step (since she does not know the location of the dips), at most two alternatives are preselected in both settings. **B**: The family of strategy-proof rules characterized here shows some similarities with and differences from the family of strategy-proof rules characterized in Alcalde-Unzu and Vorsatz (2018). **C**: As a result of the first step, both the type of preference of each agent and the location of the peaks and dips are known.
BCA
BCA
BCA
BCA
Selection 2
The application analyzes the interaction of the oil and stock market. Kilian and Park (2009) propose recursive restrictions to identify and estimate the effects of different oil and stock market shocks. <|MaskedSetence|> <|MaskedSetence|> The application in this study fills this gap. I present evidence that oil and stock prices cannot be ordered recursively. <|MaskedSetence|>
**A**: By allowing both variables to interact simultaneously, the study reveals that information shocks originating from the stock market contain crucial information on oil prices, which explains approximately 25% of the fluctuations in oil prices. **B**: The proposed restrictions are widely used to analyze the impact of oil market shocks on the stock market, see, e.g., Apergis and Miller (2009), Abhyankar et al. (2013), Kang and Ratti (2013), Sim and Zhou (2015), Ahmadi et al. (2016), Lambertides et al. (2017), Mokni (2020), Arampatzidis et al. (2021), Kwon (2022), or Arampatzidis and Panagiotidis (2023). **C**: However, the impact and importance of stock market information shocks on the oil price is usually not analyzed.
CAB
BCA
BCA
BCA
Selection 2
<|MaskedSetence|> Specifically, we consider cohort-based privacy, which is a restriction in line with the recent Google Privacy Sandbox proposals to replace third-party cookies. Under this policy, the platform in our model informs the firms about the consumer’s ranking of their products, without disclosing the consumer’s exact value for any specific product. (See the complete Google proposal at https://privacysandbox.com/.) <|MaskedSetence|> <|MaskedSetence|> See Ali.
**A**: In this Section, we focus on exogenous restrictions on information disclosure. **B**: 5.2 Privacy and Data We now assess the impact of privacy regulation by considering policies that limit the firms’ access to the consumers’ information. **C**: Voluntary information disclosure by the consumer is another important, though different dimension.
BAC
ABC
BAC
BAC
Selection 1
<|MaskedSetence|> <|MaskedSetence|> How can a country like China, which once was the largest importer of P, be such a relevant exporter of P? Classical analyses fail to answer such questions. In fact, phosphate is a raw material that is essential for all nations. The situation is different for other commodities such as technology metals. Of course, the markets may be smaller, but the knowledge of how an industrialized country can be affected by even minute changes in raw material supply is one of the game-changing issues of our time. Our approach to P flows therefore aims to use much more detailed trade data as the basis of the analysis (see also Chen et al.). It is therefore possible for the first time to quantify how much P is (in a material sense) transferred between countries as either raw material, preliminary product, or fertilizer with the intended use in agricultural production. This model is meant to show the trade-based first round of global P flows (before biomass production) in greater detail than currently available and can thus serve as the foundation for the analysis of P supply security and resilience.
**A**: In today’s world of unprecedented geopolitical power shifts and increasingly monopolistic commodity supply structures, it is in the vital interest of any country or economy to understand commodity flows on a global scale. **B**: The novelty of our approach is that we transform and connect these data to other sources in such a way that we receive results that can again be interpreted in terms of the material flow of P, and not just as monetary value of traded amounts. **C**: This, in turn, means that there is currently no methodology that can adequately reflect the P flows that are necessary before biomass production.
CAB
CAB
CAB
BAC
Selection 2
A stable rule is a function that associates each stated strategy profile to a stable matching under this stated profile. To evaluate such matchings, workers and firms use their true preferences and their true choice functions, respectively. <|MaskedSetence|> In this game, the set of strategies for each worker is the set of all possible preferences that she could state. <|MaskedSetence|> <|MaskedSetence|>
**A**: A market and a stable rule induce a matching game. **B**: Similarly, the set of strategies for each firm is the set of all possible choice functions that it could state. In this paper, the equilibrium concept we focus on is Nash equilibrium. **C**: In a Nash equilibrium, no agent improves by deviating from its initial chosen strategy, assuming the other agents keep their strategies unchanged.
ABC
ABC
ABC
CBA
Selection 3
In the analysis, we use the following two definitions of homeownership. <|MaskedSetence|> With this comprehensive homeownership definition, about 70% of individuals in our sample are homeowners (vs 40% if we consider only individuals with a positive open mortgage amount). Second, in an alternative definition of homeownership, we consider the origination of new mortgages. In this second case, we define a mortgage origination as a situation in which either the number of open mortgage trades in year $t$ is greater than the number of open mortgage trades in year $t-1$ or the number of months since the most recent mortgage trade has been opened is lower than 12. <|MaskedSetence|> Individuals who experienced a harsh default before or in the same year as a soft default in the sample period (i.e. from 2004 onwards) have been dropped. Credit scores lower than 300 (special codes) have been trimmed. Similarly, the top 1% of total credit limit, total balance on revolving trades and total revolving limit have been trimmed. <|MaskedSetence|>
**A**: . **B**: First, we consider an individual as a homeowner if either she ever had a mortgage or she is recorded as a homeowner according to Experian’s imputation. **C**: Clearly, this definition would only capture the flow, and perhaps more importantly would miss cash purchases and wouldn’t distinguish between a new mortgage and a remortgage. Table 1: Summary statistics of our main variables, 2010, balanced panel.
BCA
BCA
BCA
BCA
Selection 1
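The origination rule stated in the row above translates directly into a panel flag. A sketch of how it might be coded against a person-year credit panel; file and column names are hypothetical:

```python
# Sketch of the mortgage-origination rule from the row above: more open
# mortgage trades than in year t-1, or a trade opened within the last
# 12 months. Column and file names are hypothetical assumptions.
import pandas as pd

df = pd.read_csv("credit_panel.csv")            # person-year credit records
df = df.sort_values(["person_id", "year"])
prev_open = df.groupby("person_id")["open_mortgage_trades"].shift(1)

df["origination"] = (
    (df["open_mortgage_trades"] > prev_open)        # more open trades than in t-1
    | (df["months_since_recent_mortgage"] < 12)     # a trade opened within 12 months
)
```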
By Carathéodory’s theorem, for any distribution there exists a decomposition in which at most $n+1$ optimal solutions have a strictly positive weight. Such a decomposition can be found by applying the algorithm described in Theorem 6.5.11 by Grötschel et al. <|MaskedSetence|> Clearly, a decomposition of a distribution $\bm{d}$ implies its implementation, but not vice versa. <|MaskedSetence|> <|MaskedSetence|>
**A**: We will therefore pay special attention to cases in which we can find an implementation of a distribution without first generating its decomposition (see Section 5.4). **B**: Given the computational complexity of generating optimal solutions in integer programming, as discussed in Section 3, obtaining a decomposition of a distribution is not always tractable. **C**: (1988). Note that, given an implementation, neither the distribution which it realizes, nor the underlying decomposition from which a solution is sampled are assumed to be explicitly known (see Section 5.4 for an example).
CBA
CBA
ACB
CBA
Selection 1
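Schematically, the decomposition discussed in the row above can be written as follows (our notation and reading, not the paper's):

```latex
% Schematic reading of the Caratheodory decomposition: a target
% distribution d in R^n lies in the convex hull of the optimal solutions,
% so it is a mixture of at most n+1 of them.
\[
  \bm{d} \;=\; \sum_{i=1}^{k} \lambda_i \, x^{i},
  \qquad \lambda_i > 0, \quad \sum_{i=1}^{k} \lambda_i = 1, \quad k \le n+1 ,
\]
% Sampling the optimal solution x^i with probability lambda_i then
% implements d, which is the sense in which a decomposition implies an
% implementation (but not conversely).
```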
<|MaskedSetence|> As explained in the paragraph that follows (7), the central feature of mGB is that, after exhibiting a long power-law dependence, it eventually terminates at a finite value of the variable. GB2, on the other hand, has a power-law tail that extends mGB’s power-law dependence to infinity. The key to understanding the results of fits in Sec. <|MaskedSetence|> <|MaskedSetence|> Distribution of daily realized variance can be modeled using a duo of stochastic differential equations – for stock returns and stochastic volatility – which produces distributions of daily variance such as mGB (Liu, 2023) and GB2 (Dashti, 2021). Via a simple change of variable, daily RV would then follow the same distributions but with renormalized parameters.
**A**: 4 is the analysis of the structure of RV used by the markets – a square root of realized variance (1). **B**: While the standard search for Dragon Kings involves performing a linear fit of the tails of the distribution (Pisarenko, 2012; Janczura, 2012), here we tried to broaden our analysis by also fitting the entire distribution using mGB (7) and GB2 (11) – the two members of the Generalized Beta family of distributions (Liu, 2023; McDonald, 1995). **C**: At its core is the average of the consecutive daily realized variances (2).
ABC
BAC
BAC
BAC
Selection 4
Finally, we mention equilibrium analyses of bidding and ad exchanges that provide results related to our simulation findings. Choi and Sayedi (2019) analyze auctions in which a new entrant’s click-through rate is not known to the publisher. Despotakis et al. <|MaskedSetence|> <|MaskedSetence|> Nabi et al. <|MaskedSetence|> They further apply their approach in a contextual bandit setting, demonstrating improvements in performance and convergence time.
**A**: When multiple slots are offered, Rafieian and Yoganarasimhan (2021) shows that, when advertisers can target their bids to specific placements in second-price auctions, total surplus increases, but the effect on publisher revenues is ambiguous, the key difference being that we consider a single slot with randomly arriving queries, rather than multiple slots that are often offered for sale. **B**: (2022) propose a hierarchical empirical Bayes method that learns empirical meta-priors from the data in Bayesian frameworks. **C**: (2021) demonstrates that, with competing ad exchanges, a multi-layered auction involving symmetric bidders can result in a scenario where first-price auctions, and soft floors in general, yield higher revenue than second-price auctions.
ABC
CAB
CAB
CAB
Selection 2
Having shown that the identity of the winner is sufficient for restoring efficiency (even without observing effort), it might seem that this information is also necessary for implementing an efficient outcome. <|MaskedSetence|> That is, if the identity of the winner is not contractible, then the effort profile at time of breakthrough also suffices. <|MaskedSetence|> <|MaskedSetence|> (2005) where multiple agents conduct research on a project that is initially unknown to be good or bad. Exerting effort on research comes at an opportunity cost. If the project is good, the project generates a conclusive breakthrough at some rate according to each researching agent’s effort. Instead, if the project is bad, a breakthrough never arrives. Breakthrough brings about fixed instantaneous and continuation rewards, shared amongst the participating agents. Since I consider a general model, the Hamilton-Jacobi-Bellman equation characterizing the agent best-response problem does not always admit a differentiable solution. To resolve this, I consider viscosity solutions, use a guess-and-verify approach to confirm an equilibrium candidate, and exploit other features of the environment to rule out other equilibria.
**A**: Importantly for fairness considerations, contracting on the effort profile at time of breakthrough results in outcomes which are ex-post symmetric on the equilibrium path, unlike the asymmetry necessary for efficient behavior when contracting on the winner’s identity. Methodologically, I build off of the canonical model of strategic experimentation of Keller et al. **B**: It is not; contracting on the effort profile at the time of breakthrough is also sufficient to restore efficiency. **C**: In particular, this implies that the full history of effort is redundant given the terminal effort profile, and further that the identity of the winner is sufficient but not necessary to restoring efficiency.
BCA
ACB
BCA
BCA
Selection 1
In terms of gender, the sample comprised 48% female and 52% male participants. <|MaskedSetence|> Recruitment took place via the ORSEE system (Greiner, 2015), and the experiment was computerised using the oTree platform (Chen et al., 2016). None of the subjects had previously participated in a public goods experiment and every subject participated only once. Each session consisted of 16 participants, forming groups of 4 and playing for a total of 10 rounds. In an effort to replicate the one-shot play environment, we adopted a random matching protocol where in every round, participants were randomly matched into new groups. At the beginning of all the sessions, participants were given written instructions followed by a comprehension test. <|MaskedSetence|> The interface would provide immediate feedback and explanation on the questions that the subjects failed to respond correctly. An experimenter would offer additional clarifications if there were further questions. Therefore, no exclusion criteria were applied. In every session, participants played a variation of a sequential, binary public goods game, as described in the previous section. At the beginning of every round, participants were endowed with 10 tokens each, with an exchange rate of £0.50 per token. In every round, subjects were randomly positioned in the sequence (with equal chances of being allocated at each position) and were sequentially asked to make a binary decision between investing all of these tokens in either a common project account or a private account. At the end of every round, feedback was provided regarding the participant’s decision, the total group contribution, the individual share, and the payoff from the round. We employed the random payment mechanism, with one of the 10 rounds being randomly chosen to be played for real (different for each group).[11] The utilisation of both Experimental Currency Units (ECU) and the Random Payment Mechanism (RPM) has been a subject of ongoing debate in the literature. Drichoutis et al. (2015) show that there is no difference between using ECUs or cash in the lab; while at the same time, the use of ECUs leads to decisions closer to theoretical predictions. Conversely, empirical research comparing the different incentive mechanisms is inconclusive (Azrieli et al., 2018), while Azrieli et al. (2020) provide a theoretical justification that the RPM is incentive-compatible in almost all experiments. <|MaskedSetence|> The average payment was £17.8, including a show-up fee of £5. The experimental sessions lasted less than 45 minutes and payments were made via bank transfer.
**A**: Participants could retake the questionnaire until they passed. **B**: We follow the common practice that the literature in public goods adopts and we use both. **C**: The average age was 20.7 for those who provided details (4 subjects did not reply to this question).
CAB
CAB
BCA
CAB
Selection 4
Our work is closely related to recent work on delegation in financial decision making. Apesteguia et al. (2020), like the present paper, report an experiment where investors may decide to delegate financial decisions to their peers. They show that a substantial fraction of investors does so by either directly copying previously successful investors by the click of a button or manually implementing investment strategies which are similar to those of the most successful peers. <|MaskedSetence|> (2020) find that copy trading may lead to a substantial increase in risk taking. The present study extends the design of Apesteguia et al. (2020) by varying the complexity of the underlying task and the information investors receive about the experts. When our investors do not have access to information on experts’ decision quality, we confirm that a substantial fraction of subjects chooses to delegate to experts with previously high earnings.[1] Other studies besides Apesteguia et al. (2020) finding an important role for previous earnings in the choice of “experts” include Huck et al. <|MaskedSetence|> <|MaskedSetence|> (2010) and Huber et al. (2010).
**A**: (2002), Apesteguia et al. **B**: (1999), Offerman et al. **C**: Since success is mainly driven by luck and since investors who previously took on a lot of risk appear on top of the earning rankings, Apesteguia et al.
ACB
CBA
CBA
CBA
Selection 2
In this paper, we propose a framework for the sensitivity analysis of identification failures for linear estimators. The framework proceeds as follows. <|MaskedSetence|> <|MaskedSetence|> Third, we leverage work from the literature in statistics and distributionally robust optimization to estimate bounds implied by the restrictions from the second step. <|MaskedSetence|>
**A**: The framework is especially powerful when the implied restrictions correspond to a family of restrictions on the Radon-Nikodym derivative (the generalization of likelihood ratio to allow point masses) between the distribution of observables under the target distribution and under the distribution the observed data is drawn from. . **B**: First, we define a target distribution: a synthetic distribution over observed variables that would enable a standard estimator to point-identify the causal estimand of interest. **C**: Second, we consider a structural model that implies restrictions on the divergence between the target population and the population the practitioner observes.
BCA
BCA
BCA
CBA
Selection 1
When considering the single-peaked domain of preferences, if equal division is taken as a benchmark of fairness, we can single out agents that play a prominent role in the allocation process. <|MaskedSetence|> Similarly, in economies with excess supply, these agents demand more than or equal to equal division and thus withhold resources from the rest of the agents. We call these agents “simple”. <|MaskedSetence|> For example, the uniform rule (Sprumont, 1991) treats non-simple agents as equally as possible, and the sequential rules (Barberà et al., 1997) allow for an asymmetric treatment of non-simple agents, although imposing a monotonicity condition. We can describe a simple rule by means of a two-step procedure. In the first step, simple agents receive their peak (as their final allotment) and the rest receive (provisionally) equal division. <|MaskedSetence|>
**A**: We propose the class of “simple” rules that reward simple agents by fully satiating them and give each remaining agent an amount between his peak and equal division.[5] Several rules in the literature behave in this way. **B**: In the second step, the amounts assigned to non-simple agents are. **C**: In economies with excess demand, these agents demand less than or equal to equal division and thus free up resources for the rest of the agents.
CAB
CAB
CAB
CAB
Selection 4
<|MaskedSetence|> <|MaskedSetence|> If the reader is familiar with ergodic theory, skip Subsection 5.1. <|MaskedSetence|> Note that our strategy (philosophy) in this and the next sections stems from [Lyubich, 2012] and [Shen and van Strien, 2014] (these are quite readable expository articles on recent developments of unimodal dynamics). We stress that a deep result by Avila et al. (Proposition 6.3) theoretically supports our argument.
**A**: Our (numerical/theoretical) argument in this and the next sections uses ergodic theory. **B**: Our basic references for ergodic theory are classical [Collet and Eckmann, 1980], [Day, 1998], and [W. de Melo, 1993]. **C**: Here, we give a quick review of ergodic theory.
ACB
ACB
ACB
BCA
Selection 1
<|MaskedSetence|> Major updates (e.g. <|MaskedSetence|> Minor updates (from 1.1 to 1.2) correspond to new features. <|MaskedSetence|> For additional information, see https://semver.org/. .
**A**: Further, to simplify usage, require standardizes version numbers and also extracts version release dates whenever available.[3] Standardized version numbers are based on semantic versioning, which defines software versions using the major.minor.patch format. **B**: Patch updates (from 1.1 to 1.1.1) indicate bug fixes. **C**: from version 1.1 to 2.0) represent updates that might break compatibility.
ABC
ACB
ACB
ACB
Selection 3
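Since the row above spells out the major.minor.patch logic, a tiny sketch of how an update might be classified may help. This illustrates semantic versioning in general, not the actual implementation inside the require package described in the row:

```python
def classify_update(old, new):
    """Classify a semver transition as 'major', 'minor', or 'patch'.
    A missing component (e.g. '1.1') is treated as zero."""
    def parse(version):
        parts = [int(p) for p in version.split(".")]
        return (parts + [0, 0])[:3]  # pad to major.minor.patch

    major_old, minor_old, _ = parse(old)
    major_new, minor_new, _ = parse(new)
    if major_new != major_old:
        return "major"  # e.g. 1.1 -> 2.0: may break compatibility
    if minor_new != minor_old:
        return "minor"  # e.g. 1.1 -> 1.2: new features
    return "patch"      # e.g. 1.1 -> 1.1.1: bug fixes

print(classify_update("1.1", "2.0"))    # major
print(classify_update("1.1", "1.2"))    # minor
print(classify_update("1.1", "1.1.1"))  # patch
```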
This paper draws on the Mittelstadt et al. <|MaskedSetence|> Particularly important for the public are what that paper calls unfair outcomes, transformative effects and traceability. The former two (the “normative concerns”) have a direct effect on outcomes for members of the public whether through algorithms that discriminate in ways we judge morally wrong, or by the very use of these algorithms changing how government works (e.g. eroding the standards of transparency expected). Transparency is not just about understanding models though, it is also about holding human beings accountable for the consequences of these models, what Mittelstadt et al. (2016) calls traceability. Traceability is a necessary part of a process of accountability. Accountability can be seen as a multi-stage process consisting of providing information for investigation, providing an explanation or justification and facing consequences if needed (Olsen, 2017). The problem with accountability for machine learning systems is clear: they can obfuscate the exact nature of the failure and make it very difficult to obtain an explanation or justification. It can be difficult to know who should face consequences for problems or whether there should be consequences at all. Was anyone negligent, or was this more or less an unforeseeable situation (for example, the distribution of new data has shifted suddenly and unexpectedly) (Santoni de Sio & Mecacci, 2021; Matthias, 2004)? This means that models need to be well enough explained that policy-makers can understand them enough to be held accountable for the decision to use them. <|MaskedSetence|> <|MaskedSetence|> In the worst case scenario, this opaqueness could not only be an unfortunate side-effect of black-box models, but an intended effect, where complex methods are intentionally used to avoid responsibility for unpopular decisions (Zarsky, 2016; Mittelstadt et al., 2016).
**A**: (2016) survey of the ethical issues with algorithmic decision-making to map out these transparency issues. **B**: It also means that causal machine learning systems and the chains of responsibility for these systems need to be clear enough that responsibility can be traced from a mistake inside the model, to a human decision-maker. **C**: Finally, it also means that in cases where traceability is not possible due to the complexity of the analysis — a so called ‘responsibility gap’ — such analysis should only be used if the benefits somehow outweigh this serious drawback (Matthias, 2004).
ABC
ABC
ACB
ABC
Selection 1
Indeed, many economic and game theoretic analyses make use of the idea that once the rules of some mechanism (or game, or contract) have been announced, they are common knowledge. <|MaskedSetence|> <|MaskedSetence|> Under our new definition, common knowledge becomes non-vacuous in many of these settings. <|MaskedSetence|> We provide an agreement theorem (in the spirit of Aumann, 1976) for our definition, and apply it to recover agreement on posteriors in the above-discussed Geanakoplos and Polemarchakis (1982) setting with added timing frictions. .
**A**: Under our new definition, common knowledge in such circumstances is regained, resolving a paradox that has baffled economists and computer scientists for four decades, and rendering Wilson’s “presumption” (to use his own word) of common knowledge of the rules—and with it, all of the consequences that it entails—a precise mathematical truth. Common knowledge indeed has many appealing implications (see, e.g., Aumann, 1976; Milgrom and Stokey, 1982; Brandenburger and Dekel, 1987; Aumann and Brandenburger, 1995; Chwe, 1999) but the Halpern–Moses Paradox renders it impoverished or vacuous in various settings under its traditional definition. **B**: A closer inspection reveals that in realistic settings this might never be the case vis a vis the traditional definition of common knowledge, as even slight timing frictions foil the ability to harness common knowledge in the analysis. **C**: Despite our definition being more permissive, in Section 4 we demonstrate that it is still powerful enough for deriving appealing implications traditionally associated with common knowledge, and in particular such implications that are known not to follow from any finite level of nested knowledge.
BAC
BAC
BAC
BCA
Selection 2
Inferential procedures based on generated regressors are studied in linear and conditional quantile models as in Chen et al., (2021) (see, Pagan, (1984), Doran and Griffiths, (1983), Hoffman, (1987), Oxley and McAleer, (1993), Dufour and Jasiak, (2001) and Chen et al., (2023)), although in the case of risk measures such as VaR and CoVaR a conditional quantile specification is required. The conventional approach for estimating these risk measures is to employ stationary quantile regressions (as in Adrian and Brunnermeier, (2016) and Härdle et al., (2016)). <|MaskedSetence|> In this study, we extend the econometric specifications for the joint estimation of the pair (VaR, CoVaR) as discussed in the study of Katsouris, 2023b (see also Katsouris, (2021); Katsouris, 2023a), to the case of nonstationary quantile predictive regression models (see, Lee, (2016)), which implies using a local-to-unity parametrization for modelling the unknown form of persistence. Specifically, Katsouris, 2023b considers the statistical estimation for covariance-type matrices with tail forecasts using node-specific quantile predictive regressions, extending the framework of Katsouris, (2021) who discusses identification and estimation issues of conditional quantile risk measures. <|MaskedSetence|> <|MaskedSetence|>
**A**: Thus, a suitable estimation approach under. **B**: Several studies in the literature consider the estimation of risk measures using stationary time series environments either with parametric (e.g., see He et al., (2020)) or semiparametric approaches (e.g., see, Wang and Zhao, (2016)). **C**: However, both of these studies operate under the assumption of time series stationarity, thereby relying on OLS-based estimators (see, Patton et al., (2019)).
BCA
CAB
BCA
BCA
Selection 4
Gavrilova et al. <|MaskedSetence|> <|MaskedSetence|> Further, their method requires splitting the data and estimating completely separate models to estimate the treatment effect of the treated cohort at each time 2, 3, …, T, while FETWFE allows for the full sample to be used to jointly estimate all treatment effects simultaneously. <|MaskedSetence|> (None of the above works mentions fusion penalties.) Our work also contributes to the much broader topic of estimating conditional average treatment effects. In Section 4 we mention some prominent works in this stream of literature.
**A**: Their approach allows for arbitrary T, though they focus on the case with only one treated cohort that starts an absorbing treatment at time t = 2, providing a brief sketch as to how their method might be extended to allow for staggered adoptions. **B**: Indeed, a central idea of our method is that the fusion penalties we use allow FETWFE to borrow strength across cohorts and time in order to improve estimation efficiency. **C**: (2023) propose an approach using random forests that is somewhat more flexible in its estimands.
CAB
CAB
ACB
CAB
Selection 1
As was discussed in Sec. <|MaskedSetence|> However, there is also an obvious trend towards moving down from the straight line at the tails’ ends. Since single-year HPI also have an order of magnitude fewer points than HP, we wanted to ascertain whether this trend was significant. Towards this end we studied a combined multi-year distribution of HPI for years 2000-2022, which contained 201,040 data points. <|MaskedSetence|> 11 and 12 below is that the tails of the combined HPI are more aligned with the finite upper limit of HPI and, accordingly, with mGB distribution. <|MaskedSetence|>
**A**: The main result, as seen in Figs. **B**: Of course, such upper limit of the variable does not have to be fixed – it may change as HPI is updated annually.. **C**: 3.2.1, tails of single-year HPI align better with power law than those of HP.
CAB
CAB
CAB
ACB
Selection 3
The structure of the paper is as follows. Section 2 formalizes our model and provides a method for obtaining the asymptotic distribution, or upper bounds thereof, of minimax test statistics. <|MaskedSetence|> <|MaskedSetence|> Proofs are relegated to Section B in the appendix. 2. <|MaskedSetence|>
**A**: The main focus of the paper is on single hypothesis tests, but Section A of the appendix extends our distributional results uniformly over a class of parameter spaces and underlying probability distributions. **B**: Section 3 shows that, under general conditions, critical values can be obtained for those distributions using the bootstrap. **C**: Model and Asymptotic Distribution.
BAC
BAC
BAC
ACB
Selection 2
x_t, and not to the pre-selected variables, z_t. <|MaskedSetence|> In the first stage the common effects of z_t are filtered out by regressing y_{t+h} and x_t on the pre-selected variables z_t and saving the residuals e_{y.z} and e_{xj.z}, j = 1, 2, .. <|MaskedSetence|> In the second stage Lasso is applied to these residuals. <|MaskedSetence|>
**A**: A proof. **B**: , K. **C**: The above optimization problem can be solved in two stages.
CBA
BAC
CBA
CBA
Selection 1
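The two-stage procedure in the row above (partial out the pre-selected z_t by OLS, then run Lasso on the residuals) is concrete enough to sketch. The following is a minimal illustration with scikit-learn on simulated data; it is not the authors' code, and the tuning choice (LassoCV with 5 folds) is an assumption:

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

def two_stage_lasso(y, X, Z):
    """Stage 1: partial out the pre-selected variables Z from y and from
    every column of X by OLS, keeping the residuals e_{y.z} and e_{xj.z}.
    Stage 2: apply Lasso to the residuals, so that the penalty touches
    only the candidate regressors in X, never the pre-selected Z."""
    e_y = y - LinearRegression().fit(Z, y).predict(Z)
    E_x = X - LinearRegression().fit(Z, X).predict(Z)
    return LassoCV(cv=5).fit(E_x, e_y)

# Simulated example: T = 200 observations, K = 50 candidates, 3 pre-selected.
rng = np.random.default_rng(1)
Z = rng.normal(size=(200, 3))
X = rng.normal(size=(200, 50)) + Z @ rng.normal(size=(3, 50))
y = Z @ np.array([1.0, -0.5, 0.2]) + X[:, 0] + rng.normal(size=200)
print(two_stage_lasso(y, X, Z).coef_[:5])  # first coefficient should stand out
```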
5. Conclusion. In committee-sized elections (5-21 voters), MLPs can learn to vote strategically on the basis of limited information, though the profitability of doing so varies significantly between different voting methods. <|MaskedSetence|> There are a number of natural extensions for future work, including manipulation by a coalition of multiple voters, as well as different probability models for generating elections. <|MaskedSetence|> <|MaskedSetence|> To overcome this limitation, we plan to develop a reinforcement learning approach to learning to manipulate.
**A**: This serves as a proof of concept for the study of machine learnability of manipulation under limited information. **B**: However, further research is needed on other questions: What if all agents in the election strategize? And what is the social cost or benefit of the learned manipulations? Finally, one limitation of the classification approach in this paper is that it is infeasible to apply to more than 6 candidates. **C**: Our code is already set up to handle these extensions, which only require more compute.
BAC
ACB
ACB
ACB
Selection 2
After 10 rounds, all subjects are informed that experts now have the opportunity to invest into their diagnostic precision (Figure 1). There are two treatments. <|MaskedSetence|> In Algorithm, experts can pay 10 Coins each round to rent an algorithmic decision aid that increases the expert’s maximum diagnostic precision to 90% if used correctly. For the algorithmic decision aid, consumers also learn that experts are not forced to use the system, but can choose to ignore it. All subjects know that consumers pay for the investment by automatically paying 10 Coins more per treatment if they choose to approach an investing expert. <|MaskedSetence|> Experts first decide whether they want to invest, then choose their price vector, and proceed to diagnosis and treatment. During the diagnosis, investments allow experts to utilize four input numbers, and those in Algorithm can forego the decision aid with the click of a button. <|MaskedSetence|> Otherwise, nothing changes. Upon completing the credence goods experiment, subjects proceed to a short post-experimental questionnaire and answer a battery of demographic questions as well as a question about their risk attitudes (Dohmen et al., 2011). [Figure 1 timeline: Nature draws problem h → Experts make investment decision d → Experts set prices P → Consumers observe d and P → Consumers choose Expert or σ → Experts receive diagnostic signal k → Experts choose HQT or LQT → Payoffs are realized.]
**A**: Consumers observe each expert’s investment decision. **B**: In Skill, subjects learn that experts can pay 10 Coins each round to increase their diagnostic precision to 90%. **C**: Then, subjects complete another 15 rounds.
ABC
BCA
BCA
BCA
Selection 2
The present work provides an economic rationale for the support of data-based services where timeliness is a relevant quality requirement by means of 5G networks. More specifically, we focus on data-based services the quality of which is related to the Age of Information, and we assess two alternatives for the support of this sort of services by means of a 5G network: one that is based on the eMBB service type, and one that is based on the URLLC service type. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Each SP uses a set of sensors that obtain the sensed data by means of a 5G network operated by the same SP: SP1’s network provides a URLLC service, and SP2’s network provides an eMBB service..
**A**: This assessment is conducted in a duopoly scenario. Fig. 1 depicts the scenario under study. **B**: The users obtain a utility which depends on the timeliness of the data used by the service. **C**: We assume that there are two Service Providers (SPs) that provide a service to users, e.g., by means of an app.
ACB
CBA
ACB
ACB
Selection 4
<|MaskedSetence|> One of the cornerstones of this approach relies on insights that can be traced back to Dudley in the 1960s ([19]) for Gaussian processes. <|MaskedSetence|> <|MaskedSetence|> Unfortunately, without restrictions on the mixing coefficients, this approach is not feasible, as this norm may not even be well-defined. In view of this, the current paper proposes a new proof technique which employs a family of norms, rather than just one, to measure the complexity of the class ℱ. To accommodate this new feature, we introduce a measure of complexity inspired by Talagrand’s measure ([25, 26]) that allows for a family of norms.
**A**: Unfortunately, the approach utilized in [17] and related papers cannot be applied to establish maximal inequalities when the aforementioned restrictions on the mixing coefficients do not hold. **B**: In [17], the authors use this insight to construct a “natural” norm, which turns out to depend on the β-mixing coefficients. **C**: Dudley’s work states that the “natural” topology to measure the complexity of the class of functions ℱ is related to the variation of the stochastic process.
BCA
ACB
ACB
ACB
Selection 4
<|MaskedSetence|> Even as data and computational methods become increasingly sophisticated and widely available, this problem of discovery of which metrics or variables to analyze continues to permeate the natural sciences, social sciences, and engineering. In this work, we’ve explored one avenue for discovery of metrics in a setting of information asymmetry where relevant environmental variables are unknown to a principal, but known to an evaluated agent. There are other possibilities for formulating the question of who holds relevant information, and when they would be willing to share it. For example, one could model incentives for third-party individuals to offer new metrics, or perhaps bi-directional information transfer where a principal and agent both hold distinct information. The key element of our work that may be worth retaining in alternative information design frameworks is the property that the variable itself may be unknown to the information receiver. Going beyond information design, our broader motivation is to apply these insights to help design mechanisms that would benefit all players. For example, if sharing information would increase total welfare, but decrease an agent’s private utility, then perhaps a wealth transfer mechanism exists to adequately compensate the agent for revealing the information. <|MaskedSetence|> <|MaskedSetence|>
**A**: More broadly, this work was motivated by analyzing a mechanism by which one might discover unknown unknowns. **B**: Finally, a particularly interesting avenue for future work would be to consider the possibility of cooperation and competition between multiple agents in this agency game with information transfer. . **C**: Or, perhaps there exist auction or tournament frameworks that may further encourage information sharing.
BCA
ACB
ACB
ACB
Selection 2
The three main results of this paper are two positive ones, with a negative one sandwiched in between. In the first, Lemma 3.2, we discover that a finite collection of ranked experiments identifies the agent’s undominated actions (up to relabeling and duplication) as well as the regions of beliefs on which each such action is optimal. The second result, Proposition 3.5, reveals an important indeterminacy about the agent’s utility function that cannot be resolved with only ordinal rankings of experiments. The third result, Theorem 3.7, states that an additional finite collection of utility differences between experiments (the difference in utils from observing one experiment versus another) identifies the agent’s utility function up to a decision-irrelevant payoff. These results show that an agent’s preferences for information reveal a significant amount about her decision problem. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
**A**: The rationale behind these results is that expected utility maximization imposes a significant amount of structure on how an agent values information. . **B**: An additional finite number of utility differences yields the agent’s exact value for information, for any comparison of information structures. **C**: A finite amount of ordinal information alone is enough to completely characterize the agent’s behavior from observing any information structure.
CBA
CBA
CBA
ABC
Selection 3
The rest of this paper proceeds as follows. In Section 2 we introduce our setup and some preliminary constructions. In Section 3 we provide our main result and discuss its proof. <|MaskedSetence|> (2017) and Turansick (2022). <|MaskedSetence|> <|MaskedSetence|>
**A**: In Section 4 we introduce our identifying restriction and compare it to the restrictions of Apesteguia et al. **B**: Finally, we conclude with a discussion of the related literature in Section 6. **C**: Section 5 provides a complete classification of all maximal identified models for the case of n = 4 alternatives.
ACB
ACB
ACB
CBA
Selection 1
<|MaskedSetence|> MSE predictions from 1,843 observations using 5-fold cross validation. The second column of the table considers neural network models, as depicted in Figure 3.2. On their own, the neural networks have quite high mean squared error rates. <|MaskedSetence|> The neural network with only MLS attributes and no image data has an MSE of 0.1932. When we consider neural networks based only on the image data the mean of the MSEs is 0.6649, with considerable dispersion between the models. <|MaskedSetence|> Interestingly, when we add MLS attributes to the image data the best MSE improves considerably, and the “tout ensemble” model is no longer the winner. Here, the winning model is the pair of MobileNet and Inception encoders. The “tout ensemble” MSE is still respectable at 0.2755, compared to a mean of 0.5857 and a minimum of 0.1259. Conversely, the worst-performing model has a far worse MSE compared to neural networks with no MLS attributes, 5.8715 versus 1.7296. This corresponds well to the results in Table 4.1 where some of the worst models have negative coefficients when the MLS attributes are combined with the image data in forming the neural network forecasted price..
**A**: The best MSE from an image-only neural network comes from the “tout ensemble”. **B**: The worst convoluted (with or without attributes) model has a better MSE than the best neural network. **C**: Notes: Bolded numbers represent the MSE associated with the “tout ensemble” model.
CBA
CBA
CBA
CBA
Selection 2
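The comparisons in the row above rest on mean squared errors computed under 5-fold cross-validation over 1,843 observations. A generic sketch of that computation (illustrative only; the authors' image encoders, MLS attributes, and data are not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def cv_mse(model, X, y, folds=5):
    """Mean squared error under k-fold cross-validation."""
    scores = cross_val_score(model, X, y,
                             scoring="neg_mean_squared_error", cv=folds)
    return -scores.mean()

# Hypothetical stand-in for a price model on 1,843 observations.
rng = np.random.default_rng(2)
X = rng.normal(size=(1843, 10))
y = X @ rng.normal(size=10) + rng.normal(scale=0.5, size=1843)
print(round(cv_mse(LinearRegression(), X, y), 4))
```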
<|MaskedSetence|> However, many scholars have studied the phenomenon from various aspects to restore the world’s economic status and the health status of people. <|MaskedSetence|> As a result, the economic and health consequences of COVID-19 were more damaging and ubiquitous than those of the other two outbreaks. Reference (6) disentangled these economic and health effects, and they argued that health should be prioritized over material well-being. They provided an example to support their claim pointing out the fact that the expenditure on health does not say anything about the outcomes on health (21, 6). <|MaskedSetence|> Reference (9) examined the nexus between job loss and mental health. They found that the effects of being unemployed are different for every individual. Some were more anxious and stressed than others (9)..
**A**: This section summarizes such studies. There were two more main coronaviruses (SARS-CoV in 2002 and MERS-CoV in 2012) before COVID-19, but they could not spread as much as the COVID-19 pandemic did. **B**: Given the recent emergence of the COVID-19 pandemic, the body of literature related to this topic is relatively limited. **C**: Reference (8) reported their results in terms of both health and economic outcomes.
BAC
BAC
BCA
BAC
Selection 1
Our method proceeds in several steps. <|MaskedSetence|> We fix a particular adoption time and use the average pretreatment outcomes for units with this adoption time, along with average outcomes for units with later adoption times, to estimate the contemporaneous treatment effect with the SDiD estimator. <|MaskedSetence|> We repeat this exercise for all adoption times and then move one step forward to estimate the average treatment effect one period after the adoption. <|MaskedSetence|> The key feature of our method is that at each step, we use the previously constructed estimates to build the new ones. We analyze the properties of our estimator in a model with interactive fixed effects, which has a long tradition in econometrics of panel data (Holtz-Eakin.
**A**: As a preliminary step, we average all outcomes in a given period for units that share the same adoption date. **B**: We proceed sequentially, using the estimates to impute the missing average counterfactuals. **C**: We then use the resulting estimate to impute the missing average counterfactual outcome for the treated units.
ACB
ACB
ACB
ACB
Selection 3
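The sequencing described in the row above (average within cohort-period cells, estimate the contemporaneous effect, impute the missing average counterfactual, move one horizon forward) can be sketched schematically. The SDiD fit itself is beyond this sketch, so a plain difference-in-differences step stands in for it; all names and the toy data are hypothetical:

```python
import pandas as pd

def cohort_period_means(df):
    """Preliminary step: average the outcome within (adoption date, period)
    cells, since the estimator operates on these cohort averages."""
    return df.groupby(["adopt_time", "period"])["y"].mean().unstack()

def naive_did_step(means, cohort, period):
    """Stand-in for the SDiD fit: a plain DiD of the treated cohort against
    the average of cohorts adopting strictly after `period`, comparing
    `period` with the cohort's last pretreatment period."""
    pre = cohort - 1
    controls = means.loc[[a for a in means.index if a > period]]
    return (means.loc[cohort, period] - means.loc[cohort, pre]) - \
           (controls[period].mean() - controls[pre].mean())

def sequential_effects(means, step=naive_did_step):
    """Move forward one horizon at a time; each estimated effect is used to
    impute the missing average counterfactual before the next step."""
    means, effects = means.copy(), {}
    for h in range(len(means.columns)):
        for a in means.index:
            t = a + h
            if t not in means.columns or not any(c > t for c in means.index):
                continue
            tau = step(means, a, t)
            effects[(a, h)] = tau
            means.loc[a, t] -= tau  # imputed counterfactual average
    return effects

# Toy data: cohorts adopting at t = 2 and t = 3, observed over t = 0..3.
df = pd.DataFrame({"adopt_time": [2] * 4 + [3] * 4,
                   "period": [0, 1, 2, 3] * 2,
                   "y": [1.0, 1.1, 2.2, 2.4, 1.0, 1.1, 1.2, 2.3]})
print(sequential_effects(cohort_period_means(df)))
```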
There are shocks in trucking where drivers and employer interests are aligned. Equipment maintenance is one example. Long-haul truck drivers live out of their vehicles so that keeping vehicles in good condition contributes substantially to driver welfare. <|MaskedSetence|> Additionally, because drivers are often paid piece-rate by mileage, any equipment failures that prevent vehicle operation cuts into the drivers’ pay. <|MaskedSetence|> <|MaskedSetence|> Second, severe equipment problems may result in safety violations, negatively affecting a firm’s safety record and thus their ability to do business..
**A**: First, equipment-related problems that prevent a vehicle from operating hurt firms through reduced operational efficiency and potential late-delivery penalties from shippers. **B**: Drivers therefore have the incentive to immediately address and report equipment-related problems. Correspondingly, employers have the incentive to immediately respond and take substantive action to fix equipment-related problems. **C**: Drivers are liable for any maintenance problems that result in potential safety hazards.
CBA
CBA
ABC
CBA
Selection 2
6 Empirical Application. We apply the proposed approach to the data from Project STAR (e.g., Krueger (1999); Gerber et al. <|MaskedSetence|> <|MaskedSetence|> We use data of 1,877 students who were not assigned to regular-size classes without a full-time teacher aide in kindergarten.[10] We do not consider allocation to a regular-size class without a teacher aide as it should not be superior to either the regular-size class with a teacher aide or the small-size class without one for any student. <|MaskedSetence|> As they progressed to grade 1, the students were supposed to be randomly shuffled between the two types of classes. However, because some students selected a class type themselves, the experimental allocation was not entirely random (see, e.g., Ding and.
**A**: (2001); Krueger and Whitmore (2001); Ding and Lehrer (2010); Chetty et al. **B**: Among these students, 702 were randomly assigned to regular-size classes with a teacher aide, while the others were randomly assigned to small-size classes without a teacher aide in kindergarten (labeled by grade K). **C**: (2011)), where we study the optimal allocation of students to regular-size classes with a full-time teacher aide and small-size classes without a full-time teacher aide in their early education.
ACB
BCA
ACB
ACB
Selection 3
In their seminal paper, Gale and Shapley (1962) proved that every marriage model admits a stable matching and also described an algorithm that finds such a matching. <|MaskedSetence|> In the first round, every woman makes a proposal to the man she prefers most; every man who receives proposals from different women chooses his most preferred woman and gets temporarily matched with her, while all the other women who proposed to him are rejected. In each subsequent round, each unmatched woman makes a proposal to her most-preferred man to whom she has not yet proposed (regardless of whether that man is already matched), and each man who receives some proposals gets matched to the woman he prefers most among the ones who proposed to him and the woman he has been already matched to, if any. In particular, if he has a provisional partner and he prefers another woman to her, he rejects the provisional partner who becomes unmatched again. <|MaskedSetence|> The role of the two groups of individuals can be reversed with men proposing matches and women deciding whether to accept or reject each proposal. In general, the stable matchings produced by the two versions of the algorithm are different. Gale and Shapley also proved that the stable matching generated by the algorithm when women propose is optimal for all the women, in the sense that it associates each woman with the best partner she can have among all the stable matchings. For this reason, it is called woman-optimal stable matching. However, it is the worst stable matching for the other side of the market: indeed, it associates each man with the worst partner he can get among all the stable matchings.[2] This property is a consequence of a more general result due to Knuth (1976). <|MaskedSetence|> Symmetrically, the man-optimal stable matching is the stable matching which results from the algorithm when the men propose: it is the best stable matching for men and the worst stable matching for women. The purpose of matching theory is, however, more general than determining a set of matchings for a specific marriage model. In fact, its objective is to determine a method able to select a sensible set of matchings for any conceivable marriage model, namely a correspondence that associates a set of matchings with each preference profile. Such correspondence is called a matching mechanism. Hence, a matching mechanism operates as a centralized clearinghouse that collects the preferences of all market participants and provides a set of matchings; whether those matchings can be determined by using efficient algorithms is an important question for practical applications that has received considerable attention. Several properties may be imposed on a matching mechanism.
**A**: See, also, Roth and Sotomayor (1990), Theorem 2.13 and Corollary 2.14. **B**: This process is repeated until all women have been matched to a partner. **C**: The algorithm involves more rounds.
CBA
BCA
CBA
CBA
Selection 4
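The deferred-acceptance procedure described in the row above is directly implementable. Below is a compact sketch of the woman-proposing version; the data layout and names are mine, not Gale and Shapley's notation:

```python
def deferred_acceptance(proposer_prefs, receiver_prefs):
    """Gale-Shapley deferred acceptance with women as proposers.

    proposer_prefs[w]: list of men in w's order of preference.
    receiver_prefs[m]: list of women in m's order of preference.
    Returns the proposer-optimal stable matching {woman: man}."""
    # Lower rank = more preferred; used by men to compare proposals.
    rank = {m: {w: i for i, w in enumerate(prefs)}
            for m, prefs in receiver_prefs.items()}
    next_choice = {w: 0 for w in proposer_prefs}   # next man to propose to
    engaged_to = {}                                # man -> provisional woman
    free = list(proposer_prefs)
    while free:
        w = free.pop()
        m = proposer_prefs[w][next_choice[w]]
        next_choice[w] += 1
        if m not in engaged_to:
            engaged_to[m] = w                      # m accepts provisionally
        elif rank[m][w] < rank[m][engaged_to[m]]:
            free.append(engaged_to[m])             # old partner is rejected
            engaged_to[m] = w
        else:
            free.append(w)                         # proposal rejected
    return {w: m for m, w in engaged_to.items()}

women = {"w1": ["m1", "m2"], "w2": ["m1", "m2"]}
men = {"m1": ["w2", "w1"], "m2": ["w1", "w2"]}
print(deferred_acceptance(women, men))  # woman-optimal stable matching
```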
<|MaskedSetence|> The “random coefficients” refer to the heterogeneous effects that are the focus of the analysis, such as the effects of neighborhoods, or the worker and firm effects. <|MaskedSetence|> In both cases, the model involves a very large number of such covariates (e.g., many thousands of firm and worker indicators or neighborhood exposures). The primitive parameters of the RC model are the means and variances of the coefficients (e.g., the neighborhood effects, or the worker and firm effects), as well as the variance of the errors. <|MaskedSetence|> They satisfy first and second moment conditions that we present. These moment conditions, which remain valid absent normality, build on moment conditions previously derived for panel data settings (e.g., Chamberlain, 1992, Arellano and Bonhomme, 2012). .
**A**: The model that underlies most applications to these settings is a linear normal random coefficients (RC) model. **B**: Those coefficients are associated with specific covariates: in Chetty and Hendren (2018b) the effect of a neighborhood is simply the coefficient of the exposure to that neighborhood (i.e., of how long the family stayed in that neighborhood), whereas in Abowd, Kramarz, and Margolis (1999) the effect of firm j is the coefficient of the j-th firm indicator. **C**: All of these parameters are potentially functions of all the covariates.
ABC
BAC
ABC
ABC
Selection 4
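In symbols, the linear normal RC model sketched in the row above can be stated as follows; the notation is chosen here for illustration rather than taken from the cited papers, with the row's remark that the primitives may depend on the covariates made explicit:

```latex
% Linear normal random coefficients model: unit i's outcome loads
% heterogeneous coefficients b_i on covariates x_i (e.g., neighborhood
% exposures or firm indicators). The primitives mu, Sigma, sigma^2
% may in general be functions of the covariates.
\begin{align*}
  y_i &= x_i^{\prime} b_i + \varepsilon_i, \\
  b_i \mid x_i &\sim \mathcal{N}\bigl(\mu(x_i), \Sigma(x_i)\bigr), \qquad
  \varepsilon_i \mid x_i \sim \mathcal{N}\bigl(0, \sigma^{2}(x_i)\bigr).
\end{align*}
```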
The world is currently undergoing unprecedented changes, with increasingly complex domestic and international environments marked by heightened levels of instability and uncertainty. The research on low-frequency factors in GARCH-MIDAS is no longer limited to macroeconomic variables, but extends to diverse uncertainties. Su et al. (2017) use the news implied volatility index to explore the effect of news uncertainty on U.S. financial market volatility. Li et al. <|MaskedSetence|> (2023). Based on the GARCH-MIDAS framework, Wu and Liu (2023) and Li et al. (2023b) provide new insights into the linkage between climate policy uncertainty and volatility in green finance markets. <|MaskedSetence|> (2017) examine the roles of speculation, fundamentals, and uncertainties in predicting volatility in the crude oil market, demonstrating that EPU indices are major determinants of crude oil market volatility. Fang et al. (2018), Dai et al. (2022a) and Raza et al. (2023) investigate the effects of global economic policy uncertainty on volatility in gold, crude oil, and precious metals markets, respectively, and prove the remarkable predictive power of EPU. Zhang et al. (2023) employ a modified GARCH-MIDAS model to reveal the heterogeneous influences of climate policy uncertainty on fluctuations in crude oil and clean energy markets. Wang et al. (2023) analyze the performance of different weather variables in forecasting soybean market volatility, finding that weather indicators can provide valuable information for predicting soybean volatility. In recent years, frequent geopolitical conflicts have brought considerable uncertainty to the global economy. Geopolitical risk, as a low-frequency macro-factor, has aroused wide concern among scholars, with its impact on various markets becoming a hot topic. Li et al. (2023c) and Segnon et al. (2024) construct extended GARCH-MIDAS models to investigate the role of geopolitical risk in forecasting stock market volatility. Liu et al. (2019) examine the predictive effect of geopolitical risk on oil volatility, highlighting its potential to provide valuable insights into oil fluctuations. Li et al. (2022) endorse this opinion and emphasize the significant impact of geopolitical risk on oil market volatility. Liang et al. (2021) compare the explanatory power of diverse uncertainty indices for the volatility of natural gas futures, showing stronger forecasting ability of geopolitical risk and stock market volatility index. <|MaskedSetence|> (2021) analyze the impact of geopolitical uncertainty on the volatility of energy commodity markets, and conclude that the noteworthy positive influence primarily transmits through the threat of adverse geopolitical events. Conversely, Zhang et al. (2024) discover a significantly negative effect of geopolitical acts on the Chinese energy market. Gong and Xu (2022) explore the dynamic linkages among various commodity markets and further elucidate the influence of geopolitical risk on the interconnectedness between them. Abid et al. (2023) utilize the GARCH-MIDAS model to discuss the roles of geopolitical shocks on agricultural and energy prices, revealing that all commodity markets respond to geopolitical shocks, albeit with varying performance across different commodity types..
**A**: Liu et al. **B**: (2023a) pay attention to economic policy uncertainty (EPU) and emphasize its effectiveness as a predictor of stock market volatility, which is consistent with the findings of Salisu et al. **C**: Wei et al.
BCA
BAC
BCA
BCA
Selection 1
<|MaskedSetence|> Let’s start with the encounter in the top center. The high-urgency lila car has a current karma account of 9 and bids 4, thus outbidding the low-urgency blue car whose karma account is also 9 but bids 2. <|MaskedSetence|> Let’s move along clockwise. <|MaskedSetence|> Now the orange karma goes up by 4 to 13. Let’s move along clockwise. Orange now has high urgency and bids 4, thus outbidding low-urgency lila who has 5 karma left and bids 1. Thus the circle closes, et cetera, et cetera. .
**A**: Figure 1 illustrates how karma works out to the benefit of everyone by way of an example involving three intersection encounters. **B**: Blue now happens to have high urgency and bids 4, thus outbidding and getting priority over orange whose karma account is nine and bids 3. **C**: As a result, the blue car’s karma account goes up by 4 to 13.
ACB
ACB
CAB
ACB
Selection 4
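The arithmetic of the three encounters in the row above is consistent with a simple transfer rule: the higher bidder gets priority and pays the bid to the yielding driver (lila's 9 falls to 5 while blue's 9 rises to 13). A toy sketch under that assumed rule:

```python
def karma_encounter(karma, bids, a, b):
    """Resolve one encounter: the higher bidder gets priority and transfers
    the bid to the yielding driver (ties would need a tie-breaking rule)."""
    winner, loser = (a, b) if bids[a] > bids[b] else (b, a)
    karma[winner] -= bids[winner]
    karma[loser] += bids[winner]
    return winner

karma = {"lila": 9, "blue": 9, "orange": 9}
karma_encounter(karma, {"lila": 4, "blue": 2}, "lila", "blue")      # lila wins
karma_encounter(karma, {"blue": 4, "orange": 3}, "blue", "orange")  # blue wins
karma_encounter(karma, {"orange": 4, "lila": 1}, "orange", "lila")  # orange wins
print(karma)  # the currency circulates: everyone is back at 9
```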
The X Community Notes system is innovative due to its open-source and crowdsourced design. <|MaskedSetence|> Our findings align with existing experimental research on warning labels and fact-checking, demonstrating that adding context to social media posts collaboratively can reduce the spread of misinformation by almost half. This effect is likely a lower bound, considering the significant increase in the number of deleted tweets for which we lack the diffusion process. <|MaskedSetence|> <|MaskedSetence|>
**A**: The availability of high-frequency tweet sharing combined with information about the exact timing of a tweet publication, and the time at which a Note is made visible on the platform allows us - to the best of our knowledge - to conduct the first causal estimates of a content moderation system on real data from a large online platform. **B**: Despite this substantial post-treatment effect, the overall impact on the spread of tweets is relatively modest (-16.34% to -20.75% for retweets). **C**: This indicates that the speed of content moderation as it currently stands may not be adequate to significantly limit the spread of misleading content on social media platforms.
ABC
CAB
ABC
ABC
Selection 3
Games of optimal investment in both stochastic and deterministic settings are extensively covered in the Economics literature, and a comprehensive overview of models and results can be found in Vives [20]. Specifically, mean-field problems with Cournot competition have garnered interest in recent literature; see Chan and Sircar [10] for an insightful overview. Graber and Bensoussan [14] study the existence (and, under certain conditions, uniqueness) of a solution to a system of partial differential equations (PDEs) (namely, the Hamilton-Jacobi-Bellman (HJB) and Fokker-Planck equations) associated with a mean-field game involving Bertrand and Cournot competition among a continuum of players. <|MaskedSetence|> <|MaskedSetence|> [1]. <|MaskedSetence|> [6] considers stationary discounted and ergodic mean-field games of singular controls motivated by irreversible investment and provide existence and uniqueness results, as well as relations across the two classes of considered problems. To the best of our knowledge, the existence and uniqueness of the (nonstationary) mean-field equilibrium for a mean-field model of optimal investment with an isoelastic demand function, as discussed in this paper, is presented here for the first time. .
**A**: An optimal transport perspective on Cournot-Nash equilibria is explored in Acciaio et al. **B**: In Graber and Sircar [13], existence and uniqueness of the master equation associated with a mean-field game of controls with absorption are proven, while Chan and Sircar [9] delve into dynamic mean-field games with exhaustible capacities and interactions akin to Cournot and Bertrand competitions. **C**: Finally, Cao et al.
BAC
BCA
BAC
BAC
Selection 4
The use of these methodologies in economic applications and, in particular, for emerging markets has been lagging. In econometrics, most of what is known as structural breaks methodology has been “offline,” that is, it uses historical time series to find regime shifts (see, e.g., Hamilton, 2016; Bai and Perron, 1998, 2003). The focus has been on obtaining consistent tests for the number and location of regime shifts in the available data, rather than on sequential monitoring for change points. The conventional econometric methodology often assumes the number of regime shifts to be known or to be within a fixed region. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Known as concept drift or data drift detection, this field of machine learning recognizes that the conditional distribution of target variable given explanatory variables may change, drastically affecting predictive capacity of a machine learning engine trained on data from that distribution (see, e.g., Hoens, 2012; Chen et al., 2022). This brings to the fore the recent successes of advanced machine learning methods such as random forests and artificial neural networks in estimating flexible nonparametric conditional distributions with high dimensions of the conditioning set (see, e.g., Breiman, 2001; Pospisil and Lee, 2018; Friedberg et al., 2021). .
**A**: Comparisons of performance between online and offline methods requires re-designing the offline methods to make them applicable in an online environment for which they were not designed. **B**: Therefore, there is little guidance in the literature on choosing between these methods. **C**: Moreover, traditionally economics has focused on linear models of information transfer, represented in such concepts and models as Granger (1969) causality, vector autoregression (VAR) and dynamic conditional covariance (DCC) models, which use primarily parametric specifications and may suffer from large misspecification biases. Meanwhile, the field of machine learning has seen remarkable advances in areas related to the task of change-point detection.
ABC
ABC
ABC
CAB
Selection 1
Choice of elasticities. We take the trade elasticity from the value reported in the handbook chapter by Head and Mayer (2014) and set it to 5.03. This is also the trade elasticity we used to express the effect of the Iron Curtain in tariff equivalent terms. The supply elasticity depends on the importance of intermediates in the production function. We follow the strategy of Campos et al. (2023) and choose the supply elasticity as the midpoint of the supply elasticities implied by the 10th and 90th percentiles of the distribution of the range of intermediate shares for the sample of countries in the KLEMS database, as reported by Huo et al. (2023). This yields a supply elasticity of 1.24, which is slightly higher than the value of 1.0 used by Alvarez and Lucas (2007), but lower than 3.76, the value favored by Eaton and Kortum (2002) in their work. <|MaskedSetence|> For the simulations, we extrapolated domestic trade by regressing log(GDP) on a linear trend for East Germany and imputing the data with fitted values. <|MaskedSetence|> <|MaskedSetence|>
**A**: For this country, we also completed the bilateral data after 1974 with imputed trade flows. **B**: Appendix A explains how data are imputed for this country. **C**: Since the model is static, we solve the model for a counterfactual equilibrium in each year starting in 1950.[11] East Germany is missing domestic trade for the years before 1954.
CAB
BAC
CAB
CAB
Selection 4
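The imputation step mentioned in the row above (regress log(GDP) on a linear trend and fill in fitted values) is easy to illustrate. The numbers below are hypothetical, not the East German series used by the authors:

```python
import numpy as np

def impute_log_trend(years, values, target_years):
    """Fit log(value) on a linear time trend by OLS and return fitted
    values for the years to impute."""
    slope, intercept = np.polyfit(years, np.log(values), deg=1)
    return np.exp(intercept + slope * np.asarray(target_years))

# Hypothetical observations; the actual series is not reproduced here.
obs_years = np.array([1950, 1955, 1960, 1965, 1970])
obs_gdp = np.array([51.0, 60.0, 72.0, 85.0, 100.0])
print(impute_log_trend(obs_years, obs_gdp, [1975, 1980]).round(1))
```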
Dynamic programming has a wide range of applications in economics including the savings problem, economic growth, job search, business cycles, oligopoly equilibrium or recursive contracts [Ljungqvist and Sargent, 2018]. Particularly important for the present research is the problem of a competitive firm that maximises the inter-temporal value of its production with an adjustment cost of the rate of output: in order to obtain a greater return (or value) in the future, the firm must invest (or set aside) a part of its current production incurring a quadratic cost, following a scheme like the one originally proposed by Lucas and Prescott [1971]. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Whereas conventional forecasting methods penalise the difference (or loss) between predicted and actual outcomes, that method is guided by the difference between temporally successive predictions.
**A**: Here, our goal is to estimate those parameters directly from data following the temporal difference (TD) method proposed by Sutton [1988]. **B**: The trade-off between immediate costs and future returns gives the firm the incentive to forecast the output market price as far as the investment (or change of output) decision is concerned. This problem can be solved optimally by assuming that the firm’s return function is quadratic on the state variables, namely, the current level of output. **C**: However, we have to know in advance the values of the model’s parameters to get that kind of solution.
BCA
BCA
CBA
BCA
Selection 2
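The row above contrasts conventional loss-based forecasting with Sutton's temporal-difference method, in which updates are driven by the difference between successive predictions. A minimal TD(0) prediction sketch, with all state and reward values hypothetical:

```python
import numpy as np

def td0_updates(states, rewards, alpha=0.1, gamma=0.95, n_states=5):
    """TD(0) prediction (Sutton, 1988): each update is driven by the
    difference between temporally successive predictions,
    r + gamma * V(s') - V(s), rather than by a final realized outcome."""
    V = np.zeros(n_states)
    for s, s_next, r in zip(states[:-1], states[1:], rewards):
        V[s] += alpha * (r + gamma * V[s_next] - V[s])
    return V

# Hypothetical trajectory of discretized states and one-step returns.
states = [0, 1, 2, 1, 3, 4]
rewards = [0.0, 0.5, 0.2, 0.7, 1.0]
print(td0_updates(states, rewards).round(3))
```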
We use the 2005-2019 American Community Survey (ACS) and household composition to infer sexual orientation and create a unique and novel panel dataset on the passage of local and state anti-discrimination laws. We collected information on local laws from a host of sources including media reports, FOIA requests, and an advocacy group. We find a significant reduction in differences between LGB workers and heterosexual workers across labor supply and wage measures, due to anti-discrimination laws. Anti-discrimination laws significantly reduce the gap in labor force participation and employment of gay men by 2.1 p.p. and 1.5 p.p., respectively. The laws also significantly increase hourly wages for gay men by 6.2%. <|MaskedSetence|> Using an event study plot, we show that the outcome trends are parallel before the implementation of the anti-discrimination laws suggesting the workers in treatment and control regions are reasonable comparisons.[7] In Appendix A, we replicate the findings of past research that finds gay/bisexual men have lower labor force participation and employment rates and make 8-11% less than their employed straight counterparts using hourly wages and annual earnings. <|MaskedSetence|> In the traditional Becker (1981) model of household specialization, men typically specialize in market production, and women typically specialize in household production, in part due to differences in biology where women birth and care for children, resulting in a one-earner household. These differences in household specialization are less pronounced in same-sex partnerships, but they may become more similar for women in same-sex partnerships following the passage of an anti-discrimination law if it gives greater protection to the higher wage earner. We show empirically that the difference in hours worked between partners within a lesbian household goes up relative to gay households following anti-discrimination laws, suggesting that lesbian households could become more specialized following anti-discrimination laws with one woman working more hours and the other woman working fewer to focus on household production. We also show that lesbian households have significantly more children than gay households after the passage of an anti-discrimination law. <|MaskedSetence|>
**A**: We also replicate findings that lesbian/bisexual women have a higher labor force participation and employment rate and earn 5-15% more than their employed straight counterparts. We explore theories for the differing effects of anti-discrimination laws on gay men and lesbian women in the Discussion section, using the Becker (1981) model of household specialization. **B**: More children could induce lesbian households to further specialize in the intra-household division of labor, adopting a more traditional household model to help care for additional children. . **C**: The results differ for women, with the laws significantly reducing their labor force participation and employment premium over straight women by 1.7 p.p., and 2.3 p.p., respectively.
CAB
CAB
CAB
CAB
Selection 1
<|MaskedSetence|> Thus, we propose a three-step approach for retailers: in the first step, our objective is to assess the costs of planning uncertainty and their impact on the retailer’s profit for specific SKUs. To achieve this, the retailer must analyse historical data to gain insights into customers’ purchasing probabilities for various SKUs and their influence on inventory planning challenges. This enables the computation of uncertainty costs for each SKU. In the second step, the retailer focuses on SKUs with significant uncertainty costs and aims to evaluate the potential reduction in these costs by acquiring advanced demand information. This involves determining the extent to which gathering information about intended purchase behaviour from customers could decrease uncertainty costs. <|MaskedSetence|> In the final step, all information gathered about the impact of uncertainty costs and the value of advanced demand information can then be translated into profit-increasing subscription offers. We always consider customer purchase probabilities when crafting subscription offers. Additionally, we acknowledge that not all customers will opt to subscribe, either due to dissatisfaction with the offer or a general reluctance to commit to a retailer. <|MaskedSetence|> However, we anticipate that successfully encouraging customers to subscribe will lead to enduring positive impacts on the retailer’s profitability in the long run..
**A**: Our analysis primarily focuses on the retailer’s overall profit, particularly in the context of short-term inventory planning solutions. **B**: For certain SKUs, obtaining information from a limited number of customers may suffice, while for others, persuading a larger proportion of customers to share their purchasing intentions is necessary for a substantial reduction in uncertainty costs to occur. **C**: However, due to the narrow profit margins, subscription incentives can only be extended when they are expected to positively impact revenue.
CBA
CBA
BCA
CBA
Selection 1
<|MaskedSetence|> <|MaskedSetence|> This might render QALYs more appropriate for the evaluation of these treatments. <|MaskedSetence|> This is a possible motivation for age weights (another motivation is the fair innings argument mentioned above). Our hybrid evaluation functions, such as PQALYs, QALYs-PALYs, QALYs-PQALYs or the more general functional form (8), offer an alternative way of dealing with productivity differences, selecting the appropriate parameters therein. We conclude mentioning that our framework allows for alternative plausible interpretations, as well as for further generalizations..
**A**: As we mentioned in the introduction, there is a pressing need to protect the health and productivity of the economically active population. **B**: But using the same evaluation function for younger patients misses productivity effects. **C**: Treatments for the elderly (retired people) have no effects on (labour market) productivity.
ACB
ACB
CBA
ACB
Selection 4
We can consider two types of many-to-one assignment markets, depending on which side of the market has unitary capacity. This does not affect the core, but makes a big difference when competitive equilibria are considered. If the unitary capacity is on the side that posts prices, that is sellers or workers, then we are in the many-to-one model of Sotomayor (2002) and the core coincides with the set of competitive equilibrium payoff vectors. <|MaskedSetence|> <|MaskedSetence|> We prove that, differently from the one-to-one assignment market, the kernel may not be a subset of the core for many-to-one assignment markets. Therefore, the core and the classical bargaining set do not coincide (Example 4 and Corollary 5). <|MaskedSetence|>
**A**: The loss of this coincidence means that the core is somehow a less robust solution in these markets, since those allocations that do not have a justified objection could be taken into account when looking for a distribution of the worth of the grand coalition. **B**: When the unitary capacity is on the agents that report a demand given some prices, that is, buyers or firms, then we are in the many-to-one model of Kaneko (1976), and the core may strictly contain the set of competitive equilibrium payoff vectors (CE payoff vectors) that now coincides with the set of solutions of the dual linear program that finds an optimal matching. **C**: In this paper we focus on the first case, the job market with unitary capacity workers of Sotomayor (2002), and only in the last section we show that, after some adjustments, parallel results can be obtained for Kaneko’s buyer-seller market, where buyers have unitary capacity. In the first part of the paper, we focus on the core of the many-to-one assignment market games and consider other related set-solution concepts different from the core to show that more dissimilarities appear with respect to the one-to-one case.
BCA
ABC
BCA
BCA
Selection 1
The research results obtained have important implications for the development of caves as tourist destinations. The implementation of advanced forecasting modeling enables management structures to make various strategic moves such as sustainable management of resources and conservation efforts. <|MaskedSetence|> Observed trends in tourism demand and the impact of factors such as seasonality and external events point to the fact that a continuous increase in tourist visitation to Stopića cave is to be expected. Due to this prediction, it is necessary to establish adequate protection measures that can ensure long-term subterranean environmental sustainability. <|MaskedSetence|> The analysis of monitoring results can greatly contribute to the understanding of the anthropogenic impact on Stopića cave, as well as the level of its vulnerability. <|MaskedSetence|>
**A**: The use of such analytical techniques indicates effective modeling approaches that are crucial for the sustainability and protection of subterranean karst environments. **B**: Establishing monitoring programs and tracking visitor trends are crucial for tourism authorities so they can assess the effectiveness of carrying capacity measures and adapt management strategies in order to ensure the long-term sustainability of Stopića cave as a tourist destination. **C**: This primarily involves monitoring microclimate indicators such as temperature fluctuations, air humidity, and CO2 emissions.
ACB
ACB
ACB
ABC
Selection 1
2.2 Model We use FUND 4.0m-g, one of a stable of integrated assessment models collectively known as FUND. <|MaskedSetence|> <|MaskedSetence|> Emissions are input into a Maier-Reimer and Hasselmann (1987) carbon cycle model which is coupled to a Schneider and Thompson (1981) climate model. Climate change feeds back on total factor productivity according to a damage function calibrated as in Barrage and Nordhaus (2023). The main departure from Nordhaus’ seminal work is that we assume that society becomes less vulnerable to climate change as incomes grow (Schelling, 1984). <|MaskedSetence|>
**A**: This version of the model has been used previously (Tol, 2020, 2024b); earlier versions date back to Tol (1997). **B**: The model consists of a Solow (1956) growth model with energy (and associated emissions of carbon dioxide) as a derived demand rather than a factor of production, as in DICE (Nordhaus, 1993). **C**: This is a global model implemented in Matlab.
CBA
CBA
CBA
ACB
Selection 2
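The row above describes the model's architecture: a growth core, emissions as a derived demand, a carbon cycle, a climate module, and a damage feedback. A deliberately stylized sketch of that coupling loop follows; it is not FUND itself, and the one-box carbon cycle, the linear temperature adjustment, and every parameter value are illustrative assumptions only:

```python
# Stylized integrated-assessment loop: growth -> emissions -> carbon ->
# temperature -> damages -> growth. Toy calibration, for orientation only.
A, K, L = 1.0, 10.0, 1.0          # TFP, capital, labor (labor held fixed)
carbon, temp = 0.0, 0.0           # excess carbon stock; warming
alpha, s, delta = 0.3, 0.2, 0.05  # capital share, savings rate, depreciation
intensity, decay = 0.5, 0.01      # emissions per unit output; carbon decay
lam, speed = 0.01, 0.02           # warming per unit carbon; adjustment speed

for year in range(2020, 2101):
    gross = A * K**alpha * L**(1 - alpha)
    # Quadratic damages, scaled down as income grows to mimic the
    # Schelling-style vulnerability assumption mentioned in the passage.
    damage = 0.01 * temp**2 / (1.0 + 0.1 * gross)
    output = gross * (1.0 - damage)
    K = (1.0 - delta) * K + s * output        # Solow capital accumulation
    emissions = intensity * output            # derived demand for energy
    intensity *= 0.99                         # exogenous decarbonization
    carbon = (1.0 - decay) * carbon + emissions
    temp += speed * (lam * carbon - temp)     # sluggish climate response
```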
The aforementioned may give rise to the following query: can a monotone data generation process produce an equivalent distribution to the one propagated by a non-monotonic data generation process? The answer to such a query is definitely negative (as is proven here in lemma .4, Appendix E). <|MaskedSetence|> <|MaskedSetence|> This is a very strong result in that it is universal, neither relying on any structural form restrictions nor on any assumptions regarding the true measure (prior to normalization) of the disturbances. Thus, it lends itself to nonparametric treatment. <|MaskedSetence|> Theorem 7.1 establishes that the kernel can be unknown in Fredholm integral equations, whereas Theorem 7.2 establishes the identifiability of the counterfactual distribution based on the insight of Theorem 7.2.
**A**: Building on the above identifiability result of the counterfactual distribution, we next develop the synthetic counterfactual machinery. **B**: So far we have established identifiability for the interventional distribution. **C**: This is so because any monotonic data generation process lacks sufficient variation in the distribution of the outcome variable due to the restriction on any value of context producing the same distribution of $Y$ within a fixed conditional quantile of $X$.
CBA
CBA
CBA
CBA
Selection 3
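The identification argument in the row above runs through Fredholm integral equations. For orientation only (the paper's own operator equation is not reproduced in this excerpt), a Fredholm equation of the first kind has the generic form

```latex
% Generic Fredholm integral equation of the first kind (orientation only):
g(y) \;=\; \int_{\mathcal{X}} K(y, x)\, f(x)\, \mathrm{d}x ,
```

where $g$ is observed, $f$ is the unknown of interest, and $K$ is the kernel; the cited Theorem 7.1 is said to allow the kernel itself to be unknown.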
<|MaskedSetence|> Such a monotonic rent structure comes from the optimality condition: many suboptimal menu profiles induce non-monotonic rents. This contrasts with a setting where adverse selection is the main driving force and monotonic rents result from incentive compatibility. Third, the strictly convex price functions contrast with the well-known click-based pricing yet take a logarithmic form that supports the classic volume-based pricing first developed by, for example, Butters (1977) and Grossman and Shapiro (1984). To explore the allocative role of platform advertising, we first consider two benchmarks where the platform is absent and the producer either can or cannot control disclosure. If the producer cannot self-advertise, then no communication happens and every type earns the prior mean. If the producer does have the option to disclose freely, we have the setting of Grossman (1981) and Milgrom (1981) who show that, in equilibrium, the producer’s unraveling impetus results in every type fully disclosing and earning exactly the type. <|MaskedSetence|> <|MaskedSetence|>
**A**: In both cases, the producer extracts full surplus, albeit distributed differently among types. **B**: In comparison, the platform in our model is able to extract high surplus despite adversarial equilibrium selection, and every producer type is strictly worse off than in both benchmarks. **C**: Second, under the optimal solution, producers of higher types enjoy higher rents.
CAB
CAB
BAC
CAB
Selection 2
<|MaskedSetence|> Proposition 3 shows that an increased likelihood of disclosure by other sources may have the side effect of discouraging speculators from publishing reports with richer content. <|MaskedSetence|> First, we assume the speculator has a short horizon and closes his position after disclosure. If the speculator has a longer horizon and can trade multiple times, he may engage in dynamic disclosure management. <|MaskedSetence|> We examine this possibility in the online appendix.
**A**: For instance, the speculator can first disclose the high inventory alone, buy at depressed prices, and then resell after disclosing that the high inventory is actually good news. **B**: One cost that speculators face when issuing misleading reports is the potential disclosure by other sources, including firms, analysts, media outlets, and other speculators. **C**: This insight can be helpful to regulators who are considering imposing direct costs on market manipulation, such as conducting investigations on activist short-sellers. Our study has the following limitations.
BCA
CAB
BCA
BCA
Selection 1
For my main analysis sample, I combine the personnel records and the internal application histories into an employee-by-quarter dataset spanning 2015 to 2019. <|MaskedSetence|> I restrict my sample to only white-collar and management employees who are regular employees at the firm (e.g., excluding marginal employment such as mini jobs). My main analysis sample contains over 400,000 employee-by-quarter observations and covers over 30,000 unique white-collar and management employees. In order to account for the potential impact of other vacancy characteristics as drivers of the gender leadership gap, I create an auxiliary employee-by-vacancy dataset from 2015 to 2019 that combines each employee in my quarterly analysis sample with every available job opening they could have applied to. <|MaskedSetence|> Due to the large size of the dataset I restrict to lower-level employees who have applied to at least one position during my sample period. In unreported results, I find similar patterns when using a random sample of employees in lower-level positions rather than restricting to those who have applied at least once. <|MaskedSetence|> The dataset also contains an indicator for employees’ realized application choices. In a given quarter, the median employee has 421 job openings in their choice set, yielding over 39 million total observations.
**A**: I choose to collapse the data to a quarterly level because the median applicant applies only to one internal position in a given quarter. **B**: I refine these choices based on observed application patterns, dropping combinations that never occur in the data.9 For instance, I drop combinations between employee location and vacancy location for which applications never occur in my data. **C**: The dataset includes detailed information about each vacancy’s (advertised) job features, information about employees’ current positions, as well as their demographics.
ABC
ABC
ABC
CBA
Selection 2
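The auxiliary employee-by-vacancy dataset described in the row above is essentially a within-quarter cross join followed by two refinements. A minimal pandas sketch, with all column names (`employee_id`, `vacancy_id`, `quarter`, `emp_location`, `vac_location`) hypothetical:

```python
import pandas as pd

def build_choice_sets(employees, vacancies, applications):
    # One row per (employee, quarter) x (vacancy open in that quarter).
    panel = employees.merge(vacancies, on="quarter", how="inner")

    # Refinement mirroring footnote 9: keep only employee-location by
    # vacancy-location pairs that ever receive an application.
    observed = applications[["emp_location", "vac_location"]].drop_duplicates()
    panel = panel.merge(observed, on=["emp_location", "vac_location"])

    # Indicator for the realized application choice.
    applied = applications[["employee_id", "vacancy_id"]].assign(applied=1)
    panel = panel.merge(applied, on=["employee_id", "vacancy_id"], how="left")
    panel["applied"] = panel["applied"].fillna(0).astype(int)
    return panel
```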
The rest of the paper is organized as follows. The introduction (Section 1) ends with a review of the related literature. Section 2 introduces our model and solves for the equilibrium of Istanbul Flower Auctions. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> We provide all proofs in the Appendix. 1.1 Brief review of the literature.
**A**: Section 4 provides our numerical results. **B**: Section 5 concludes. **C**: Section 3 provides theoretical results on the payoff comparison of Istanbul Flower Auctions against the backdrop of Dutch and English Auctions.
CAB
CAB
CAB
BAC
Selection 3
There are also less technical applications, such as producing university rankings. <|MaskedSetence|> <|MaskedSetence|> Similarly, one could produce rankings of cities by livability or suitability for remote work. In all previous examples, the input rankings are criteria (which tend to be objective) and the whole setup is essentially single-agent. However, there are also compelling multi-agent applications, where we can think of the input rankings as votes. An example might be a university hiring committee needing to rank applicants. <|MaskedSetence|> The output should be a ranking instead of a single winning candidate, because we do not know which candidates will accept the job offer. Proportionality may be desirable in this context to ensure that the output ranking reflects the diverse interests of the university department. Other multi-agent examples are groups of friends wanting to produce rankings of favorite music, restaurants, or travel destinations, a context in which a majoritarian method seems out of place..
**A**: These rankings are usually a result of aggregating rankings for several criteria (such as student satisfaction, % of students employed after graduating, research output). **B**: In such a scenario, each committee member can provide their personal ranking. **C**: These rankings could be weighted and then used to produce an aggregate ranking via Squared Kemeny, as all criteria should be taken into account.
BAC
ACB
ACB
ACB
Selection 2
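The aggregation rule discussed in the row above, Squared Kemeny, selects the ranking minimizing the sum of squared Kendall tau distances to the input rankings (plain Kemeny minimizes the unsquared sum, which is what makes it majoritarian). A brute-force sketch, feasible only for small numbers of alternatives:

```python
from itertools import combinations, permutations

def kendall_tau(r1, r2):
    # Number of pairs of items ordered differently by the two rankings.
    pos1 = {x: i for i, x in enumerate(r1)}
    pos2 = {x: i for i, x in enumerate(r2)}
    return sum(1 for a, b in combinations(r1, 2)
               if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0)

def squared_kemeny(rankings):
    # Squaring penalizes being far from any single voter, which is what
    # pushes the output toward proportional compromise rankings.
    return min(permutations(rankings[0]),
               key=lambda c: sum(kendall_tau(c, r) ** 2 for r in rankings))

print(squared_kemeny([("x", "y", "z"), ("x", "y", "z"), ("z", "y", "x")]))
```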
For the Mineral section, China’s dominance is unmistakable. The United States places second, even though its relative centrality to China decreased significantly from 80% in 2010 to 45% in 2022. This underscores China’s consolidating dominance. Australia, the Republic of Korea, France, and the Netherlands see noteworthy advancements in the 2022 rankings, with the Republic of Korea’s ascent from eleventh to seventh, highlighting its increased centrality. Conversely, Belgium, Hong Kong, Italy, Japan, and Singapore register declines, with Hong Kong’s fall to fourteenth and Belgium’s to fifteenth marking significant shifts. The rankings note the departure of only two countries, Thailand and Canada, from the Top 15 in 2010 and the entry of India and Malaysia in 2022, with India’s placement at fourth being particularly impressive. The second part of our topological analysis looks into shifts in ranking centrality within the main section-level trade networks between 2010 and 2022. Table 2 reports centrality rankings for the following sections: Mechanical & Electrical, Chemical, and Mineral. Regarding the Mechanical & Electrical section, the United States is the most central country for both years under review. Following closely, China and Germany constitute the Top 3, with Germany ascending to the second position by 2022. <|MaskedSetence|> Conversely, Mexico exhibits an upward trajectory, advancing from the seventh to the fourth position. Additionally, entry and exit dynamics from the Top 15 list reveal notable shifts. <|MaskedSetence|> <|MaskedSetence|> Four nations — Hong Kong, India, the Republic of Korea, and the United Arab Emirates — were not listed in 2010 and appeared in the 2022 ranking. Among these, Hong Kong and India are particularly prominent, securing the seventh and ninth positions, respectively.
**A**: Thailand, previously ranked fourth in 2010, vanishes from the 2022 ranking. **B**: We highlight the fall in the rank position of the following three countries: Great Britain descends from the fifth to the tenth position, Singapore from the ninth to the twelfth, and Japan from the twelfth to the fifteenth. **C**: Similarly, Russia, Malaysia, and Spain, present in the 2010 rankings, are absent in 2022.
BAC
BAC
ABC
BAC
Selection 2
<|MaskedSetence|> <|MaskedSetence|> This stems from the uninformed traders’ optimal demand. Although they are price takers, when determining their optimal demand, they do recognize that the insider internalizes her price impact. <|MaskedSetence|> The lower sensitivity on the public signal alters the intercept point of their demand function (as it depends on the public signal). Hence, in the price-impact equilibrium the insider trades against a different residual demand than in the price-taking equilibrium, which implies that the price-taking equilibrium cannot be written as a special case of the price-impact one.
**A**: As long as the insider internalizes her price impact, equilibrium prices cannot be driven to the corresponding price-taking equilibrium ones. **B**: The above results underscore an important (and clarifying) fact. **C**: As mentioned above, price impact reduces the public signal’s precision, and hence uninformed traders’ demand becomes less elastic than in the competitive equilibrium, and this is true for any realization of the public signal and level of prices.
BAC
ACB
BAC
BAC
Selection 1
<|MaskedSetence|> <|MaskedSetence|> In many fields, knowledge is obtained after carefully weighing all the positive and negative evidence provided by many papers. If both sides p-hack at roughly the same intensity, will the effect of p-hacking cancel out? In occasional cases, p-hacking might even be a good thing, as it helps to bring more attention to a stunted but correct theory. <|MaskedSetence|> (See Bohren (2016), Bohren and Hauser (2021) and Frick
**A**: . **B**: That a paper p-hacks doesn’t mean the theory of that paper is wrong. **C**: A famous example is that Gregor Mendel might have p-hacked in his pea experiments. We propose to study the consequence of p-hacking under the framework of mis-specified Bayesian learning.
ABC
CAB
ABC
ABC
Selection 4
It is worth noting that the counterfactual density distribution to the left of the policy threshold is not an upward parallel shift of the observed density distribution. <|MaskedSetence|> These are in contrast with the assumption under the estimation framework by Chetty et al. (2013). <|MaskedSetence|> <|MaskedSetence|> As shown in Figure 6b, the estimated marginal buncher’s response $\Delta z^{*}$ is 400 RMB, which is larger than our estimate of 260 RMB.
**A**: This is because patients with different counterfactual expenses shift leftwards with different magnitudes in response to the kinked policy as elaborated in Section 2, resulting in the counterfactual and observed density distributions having different shapes in the region to the left of the threshold. **B**: Specifically, by assuming an upward parallel shift from the observed to the counterfactual density distributions, estimates following Chetty et al. **C**: (2011) end up overestimating the marginal buncher’s response.
BAC
ABC
ABC
ABC
Selection 3
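For context on the estimates in the row above, a hedged sketch of a bunching estimator in the spirit of Chetty et al. (2011): fit a polynomial counterfactual density excluding a window around the threshold, measure the excess mass, and back out the marginal buncher's response. This is illustrative only; the row's point is precisely that the paper's own procedure does not assume a parallel shift:

```python
import numpy as np

def marginal_buncher_response(counts, bins, threshold, window=3, degree=5):
    width = bins[1] - bins[0]
    excluded = np.abs(bins - threshold) <= window * width
    # Counterfactual density: polynomial fit to bins outside the window.
    coeffs = np.polyfit(bins[~excluded], counts[~excluded], degree)
    counterfactual = np.polyval(coeffs, bins)
    # Excess mass B: observed minus counterfactual counts inside the window.
    B = (counts[excluded] - counterfactual[excluded]).sum()
    # Delta z* = B / c0, with c0 the counterfactual density at the threshold.
    c0 = np.polyval(coeffs, threshold) / width
    return B / c0
```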
The normalization in Property 6 determines the scale of nominal quantities in all universal gravity models. Since the prototypical trade model determines only real variables, we said earlier that Property 6 does not conflict with the model. <|MaskedSetence|> The use of normalization in Property 6 is harmless for equilibrium objects in the baseline economy and in the counterfactual economy when considered in isolation. However, when doing comparative statics, using the same normalization in both models introduces an additional assumption. <|MaskedSetence|> This assumption may not be consistent with the intended use of the model. Researchers should therefore be cautious when reporting comparative static calculations for nominal quantities. <|MaskedSetence|>
**A**: However, there is a subtle point that needs to be clarified. **B**: For this reason, the default results table generated by the command shows growth rates only for real variables. The default calculation reports the vector of relative changes of real (non-domestic) exports for all locations, which is calculated as . **C**: In this case, the assumption is that global nominal income is the same in the baseline and the counterfactual.
ACB
ACB
ACB
ACB
Selection 4
First, there has been quite some recent work trying to trace links between different updating biases. For example, Heger and Papageorge (2018) and Gneezy et al. (2023) study how wishful thinking (optimism) can affect overconfidence; while Charness and Dave (2017) and Zhenxun (2023b; 2023a) try to separate behaviour stemming from motivated beliefs and unmotivated confirmation bias. In the absence of an overarching model that encompasses a wide array of biases, findings can be confounded and results about such links could therefore be biased. For example, bias A may appear to be linked to bias B, only because a third bias C is missing. Second, many researchers search for belief-based explanations of behavioural phenomena, when conflicting biases come into play. For example, political polarization has been separately explained from the standpoint of overconfidence (Ortoleva and Snowberg, 2015), and confirmation bias (Del Vicario et al., 2017). <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
**A**: As previously mentioned, utilizing an approach that is able to tell conflicting biases apart has the advantage of better pinpointing what biases are precisely underpinning certain behavioural regularities; or even letting us know if some biases are simply an artifact of using a less complete model. **B**: In the financial literature, there is some debate as to whether the disposition effect could be caused by discrepant biases, such as motivated beliefs (Heinke et al., 2023), the gambler’s fallacy (Jiao, 2017) or general underinference (Pitkäjärvi, 2022). **C**: More examples range from linking confirmation bias to several stylised facts in financial markets (Pouget et al., 2017); or confidence biases to poor investment performance (Ahmad and Shah, 2020) and biased memory (Huffman et al., 2022).
BCA
CAB
BCA
BCA
Selection 1
First, we note that, due to the different types of deliberation involved, we would expect to see different types of learning profiles in committee work and in larger elections and referenda. In a well-functioning committee the deliberation is typically quite organised and the members are chosen so that they complement each other’s background competencies. In large-scale elections the electorate does not typically depend on the question to be voted on, and deliberation is not organised in the same sense as for a committee. <|MaskedSetence|> <|MaskedSetence|> As the group size grows communication becomes costlier to organise, and at very large scales will even require physical infrastructure in order to function. <|MaskedSetence|>
**A**: There are several distinct factors which appear in this description. First, we can ask more generally how the group size affects the learning profile. **B**: So, unless adequate organisation and infrastructure are present, we would expect each added member to contribute less and less to the learning rate once the group size has risen above some threshold. **C**: In a small group deliberation is relatively easy to organise and one can assure, for example, that all members of the group are both heard and have access to all the other members.
CAB
ACB
ACB
ACB
Selection 2
that these preferences are described by a parameter $x$, uniformly distributed along the interval $[0,1]$. <|MaskedSetence|> $x=1$) has the highest preference for residing in region $L$ (resp. region $R$). <|MaskedSetence|> <|MaskedSetence|>
**A**: The consumer with preference described by $x=0$ (resp. **B**: An agent whose preference corresponds to $x=1/2$ is indifferent between either region. **C**: We can.
ABC
BCA
ABC
ABC
Selection 1
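The passage in the row above does not spell out a utility function; one standard Hotelling-style specification consistent with it (an assumption for illustration, with linear disutility at rate $t$) is

```latex
u_L(x) \;=\; v - t\,x, \qquad u_R(x) \;=\; v - t\,(1 - x),
```

so that $u_L(x)\ge u_R(x)$ exactly when $x\le 1/2$, and the agent at $x=1/2$ is indifferent between the two regions.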
<|MaskedSetence|> Our examination primarily focuses on the harmonic centrality distributions observed in the first quarter of each of these six years. As illustrated in Figure 3, it is evident that the harmonic centrality distributions exhibit a consistent pattern across the different years. To gain deeper insights into the distinctive peaks within these distributions, we employed a clustering algorithm, specifically KMeans, to decompose each distribution. Using clustering algorithms to analyze data with multimodal distributions over time is beneficial because these algorithms can effectively identify and separate distinct patterns or trends within the data. To determine the optimal number of clusters for each year, we utilized the average of the Davies–Bouldin index and the Calinski–Harabasz index. <|MaskedSetence|> To align the Davies–Bouldin index with the Calinski–Harabasz index for ease of interpretation, we took the reciprocal of the former, with a higher value indicating superior clustering performance, and then we combined the two indices by taking their average. Figure 3. Harmonic Centrality Distribution Over Time from 2018 to 2023 within the Australia (.au) Web Domain.
**A**: The Calinski–Harabasz index (Caliński and Harabasz, 1974) assesses the ratio of within-cluster dispersion to between-cluster dispersion for all clusters, whereas the Davies–Bouldin index (Davies and Bouldin, 1979) quantifies the similarity between clusters, thereby allowing us to evaluate the separability of categories. **B**: This approach facilitates a deeper understanding of the dynamics and variability in the data over time, which is essential for concretely observing the changes in domains over the past six years and analysing the changes in those non-structural features under different movement scenarios. **C**: In our investigation of the Australian domain space, we conducted an extensive analysis of harmonic centrality data obtained from the Common Crawl dataset spanning the years 2018 to 2023.
CBA
ABC
CBA
CBA
Selection 1
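The cluster-count selection rule described in the row above can be sketched directly with scikit-learn. Here `centralities` is assumed to be an `(n_pages, 1)` array of harmonic-centrality values for one quarter; note the passage averages the Calinski–Harabasz index with the reciprocal of the Davies–Bouldin index so that higher is better for both (in practice the two may need rescaling before averaging, since they live on very different scales):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score, davies_bouldin_score

def best_k(centralities: np.ndarray, k_range=range(2, 10)) -> int:
    scores = {}
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
            centralities
        )
        ch = calinski_harabasz_score(centralities, labels)
        db = davies_bouldin_score(centralities, labels)
        scores[k] = (ch + 1.0 / db) / 2.0  # combined index, higher is better
    return max(scores, key=scores.get)
```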
<|MaskedSetence|> (2019); Pion-Tonachini et al. <|MaskedSetence|> (2022); Wang et al. <|MaskedSetence|> (2022) develops conformal inference procedures to measure the out-of-distribution predictive performance of economic theories.
**A**: (2023) provide recent reviews on the use of machine learning across the physical sciences, such as biology, chemistry, mathematics, and physics. Substantial progress has already been made in exploring how machine learning interacts with economic theories. Recent work compares the out-of-sample predictive performance of black box machine learning models against that of economic theories in choice under risk and strategic behavior in normal form games, measuring the “completeness” of economic theories (Fudenberg et al., 2022). Andrews et al. **B**: This paper sits in a rapidly growing literature that seeks to integrate machine learning into the scientific process across various fields. Carleo et al. **C**: (2021); Krenn et al.
BCA
BAC
BCA
BCA
Selection 1
As an illustration, we discuss two interesting and practically relevant examples of robust allocation mechanisms: the Mean-Deviation allocation mechanism and the Expected-Shortfall allocation mechanism, based on the theory of risk/deviation measures; see Rockafellar et al. [42] and McNeil et al. [37]. In each example, we propose a model of market allocation based on the given robust allocation mechanism, and we show how the global frictional cost in the market can be parameterized by a single parameter. This parameter can be understood as related to the fees imposed by the allocator on the participants. On a technical level, this paper contributes to the mathematical literature on convex duality. In particular, we provide envelope representation results for superlinear operators mapping from Dedekind complete Riesz spaces to Dedekind complete Riesz spaces. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
**A**: We discuss the relevant mathematical literature and our contribution to it in Section 6. **B**: From this perspective, the closest works to ours are the studies on conditional risk measures. **C**: These results are then applied in the context of finite Cartesian products of $L^{1}$-spaces, and conditional-expectation representations of operators.
CBA
CBA
CBA
CBA
Selection 1
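The two risk functionals named in the row above have simple textbook forms; a numpy sketch of both follows (the sketch is ours, not the paper's allocation mechanisms, which are built on top of such functionals):

```python
import numpy as np

def mean_deviation(losses, weight=0.5):
    # E[L] + weight * E|L - E[L]| : a basic mean-deviation risk measure.
    return losses.mean() + weight * np.abs(losses - losses.mean()).mean()

def expected_shortfall(losses, alpha=0.95):
    # Average of the worst (1 - alpha) share of outcomes (losses as positives).
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

losses = np.random.default_rng(1).normal(0.0, 1.0, 100_000)
print(mean_deviation(losses), expected_shortfall(losses))
```

Loosely speaking, the single frictional-cost parameter the passage mentions could be thought of as a tuning knob of this kind (e.g., `weight` or `alpha`) chosen by the allocator.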
<|MaskedSetence|> Antonelli (1886) was the first monograph to tackle this problem, and in this paper, there is a reference to condition (B). Pareto (1906) also addressed this problem, but Volterra was critical of Pareto’s discussion of integrability in his review (Volterra, 1906). Hence, Pareto tried to develop a consumer theory without condition (B) in the French version of his book (Pareto, 1909). Pareto argued that condition (B) is related to the order of consumption. <|MaskedSetence|> <|MaskedSetence|>
**A**: Suda (2007) surveyed these arguments in detail. **B**: This idea is later implicitly criticized by Samuelson (1950). **C**: 4 Comparison with Related Literature The research topic of this paper was already known as the integrability problem at the end of the 19th century.
CBA
CBA
CBA
CBA
Selection 3
<|MaskedSetence|> We show that the marginal prior for conditional variances is centred around the hypothesis of homoskedasticity and exhibits strong shrinkage towards it while maintaining heavy tails. These features are essential for SVAR models with identification via heteroskedasticity to be verified for two reasons. First, they standardise the SVAR model facilitating the identification and estimation of conditional variances and the structural matrix. <|MaskedSetence|> <|MaskedSetence|> (2024).
**A**: A third contribution of this paper is the provision of a detailed analysis of the marginal prior distribution of the conditional variances implied by the normality of the SV process innovations, our new prior for the volatility of the log-volatility parameter, and the non-centred SV process parameterisation proposed by Kastner and Frühwirth-Schnatter (2014) that we adapt to the SVAR context. **B**: We also point out a problem of a standard prior setup for the SV model in heteroskedastic SVARs used by Cogley and Sargent (2005) and more recently by Chan et al. **C**: Secondly, this setup requires the evidence in favour of heteroskedasticity to come from the data.
ACB
ACB
ACB
ABC
Selection 2
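For readers unfamiliar with the terminology in the row above, the non-centred stochastic-volatility parameterisation takes, up to notation, the form

```latex
y_t = \exp(h_t/2)\,\varepsilon_t, \qquad
h_t = \mu + \sigma\,\tilde{h}_t, \qquad
\tilde{h}_t = \varphi\,\tilde{h}_{t-1} + \eta_t, \qquad
\varepsilon_t,\ \eta_t \sim \mathrm{N}(0,1),
```

so that the level $\mu$ and the volatility-of-log-volatility $\sigma$ are moved out of the latent state, which is the reparameterisation the row credits to Kastner and Frühwirth-Schnatter (2014).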
At the time of writing, the asymptotic distribution of the WALS estimator for GLMs is still an open research topic and its variance estimator has been a subject of debate. Recent work by De Luca et al. (2022) proposes a new estimator for the variance of WALS in the linear regression model instead of the Bayesian posterior variance that has traditionally been used. De Luca et al. (2023) further analyze the confidence and prediction intervals of WALS in the linear model and propose a new simulation-based method that corrects for bias in the WALS estimator. In contrast, this work focuses on the predictive power of model averaging and leaves the challenging issue of inference (after model averaging) for future research. Model averaging estimators typically improve the predictive accuracy compared to using a single model. <|MaskedSetence|> Moreover, Min and Zellner (1993) show that the expected squared error loss of predictive mean forecasts is always minimized by BMA, if the data-generating model is included in the model space considered for averaging. <|MaskedSetence|> <|MaskedSetence|> Both the simulation experiment and the empirical application show that WALS NB improves on the ML estimator in sparse situations with few observations and many covariates. In the latter, its fit is competitive with lasso while being computationally more efficient.
**A**: Finally, the method is also compared to the lasso estimator (Wang et al., 2016) in an empirical application on modeling doctor visits. **B**: In this paper, I compare the proposed WALS NB method to traditional maximum likelihood (ML) estimation of the NB2 regression model in a simulation experiment using the classical precision measure, root mean squared error (RMSE), and scoring rules (Gneiting and Raftery, 2007) as measures for the distributional fit. **C**: For example, in an early application of BMA, Madigan and Raftery (1994) find that BMA achieves better logarithmic predictive score than any single model.
CBA
CBA
CBA
BAC
Selection 2
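The two evaluation criteria named in the row above, RMSE and scoring rules, can be computed for an NB2 model (mean $\mu$, variance $\mu + \alpha\mu^{2}$) as in the following sketch; the mapping to scipy's `nbinom(n, p)` uses $n = 1/\alpha$ and $p = n/(n+\mu)$:

```python
import numpy as np
from scipy.stats import nbinom

def evaluate_nb2(y, mu, a):
    rmse = np.sqrt(np.mean((y - mu) ** 2))    # precision of point forecasts
    n = 1.0 / a                               # NB2 dispersion -> scipy shape
    p = n / (n + mu)
    log_score = nbinom.logpmf(y, n, p).mean() # distributional fit, higher
    return rmse, log_score                    # is better
```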
percent of “never-takers” who would not be signed up for the job-search service under either treatment are nevertheless affected by the treatment. <|MaskedSetence|> For comparison, our estimate of the overall average treatment effect is 0.12. The effect for never-takers is thus of a fairly similar magnitude to that of the total population, despite the fact that they have no change in job-search service signup. <|MaskedSetence|> <|MaskedSetence|>
**A**: (We obtain a trivial lower bound of 0 for the “always-takers”.) Applying the results in Proposition 3.3, we also estimate lower and upper bounds on the average effect for these never-takers of 0.11 to 0.18.21 Because the outcome is binary, the lower bound for the average effect corresponds exactly to our lower bound on the fraction of always-takers affected. **B**: If we were willing to assume that the direct effects (i.e. **C**: effects not through the job-search service) were similar between always-takers, never-takers, and compliers (granted, a strong assumption), this would imply that the majority of the total effect operates through the information treatment.
ABC
ABC
ABC
ABC
Selection 3