robench-2024b
Collection · 48 items

context (string, 100–3.4k) | A (string, 100–3.42k) | B (string, 100–3.99k) | C (string, 100–3.94k) | D (string, 100–3k) | label (4 classes)
---|---|---|---|---|---
$\mathbb{E}[\nu^{(l)}Z_{-l}]=0$, such that $(\gamma^{(l)}_{0})^{T}Z_{-l}$ is the linear projection of $g_{l}(X_{1})$ onto $Z_{-l}$. Here, $\gamma^{(l,1)}_{0}$ denotes the sparse part of the coefficient vector $\gamma^{(l)}_{0}$.
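For concreteness, a plausible reconstruction of the auxiliary regression behind this orthogonality condition, assembled only from the definitions above (the paper's actual display (2.7) may additionally carry an approximation-error term), is:

```latex
g_{l}(X_{1}) \;=\; (\gamma^{(l)}_{0})^{T} Z_{-l} \;+\; \nu^{(l)},
\qquad \mathbb{E}\!\left[\nu^{(l)} Z_{-l}\right] = 0 .
```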
|
The auxiliary regression (2.7) is used to construct an orthogonal score function for valid inference in a high-dimensional setting, as described in Section 2.1.
|
The primary aim of our paper is to provide a method for constructing uniformly valid inference and confidence bands in sparse high-dimensional models in the sieve framework. In doing so, we contribute to the growing literature on high-dimensional inference in additive models, especially that on debiased/double machine learning. The double machine learning approach (Belloni et al., 2014b; Chernozhukov et al., 2018) offers a general framework for uniformly valid inference in high-dimensional settings. Similar methods, such as those proposed by van de Geer et al. (2014) and Zhang and Zhang (2014), have also produced valid confidence intervals for low-dimensional parameters in high-dimensional linear models. These studies are based on the so-called debiasing approach, which provides an alternative framework for valid inference. The framework entails a one-step correction of the lasso estimator, resulting in an asymptotically normally distributed estimator of the low-dimensional target parameter. For a survey on post-selection inference in high-dimensional settings and its generalizations, we refer to Chernozhukov et al. (2015b).
|
Later, we will also allow for an approximation error in this equation. Belloni et al. (2014b) propose including in the final regression not only the covariates selected in the first step of the naive approach, but also augmenting this set of variables with Lasso-selected regressors from the auxiliary regression. This procedure is equivalent to constructing a so-called Neyman orthogonal moment function with respect to the nuisance part. This is essential for ensuring valid post-selection inference for the first component of the vector $\theta_0$. In Section 2.2, we will provide more details about this property. Heuristically, the additional regression step in Equation (2.4) will lead to robustness against moderate selection mistakes. It can be shown formally that this procedure implements an orthogonal moment equation
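As a purely illustrative sketch (not the authors' implementation), the double-selection idea described above can be written as: select controls with a Lasso for the outcome, select controls with a Lasso for the target regressor, then refit by least squares on the union. The variable names, penalty level, and simulated data below are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def post_double_selection(y, d, X, alpha=0.1):
    """Lasso y ~ X and d ~ X, then OLS of y on d plus the union of selected controls."""
    sel_y = np.flatnonzero(Lasso(alpha=alpha).fit(X, y).coef_)  # controls predictive of the outcome
    sel_d = np.flatnonzero(Lasso(alpha=alpha).fit(X, d).coef_)  # controls predictive of the target regressor
    keep = np.union1d(sel_y, sel_d)
    Z = np.column_stack([np.ones(len(y)), d, X[:, keep]])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return beta[1]  # coefficient on the target regressor d

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.normal(size=(n, p))
d = X[:, 0] + rng.normal(size=n)
y = 0.5 * d + X[:, 0] + rng.normal(size=n)
print(post_double_selection(y, d, X))
```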
|
Belloni et al. (2014b) developed an approach for valid inference for one parameter. In high-dimensional additive models, the major technical challenge arises from the need to conduct inference for the potentially high-dimensional vector $\theta_0$. In other words, the number of elements of $\theta_0$ for which we would like to construct a valid confidence region is allowed to grow with the sample size. Each component of $\theta_0$, $\theta_{0,l}$ with $l=1,\ldots,d_1$, is determined by an orthogonal moment condition, and we will demonstrate how uniformly valid confidence bands can be constructed by embedding the problem into a high-dimensional Z-estimation framework. Finally, we illustrate how the estimation of $\theta_0$ can be translated into uniformly valid confidence bands for the target function $f_1\approx\theta_0^{T}g(\cdot)$ using a multiplier bootstrap procedure.
|
A
|
The attentive eye can observe the similarity of our approach to [17]. The fundamental difference (and one of the key features of our method) is that our indices are defined over $T$, meaning that they can provide insights about the impact of input variables across the entire domain of definition of the output.
|
Predicting a quantity over the long time scales that matter for the climate is a hard task, with a great degree of uncertainty involved. Many efforts have been undertaken to model and control this and other uncertainties, such as standardized scenarios of future socio-economic development, called Shared Socio-economic Pathways (SSPs) [22, 30], or the use of model ensembles to tackle the issue of model uncertainty. Given also the relative opaqueness and complexity of IAMs, post-hoc diagnostic methods have been used, for instance with the purpose of performing Global Sensitivity Analysis (GSA). In fact, GSA methods can provide fundamental information to policymakers in terms of the relevance of specific factors for model outputs [17]. Moreover, the specific methodology employed in [4] can detect both main and interaction effects with a very parsimonious experimental design, and can do so in the case of finite changes of the input variables.
|
Some fundamental pieces of knowledge are still missing: given a dynamic phenomenon such as the evolution of $CO_2$ emissions over time, a policymaker is interested in whether the impact of an input factor varies across time, and how. Moreover, given the presence of a model ensemble with different modelling choices, and thus different impacts of identical input factors across different models, a key piece of information to provide to policymakers is whether the evidence provided by the model ensemble is significant, in the sense that it is ‘higher’ than the natural variability of the model ensemble. In this specific setting we do not just want to provide a ‘global’ idea of significance; we also want to explore its temporal sparsity (e.g., one would like to know whether the impact of a specific input variable is significant in a given timeframe but fails to be ‘detectable’ in the model ensemble after a given date). Our aim in the present work is thus threefold: we introduce a way to express sensitivity that accounts for time-varying impacts, we assess the significance of such sensitivities, and we explore the temporal sparsity of that significance.
|
The attentive eye can observe the similarity of our approach to [17]. The fundamental difference (and one of the key features of our method) is that our indices are defined over $T$, meaning that they can provide insights about the impact of input variables across the entire domain of definition of the output.
|
In the presence of an I/O model whose output(s) are not intrinsically deterministic, it is of fundamental importance to compute the mean value of the sensitivity indices introduced in the previous section and to compare their absolute or relative magnitude to the natural variability of the phenomenon, or to the uncertainty introduced by the modelling effort, in order to understand whether the impact of a specific factor is significant with respect to that natural or modelling variability.
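As a rough illustration of this comparison (not the paper's exact procedure), one could, at each time point, contrast the ensemble-mean index with the ensemble variability; the array names, the simulated data, and the two-standard-error rule below are assumptions.

```python
import numpy as np

# indices[m, t]: hypothetical sensitivity index of one input factor,
# for ensemble member m at time t
rng = np.random.default_rng(1)
indices = rng.normal(loc=0.3, scale=0.5, size=(20, 50))

mean_idx = indices.mean(axis=0)                               # ensemble-mean impact over time
se_idx = indices.std(axis=0, ddof=1) / np.sqrt(indices.shape[0])

# Time points where the mean impact stands out from the ensemble variability
significant = np.abs(mean_idx) > 2 * se_idx
print(np.flatnonzero(significant))                            # periods where the effect is 'detectable'
```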
|
D
|
The “if” direction of Theorem 1 readily follows from Theorem 3: when all stationary beliefs have adequate knowledge, a correct action is taken almost surely for any distribution of stationary beliefs, hence $u_{*}(\mu_{0})=u^{*}(\mu_{0})$, and we have adequate learning.
|
Theorem 3 can be used to quantify how a failure of excludability impacts welfare. Proposition SA.2 in Supplementary Appendix SA.2 provides a formal result in this vein. In particular, that result implies a sense in which an environment with “approximate excludability” ensures that, eventually, agents’ ex-ante expected
|
belief convergence. Since expanding observations is compatible with the observational network having multiple components, one cannot expect the social belief to converge even in probability.²⁵ Consider an observational network consisting of two disjoint complete subnetworks: every odd agent observes only all odd predecessors, and symmetrically for even agents. Given any specification in which learning would fail on a complete network—such as the canonical binary state/binary action herding example—there is positive probability of the limit belief among odd agents being different from that among even agents. Furthermore, as already noted, there can be a positive probability that the social belief does not even eventually lie in a neighborhood of the set of stationary beliefs.
|
The conclusion of Theorem 3 would be straightforward if we were assured that agents eventually hold stationary beliefs. However, there are networks (with expanding observations) in which with positive probability the beliefs of an infinite number of agents are bounded away from the set of stationary beliefs; see Example SA.1 in Supplementary Appendix SA.1.
|
Consider normal information. There are full-support priors $\mu$ such that the posterior probability $\mu_{s}(\omega)$ is uniformly bounded away from $1$ across signals $s$ and states $\omega$ (see Supplementary Appendix SA.3 for details).
|
C
|
Instead of WAP, one could compare maximin protocols in terms of their power over a local (to $\theta=0$) alternative space or focus on admissible maximin protocols. In Appendix C.2, we consider a notion of local power with the property that locally most powerful protocols are also admissible when $\lambda=0$. This notion of local power is inspired by the corresponding notions in Section 4 of Romano
|
We consider two notions of optimality: maximin optimality (corresponding to the case where $\lambda=0$) and global optimality (corresponding to the more general case where $\lambda\geq 0$). Accordingly, we say that $r^{*}$ is maximin optimal if
|
Instead of WAP, one could compare maximin protocols in terms of their power over a local (to $\theta=0$) alternative space or focus on admissible maximin protocols. In Appendix C.2, we consider a notion of local power with the property that locally most powerful protocols are also admissible when $\lambda=0$. This notion of local power is inspired by the corresponding notions in Section 4 of Romano
|
Romano (2005b). We show that any globally most powerful protocol is also locally most powerful (and thus admissible if $\lambda=0$) under linearity and normality.
|
Here, we consider the general case where $\lambda\geq 0$ and show that when $\lambda>0$, the planner’s subjective utility from research implies a notion of power. Globally optimal protocols generally depend on both $\lambda$ and the planner’s prior $\pi$. We restrict our attention to the following class of planner’s priors $\Pi$.
|
C
|
Note that the oppositely directed implication of statement (ii) is not necessarily true: the NRM rule defined in Example 2 below satisfies truncation-invariance but violates rank monotonicity.
|
Note that the oppositely directed implication of statement (iii) is not necessarily true: the object-proposing deferred acceptance (OPDA) rule defined in Example 3 satisfies truncation-proofness but violates truncation-invariance.¹² Chen et al. (2024) proved that the OPDA rule satisfies truncation-proofness.
|
Note that the oppositely directed implication of statement (i) is not necessarily true: The immediate acceptance (IA) rule¹⁰ See Abdulkadiroğlu and
|
Sönmez (2003). defined in Example 1 below satisfies truncation-invariance but violates strategy-proofness.¹¹ From the procedure of the immediate acceptance algorithm, one can easily see that the IA rule satisfies truncation-invariance. The IA rule is exactly the so-called Boston mechanism; it is well known that this mechanism is not strategy-proof.
|
Note that the oppositely directed implication of statement (ii) is not necessarily true: the NRM rule defined in Example 2 below satisfies truncation-invariance but violates rank monotonicity.
|
A
|
Panel surveys routinely collect data on an ordinal scale. For example, many nationally representative surveys ask respondents to rate their health or life satisfaction on an ordinal scale.³ One example is the British Household Panel Survey in our empirical application. Others include the U.S. Health and Retirement Study and Medical Expenditure Panel Survey, the Canadian Longitudinal Study on Ageing and the National Longitudinal Survey of Children and Youth, the Australian Longitudinal Study on Women’s Health, the European Union Statistics on Income and Living Conditions, the Survey on Health, Ageing, and Retirement in Europe, among many others. Other examples include test results in longitudinal data sets gathered for studying education.
|
We are interested in regression models for ordinal outcomes that allow for lagged dependent variables as well as fixed effects. In the model that we propose, the ordered outcome depends on a fixed effect, a lagged dependent variable, regressors, and a logistic error term. We study identification and estimation of the finite-dimensional parameters in this model when only a small number ($\geq 4$) of time periods is available.
|
To do this, we follow the functional differencing approach in Bonhomme (2012) to obtain moment conditions for the finite-dimensional parameters in this model, namely the autoregressive parameters (one for each level of the lagged dependent variable), the threshold parameters in the underlying latent variable formulation, and the regression coefficients. Our approach is closely related to Honoré and Weidner (2020), and can be seen as the extension of their method to the case of an ordered response variable.
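A plausible latent-variable rendering of this model, consistent with the parameters just listed but written in our own notation (the paper's exact specification may differ), is:

```latex
Y^{*}_{it} = \alpha_i + \sum_{j} \gamma_j\,\mathbbm{1}\{Y_{i,t-1}=j\} + X_{it}'\beta + \varepsilon_{it},
\qquad \varepsilon_{it} \sim \mathrm{logistic},
\qquad
Y_{it} = k \;\Longleftrightarrow\; \lambda_{k-1} < Y^{*}_{it} \leq \lambda_{k},
```

where $\alpha_i$ is the fixed effect, the $\gamma_j$ are the autoregressive parameters (one per level of the lagged outcome), $\beta$ collects the regression coefficients, and the $\lambda_k$ are the threshold parameters.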
|
For other types of outcome variables (continuous outcomes in linear models, binary and multinomial outcomes), results for regression models with fixed effects and lagged dependent variables are already available. Such results are of great importance for applied practice, as they allow researchers to distinguish unobserved heterogeneity from state dependence, and to control for both when estimating the effect of regressors. The demand for such methods is evidenced by the popularity of existing approaches for the linear model, such as those proposed by Arellano and Bond (1991) and Blundell and Bond (1998). In contrast, for ordinal outcomes, almost no results are available.
|
This paper contributes to the literature on dynamic ordered logit models. We are aware of only one paper that studies a fixed-$T$ version of this model while allowing for fixed effects. The approach in Muris, Raposo, and Vandoros (2023) builds on methods for dynamic binary choice models in Honoré and Kyriazidou (2000) by restricting how past values of the dependent variable enter the model. In particular, in Muris, Raposo, and Vandoros (2023), the lagged dependent variable $Y_{i,t-1}$ enters the model only via $\mathbbm{1}\{Y_{i,t-1}\geq k\}$ for some known $k$. We do not impose such a restriction, and allow the effect of $Y_{i,t-1}$ to vary freely with its level.
|
A
|
First, we introduce the model, discuss how the model specification relates to the existing causal discovery literature, and derive testable implications. Second, we present the conditional independence test that is a central component of our testing procedure. Third, we describe the implementation of the test.
|
The main idea of this paper is to study the conditions under which this model is distinguishable from its reversed analog without relying on exogenous information. The reverse model, in which $Y$ causes $X$, again in the presence of the vector of covariates $W$, is defined as
|
Consider a model in which an observable continuous scalar variable $X$ causes an observable scalar variable $Y$ in the presence of a vector of covariates $W$ (which we refer to in the following as the model):
|
We show how to test for reverse causality between two variables $X$ and $Y$ in the presence of additional covariates $W$.
|
We extend their work by allowing for additional control variables $W$ and also considering heteroskedasticity of the error term with respect to these covariates. Intuitively, nonlinearity of $h$ ensures that the error terms in the reverse model are not independent of the regressor, which gives the test its power. While linear models are used in many economic applications, they are typically seen as approximations of nonlinear relationships between the dependent variable and the regressors.
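A crude, hypothetical sketch of this intuition (not the authors' test statistic): fit a flexible regression in each direction, form out-of-fold residuals, and check a simple moment implied by independence of the residual and the regressor. The gradient-boosting fit and the squared-residual correlation check are illustrative choices, not the paper's.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_predict

def dependence_score(regressor, resid):
    """Correlation between the regressor and squared residuals (roughly zero under independence)."""
    return abs(np.corrcoef(regressor, resid ** 2)[0, 1])

def out_of_fold_residuals(target, regressor, W):
    Z = np.column_stack([regressor, W])
    fitted = cross_val_predict(GradientBoostingRegressor(random_state=0), Z, target, cv=5)
    return target - fitted

rng = np.random.default_rng(2)
n = 2000
W = rng.normal(size=(n, 2))
X = W[:, 0] + rng.normal(size=n)
Y = np.sin(2 * X) + 0.5 * W[:, 1] + rng.normal(size=n)  # nonlinear h, independent error

print("causal direction :", dependence_score(X, out_of_fold_residuals(Y, X, W)))
print("reverse direction:", dependence_score(Y, out_of_fold_residuals(X, Y, W)))
```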
|
B
|
The per capita GDP of the Yangzi Delta, the most developed region in China, was roughly at par with that of the Netherlands, the most developed region in Europe.
|
The per capita GDP of the Yangzi Delta, the most developed region in China, was roughly at par with that of the Netherlands, the most developed region in Europe.
|
Our model offers the interpretation that per capita income stagnated in the pre-modern era because the labor force was allocated to the agricultural sector, which engaged in subsistence production, rather than to productive activities such as factory work in the manufacturing sector, which generated sustained per capita growth.
|
This claim is supported by recent estimates of GDP per capita, as plotted in Figure 1. The figure shows that Britain’s GDP per capita was similar to that of China before 1750, but diverged after that.
|
this study assumes that per capita GDP was roughly constant before the Industrial Revolution (see also Broadberry et al., 2015).
|
C
|
There is a large literature in behavioral and experimental economics that points toward the importance of various behavioral traits and heterogeneous characteristics of trust and reciprocity in sharing behavior. A large series of studies, including Fehr et al. (1997), Fehr and Gächter (1998, 2000), Camerer (2003), and Cox (2004), has leveraged experimental evidence to highlight the use of trust and reciprocity as devices for contract enforcement and for driving cooperation in markets and sharing games. Throughout these works it is emphasized that reciprocity manifests not only positively, but can also be used to characterize punishment behavior. While trust and reciprocity in these settings often drive efficiency gains, we show through counterfactuals that which characteristics matter most is highly context-dependent; both trust and reciprocity can backfire as tools to promote efficiency, depending on the information structure.
|
These results also agree with other experimental work that has specifically focused on the role of trust in experimental sharing environments. In particular, Glaeser et al. (2000) finds that survey questions about trust (similar to the questions we used to measure trust) are effective at predicting trustworthy behavior but less so at predicting trusting behavior. The distinction between trusting and trustworthy behavior is mirrored in Anderson et al. (2004), who find that certain measures of trust are negatively associated with sharing in public goods experiments.
|
Because there are only three trust questions, the first principal component summarizes most of the information from the trust questionnaire. It places positive weight on the question that involves trust and negative weights on two questions that suggest mistrust. Perhaps surprisingly, this measure of trust is associated with a positive interaction on contribution costs in the baseline, which indicates that individuals who score highly on trust are less altruistic and more careful about where they direct effort in the baseline. This agrees with the results of Glaeser et al. (2000), which suggest that such trust questionnaires predict trustworthy behavior but do not necessarily predict trusting behavior. Further in line with these results is a strong positive interaction of the trust characteristic with generalized reciprocity in the baseline. This suggests that these individuals are trustworthy in that they respond to sharing by others by increasing their own contribution. However, they are less likely to share blindly and trust that others will reciprocate. In the treatment, estimates of the effect of trust are less precise but suggest a reversal of this phenomenon; they trust that others will reciprocate when they know that others will be aware of their sharing behavior. This is captured by the negative estimate of the interaction between trust, the treatment indicator, and contribution costs, together with the positive estimate of the coefficient for the interaction between trust, the treatment indicator, and direct reciprocity. This sheds more light on information as a mechanism driving the mixed results regarding trust and sharing behavior in public goods games, observed in previous work (Anderson et al., 2004).
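For concreteness, here is a sketch of how such a one-dimensional trust score can be extracted from three questionnaire items; the item ordering, the simulated answers, and the standardization step are assumptions for illustration, not the authors' code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# Column 0: an item phrased in terms of trust; columns 1-2: items phrased in terms of mistrust
answers = rng.integers(1, 6, size=(100, 3)).astype(float)

X = StandardScaler().fit_transform(answers)
pca = PCA(n_components=1).fit(X)
scores = pca.transform(X).ravel()                 # one trust score per subject

print(pca.explained_variance_ratio_)              # first component summarizes most of the questionnaire
print(pca.components_)                            # loadings: opposite signs on trust vs. mistrust items (in real data)
```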
|
Finally, we use our structural framework to conduct three counterfactual simulations, each examining the effects of a uniform increase in one of the three principal attributes—trust, overall reciprocity, and positive reciprocity. Consistent with our estimates for the model with individual heterogeneity, an increase in the trust attribute improves key outcomes in the information treatment, but has a mild, negative impact on the baseline condition. Overall reciprocity improves outcomes in the baseline condition but can backfire substantially due to negative reciprocity in the treatment condition. However, a uniform increase in subjects’ positive reciprocity attribute has a substantial positive effect on all key outcomes in both the treatment and baseline conditions.
|
This result agrees with some more recent work examining the role of these characteristics in supporting positive market outcomes. For example, Choi and Storr (2022) finds evidence suggesting that providing reputation systems in experimental markets interacts with preferences primarily by giving participants more information about whom not to trust. Subsequent work by Solimine and Isaac (2023) supports this result and further emphasizes the role of the information in determining the effectiveness of trust in promoting positive market outcomes. Our counterfactual findings involving trust agree with these findings; through the way that trust interacts with preferences for reciprocity and altruism, promoting trust in the community dramatically improves outcomes when subjects are provided with detailed information about others’ behavior. When information is more limited, however, introducing higher levels of trust increases trustworthy behavior by some but may backfire by allowing others to take advantage of this change.
|
D
|
(ii) The statistical error rate of HOPE in (29) is the same as the upper bound for the iterative projection algorithms for estimation of the fixed-rank TFM-tucker, cf. Corollaries 3.1 and 3.2 in Han et al. (2020), which is shown to be minimax optimal.
|
Figure 3: Boxplots of the logarithm of the estimation error for HOPE under experiment configuration II.
|
(ii) The statistical error rate of HOPE in (29) is the same as the upper bound for the iterative projection algorithms for estimation of the fixed-rank TFM-tucker, cf. Corollaries 3.1 and 3.2 in Han et al. (2020), which is shown to be minimax optimal.
|
It follows that HOPE also achieves the minimax rate-optimal estimation error under fixed $r$.
|
controls the level of signal cancellation (see Han et al. (2020) for details). When there is no signal cancellation, $\zeta=0$, the rates of the two procedures are the same. Note that iTIPUP only estimates the loading space, while HOPE provides estimates of the unique loading vectors. The error rate of HOPE is better when $\zeta>0$. This demonstrates that HOPE is able to utilize the specific structure in TFM-cp to achieve more accurate estimation than simply applying the estimation procedures designed for the general TFM-tucker.
|
C
|
In this section, we briefly discuss the strategic aspects of the messaging game induced by an individual elicitation protocol. We formally define two dynamic implementation notions in Section 5.1: implementation in dominant and obviously dominant strategies. Although there is no conceptual innovation in these definitions, it is useful to write them out explicitly in our formalism.
|
We then turn to the special case of the second-price auction rule as an instructive and practically relevant example—we see how maximally contextually private protocols for the second-price auction rule choose a set of agents to protect, and delay asking questions to the protected agents. In Theorem 3, we use the representation theorem (Theorem 2) to derive two maximally contextually private protocols: the ascending-join and the overdescending-join protocols. In the ascending-join protocol, which is a variant of the familiar ascending or “English” auction protocol, the designer begins by conducting an ascending protocol with just two agents. Whenever one agent drops out, another agent “joins” the protocol at the going threshold. The overdescending-join protocol is analogous, and related to the overdescending protocol introduced in Harstad (2018). The ascending-join protocol protects the winner, the overdescending-join protocol protects the losers, and both protocols avoid violations by delaying questions to agents in the protected set as much as possible.
|
In Section 5.2 we consider the incentive properties of the maximally contextually private protocols discussed in Section 4. We first observe that the ascending-join protocol is implementable in obviously dominant strategies. This result is a direct consequence of Li (2017)’s characterization of obvious dominance. Then, we show that the overdescending-join protocol is implementable in dominant strategies. This result strengthens and formalizes an observation in Harstad (2018) that the overdescending protocol is strategyproof.
|
Under the restriction to individual elicitation, protocols induce a well-defined extensive-form game. In Section 5, we “check” the incentive properties of the maximally contextually private protocols for the second-price auction rule described in Section 4. The ascending-join and overdescending-join protocols have implementations in dominant strategies, with the former satisfying the stronger requirement of obvious strategyproofness. This section relies on prior results regarding the strategic properties of personal clock-auctions (Li, 2017).
|
In addition to the ascending-join and overdescending-join protocols, the serial dictatorship also has good privacy properties and incentive guarantees. In particular, we show in Appendix B that the serial dictatorship protocol is contextually private (i.e. it produces no privacy violations) and that the messaging strategies are obviously dominant.
|
B
|
Finally, the study discussed in Section 3.2 is related to Section 4 of Osana (1992). That paper, written in Japanese, discusses not only consumer surplus but also its relationship to the equivalent and compensating variations. The relationship between Stokes’ theorem and these results is also discussed.
|
So, why are all equilibrium prices locally stable in a quasi-linear economy? The answer is obtained from the theory of no-trade equilibria. Balasko (1978, Theorem 1) showed that in a pure exchange economy, any no-trade equilibrium price is locally stable. This result was in fact essentially shown in Kihlstrom et al. (1976, Lemma 1). Namely, they showed that if the initial endowments coincide with the equilibrium allocation, then (11) holds for the corresponding equilibrium price. If the economy is not quasi-linear, (11) may not hold at some equilibrium price, because the income effect that arises from the gap between the initial endowments and the equilibrium allocation has non-negligible power. In a quasi-linear economy, however, the income effect affects only the numeraire good, and when we aggregate the excess demand functions of the consumers, the deviation from (11) is equal to the value of the excess demand function divided by $p_L$ (see Lemma 6 and (18) in Step 2 of the proof of Theorem 1). Thus, this effect is canceled out when the price is an equilibrium price. As a result, the property that holds at the no-trade equilibrium price is restored at any equilibrium price.
|
One virtue of the two-commodity quasi-linear economy is the ability to calculate the change in a consumer’s utility from the aggregate demand curve. That is, such an economy can be described by a partial equilibrium model, and we can calculate the consumer’s surplus instead of working with the utility function directly. It is known that a change in the consumer’s surplus coincides with a change in the sum of the utilities of consumers in a quasi-linear economy with $L=2$. We extend this result to the case in which $L\geq 3$.
|
We have shown that in a quasi-linear economy, the equilibrium price is uniquely determined and is locally stable. Compared with similar results, a feature of this result is that there is no assumption imposed on the excess demand function. Moreover, we have exhibited that in this economy, consumers’ surplus can be defined, and coincides with the change in the sum of utilities.
|
This paper also discusses surplus analysis. As in partial equilibrium theory, the consumer’s surplus can be defined for a quasi-linear economy and can be calculated using only the aggregate demand function. The amount of surplus coincides with the increase in the sum of utilities from trade in this market (Theorem 2). This result may be applicable to a variety of applied research.
|
C
|
If $s\notin D$, the statistician is told the element $z\in Z$ such that $s\in[T_{z}]$. The statistician then has to select an element $j\in I$.
|
Roughly speaking, we consider a zero-sum game between an adversary and a statistician, in which the adversary chooses a deviation and the statistician, after observing the realization $s$, has to guess the deviator if $s\notin D$. A strategy for the statistician in this game is a blame function. We use the minimax theorem to establish that the statistician has a strategy that guarantees a high payoff.
|
The adversary selects an element $i\in I$ (a player in the original problem)
|
The blame function $f$ above correctly identifies the Deviator with probability $1$, regardless of the Deviator’s strategy.
|
Thus, the adversary’s strategy is to select the identity of the deviator $i\in I$ and a strategy for that deviator.
|
D
|
$x \succsim y \;\Leftrightarrow\; f(\bar{p}, u_{f,\bar{p}}(x)) \succsim f(\bar{p}, u_{f,\bar{p}}(y)) \;\Leftrightarrow\; u_{f,\bar{p}}(x) \geq u_{f,\bar{p}}(y),$
|
Steps 8–10 indicate that all of our claims in Proposition 1 are correct. This completes the proof. ■
|
We now complete the preparation for proving Proposition 1. We separate the proof of Proposition 1 into ten steps.
|
Finally, our Proposition 1 says that condition (iii) implies condition (ii). This completes the proof. ■
|
and thus Fact 1 implies that the solution function is locally Lipschitz. This completes the proof. ■
|
A
|
The proposed multivariate extensions of the Lorenz curve in both Taguchi (1972a, 1972b) and Koshevoy and Mosler (1996) relate population proportions to a vector of resource shares. Our proposal differs substantially from these in that it directly relates a specific subset of the population, namely individuals with multivariate rank below $r$, to their share of both resources. Beyond this major conceptual difference, we now investigate properties of our multivariate extension of the Lorenz curve that make it a valuable contribution.
|
The appealing properties of the Lorenz curve are well captured by the formulation given in Gastwirth (1971). In that formulation, the Lorenz curve is the graph of the Lorenz map, and the latter is the cumulative share of individuals below a given rank in the distribution, i.e., the normalized integral of the quantile function. The relation to majorization and the convex order follows immediately, as shown in section C of Marshall et al. (2011). As pointed out by Arnold (2008), this makes the Lorenz ordering an uncontroversial partial inequality ordering of univariate distributions, and most open questions concern the higher dimensional case.
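In symbols, the univariate Lorenz map referred to here is the normalized integral of the quantile function; writing $F^{-1}$ for the quantile function and $\mu$ for the mean, the standard Gastwirth (1971) formulation reads:

```latex
L(p) \;=\; \frac{1}{\mu}\int_{0}^{p} F^{-1}(u)\,\mathrm{d}u, \qquad p \in [0,1].
```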
|
Interpretation. Unlike other multivariate proposals, the Lorenz map shares the interpretation of the traditional Lorenz curve as the cumulative share of resources held by the lowest ranked individuals.
|
Lorenz curve as a CDF. The Lorenz map is a map from $[0,1]^{d}$ to $[0,1]^{d}$. Hence, unlike the traditional scalar Lorenz curve, it cannot be a CDF. However, the Inverse Lorenz Function is by construction the cumulative distribution function of a random vector on $[0,1]^{d}$. This property is not shared by the alternative proposals in the literature.
|
A more successful proposal in that respect is the Lorenz zonoid of Koshevoy and Mosler (1996). Again, take (1.1) in the univariate case as the point of departure. It associates a fraction $p$ of the population with the share of the resource collectively held by the poorest fraction $p$ of the population. Koshevoy and Mosler (1996) eschew the need to order the population by associating with a fraction $p$ of the population the share of resources held by any group of individuals making up a fraction $p$ of the population, poor, rich, or mixed. The lower bound is the share held by the poorest individuals (the traditional Lorenz curve), and the upper bound is the share held by the richest individuals (a reverse Lorenz curve). The Lorenz zonoid is defined in Koshevoy and Mosler (1996) as the collection of all such shares for each fraction of the population. It is a convex region in $[0,1]^{2}$ bounded below by the Lorenz curve and above by the reverse Lorenz curve. More precisely, the Lorenz zonoid is defined as the set of points
|
B
|
$\hat{V}_{\text{eq},3}(\hat{\beta})$
|
Figure 4: We plot the score distributions that are induced by $\beta_{\text{comp}}$, $\beta_{\text{strat}}$, and $\beta_{\text{cap}}$ to maximize $V_{\text{eq},j}(\beta)$ for $j=1,2,3$. We plot a histogram of scores for each agent with a distinct unobservable $(Z_{i},c_{i})$. Agents are color-coded according to low (yellow), medium (orange), and high (pink) relative SES. The selection criterion $\beta_{\text{comp}}$ accepts students with varying SES.
|
Capacity-Aware $\beta_{\text{cap}}$
|
Following Bhattacharya and Dupas (2012), the decision maker runs a randomized controlled trial (RCT) to obtain a model for the conditional average treatment effect (CATE) $\tau(x)=\mathbb{E}\left[Y_{i}(1)-Y_{i}(0)\mid X=x\right]$ and, at deployment, computes an estimate of the CATE for each agent using their observed covariates and assigns treatment to the students with estimated CATE above the $q$-th quantile. Note that students are not strategic in the RCT because treatment assignment is random, but they will be strategic at deployment. In our implementation, we obtain a CATE estimate of the form $\beta_{1}x+\beta_{0}$ by estimating the conditional mean outcomes via linear regression and subtracting the models. We refer to this method’s learned policy as $\beta_{\text{cap}}=\text{Proj}_{\mathcal{B}}(\beta_{1})$, which is a projection of the parameters of the CATE onto the allowed policy class $\mathcal{B}$.
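A minimal sketch of this capacity-aware baseline under simplifying assumptions (one-dimensional covariate, projection step omitted; function names and simulated data are hypothetical, not the paper's implementation):

```python
import numpy as np

def _linear_fit(x, y):
    """OLS of y on [1, x]; returns (intercept, slope)."""
    A = np.column_stack([np.ones(len(x)), x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[0], coef[1]

def cate_threshold_policy(x_rct, a_rct, y_rct, x_deploy, q=0.8):
    """Fit linear outcome models on RCT data, form CATE(x) = beta1*x + beta0,
    and treat deployment agents whose estimated CATE exceeds its q-th quantile."""
    b0_t, b1_t = _linear_fit(x_rct[a_rct == 1], y_rct[a_rct == 1])   # treated outcome model
    b0_c, b1_c = _linear_fit(x_rct[a_rct == 0], y_rct[a_rct == 0])   # control outcome model
    beta0, beta1 = b0_t - b0_c, b1_t - b1_c
    cate = beta1 * x_deploy + beta0
    return cate >= np.quantile(cate, q)

rng = np.random.default_rng(4)
x = rng.normal(size=500)
a = rng.integers(0, 2, size=500)
y = 1.0 + 0.5 * x + a * (0.3 + 0.4 * x) + rng.normal(size=500)
print(cate_threshold_policy(x, a, y, x_deploy=rng.normal(size=200)).mean())
```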
|
Competition-Aware (Policy Gradient) $\beta_{\text{comp}}$
|
B
|
For some $C<\infty$, $P\{E[Y^{2}_{i,g}(a)\mid N_{g},Z_{g}]\leq C \text{ for all } 1\leq i\leq N_{g}\}=1$ for all $a\in\{0,1\}$ and $1\leq g\leq G$.
|
Assumptions 2.2.(a)–(b) formalize the idea that our data consist of an i.i.d. sample of clusters, where the cluster sizes are themselves random and possibly related to potential outcomes. An important implication of these two assumptions for our purposes is that
|
Assumptions 2.2.(e)–(f) impose some mild regularity on the (conditional) moments of the distribution of cluster sizes and potential outcomes, in order to permit the application of relevant laws of large numbers and central limit theorems. Note that Assumption 2.2.(e) does not rule out the possibility of observing arbitrarily large clusters but does place restrictions on the frequency of extremely large realizations. For instance, two consequences of Assumptions 2.2.(a) and (e) are that² The first is an immediate consequence of the law of large numbers and the Continuous Mapping Theorem. The second follows from Lemma S.1.1 in Bai
|
An attractive feature of our framework is that, by virtue of modeling cluster sizes as random, it is straightforward to permit dependence between the cluster size and other features of the cluster, such as the distribution of potential outcomes within the cluster. In this way, our setting departs from other frameworks in the literature on clustered data in which the cluster sizes are treated as deterministic: see, for example, Hansen and
|
We model the distribution of the data described above in two parts: a super-population sampling framework for the clusters and an assignment mechanism which assigns the clusters to treatments. The sampling framework itself can be described in two stages. In the first stage, an i.i.d. sample of $G$ clusters is drawn from a distribution of clusters. In the second stage, a subset of the individual units within each cluster is sampled. A key feature of this framework is that the cluster size $N_g$ is modeled as a random variable in the same way as other cluster characteristics $Z_g$. While the clusters are (ex-ante) identically distributed, we note that they may exhibit heterogeneity in terms of their (ex-post) realizations of $N_g$ and $Z_g$. The second sampling stage allows for settings in which the analyst does not observe all of the units within a cluster. Define $\mathcal{M}_g$ to be the subset of $\{1,\ldots,N_g\}$ corresponding to the observations within the $g$th cluster that are sampled by the researcher. We emphasize that a realization of $\mathcal{M}_g$ is a set whose cardinality we denote by $|\mathcal{M}_g|$, whereas a realization of $N_g$ is a positive integer. For example, in the event that all observations in a cluster are sampled, $\mathcal{M}_g=\{1,\ldots,N_g\}$ and $|\mathcal{M}_g|=N_g$. Once the sample of clusters is realized, the experiment assigns treatments $A^{(G)}:=(A_g:1\leq g\leq G)$ using an assignment rule that stratifies according to baseline covariates $Z_g$ and cluster sizes $N_g$. Formally, denote by $P_G$ the distribution of the observed data
|
A
|
In this paper, we propose a stochastic lookahead policy embedded in a data-driven sequential decision process for determining replenishment order quantities in e-grocery retailing. We aim to investigate to what extent this approach allows a retailer to improve the inventory management process when faced with multiple sources of non-stationary uncertainty, namely stochastic customer demand, shelf lives, and supply shortages, together with a lead time of multiple days and demand that is lost if not served. For this purpose, we represent the determination of replenishment order quantities as solutions of a dynamic stochastic period-review inventory model with lost sales and an expected-cost objective function. In real-world applications, the probability distributions of the inventory level at the beginning of a period and its marginals, such as distributions of demand, spoilage, and supply shortage, are typically unknown and hence need to be estimated. The periodically updated estimates for these distributions form the states in the sequential decision process; the inventory model plays the role of a decision model (see Figure 3). The analysis of data provided by the business partner was carried out in previous studies using descriptive and predictive methods (Ulrich et al., 2021, 2022); the findings are applied in the numerical analyses of this paper. The literature stresses the difficulty of finding an optimal replenishment policy for decision models like the one discussed here. We therefore propose a stochastic lookahead policy that allows us to integrate probabilistic forecasts for the underlying probability distributions into the optimisation process in a dynamic multi-period framework. We thereby demonstrate the feasibility of the integration of the different components of the data-driven sequential decision process (analytics and statistics, modelling and optimisation). In addition, the framework enables us to gain insights into the value of probabilistic information in our environment, not least in order to find some guidance for designing an adequate decision model. Finally, we show that such a framework is applicable to a real-world business environment of e-grocery retailing, potentially to the benefit of the retailer.
|
The decision policy introduced above allows us to explicitly consider the full uncertainty in the inventory management process by incorporating distributional information for the stochastic variables demand, spoilage, and supply shortage when determining replenishment order quantities. In practice, the underlying distributions for the stochastic variables need to be estimated, e.g. from historical data. However, the precision of the estimates of these probability distributions is highly dependent on the quality of the data available to the retailer. To avoid potential inaccuracies and allow for a comprehensive comparison between different policies, in this section, we rely on a simulation-based setting to evaluate the lookahead policy proposed above and to analyse the importance of incorporating probabilistic information when determining replenishment order quantities. Thus, we consider the simplified situation in which the retailer knows the probability distribution for each source of uncertainty (demand, spoilage, supply shortages), while we allow for non-stationarity and define the underlying distributions in accordance with a descriptive analysis of the data available in our business case.
|
For the evaluation of the lookahead policy proposed in this paper, we first test the policy in a simulation-based setting, where we can consider the benefit of incorporating full uncertainty information in isolation, i.e. without the additional noise induced by the need to estimate the relevant probability distributions. After deriving replenishment order quantities based on the newsvendor model and a deterministic approach as a benchmark, we apply the stochastic lookahead policy to the same data set. This allows us to assess the benefit of (1) using this approach in the first place, instead of the myopic newsvendor model, and (2) using probability distributions instead of deterministic expected values for the stochastic variables affecting the replenishment order decision process. In these simulations, we further discuss the sensitivity of the results with respect to the specification of different model parameters. Second, we evaluate our policy in a case study using real-life data from a European e-grocery retailer, with the additional challenge that the stochastic variables’ probability distributions need to be estimated from historical data and vary over time. The data is used to generate probabilistic forecasts which are fed into the stochastic lookahead policy. The practical applicability of our approach is further demonstrated by comparing it to a parametric decision rule used in practice by the e-grocery retailer considered.
|
Evaluating an experimental data set generated in accordance with data provided by our business partner, we can show that our approach yields a replenishment policy that reduces the corresponding inventory management costs compared to the frequently applied newsvendor model. In addition, we analyse the value of explicitly exploiting probabilistic information instead of relying on point forecasts (expected values) in our replenishment decisions. Our results demonstrate that incorporating the full distributional information for all sources of uncertainty can lead to substantial cost reductions (with the amount of savings of course depending on the specific situation). The importance of including distributional information tends to increase with higher asymmetry in cost parameters (i.e. very low or very high service-level targets), as commonly found in e-grocery retailing. Regarding the different sources of uncertainty, the simulation results indicate that the benefit of integrating the probability distributions instead of expected values when determining replenishment order quantities is highest for customer demand. In contrast, the additional contribution of modelling shelf lives and supply shortages by probability distributions here turns out to be marginal but highly dependent on the structure of the underlying probability distributions (see the analyses in the online supplementary material). Finally, in a case study based on a comprehensive data set provided by a European e-grocery retailer, we demonstrate the practical applicability of our approach by comparing the order policy under our approach to a policy used by this company in practice. Considering four different SKUs, we obtain cost savings between 6% and 25% when averaging over six fulfilment centres. From a managerial perspective, the simulation-based analyses as well as the case study suggest that using prescriptive analytics relying on modern computational methods to exploit the considerable amount of data available in e-grocery retailing is beneficial for retailers. In particular, it has the potential to outperform simple parametric inventory management policies designed by experienced human experts as well as myopic policies such as those based on the simple newsvendor model and deterministic approaches based on expected values. In addition to explicitly accounting for all sources of uncertainty, a key advantage of our lookahead policy over simple parametric policies is that it naturally adapts to a changing environment (e.g. induced by dynamic market developments), structural shocks (e.g. the Covid pandemic), and regime shifts due to strategic changes (e.g. an increased focus on sustainability). Furthermore, it easily allows adaptation to the business cases of other companies. Specifically, our sensitivity analyses already provide a generalisation to other cases and present results to be expected in different settings.
|
In our setting with multiple sources of uncertainty (demand, supply, and spoilage), the use of point forecasts reduced average per-period costs for perishable SKUs compared to the more myopic newsvendor model, which addresses only the stochasticity of demand. However, the approach presented in the previous section results in more unfulfilled demand than intended by the strategic service level of the e-grocery retailer. Thus, in the following, we apply the lookahead policy introduced in Section 3.5, evaluating the policy in our simulation setting in detail. As the outcome of any of these policies is highly dependent on the business case, we provide a discussion on the sensitivity of our results with respect to the underlying parameter values, thereby generalising to other inventory management settings, in the online supplementary material.
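For reference, a sketch of the myopic newsvendor benchmark under an assumed Poisson demand and a service-level target; the distributional choice and the numbers are illustrative, not the retailer's actual parameters.

```python
from scipy.stats import poisson

def newsvendor_quantity(mean_demand, service_level):
    """Critical-fractile order quantity: smallest q with P(demand <= q) >= service_level."""
    return int(poisson.ppf(service_level, mean_demand))

# e.g. a high service-level target, as is common in e-grocery retailing
print(newsvendor_quantity(mean_demand=40, service_level=0.97))
```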
|
C
|
Alternatively, the researcher could consider the threshold strategy of first using both datasets, choosing to report this $p$-value if it is below a threshold and, otherwise, choosing the best of the available $p$-values. For $K=2$, this gives three potential $p$-values to choose between. For many such testing problems (for example, testing a regression coefficient in a linear regression), $T_{k}\sim\mathcal{N}(h,1)$, $k=1,2$, approximately, so that the $t$-statistic from the combined samples is $T_{12}\simeq(T_{1}+T_{2})/\sqrt{2}$. This is precisely the same setup asymptotically as in the IV case presented above, so those results apply directly to this problem. As such, we refer to the discussion there rather than re-present the results.
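A small simulation sketch of the threshold strategy described here, under the stated normal approximation $T_{k}\sim\mathcal{N}(h,1)$ and $T_{12}\simeq(T_{1}+T_{2})/\sqrt{2}$; the one-sided p-values and the 0.05 cutoff are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm

def threshold_pvalues(h=0.0, n_sim=100_000, cutoff=0.05, seed=5):
    """Report p from the combined sample if it is below the cutoff;
    otherwise report the best (smallest) of the three available p-values."""
    rng = np.random.default_rng(seed)
    t1, t2 = rng.normal(h, 1, (2, n_sim))
    t12 = (t1 + t2) / np.sqrt(2)
    p1, p2, p12 = (1 - norm.cdf(t) for t in (t1, t2, t12))   # one-sided p-values
    return np.where(p12 < cutoff, p12, np.minimum(np.minimum(p1, p2), p12))

# Under the null (h = 0), the share of reported p-values below 0.05
# exceeds the 5% expected without p-hacking.
print((threshold_pvalues(h=0.0) < 0.05).mean())
```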
|
The right-hand side panel in Figure 7 presents the $p$-curves for the minimum case. When $p$-hacking works through taking the minimum $p$-value, as in earlier cases, for $p$-values near commonly used sizes the impact is to move the distributions towards the left, making the $p$-curves fall more steeply. Of interest is what happens at $p=0.5$, where taking the minimum results in a discontinuity (this effect is also apparent in the thresholding case). The reason for this is that choices over the denominator of the $t$-statistic used to test the hypothesis cannot change the sign of the $t$-test. Within each side, the effect is to push the distribution to the left, so this results in a (small) discontinuity at $p=0.5$. This effect will extend to all methods where $p$-hacking is based on searching over different choices of variance-covariance matrices — for example, different choices of estimators, different choices of the number of clusters (as we consider in the Monte Carlo simulations), etc. Figure 7 (right panel) shows that for $h=1,2$, the bound is not reached, and any discontinuity at $p=0.5$ is very small. For $h=0$, the bound is slightly below the $p$-curve after the discontinuity.
|
In order to consider relevant directions of power, we examine two approaches to $p$-hacking in four situations in which opportunities for $p$-hacking in economics and other fields commonly arise. The first is what we refer to as a “threshold” approach, where a researcher targeting a specific threshold stops if the preferred model rejects at this size and conducts a search over alternative specifications if not; the second is simply choosing the best $p$-value from a set of specifications (denoted the “minimum” approach below). We examine four situations where opportunities for $p$-hacking arise: (a) searching across linear regression models with different control variables, (b) searching across different choices of instruments in estimating causal effects, (c) searching across datasets, and (d) searching across bandwidth choices in constructing standard errors in time series regressions.² While (a)–(d) are arguably prevalent in empirical research, there are of course many other approaches to $p$-hacking (see, e.g., Simonsohn et al., 2014; Simonsohn, 2020; McCloskey and Michaillat, 2023, for discussions). From an econometric perspective, this implies that the alternative space of the testing problem is very large.
|
We construct theoretical results for the implied distribution of $p$-values under each approach to $p$-hacking in a simple model. The point of this exercise is twofold: by seeing exactly how $p$-hacking affects the distribution, we can determine the testing method appropriate for detecting it, and we can determine the features that lead to large or small deviations from the distribution of $p$-values when there is no $p$-hacking. (While we focus on the impact of these different types of $p$-hacking on the shape of the $p$-curve and the power of tests for detecting $p$-hacking, such explicit models of $p$-hacking are also useful in other contexts. For example, McCloskey and Michaillat (2023) use a model of $p$-hacking to construct critical values that are robust to $p$-hacking.) We then examine extensions of these cases in Monte Carlo analyses.
|
In time series regression, sums of random variables such as means or regression coefficients are standardized by an estimate of the spectral density of the relevant series at frequency zero. A number of estimators exist; the most popular in practice is a nonparametric estimator that takes a weighted average of covariances of the data. With this method, researchers are confronted with a choice of the bandwidth for estimation. Different bandwidth choices allow for multiple chances at constructing $p$-values, hence allowing for the potential for $p$-hacking.
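As an illustration of the bandwidth choice described here, the sketch below computes one p-value per lag-truncation choice using Newey–West (HAC) standard errors. The data-generating process, the bandwidth grid, and the use of statsmodels are illustrative assumptions, not the paper's setup.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Illustrative persistent regressor and outcome (not the paper's data).
n = 200
x = np.cumsum(rng.normal(size=n)) * 0.1
y = 0.2 * x + rng.normal(size=n)
X = sm.add_constant(x)

# One p-value per bandwidth (maxlags) choice: searching over these
# bandwidths is the p-hacking opportunity described in the text.
for lags in [1, 4, 8, 12]:
    res = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": lags})
    print(f"maxlags={lags:2d}  p-value on slope: {res.pvalues[1]:.3f}")
```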
|
D
|
Renewable energy consumption (% of total final energy consumption): Renewable energy consumption is the share of renewable energy in total final energy consumption. Source: World Bank WDI.
|
Fossil fuel energy consumption (% of total): Fossil fuel comprises coal, oil, petroleum, and natural gas products. Source: World Bank WDI.
|
Renewable energy consumption (% of total final energy consumption): Renewable energy consumption is the share of renewable energy in total final energy consumption. Source: World Bank WDI.
|
Renewable energy consumption (% of total final energy consumption): Renewable energy consumption is the share of renewable energy in total final energy consumption. Source: World Bank WDI.
|
Fossil fuel energy consumption (% of total): Fossil fuel comprises coal, oil, petroleum, and natural gas products. Source: World Bank WDI.
|
A
|
We also evaluate each model based on a longer-term, 5-year prediction window (1985–1989). In this case, each state will have five prediction errors, one for each post-treatment period. For the longer-term predictions, we calculate mean squared error based on the prediction errors in each pseudo-treated state over each post-treatment year.
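A minimal sketch of this error calculation, assuming the outcome and prediction for one pseudo-treated state are stored as arrays over the five post-treatment years; the names and numbers are illustrative.

```python
import numpy as np

def post_treatment_rmse(actual, predicted):
    """RMSE over the post-treatment years for one pseudo-treated state.

    `actual` and `predicted` are 1-D arrays of the outcome in, e.g.,
    the five post-treatment years 1985-1989 (illustrative inputs).
    """
    errors = np.asarray(actual) - np.asarray(predicted)
    return float(np.sqrt(np.mean(errors ** 2)))

# Example: five post-treatment observations and predictions for one state.
actual = np.array([110.0, 108.0, 105.0, 104.0, 101.0])
predicted = np.array([112.0, 109.5, 104.0, 106.0, 99.0])
print(post_treatment_rmse(actual, predicted))
```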
|
In our first exercise, we exclude California (the true treated state). Instead, we assume that one other state from the control group (the “pseudo-treated” state) has been treated with a cigarette sales tax in 1989. We mask the post-1988 cigarette sales of the pseudo-treated state and apply each of the alternative causal inference methods. For each method, we assess the prediction error for the (actually untreated) outcome in the pseudo-treated state in the post-treatment period. This exercise will yield a valid assessment of the prediction errors for California in 1989 to the extent that, for each method we consider, the 1989 prediction error for California is drawn from the same distribution as the 1989 prediction error for the control states.
|
Because the out-of-sample prediction error determines the accuracy of the estimated treatment effect, we compare the various estimation methods along this dimension. The model predictions are visualized in Figure 3, and the resulting distribution of prediction errors is summarized in Table 1. Among the methods we consider, SyNBEATS yields the most accurate prediction, with an RMSE of 3.59, a 54% improvement over the second-best alternative, SC. Similarly, for longer-term predictions, SyNBEATS yields the best performance, with an RMSE of 8.17, a 27% improvement over SC as the second-best alternative.
|
To compare SyNBEATS with MC and SDID, we replicate the analyses in the previous section. The results are presented in Table A.3. SyNBEATS dramatically outperforms MC in each analysis we consider with the datasets corresponding to Proposition 99 and the German Reunification. With respect to SDID, the results are more nuanced: SyNBEATS performs better in 6 out of the 8 analyses we consider using these two datasets, but SDID outperforms SyNBEATS in the remaining two. Notably, the performance gains of SyNBEATS compared to SDID are smaller than with the other estimators we consider. Finally, in predicting stock returns, SyNBEATS yields the lowest RMSE, but all three of the estimators yield comparable performance – again, consistent with the hypothesis that time series forecasts are unlikely to greatly improve the predictive power in this context.
|
As shown in Table 1, SyNBEATS outperforms the other traditional estimators in short-term predictions, improving the RMSE by 31% compared to the second-best alternative (SC). To further facilitate a direct comparison of short- to long-term predictions for each estimator, in Figure 4 we contrast predictions in the first year after treatment to those obtained in the fifth year after treatment. Unsurprisingly, all estimators perform worse for longer-term predictions, with the performance difference between SyNBEATS and SC closing as well. Focusing on the full five-year window, SyNBEATS slightly outperforms SC.
|
B
|
Beyond the work of Bojinov et al. (2023), we are not aware of prior studies of switchback experiments that consider using
|
The regular switchback estimator takes data from a regular switchback along with an (optional) burn-in
|
Our first result is an error decomposition for regular switchback estimators under our geometric mixing assumptions.
|
is on and the average outcome when treatment is off. Under our model, we show that this standard switchback is severely
|
induced by their modeling assumptions. We note that in our setting, i.e., under Assumptions 1 and 2,
|
B
|
We favour FA-LP by including the true number of six factors, estimated by principal components from the 120 variables, and estimate the
|
that the FFR rises by one on impact, as opposed to a size of one standard deviation, as is done by Bernanke et al. (2005). We implement this change to
|
In Section 3.1, we compare our proposed method, with the parameter of interest left unpenalized, to the standard desparsified lasso in a sparse structural VAR. In Section 3.2, we study our proposed method in an empirically calibrated DFM.
|
As this method matches the true DGP closely, we expect this benchmark to be a highly competitive standard.
|
A simple alteration of the desparsified lasso that leaves this parameter unpenalized thus brings the coverage rates much closer to the nominal level. Note that the standard desparsified lasso has coverage exceeding that of our proposed estimator at longer horizons; this is because the true impulse response becomes close to zero, where the lasso’s bias towards zero is beneficial. This does not run counter to our conclusion, as we believe more uniform coverage over different parameter values is desirable.
|
C
|
The anchoring effect refers to “a systematic influence of initially presented numerical values on subsequent judgments of uncertain quantities,” where the judgement is biased toward the anchor (Teovanović, 2019). The anchoring effect has been replicated across a variety of contexts, as I discuss in Sect. 1.1, including judgements involving money and anchors established by government policy. Given the prevalence of the anchoring effect, one would expect to find an anchoring bias generated by one of the most controversial figures in the economy: the minimum wage.
|
Since determining what wage to offer to employees is a complex judgement, employers may use the minimum wage as a convenient reference point upon which to base their offers. Due to the difficulty of conducting controlled experiments with employers, I seek to answer a related question: does the minimum wage function as an anchor for what people perceive to be a fair wage? Although the average person does not engage in wage determination, public discourse surrounding the fairness of wages can influence the economy through the political process, meaning that effects of minimum wage on perceptions of fairness can have broad implications. Thus, I ask human subjects on the crowdsourcing platform Prolific.co as well as an AI bot, specifically a version of OpenAI’s bot GPT-3 (GPT-3 (2022)), to determine the fair wage for a worker given their job description and a minimum wage.
|
I demonstrate that the minimum wage functions as an anchor for what Prolific workers consider a fair wage: for numerical values of the minimum wage ranging from $5 to $15, the perceived fair wage shifts towards the minimum wage, thus establishing its role as an anchor (Fig. 1 and Table 1). I replicate this result for a second job description, finding that the effect holds even for jobs where wages are supplemented by tips.
|
The main hypothesis is that minimum wage acts as an anchor on what is considered a fair wage for a job. That is, for a job description, the average response for what is considered a fair wage changes depending on whether it is conditioned on a value of the minimum wage $m$, and in particular it shifts towards that value $m$. As the minimum wage is decreased, I expect the wage that is deemed fair to decrease correspondingly. Next, I formalize this question into a statistical test.
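One simple way such a test could be formalized, purely as an illustrative sketch: compare mean fair-wage responses across two minimum-wage anchors with a one-sided two-sample t-test. The data below are made up, and the paper's actual test may differ.

```python
import numpy as np
from scipy import stats

# Illustrative responses (in $/hr) to the same job description under two
# minimum-wage anchors; the numbers are made up for this sketch.
fair_wage_anchor_5 = np.array([12.0, 11.5, 13.0, 10.0, 12.5, 11.0])
fair_wage_anchor_15 = np.array([15.0, 16.0, 14.5, 15.5, 17.0, 14.0])

# One-sided test of the anchoring hypothesis: responses conditioned on the
# higher anchor have a higher mean perceived fair wage.
t_stat, p_two_sided = stats.ttest_ind(fair_wage_anchor_15, fair_wage_anchor_5)
p_one_sided = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2
print(f"t = {t_stat:.2f}, one-sided p = {p_one_sided:.4f}")
```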
|
Given the results established in this paper, how should governments regulate labor markets through minimum wages? One interpretation is that as long as the minimum wage is below the perceived fair wage, the minimum wage results in decreases in the judgements of perception of fairness, meaning that current minimum wages in the United States, which range from $7.25/hr to around $15/hr, may do more harm than good. However, one might respond that perceptions of fairness are by no means the only determinant of wages, and that many people are paid wages significantly below what is considered fair according to this study’s respondents. Thus, some may argue that the minimum wage should be increased to avoid an anchoring effect in the downward direction. Although this research does not resolve the question of how the government should set the minimum wage, it adds a new dimension by demonstrating how psychological phenomena can factor into the effects of policy.
|
A
|
$\frac{2}{3}-\alpha = 2\alpha-1$
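A short worked step, under the assumption that this is the indifference condition whose solution gives player 2's calling probability; solving it yields the value 5/9 reported below.

```latex
\frac{2}{3}-\alpha = 2\alpha-1
\;\Longrightarrow\;
\frac{2}{3}+1 = 3\alpha
\;\Longrightarrow\;
\alpha = \frac{5}{9}.
```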
|
The OPE solution can be motivated intuitively by the following reasoning. First, note that if player 1 bets the optimal size of 2 with a winning hand against an equilibrium strategy (which calls a bet of 2 with probability $\frac{1}{3}$), expected payoff is
|
In the no-limit clairvoyance game [2], player 1 is dealt a winning hand (W) and a losing hand (L) each with probability $\frac{1}{2}$. (While player 2 is not explicitly dealt a “hand,” we can view player 2 as always being dealt a medium-strength hand that wins against a losing hand and loses to a winning hand.) Both players have initial chip stacks of size $n$, and they both ante $0.50 (creating an initial pot of $1). P1 is allowed to bet any integral amount $x\in[0,n]$ (a bet of 0 is called a check). (In the original formulation of the no-limit clairvoyance game [2], player 1 is allowed to bet any real value in $[0,n]$, making the game a continuous game, since player 1’s pure strategy space is infinite. For simplicity we consider the discrete game where player 1 is restricted to only betting integer values, though much of our equilibrium analysis will still apply for the continuous version as well.) Then P2 is allowed to call or fold (but not raise). This game clearly falls into the class of one-step extensive-form imperfect-information games. The game is small enough that its solution can be computed analytically (even for the continuous version) [2].
|
So the OPE is the unique equilibrium where player 1 loses the same amount of expected payoff with both types of mistakes (betting 1 with a winning hand and
|
According to our above analysis, the unique Nash equilibrium strategy for player 1 is to bet 2 with probability 1 with a winning hand, to bet 2 with probability $\frac{2}{3}$ with a losing hand, and to check with probability $\frac{1}{3}$ with a losing hand. The Nash equilibrium strategies for player 2 are to call a bet of 2 with probability $\frac{1}{3}$, and to call a bet of 1 with probability in the interval $[\frac{1}{2},\frac{2}{3}]$. As it turns out, the unique trembling-hand perfect equilibrium strategy for player 2 is to call vs. a bet of 1 with probability $\frac{2}{3}$. (Observe that this game explicitly shows that Theorem 1 does not hold in general for extensive-form games, since all of the Nash equilibria in this game satisfy the alternative formulation of trembling-hand perfect equilibrium. To see this, consider the sequence of strategies for player 1 that bet 1 with probability $\epsilon$ with a winning hand and with probability $\frac{\epsilon}{2}$ with a losing hand. This sequence will converge to the unique Nash equilibrium strategy for player 1 as $\epsilon\rightarrow 0$, and furthermore player 2 is indifferent between calling and folding vs. a bet of 1 against all of these strategies, so all of player 2’s Nash equilibrium strategies are best responses. So the equivalent formulation of trembling-hand perfect equilibrium is only valid for simultaneous strategic-form games and does not apply to extensive-form games.) Since this is a one-step extensive-form imperfect-information game, this is also the unique quasi-perfect equilibrium. And since player 2’s strategy is fully mixed, this is also the unique one-sided quasi-perfect equilibrium. However, the unique observable perfect equilibrium strategy for player 2 is to call with probability $\frac{5}{9}$. Interestingly, the OPE corresponds to a different strategy for this game than all the other refinements we have considered, and none of them correspond to the “natural” argument for calling with probability $\frac{1}{2}$ based on an assumption about the typical reasoning of human opponents. The OPE value of $\frac{5}{9}$ corresponds to the solution assuming only that player 1 has bet 1 but that otherwise all players are playing as rationally as possible. Note also that the OPE does not simply correspond to the average of the two interval boundaries, which would be $\frac{7}{12}$.
|
C
|
The adversarial structure of public advocacy provides an additional benefit: the senders, having conflicting goals, cannot coordinate to influence the receiver’s decision. Resilience to collusion is desirable in organizations where informed agents can discuss their intentions before being consulted by the receiver. The following corollary confirms that the protocol in Proposition 4 is robust to non-binding pre-play communication.
|
The proof follows from the observation that there is no profitable coalitional deviation involving two opposed-biased senders. Likewise, the receiver cannot gain from a coalitional deviation because the equilibrium is already efficient. Therefore, the equilibrium in Proposition 4 is strong (Aumann, 1959) and coalition-proof (Bernheim et al., 1987).
|
Equilibrium.— The equilibrium concept is the perfect Bayesian equilibrium (PBE). To test for the protocols’ robustness against collusion, I use the two related concepts of strong Nash equilibrium (Aumann, 1959) and coalition-proof Nash equilibrium (Bernheim et al., 1987). An equilibrium is strong if no coalition of players can jointly deviate so that all players in the coalition get strictly better
|
Proposition 4 provides an equilibrium characterization, which allows us to understand the mechanism supporting truthful reporting on the equilibrium path. The key to efficiency in public advocacy lies in how the receiver allocates the burden of proof between the two senders. (Since the receiver fully learns the state after sequentially consulting two opposed-biased senders, efficiency can be achieved by public advocacy protocols with $N\geq 2$ senders. The focus on $N=2$, based on a minimality principle, is therefore without loss of generality.) Beliefs must be consistent with Bayes’ rule when senders deliver identical reports in a truthful equilibrium. In these cases, the receiver always follows the senders’ recommendations. By contrast, beliefs are free in all those cases where senders disagree. The construction of suitable off-path beliefs is crucial in sustaining truthful equilibria.
|
payoffs; it is coalition-proof when resilient against those coalitional deviations that are self-enforcing. (A coalition is self-enforcing if there is no proper sub-coalition that, taking as fixed the action of its complement, can agree to deviate from the deviation in a way that makes all of its members better off.) The type of group deviations considered by the notion of coalition-proofness is consistent with the model because it preserves its non-cooperative nature. For a formal definition of strong and coalition-proof Nash equilibrium, see Aumann (1959) and Bernheim et al. (1987), respectively. For a textbook definition of perfect Bayesian equilibrium, see Fudenberg and Tirole (1991). I refer to equilibria where senders always report truthfully as truthful, and to equilibria where the receiver always learns the state as fully revealing.
|
A
|
The generalized FAS is thus $[0.5,1]$ and does not contain $\beta$.
|
Because $Z_2$ is here an endogenous explanatory variable, and because
|
that are themselves endogenous explanatory variables with $\gamma_{\ell}\alpha_{\ell}\neq 0$.
|
where $Z_2$ violates the exclusion assumption and $Z_1$ and
|
of $Z_1$ and $Z_2$, only identifies $\beta$ when $Z_2$ is
|
A
|
Note: The figure shows histograms of (real) market capitalization (in billions of U.S. dollars) of all the firms in Panel (a), for the firms whose real market capitalization is between the 5th ($11.6 million) and 95th ($6.2 billion) percentiles in Panel (b), and for the firms whose real market capitalization is below the 95th percentile in Panel (c).
|
A recent study closely related to ours is Singh et al. (2022), which uses an event study methodology to evaluate the stock market’s reaction to clinical trial announcements in the pharmaceutical industry. They allow for rich heterogeneity across drugs and correlate abnormal returns with clinical trial results, providing an important insight into characteristics that increase the stock market’s valuation of an experimental drug. However, they only compare the values (e.g., a drug with a better safety profile is commercially valuable) and do not estimate their magnitudes, which is one of our main objectives. In addition, our approach can be generalized by including more types of announcements and disease heterogeneity.
|
Given the pronounced right skewness in the firm size distribution, we estimate the value of drugs after removing outlier firms to ensure that extreme cases do not drive our results. This decision is motivated by our discussion of the market capitalization distribution in Section 3.3, which suggests that large and small firms may have fundamentally different characteristics.
|
Heterogeneity between small and large firms is crucial for our approach. First, large firms may have more expertise and resources for conducting clinical development, potentially increasing their chances of success and leading to heterogeneity in transition probabilities. Second, for the same reasons, large firms may conduct their clinical trials faster, leading to differences in the time investors wait to realize their returns. Third, large firms may select and develop certain types of drugs (Cockburn and Henderson, 2001; Krieger et al., 2022). Finally, heterogeneity in market capitalization implies that an announcement about a drug with a specific value would result in a different percentage change in the price of a single stock, depending on the firm’s size.
|
Our estimates suggest several important areas for future research. First, our approach excludes drugs developed by large pharmaceutical firms with a market valuation above the 95th percentile of the firm size distribution. To the extent that these firms develop different types of drugs, our approach fails to capture those drugs. To relax this assumption, we could consider large firms separately and keep track of announcements about acquired drugs.
|
C
|
The cost of a verification protocol is $\log_{2}|\mathcal{C}|$.
|
However, the goal of these protocols is different from that of traditional nondeterministic protocols in computer science: the protocols of Section 5 only aim to describe a matching to the applicants, not to verify that the matching is correct.
|
The (concurrent) representation complexity of a mechanism $f$ is the minimum of the costs of all concurrent representation protocols for $f$.
|
Thus, the verification complexity of $\mathsf{TTC}$ and of any stable matching mechanism is $\Omega(|\mathcal{A}|)$.
|
The verification complexity of a mechanism $f$ is the minimum of the costs of all verification protocols for $f$.
|
D
|
15 voters in all, with 3 experts: $N=15$, $K=3$. The two treatments
|
Table 1: $p=0.7$, $F(q)$ Uniform over $[0.5,0.7]$
|
In all experiments, we set $\pi=0.5$, $p=0.7$, and $F(q)$ Uniform
|
Table 2: $p=0.7$, $F(q)$ Uniform over $[0.5,0.7]$
|
With $p=0.7$ and $q$ uniform over $[0.5, 0.7]$, we have verified
|
A
|
In scenario (ii), illustrated by Figure 4, related variety is just slightly higher, and the model still accommodates re-dispersion. However, the re-dispersion process is not smooth – the economy suddenly jumps to symmetric dispersion from a fairly asymmetric equilibrium spatial distribution.
|
In scenario (i), shown in Figure 3, related variety is such that within-region interaction is relatively less important ($b=0.33$). For a low freeness of trade, symmetric dispersion is stable because firms wish to avoid the burden of very costly transportation when supplying farmers from full agglomeration in a single region. As $\phi$ increases, the economy initially agglomerates, but then re-disperses as $\phi$ increases further. This re-dispersion occurs because, for very high economic integration, firms find it profitable to relocate to the less industrialized region in order to benefit from the pool of scientists in the more agglomerated region, which generates a higher chance of innovation and thus higher expected profits. Notably, the turning point in the agglomeration process happens before industry reaches full agglomeration in a single region, as in Pflüger and Südekum (2008). However, contrary to the latter, our model does not predict full agglomeration in the entire parameter range of economic integration when related variety is low enough. Re-dispersion in scenario (i) is more akin to the geographical economic models of vertical linkages between upstream and downstream firms by Krugman and Venables (1995), Venables (1996) and Puga (1999). However, in these models, re-dispersion is smooth altogether and occurs when workers are inter-regionally immobile and firms become too sensitive to regional cost differentials when economic integration is very high.
|
The case of high related variety, $b\in(\frac{1}{2},1)$, is much less diversified and can be accounted for by resorting to a subset of the pictures from Figure 1. The story as economic integration increases is as follows. For a very low trade freeness, symmetric dispersion is the only stable equilibrium, as in Figure 1(a). For an intermediate value of $\phi$, one asymmetric dispersion equilibrium arises, which is the only stable one and becomes more asymmetric as $\phi$ increases further. This is akin to the picture in Figure 1(b). Finally, the asymmetric dispersion equilibrium gives rise to stable full agglomeration in one single region once $\phi$ becomes very high. This is illustrated in Figure 1(c). In other words, when intra-regional interaction is relatively more important, knowledge spillovers are more localized and thus constitute an additional agglomeration force.
|
In scenario (vi) we illustrate the qualitative change in the spatial structure of the economy as $\phi$ increases for $b>1/2$, but with a higher $\lambda$, since, with the parameter values of the previous scenario, agglomeration would be ubiquitously stable (and hence uninteresting) for higher values of $b$. In Figure 8, we can observe a supercritical pitchfork bifurcation, as in the model by Pflüger (2004), where there is no innovation. That is, for low levels of economic integration, symmetric dispersion is stable. As $\phi$ increases, one region smoothly becomes more and more industrialized en route to full agglomeration, whereby that region becomes a core.
|
Scenario (iii) also just slightly increases related variety compared to the previous scenario (see Figure 5), and the story of spatial outcomes as economic integration increases is very similar, except that, in this case, full agglomeration is stable for a small range of intermediate values of $\phi$, as predicted by Proposition 3. The parametrization here also corresponds to that illustrated in Figure 1.
|
D
|
In economics, Kleiner, Moldovanu and Strack (2021) characterize the extreme points of monotone functions on $[0,1]$ that majorize (or are majorized by) some given monotone function, which is equivalent to the set of probability measures that dominate (or are dominated by) a given probability measure in the convex order. They then apply this characterization to various economic settings, including mechanism design, two-sided matching, mean-based persuasion, and delegation. See also Arieli, Babichenko, Smorodinsky and
|
Yamashita (2023). Several recent papers in economics also exploit properties of extreme points to derive economic implications. See, for instance, Bergemann et al. (2015) and Lipnowski and Mathevet (2018). Candogan and Strack (2023) and Nikzad (2023)
|
In economics, Kleiner, Moldovanu and Strack (2021) characterize the extreme points of monotone functions on $[0,1]$ that majorize (or are majorized by) some given monotone function, which is equivalent to the set of probability measures that dominate (or are dominated by) a given probability measure in the convex order. They then apply this characterization to various economic settings, including mechanism design, two-sided matching, mean-based persuasion, and delegation. See also Arieli, Babichenko, Smorodinsky and
|
We first apply Theorem 2 and Theorem 3 to gerrymandering. Existing economic theory on gerrymandering has primarily focused on optimal redistricting or fair redistricting mechanisms (e.g., Owen and Grofman 1988; Friedman and Holden 2008; Gul and Pesendorfer 2010; Pegden et al. 2017; Ely 2019; Friedman and Holden 2020; Kolotilin and Wolitzky 2023). Another fundamental question is the scope of gerrymandering’s impact on a legislature. If any electoral map can be drawn, what kinds of legislatures can be created? In other words, what are the “limits of gerrymandering”?
|
When $|\mathrm{supp}(F)|>2$, this “concavification” method requires finding the concave closure of a multivariate function, which is known to be computationally challenging, especially when $|\mathrm{supp}(F)|=\infty$. (A recent elegant contribution by Kolotilin, Corrao and Wolitzky (2023) also provides a tractable method that simplifies these persuasion problems and more using optimal transport.) For tractability, many papers have restricted attention to preferences where the only payoff-relevant statistic of a posterior is its mean (i.e., $\hat{v}(G)$ is measurable with respect to $\mathbb{E}_{G}[x]$). See, for example, Gentzkow and Kamenica (2016), Kolotilin, Li, Mylovanov and
|
A
|
Comparison of decarbonization strategies. Figure 3 shows a comparison of four exemplary decarbonization strategies: The ‘Remove largest emitters first’ strategy, which aims to reach the highest emission savings with a minimum number of firms to be removed from production, the
|
(C) In the ‘Remove least-risky firms first (employment)’ strategy, firms are removed according to their ascending risk of triggering job loss, i.e., EW-ESRI; firms that are considered least systemically relevant for the production network are removed first.
|
‘Remove least-employees firms first’ strategy that aims at minimum job loss on the individual firm level,
|
The ‘Remove least-employees firms first’ strategy, which aims at minimizing job loss at each individual firm (shown in Fig. 3B), manages to keep expected job and output loss at low levels for the initially removed firms. But since this strategy focuses on job loss at the individual firm level, it fails to anticipate a highly systemically relevant firm whose closure results in high levels of expected job and output loss. Since CO2 emissions are not explicitly considered in this strategy, emission savings rise only incrementally as additional firms with comparatively low numbers of employees are removed. To reduce CO2 emissions by 17.35%, this strategy puts 32.24% of output and 28.41% of jobs at risk, while removing 102 firms from the production network. This strategy therefore fails to secure jobs and economic output while delivering its emission savings.
|
(B) In the ‘Remove least-employees firms first’ strategy, firms are closed according to their ascending numbers of employees.
|
B
|
9.0.0) (Gurobi Optimization, 2020). The source code and data generated for the illustrative three-node and the Nordics case studies are openly available at GitHub repositories (Belyak, 2022a, b).
|
To be able to conduct a thorough analysis of the optimal investment strategies and generation levels in this section, we first consider a simplified structure for the case study, hereinafter referred to as the illustrative instance. Then in Section 5, we consider a case study for the Nordic region. The structure of the illustrative instance is presented in Figure 1.
|
Due to the large-scale nature of the case study instance and the consequent computational challenges we have faced, we limited the range of values for each input parameter to a discrete set with two values in the sensitivity analysis. The first represents a possible “low” value and the second a “high” value for the corresponding parameter. We therefore solve the case study instance for every combination of the four input parameters, with each parameter taking one of its two possible values, which leads to solving 16 different instances in total. The values for the parameters are presented in Table 3. It is worth mentioning that the data for the TEB and GEB are presented for an annual time frame. More information on the procedure for generating input parameter values can be found in Appendix D.
|
Let us consider the case of a €1B GEB for each GenCo and compare the differences in optimal investment and generation portfolios when increasing the TEB from €25M to €50M in a perfectly competitive market. The detailed analysis for this example is presented in Figure 5. The arrow in the figure indicates the direction of the power flow. The number with the sign “+” next to it on the left represents the capacity added to the transmission line (in MW), and the number without a sign indicates the total amount of energy transmitted through this line during all time periods (in MWh). The right frame of Figure 5 demonstrates the change in the output factors’ values relative to the values presented on the left. As one can notice, doubling the TEB from €25M to €50M is followed by an increase of 100% in capacity and 99.92% in the flow through the line connecting node 2 (with the highest VRE availability) and node 3 (with the highest demand profile). This transmission capacity expansion motivates GenCos to invest in more VRE generation at node 2, increasing VRE generation by 19.37% while reducing conventional generation at node 3 by 11.88%. Ultimately, this new configuration decreases GenCos’ costs by 3.20% without any decrease in total generation levels. This, in turn, leads to a 4.20% increase in profit and hence an increase in total welfare as well. This illustrates that, even in this stylised example, the model is capable of capturing the key features of the problem regarding the availability of system connectivity and its effect on GenCos’ motivation to expand their VRE generation.
|
In this paper, we study the impact of TSO infrastructure expansion decisions, in combination with carbon taxes and renewable-driven investment incentives, on the optimal generation mix. To examine the impact of renewables-driven policies, we propose a novel bi-level modelling assessment to plan optimal transmission infrastructure expansion. At the lower level, we consider a perfectly competitive energy market comprising GenCos who decide optimal generation levels and their own infrastructure expansion strategy. The upper level consists of a TSO who proactively anticipates the aforementioned decisions and decides the optimal transmission capacity expansion plan. To supplement the TSO decisions with other renewable-driven policies, we introduce carbon taxes and renewable capacity investment incentives in the model. Additionally, we account for variations in GenCos’ and the TSO’s willingness to expand the infrastructure by introducing an upper limit on the generation (GEB) and transmission capacity expansion (TEB) costs. Therefore, as the input parameters for the proposed bi-level model, we consider different values of TEB, GEB, incentives and carbon tax. We examine the proposed modelling approach by applying it to a simple, three-node illustrative case study and a more realistic energy system representing the Nordic and Baltic countries. The output factors explored in the analysis are the optimal total welfare, the share of VRE in the optimal generation mix and the total amount of energy generated.
|
A
|
Our main result (Theorem 1) characterizes all strategy-proof rules on the aforementioned domain. In particular, we find that all strategy-proof rules comply with the following two-step procedure. In the first step, each agent with single-peaked preferences is asked about her best alternative in the range of the rule (her “peak”), and for each profile of reported peaks, at most two alternatives are preselected. If only one alternative is preselected, this is the final outcome. If two alternatives are preselected, these two alternatives necessarily form a pair of contiguous alternatives in the range of the rule. In the second step, each agent with single-dipped preferences is asked about her worst alternative in the range of the rule (her “dip”). Then, taking into account the information of all “peaks” and “dips”, one of the two preselected alternatives is chosen.
|
Jackson (1994) for only single-peaked and the result in Manjunath (2014) for only single-dipped preferences. We also find that the characterized family can be easily implemented in two steps and with little information. Finally, we establish that all strategy-proof rules are also group strategy-proof and show that Pareto efficiency implies a strong restriction on the range of the strategy-proof rules.
|
It can be seen that PE imposes a strong restriction on the range of $f$: only a range equal to the set of feasible alternatives or a range equal to the extreme alternatives of that set is compatible with SP and PE. In particular, any SP rule whose range is equal to the set of feasible alternatives is PE, while only a subset of those SP rules with range equal to the extremes of the set of feasible alternatives is PE. Note that if the set of feasible alternatives does not have a minimum or maximum, then PE can be only achieved when the range of $f$ is the entire set of feasible alternatives.
|
We also show that all strategy-proof rules are group strategy-proof (Theorem 1). Finally, the range of any strategy-proof and Pareto efficient rule is either equal to the set of alternatives or coincides with the “extreme points” of the set of alternatives (Proposition 5).
|
Our main result (Theorem 1) characterizes all strategy-proof rules on the aforementioned domain. In particular, we find that all strategy-proof rules comply with the following two-step procedure. In the first step, each agent with single-peaked preferences is asked about her best alternative in the range of the rule (her “peak”), and for each profile of reported peaks, at most two alternatives are preselected. If only one alternative is preselected, this is the final outcome. If two alternatives are preselected, these two alternatives necessarily form a pair of contiguous alternatives in the range of the rule. In the second step, each agent with single-dipped preferences is asked about her worst alternative in the range of the rule (her “dip”). Then, taking into account the information of all “peaks” and “dips”, one of the two preselected alternatives is chosen.
|
C
|
−0.0 (−0.0 / −0.0)
|
0.01 (0.0 / 0.04)
|
0.04 (0.01 / 0.08)
|
0.01 (0.0 / 0.04)
|
0.367 (0.04 / 0.54)
|
D
|
There exists an equilibrium of the best-value pricing managed campaign with efficient steering that:
|
price that balances the profit off-platform with the relaxation of the showrooming constraint. In particular, the second term in
|
offering their product only at a higher price, each advertiser can weaken the showrooming constraint and extract more surplus on the platform. Consequently, the off-platform prices increase with the number of on-platform shoppers.
|
integrated model that considers how auction mechanisms and data availability jointly determine match formation and surplus extraction both on and off large digital platforms. The auction mechanisms employed by the platform have substantial implications for
|
The argument proceeds by considering the problem of a vertically integrated platform that jointly maximizes the profit of the firms and the platform. The vertically integrated platform can jointly coordinate on-platform and off-platform pricing but still faces the showrooming constraint due to
|
D
|
While using net exports already produces a good fit of the flow matrix with the data, we observe that just by optimizing the weights we can improve the average $R^2$ from 94% to 97% (first vs. second column).
|
In the following we will investigate to what extent further adjustments to the trade data can improve these results.
|
In the following we will present an approach that allows us to use data on trade flows to approximate P flows in a global model.
|
In particular, we have to investigate whether the weighting scheme in the calculation of the trade matrix (see eq. 4) can be improved. In an ideal setting, we would try to calculate the exact P content of each trade relationship that each country has with each other country for each goods category. This, however, is not feasible due to resource constraints and data availability. We can, however, obtain the approximate P content indirectly by estimating the optimal weighting scheme, i.e., the one that results in the flow matrix giving the most likely actual P flows between countries.
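A heavily hedged sketch of what estimating such a weighting scheme could look like, under the assumption that the modeled flow matrix is a weighted sum of per-category trade flow matrices (as in eq. 4) and that non-negative category weights are chosen to best reproduce the observed P flows; all names, shapes, and data are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)

# trade[c, i, j]: trade flow in goods category c from country i to j (illustrative).
# observed[i, j]: observed P flow from country i to j (illustrative).
n_cat, n_countries = 4, 6
trade = rng.random((n_cat, n_countries, n_countries))
true_w = np.array([0.5, 0.1, 0.0, 0.3])
observed = np.tensordot(true_w, trade, axes=1) + 0.01 * rng.random((n_countries, n_countries))

# Stack each category's flow matrix as one column and solve a non-negative
# least-squares problem for the category weights.
A = trade.reshape(n_cat, -1).T          # shape (n_countries**2, n_cat)
b = observed.ravel()
weights, residual = nnls(A, b)
print("estimated category weights:", np.round(weights, 3))
```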
|
The statistics in Table 1 show that these approximate corrections do in fact improve the fit of the model significantly. Interestingly, the fit improves not only with the mining data but also with the use data, which indicates that the correction also has an indirect effect on the column sums of the flow matrix.
|
A
|
It is well known, for matching markets, that there is no stable rule for which truth-telling is a dominant strategy for all agents (see Dubins and Freedman, 1981; Roth, 1982, 1985; Sotomayor, 1996, 2012; Martínez et al., 2004; Manasero and
|
Oviedo, 2022, among others). That is, given the true preferences and a stable rule, at least one agent might benefit from misrepresenting her preferences regardless of what the rest of the agents state. Thus, stable matchings cannot be reached through dominant “truth-telling equilibria”. The stability of equilibrium solutions under true preferences is expected to be affected when agents behave strategically.
|
A stable rule is a function that associates each stated strategy profile with a stable matching under this stated profile. To evaluate such matchings, workers and firms use their true preferences and their true choice functions, respectively. A market and a stable rule induce a matching game. In this game, the set of strategies for each worker is the set of all possible preferences that she could state. Similarly, the set of strategies for each firm is the set of all possible choice functions that it could state.
|
In centralized markets, a board needs to collect the preferences and choice functions of all agents in order to produce a stable matching. Normally, agents are expected to behave strategically by not revealing their true preferences or their true choice functions in order to benefit themselves. When this is the case, the matching market becomes a matching game.
|
The main motivation of this paper is to provide a framework to study the Nash equilibrium solutions of the game induced by stable rules. In a many-to-one matching market with substitutable choice functions, we show that any stable matching rule implements, in Nash equilibrium, the individually rational matchings. Moreover, when only workers play strategically and firms’ choice functions satisfy the law of aggregate demand, we show that the firm-optimal stable rule $\psi^{C}_{F}$ implements the stable correspondence in Nash equilibrium. The analogous result with workers telling the truth and firms acting strategically does not hold. That is, if we consider a game in which the players are only the firms, we cannot implement the stable correspondence in Nash equilibrium. This fact was already noted by Sotomayor (2012), even under a more restrictive model with responsive preferences.
|
A
|
A second strand of the literature aims at assessing what happens to individual life trajectories after a default. This literature has essentially focused on the impact of a harsh default, i.e., either a Chapter 7 or Chapter 13 declaration or a foreclosure. Our work sheds some light on the short- and medium-term consequences of a soft default, an event that is substantially more likely (e.g., 1.5 versus 1 percent in 2010).
|
To this second strand of the literature belong, for example, Collinson et al. [2023], who investigate the impact of eviction on low income households in terms of homelessness, health status, labor market outcomes, and long term residential instability. Similarly, Currie and Tekin [2015] show that foreclosure causes an increase in unscheduled and preventable hospital visits.
|
Albanesi and Nosal [2018] investigate the impact of the 2005 bankruptcy reform, which made it more difficult for individuals to declare either Chapter 13 or Chapter 7. They find that the reform hindered an important channel of financial relief. Diamond et al. [2020] analyze the negative impact of foreclosures on foreclosed-upon homeowners. They find that foreclosure causes housing instability, reduced homeownership, and financial distress. Finally, Indarte [2022] analyzes the costs and benefits of household debt relief policies.
|
Our interest in this paper is in tracing individuals’ lives after a default, while we do not aim to distinguish between different default motives (see, for example, Ganong and Noel [2020] and the literature cited therein). More generally, our work is related to two strands of the literature: one focuses on the individual determinants of default, whereas the other focuses on the individual and social costs of default and on the analysis of debt relief policies. For example, Lawrence [1995] belongs to the first strand: she builds a theoretical model of consumers’ choices with default as a possible option over the life cycle. Similarly, Guiso et al. [2013] study the determinants of strategic default, i.e., when one does not repay the mortgage, even if she would be able to do so, because the house value has fallen below the value of the mortgage. Giliberto and Houston Jr [1989] develop a theoretical model of residential mortgage default when borrowers face beneficial as well as costly relocation opportunities. Mnasri [2018] finds that both income and geographical mobility are main trigger factors of default. In fact, households with a higher mobility rate (i.e., young households) are more likely to default. In general, all these studies find that default is more likely for unmarried individuals, renters, and those who have already moved from their birthplace.
|
A second strand of the literature aims at assessing what happens to individual life trajectories after a default. This literature has essentially focused on the impact of a harsh default, i.e., either a Chapter 7 or Chapter 13 declaration or a foreclosure. Our work sheds some light on the short- and medium-term consequences of a soft default, an event that is substantially more likely (e.g., 1.5 versus 1 percent in 2010).
|
A
|
$\sum_{j}\bigl(q\cdot s_{j}f_{j}+\sum_{k}\operatorname{sign}(s_{j}-s_{k})\,r_{jk}\bigr)$,
|
Block Approval: Voters vote for any number of candidates. (We use the same sincere strategy as for single-winner Approval Voting.)
|
Approval: Vote for all candidates with $u_{j}\geq EV$.
|
Minimax: Vote sincerely. (While a viability-aware strategy was included for Minimax in Wolk et al. (2023),
|
to Minimax. As with IRV, each ballot is a ranking of some or all of the candidates. (While it is often recommended that equal rankings be allowed under
|
C
|
Each of these constraints will forbid exactly one outcome that is not in $\mathcal{T}$. As a result, it holds that $\mathcal{S}_{\bm{x}}(\xi_{\mathcal{T}})=\mathcal{T}$.
|
Corollary 1 follows from the observation that there are no imposed constraints on the set of possible outcomes in collective choice under dichotomous preferences. Note that Proposition 2 is necessary for this result in order to show that the considered class of ILPs $\Xi$ is sufficiently rich, i.e., that its set of optimal solutions can be equal to any subset of $2^{\mathcal{A}}$. For relevant axiomatic properties in probabilistic social choice under dichotomous preferences, we refer the reader to Bogomolnaia and Moulin (2004), Bogomolnaia et al. (2005), Duddy (2015), Aziz et al. (2019), and Brandl et al. (2021).
|
In this section, we study which axiomatic properties are satisfied by the distribution rules described in Section 5. Interestingly, the following result implies that all axiomatic results that have been obtained for collective choice under dichotomous preferences (Bogomolnaia and Moulin, 2004; Bogomolnaia et al., 2005) also hold for distribution rules over optimal solutions of integer linear programs in $\Xi$.
|
When studying a specific class of problems that can be modeled by an ILP in $\Xi$, such as kidney exchange or knapsack, it may be the case that there exist sets of outcomes that do not correspond to the set of optimal solutions of any instance of that specific problem class. To illustrate this, one can observe, for example, that the set $\{(1,0),(1,1)\}$ cannot correspond to the optimal solutions of any knapsack instance with two items in which the weights of the items in the objective function are strictly positive: if $(1,1)$ is feasible, it has a strictly higher objective value than $(1,0)$, so the two cannot both be optimal. Regardless, Corollary 1 still has the following implications with respect to the validity of the axiomatic results from collective choice under dichotomous preferences for such a specific subproblem that can be modeled by an ILP in $\Xi$. First, all positive results of the type “(rule) satisfies (axiom)” remain valid. Second, the negative results of the type “(rule) does not satisfy (axiom)” are not guaranteed to hold for specific subproblems. To prove such negative axiomatic results for a specific class of subproblems, it suffices to provide an example instance in which a rule violates the considered axiom. Third, as a result, characterization results of the type “(rule) is the only rule satisfying (set of axioms)” are also not guaranteed to hold for specific classes of subproblems that can be modeled by an ILP in $\Xi$.
|
All axiomatic results that have been obtained for collective choice under dichotomous preferences (fair mixing) also hold for distribution rules over optimal solutions of integer linear programs in $\Xi$.
|
D
|
Since it is based on actual trades, realized volatility (RV) is the ultimate measure of market volatility, although the latter is more often associated with implied volatility, most commonly measured by the VIX index [cboevix; cboevixhistoric] – the so-called market “fear index” – which tries to predict the RV of the S&P 500 index for the following month. Its model-independent evaluation [demeterfi1999guide] is based on options contracts, which are meant to predict future stock price fluctuations [whitepaper2003cboe]. The question of how well VIX predicts future realized volatility has been of great interest to researchers [christensen1998relation; vodenska2013understanding; kownatzki2016howgood; russon2017nonlinear]. Recent results [dashti2019implied; dashti2021realized] show that VIX is only marginally better than past RV in predicting future RV. In particular, it underestimates future low volatility and, most importantly, future high volatility. In fact, while both RV and VIX exhibit scale-free power-law tails, the distribution of the ratio of RV to VIX also has a power-law tail with a relatively small power exponent [dashti2019implied; dashti2021realized], meaning that VIX is incapable of predicting large surges in volatility.
|
We fit the CCDF of the full RV distribution – for the entire time span discussed in Sec. 2 – using mGB (7) and GB2 (11). The fits are shown on the log-log scale in Figs. 4–13, together with the linear fit (LF) of the tails with $RV>40$. LF excludes the end points, as prescribed in [pisarenko2012robust], that visually may be nDK candidates. (In order to mimic LF we also excluded those points in GB2 fits, which has minimal effect on GB2 fits, including the slope and KS statistic.) To make the progression of the fits as a function of $n$ clearer, we included results for $n=5$ and $n=17$, in addition to the $n=1,7,21$ that we used in Sec. 2. Confidence intervals (CI) were evaluated per [janczura2012black], via inversion of the binomial distribution. $p$-values were evaluated in the framework of the U-test, which is discussed in [pisarenko2012robust] and is based on order statistics:
|
The main result of this paper is that the largest values of RV are in fact nDK. We find that daily returns are the closest to the BS behavior. However, with the increase of n𝑛nitalic_n we observe the development of ”potential” DK with statistically significant deviations upward from the straight line. This trend terminates with the data points returning to the straight line and then abruptly plunging into nDK territory.
|
While the standard search for Dragon Kings involves performing a linear fit of the tails of the distribution [pisarenko2012robust; janczura2012black], here we tried to broaden our analysis by also fitting the entire distribution using mGB (7) and GB2 (11) – the two members of the Generalized Beta family of distributions [liu2023rethinking; mcdonald1995generalization]. As explained in the paragraph that follows (7), the central feature of mGB is that, after exhibiting a long power-law dependence, it eventually terminates at a finite value of the variable. GB2, on the other hand, has a power-law tail that extends mGB’s power-law dependence to infinity.
|
It should be emphasized that RV is agnostic with respect to gains or losses in stock returns. Nonetheless, large gains and losses habitually occur at around the same time. Here we wish to address the question of whether the largest values of RV fall on the power-law tail of the RV distribution. As is well known, the largest upheavals in the stock market happened on, and close to, Black Monday, which was a precursor to the Savings and Loan crisis, the Tech Bubble, the Financial Crisis and the COVID Pandemic. Plotted on a log-log scale, power-law tails of a distribution show as a straight line. If the largest RV fall on the straight line, they can be classified as Black Swans (BS). If, however, they show statistically significant deviations upward or downward from this straight line, they can be classified as Dragon Kings (DK) [sornette2009; sornette2012dragon] or negative Dragon Kings (nDK), respectively [pisarenko2012robust].
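A minimal sketch of the tail diagnostic described here: build the empirical CCDF, restrict to the tail (RV > 40, as in the linear fit above), and fit a straight line on the log-log scale while excluding the largest few points as potential nDK candidates. The data are synthetic and the number of excluded points is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic heavy-tailed "realized volatility" sample (illustrative only).
rv = 10.0 * rng.pareto(2.5, size=5000) + 1.0

# Empirical CCDF: P(RV > x) evaluated at the sorted sample points.
x = np.sort(rv)
ccdf = 1.0 - np.arange(1, len(x) + 1) / len(x)

# Linear fit of the tail (RV > 40) on the log-log scale, excluding the last
# few points as potential negative-Dragon-King candidates.
n_exclude = 3
mask = x > 40.0
xs, cs = x[mask][:-n_exclude], ccdf[mask][:-n_exclude]
cs = np.clip(cs, 1e-12, None)           # guard against log(0)
slope, intercept = np.polyfit(np.log(xs), np.log(cs), 1)
print(f"estimated tail exponent (slope): {slope:.2f}")
```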
|
D
|
§4.4.1 validates our approach: for a range of artificial value distributions, we first simulate bids, and then apply the noted iterative search procedure to back out, or elicit, the underlying value distribution. We can thus compare the originally postulated distribution with the elicited one.
|
Next, we demonstrate how our approach can be used as a step in an inference procedure aimed at determining the distribution of bidders' values, which is not directly observable, from the observed distribution of bids. In §4.4.1, to validate our approach, we start from randomly generated values, then simulate bids, and use them in an iterative procedure to retrieve values from bids. Then, in §4.4.2, we leverage our simulator in a production environment. This application involves inferring the bidders' value distributions for two scenarios: low and high traffic shopper queries. We utilize Hedge for the inference procedure, due to the bid dispersion observed with EXP3-IX (§4.2).
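The inference step relies on the Hedge (multiplicative-weights) learner. The following is a minimal, self-contained sketch of a Hedge update over a grid of candidate bids; the bid grid, learning rate, and second-price-style utility are illustrative assumptions rather than the production setup.

```python
import numpy as np

def hedge_step(weights, utilities, eta=0.1):
    """One Hedge update: exponentially reweight candidate bids by their utility."""
    w = weights * np.exp(eta * utilities)
    return w / w.sum()

# Illustrative setup: a bidder with (unknown) value v chooses among a grid of bids.
rng = np.random.default_rng(0)
v, bid_grid = 1.0, np.linspace(0.0, 1.0, 21)
weights = np.ones_like(bid_grid) / len(bid_grid)

for t in range(1000):
    competing_bid = rng.uniform(0.0, 1.0)          # stand-in for the other bidders
    # second-price-style utility of each candidate bid against the competitor
    utilities = np.where(bid_grid > competing_bid, v - competing_bid, 0.0)
    weights = hedge_step(weights, utilities)

print("modal bid after learning:", bid_grid[np.argmax(weights)])
```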
|
Utilizing our simulation approach, we also showed how to infer bidders’ valuations in the presence of more realistic auction rules. We demonstrated this using aggregate bid data from an e-commerce website in both low- and high-density auctions.
|
§4.4.2 employs aggregate bid data from an actual production environment (a major e-commerce website) and infers bidder value distributions in both low and high traffic shopper query scenarios.
|
The analysis is based on aggregated bid data for two specific shopper queries in an e-commerce setting, one characterized by low traffic and the other by high traffic. The data aggregation process converts all bids into a bid per impression, so we set all click-through rates to 1. Thus, we apply our analysis to a symmetric environment with a single query, unit click-through rates, and two different scenarios, one with a low number of bidders (low traffic) and one with a high number of bidders (high traffic).
|
C
|
While payoff externalities induced by a sharing contract conditioning on the winner’s identity can fix the inefficiencies induced by strategic experimentation, the structure of the sharing contract is notable. In particular, winner-take-all contracts and equal sharing are both inefficient; the efficient contract must guarantee something to the losers, but not too much or too little, and the payoffs in the efficient contract are asymmetric ex-post. Further, it is also significant that such a contract does not require the agents to observe each other's actions; that is, the same sharing contract that restores efficiency in the observable-action model still uniquely induces the efficient outcome even if agents cannot observe each other's actions. Therefore, observability of effort is not essential to restoring efficiency.
|
A logical way for the agents to restore cooperative efficiency is to agree ex ante to a contract that specifies how to split the rewards from experimentation in the event of a breakthrough. Thus, in this section, I consider the problem of a regulator (or contest designer) who observes the outcome of the baseline experimentation game and decides how to award payoffs. I first formalize the broad mathematical definition of a sharing contract. I then show that within this broad class of contracts, efficiency can be restored by very simple contracts that only require the regulator to observe part of the outcome. More concretely, a regulator can restore efficiency if the regulator observes the winner/losers, or if the regulator observes the profile of effort at the end of the game. Notably, for the regulator, either piece of information is sufficient to restore efficiency. That is, the regulator does not need to observe the full history of effort.
|
While the formal analysis was constrained to a specific model, this theoretical work offers important insights for thinking about research. First, the condition for efficiency when there are breakthrough payoff externalities is that breakthroughs must have a neutral impact on the losers. As much of the contest literature has focused on thinking about how to award winners, the analysis in this paper suggests that the key to understanding whether the amount of research conducted in such an environment is socially efficient is to consider how the losers weigh the arrival of the breakthrough against the status quo. Second, the existence of simple contracts that restore efficiency suggests a method for sharing rewards for joint projects. The main insight is that the guarantee (or what agents are promised independent of their effort choices) must match their status quo opportunity cost of research effort. Indeed, these sharing contracts restore efficiency in a self-enforcing way; provided a contract that awards winners and losers in the right way, it becomes unnecessary to observe or contract on the actions of the other agents. On the other hand, if it is impractical or infeasible to identify the winner/losers, it is also sufficient for contracts to condition on effort shares at the time of breakthrough.
|
The corollary shows that an ex ante fair split of the rewards cannot result in an efficient outcome, so an efficient contract must still condition on some part of the outcome of the experimentation game. Recall that the previous subsection showed that conditioning the contract on the observation of winner/loser was sufficient to restore efficiency. An important insight from the baseline game is that conditional on a breakthrough occurring in an infinitesimal time interval $[t,t+dt)$, the probability of agent $i$ being the winner is $\frac{k_{i}(t)}{K(t)}$, where $K$ is instantaneous total flow effort and $k_{i}$ is the instantaneous flow effort of agent $i$. Hence, the intuitive extension to restore efficiency when outcomes are nonattributable is to condition the sharing contract on $k_{i}(\tau)/K(\tau)$, the instantaneous share of total flow effort at time of breakthrough. (One could condition contracts on much stronger instruments, such as the full history of effort; however, I show an efficiency result, and hence I derive contracts to restore efficiency using as weak a contract as possible.) Using this, we can define an analogue of the sharing contracts.
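For concreteness, this conditional winning probability follows from the standard assumption (made explicit here, not stated in this excerpt) that each agent's breakthrough arrives as an independent Poisson event with intensity proportional to her flow effort:

```latex
% Conditional on a breakthrough occurring in [t, t+dt),
% with agent j's arrival intensity proportional to k_j(t):
\Pr\bigl(i \text{ wins} \mid \text{breakthrough in } [t,t+dt)\bigr)
  = \frac{\lambda k_i(t)\,dt}{\sum_{j}\lambda k_j(t)\,dt}
  = \frac{k_i(t)}{K(t)}, \qquad K(t) \equiv \sum_{j} k_j(t).
```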
|
Having shown that the identity of the winner is sufficient for restoring efficiency (even without observing effort), it might seem that this information is also necessary for implementing an efficient outcome. It is not; contracting on the effort profile at the time of breakthrough is also sufficient to restore efficiency. That is, if the identity of the winner is not contractible, then the effort profile at time of breakthrough also suffices. In particular, this implies that the full history of effort is redundant given the terminal effort profile, and further that the identity of the winner is sufficient but not necessary to restoring efficiency. Importantly for fairness considerations, contracting on the effort profile at time of breakthrough results in outcomes which are ex-post symmetric on the equilibrium path, unlike the asymmetry necessary for efficient behavior when contracting on the winner’s identity.
|
D
|
Existing nonparametric estimators of $g_{0}$ typically rely on
|
no function $g$ is such that $E[g(Z)|W]$ is a linear function of the variables in $X$.
|
the mapping $g\in\mathcal{G}\mapsto E[g(Z)|W=\cdot\,]$ is injective.
|
$E[Y-g(Z)|W]=0$ a.s. $\;\Leftrightarrow\;E[(Y-g(Z))\exp(\mathbf{i}W^{\top}t)]=0\quad\forall t\in\mathbb{R}^{p}$.
|
$E[g(Z)|W]$ in a generalized method of moments
|
D
|
We find that while participants’ behaviour is in line with the theoretical predictions, there is still a large part of behaviour that the model cannot account for. Using the Strategy Frequency Estimation Method (Dal Bó and Fréchette, 2011; Fudenberg et al., 2012), we allow for the presence of various behavioural types in our subject population, and we estimate the proportion of each type in our data (see Fischbacher et al. 2001; Bardsley and Moffatt 2007; Thöni and Volk 2018; Katuščák and Miklánek 2023; Préget et al. 2016). On top of G&M agents, we classify subjects into free-riders (who never contribute), altruists (who always contribute no matter their position), and conditional co-operators (who always contribute if they are in position 1 and contribute if at least one other person in the sample has contributed when they are in positions
|
2-4). Additionally, we investigate whether subjects align with the predictions of the G&M model (G&M type). We find that around 25% of the subjects behave according to the G&M model, the vast majority behaves in a conditional co-operating or altruistic way, and a non-significant proportion free rides. From a mechanism design point of view, we find that introducing uncertainty regarding the position, along with a constrained sample of previous actions (i.e. revealing only the action of the immediately preceding player), maximises the public good provision.
|
The first thing to notice is the high value of $\beta$ for all treatments, ranging between 0.819 and 0.893. This parameter is always significant and different from 0.5 ($\beta$ values close to 0.5 indicate random behaviour, while values close to unity indicate almost deterministic behaviour). There is also little variation between treatments regarding the error probability. As a general overview of the results, the fraction of behaviour consistent with the G&M type ranges between 17.6% and 25.8% across all treatments (this range increases to 20% and 26.1% when a heterogeneous error probability is assumed), the fraction of altruists between 8.7% and 40.6%, that of conditional co-operators between 24.2% and 66.9%, and a very small and insignificant proportion of subjects is classified as free riders. In particular, when there is position uncertainty ($T_{1}$ and $T_{2}$) the vast majority of the subjects are classified either as altruists or conditional co-operators: 76.1% (64.8%) in $T_{1}$ ($T_{2}$). This is in sharp contrast with the G&M model prediction that subjects will defect if they observe at least one defection in their sample. While the behaviour of a conditional co-operator is straightforward (i.e. reciprocate to the existing contribution), the behaviour of the altruist is worth further exploration. On top of altruistic motives, a potential explanation could be that subjects were trying to signal to members of the group placed later in the sequence and motivate them to contribute. (This is in line with Cartwright and Patel (2010), who use a sequential public goods game with exogenous ordering and show that agents early enough in the sequence who believe imitation to be sufficiently likely would want to contribute.) Support for the latter is given by the estimated proportion of altruists when the position is known ($T_{3}$), where it drops to virtually zero (7.2% and insignificant): due to the Treatment’s characteristics, contributing unconditionally does not seem to be an appealing strategy, as is the case when only the action of a past player is revealed (Treatment 2). If subjects expect the last player in the sequence to defect, they lack the motivation to unconditionally contribute. On the contrary, the proportion of conditional co-operators rises to 66.9% when there is position certainty. The G&M model fits well for around 1/4 of the experimental population (25.8% in both $T_{2}$ and $T_{3}$), while it accounts for 17.6% of the subjects when the sample is equal to 2 and there is position uncertainty ($T_{1}$). A potential explanation of this drop could be linked to the strict prediction of the model that one should ignore any contributions in the sample and defect instead if there is at least one defection.
Finally, the proportion of free riders in the experimental population is virtually zero in T1subscript𝑇1T_{1}italic_T start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and T2subscript𝑇2T_{2}italic_T start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT, as the fraction of free riders is estimated to be very low and is always insignificant. This result is in line with Dal Bó and Fréchette (2019) who find that the strategies that represent less than 10 per cent of the data
|
25% of the subjects behave according to the theoretical predictions of Gallice and Monzón (2019). Allowing for the presence of alternative behavioural types among the remaining subjects, we find that the majority are classified as conditional co-operators, some are altruists, and very few behave in a free-riding way.
|
The majority of the subjects behave in an altruistic or conditional co-operating way, around 25% of the subjects as G&M type, and free-riding is very rare.
|
A
|
We assign a High quality rating to those in the lowest CCEI quartile among the 60 potential experts and a Low quality rating to those in the top CCEI quartile.
|
Figure 5: Relative frequency that each expert is chosen in each information condition. The ratings code in the bottom row indicates the expert’s realized earnings, the quality of their decision making (Low or High), and their risk tolerance (Low, Medium, High).
|
From the choices in the simple and complex blocks of each of those 60 potential experts, we construct a quality rating using the Afriat (1973) Critical Cost Efficiency Index, as detailed in Online Appendix B.
|
We assign a High quality rating to those in the lowest CCEI quartile among the 60 potential experts and a Low quality rating to those in the top CCEI quartile.
|
We also assign a risk rating to each potential expert based on their average coefficient of relative risk aversion implied by each non-dominated choice in the simple and complex blocks. A potential expert is rated as High, Medium or Low risk tolerance according to whether that average is in the lowest, middle or upper third of the distribution across the 60 potential experts.
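A minimal sketch of how such a tertile-based risk rating could be computed from each potential expert's average implied CRRA coefficient; the simulated averages and the function name are illustrative assumptions.

```python
import numpy as np

def risk_rating(avg_crra):
    """Rate each expert High/Medium/Low risk tolerance by tertile of average CRRA.
    Higher risk aversion corresponds to lower risk tolerance."""
    avg_crra = np.asarray(avg_crra, dtype=float)
    lo, hi = np.quantile(avg_crra, [1 / 3, 2 / 3])
    ratings = np.where(avg_crra <= lo, "High",            # least risk-averse third
               np.where(avg_crra <= hi, "Medium", "Low"))
    return ratings

# Hypothetical average CRRA coefficients for 60 potential experts
rng = np.random.default_rng(1)
ratings = risk_rating(rng.gamma(2.0, 0.5, size=60))
print(dict(zip(*np.unique(ratings, return_counts=True))))
```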
|
D
|
A proof sketch is in order. We abuse notation to write $\psi^{+}=E_{\mathbb{P}^{\textup{Obs}}}[\psi^{+}(R)]$, where $\psi^{+}(r)$ is a pointwise upper bound function satisfying:
|
$\psi^{-}(\underline{w},\bar{w})$
|
$E_{\mathbb{P}^{\textup{Obs}}}[\psi^{+}(R)]=E_{\mathbb{P}^{\textup{Obs}}}[\phi^{+}(R)]$. We prove the stronger claim that $\psi^{+}(R)=\phi^{+}(R)$ almost surely. For simplicity, this sketch will ignore the possibility that $\lambda(R)Y=Q^{+}(R)$ with positive probability and drop “almost sure” caveats. As Dorn et al. (2022) note, $\psi^{+}(R)$ can be mapped to a simple DRO problem over $(W-\underline{w}(R))/(1-\underline{w}(R))\in[0,(\bar{w}(R)-\underline{w}(R))/(1-\underline{w}(R))]$
|
A proof sketch is in order. We abuse notation to write $\psi^{+}=E_{\mathbb{P}^{\textup{Obs}}}[\psi^{+}(R)]$, where $\psi^{+}(r)$ is a pointwise upper bound function satisfying:
|
$\psi^{+}(R)$
|
D
|
Neme, 2001). In Theorem 3, we show that the single-plateaued domain is maximal for our properties as well. Therefore, even though replacing strategy-proofness with NOM greatly expands the family of admissible rules, the maximal domain of preferences involved remains basically unaltered.
|
Morrill (2020). They try to single out those manipulations that are easily identifiable by the agents.
|
Morrill (2020) notion of obvious manipulations to the allocation of a non-disposable commodity among agents with single-peaked preferences. In the context of voting, Aziz and
|
Next, we analyze the maximality of the domain of preferences (including the domain of single-peaked preferences) for which a rule satisfying own-peak-onliness, efficiency, the equal division guarantee, and NOM exists. For the properties of efficiency, strategy-proofness, and symmetry, the single-plateaued domain is maximal (Ching and
|
Consider the problem of allocating a single non-disposable commodity among a group of agents with single-peaked preferences: up to some critical level, called the peak, an increase in an agent’s consumption raises his welfare; beyond that level, the opposite holds. In this context, an allotment rule is a systematic procedure that allows agents to select an allotment, among
|
B
|
If $g$ is $S$-unimodal, then every stable periodic orbit attracts at least one of $a$, $b$, or $c$ (i.e. the endpoints of $I$ or the critical point of $g$).
|
Proposition 5.5 means that all "visible" orbits (in numerical experiments) are, in the long run, orbits containing $a$, $b$, or $c$ only. We consider that only these visible orbits are meaningful in economics (or in real life), since it is widely believed that every economic model is some sort of approximation of real economic activities and contains inevitable errors. We know that there are totally different points of view on economic modelling, but we do not argue this here. Here is the third key result for this section:
|
In this paper, we interpret our numerical calculations based on Propositions 5.5 and 5.6. In particular, we look at the orbit starting from the critical point $s$, that is $\{s,f(s),f^{2}(s),\cdots\}$ (we call this orbit the "critical orbit"). If the critical orbit seems to eventually converge to a periodic orbit, we conclude that we can see the future: the average price in the long run will be the average price in this attracting periodic orbit. Note that in this case, $f$ is neither ergodic nor has an acim since most $f^{n}(s)$ accumulate around this attracting periodic orbit, but we do not care (since we can still predict the future). Otherwise, we compute (or give an estimate for) the following Lyapunov exponent at the critical point $p=s$, since the existence of a positive Lyapunov exponent at the critical point implies that the critical orbit is repelling and also is a strong indication for the existence of a chaos (hence the existence of an acim, see (CE1) in Proposition 5.8 below):
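A minimal numerical sketch of this procedure — iterating the critical orbit and estimating the Lyapunov exponent along it — using a logistic map as a stand-in for the paper's map $f$; the map, its parameter, and the iteration counts are illustrative assumptions.

```python
import numpy as np

def lyapunov_along_critical_orbit(f, df, s, n_burn=1000, n_iter=100000, eps=1e-12):
    """Estimate the Lyapunov exponent along the orbit started at the critical point s."""
    x = s
    for _ in range(n_burn):                 # discard the transient part of the orbit
        x = f(x)
    total = 0.0
    for _ in range(n_iter):
        x = f(x)
        total += np.log(max(abs(df(x)), eps))
    return total / n_iter

# Illustrative stand-in map: logistic map with parameter 3.9, critical point s = 1/2
f  = lambda x: 3.9 * x * (1.0 - x)
df = lambda x: 3.9 - 7.8 * x
print("Lyapunov exponent estimate:", lyapunov_along_critical_orbit(f, df, 0.5))
```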
|
If $g$ is $S$-unimodal, then every stable periodic orbit attracts at least one of $a$, $b$, or $c$ (i.e. the endpoints of $I$ or the critical point of $g$).
|
If one of the conditions in Proposition 5.8 is satisfied (within some numerical limitation), we conclude that we can predict the future by Proposition 5.1. We must admit that our argument in this section is not rigorous (we hope to make it rigorous in the future), but we believe that we have provided enough (numerical/theoretical) evidence to support it. We stress that it is very hard to prove the existence of an acim for any non-expansive function $g$ (even for an $S$-unimodal $g$) by a rigorous analytic argument. There are only a few known examples of such, see the famous $g(x)=4x(1-x)$ example due to Ulam and von Neumann [Ulam and von Neumann, 1947]; also see [Misiurewicz, 1981, Sec. 7 Examples] for more examples.
|
A
|
Using data from Kranz (2023) we compile a list of package usage from the replication packages of publications in top economics journals.
|
Here, require is able to match about 98 percent of all packages used in publications, and about 99 percent when weighted by publication intensity.
|
Using these metrics, package usage is even more skewed, with the top 100 packages accounting for 93 percent of all publications (green line) and 98 percent of all usage in code (red line).
|
Next, to ensure these requirements are satisfied, we add the corresponding line at the start of each do-file:
|
We then compute the fraction of publications that rely on each community-contributed package (dashed line) as well as the “publication intensity” of each package—the total number of times a package is used across all publications (dotted line).
|
D
|
Assumption 4.2 can be equivalently formulated with linear inequalities, i.e., there is some $A\in\mathbb{R}^{p\times\sum_{j=1}^{N}n_{j}}$ and $b\in\mathbb{R}^{p}$, $p\in\mathbb{N}$, such that $\mathbb{X}=\{x\in\mathbb{R}^{\sum_{j=1}^{N}n_{j}}\,|\,Ax\leq b\}$.
|
Assumption 4.2 is satisfied for many games found in the literature (see, e.g., [16, GNEP (21)] and [2, Proposition 3]). Included are also special cases like mixed strategies (without further constraints), as then $\mathbb{X}=[0,1]^{N}$, or, more generally, box constraints.
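For instance, the box-constraint case $\mathbb{X}=[0,1]^{N}$ fits the linear-inequality form of Assumption 4.2 by stacking the constraints $x_{i}\leq 1$ and $-x_{i}\leq 0$; a minimal sketch (names illustrative):

```python
import numpy as np

def box_constraints(N):
    """Represent the box X = [0, 1]^N as {x : A x <= b}."""
    A = np.vstack([np.eye(N), -np.eye(N)])    #  x_i <= 1  and  -x_i <= 0
    b = np.concatenate([np.ones(N), np.zeros(N)])
    return A, b

A, b = box_constraints(3)
x = np.array([0.2, 0.5, 1.0])
print(np.all(A @ x <= b + 1e-12))  # True: x lies in [0, 1]^3
```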
|
Constraints of this type are also frequently considered in the literature: they are called ‘orthogonal constraint sets’ in [21] and ‘classical games’ in [2].
|
Usually, when considering convex games, the focus is on providing the existence of a unique equilibrium point and developing methods for finding this particular equilibrium, see e.g. [21], where additional strong convexity conditions are assumed to guarantee uniqueness of the Nash equilibrium. In this paper, we will consider convex games without additional assumptions, hence allowing for games with a unique, several, or infinitely many equilibria. Our aim is to approximate the set of Nash equilibria for any desired error bound $\epsilon>0$.
|
Non-cooperative shared constraint games are a special type of generalized games as not only the cost function, but also the constraint set of player $i$ in optimization problem (1) can depend on the strategy $x_{-i}^{*}$ of all other players. In the notation of [12], in which such problems are explicitly called “generalized Nash games with shared constraints”, the constraint set for player $i$ can be denoted by $K_{i}(x_{-i}^{*}):=\{x_{i}\in\mathcal{X}_{i}\;|\;(x_{i},x_{-i}^{*})\in\mathbb{X}\}$
|
A
|
In predictive learning, decisions are often made automatically on the basis of predictions, while in causal applications the estimates generally need to be interpreted by a human being. Following on from this, predictive systems are in general used for individual-level decisions (e.g. targeting product recommendations), while the nature of causal questions, particularly in government, means that we are interested in outcomes across an entire system (e.g. would changing the school-leaving age boost incomes later in life). Governments generally do not have the capacity (or mandate) to apply policies at the individual level in many policy areas, even if doing so is in theory possible with individual-level treatment effect estimates. For this reason, model transparency for oversight is of similar or somewhat lesser importance in the causal case than in the predictive one, but there is the same need for oversight over what we argue is a joint decision made by the human policy-maker and the machine learning system (Citron, 2007; Busuioc, 2021). For the same reason, it is also important that there is some transparency in the machine learning system for decision-makers and analysts who have to extract insight from the analysis, critique the modelling and weigh how much they trust the evidence.
|
A method for removing selection effects from data in causal inference using predictive machine learning models as nuisance models. Essentially, it involves using two nuisance models, one to predict treatment and one to predict outcome, then taking the residuals from these models and feeding them into an estimator of some sort. So long as the models are cross-fit (or, more simply, predictions are made out-of-sample), using machine learning models will not have a biasing effect. Theoretically, this process of taking residuals removes selection effects from the data (provided there are no unobserved confounders).
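A minimal sketch of this cross-fit residualization, using scikit-learn random forests as the nuisance learners; the final-stage estimator indicated in the comment (a simple partially linear coefficient) and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def dml_residualize(X, treatment, outcome, n_splits=5, seed=0):
    """Cross-fit nuisance models and return out-of-fold residuals of treatment and outcome."""
    t_res = np.zeros_like(treatment, dtype=float)
    y_res = np.zeros_like(outcome, dtype=float)
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        m_t = RandomForestRegressor(random_state=seed).fit(X[train], treatment[train])
        m_y = RandomForestRegressor(random_state=seed).fit(X[train], outcome[train])
        t_res[test] = treatment[test] - m_t.predict(X[test])   # out-of-sample predictions
        y_res[test] = outcome[test] - m_y.predict(X[test])
    return t_res, y_res

# Final stage (illustration): a partially linear effect estimate from the residuals
# ate_hat = np.sum(t_res * y_res) / np.sum(t_res ** 2)
```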
|
Causal machine learning is a broad term for several different families of methods which all draw inspiration from the machine learning literature in computer science. The most widely-used method here, and our focus for this paper, is the causal forest (Wager & Athey, 2018; Athey et al., 2019), which uses a random forest made up of debiased decision trees to minimise the R-loss objective (Nie & Wager, 2021) in order to estimate HTEs (generally after double machine learning is applied for local centering). The causal forest (at least as implemented in the generalised random forest paper and companion R package grf) consists of three key parts: local centering, finding kernel weights, and then plug-in estimation. Local centering removes selection effects in the data (assuming we meet the assumptions of control-on-observables identification) by estimating nuisance parameters using two nuisance models, one estimating treatment assignment and one estimating outcomes, both as a function of a set of control variables (Athey et al., 2019). (Note that local centering is not always strictly necessary. Many causal forest studies use experimental data, for example Ajzenman et al. (2022) and Zhou, Zhan, Chen, Lin, Zhang, Zheng, Wang, Huang, Xu, Liao, Tian & Zhuang (2023), and so do not require local centering (Wager & Athey, 2018). However, in practice, papers written after Athey et al. (2019), which added local centering to the causal forest, generally use it. This may simply be for reasons of simplicity (nuisance models are estimated automatically anyway) or because it may improve the efficiency of the estimator per Abadie & Imbens (2006). For this reason, while papers using experimental data do not include explicit identification through nuisance models per se, in practical terms the process of estimation is identical, and so the points made in this paper around estimation of effects in observational data are entirely applicable to cases where experimental data are used as well.) The term nuisance here means that the parameters themselves are not the target of the analysis, but are necessary for estimation of the actual quantity of interest, a treatment effect. This local centering is an adaptation of the double machine learning method, which is a popular approach to average treatment effect estimation (Chernozhukov et al., 2018). These models can use arbitrary machine learning methods so long as predictions are not made on data used to train the nuisance model (this is in order to meet regularity conditions in semi-parametric estimation (Chernozhukov et al., 2018)). In practice, in the causal forest, nuisance models are generally random forests and predictions are simply made out-of-bag, i.e. only trees for which a datapoint was not sampled into its training data are used to make predictions. The residuals from these predictions are taken to be the locally centered data. This locally centered data is then fed into a final model designed to find heterogeneity in the data by minimising R-loss (Nie & Wager, 2021). Predictions are not made directly out of this model as with a standard random forest; instead this forest is used to derive an adaptive kernel function to define the bandwidth used in CATE estimates. Essentially, this weight is based on how many times, for a given covariate set $x$, each datapoint in the sample falls into the same leaf on a tree in the ensemble as a datapoint with covariate values $x$.
These weightings are then used in a plug-in estimator (by default Augmented Inverse Propensity Weighting) to obtain a final CATE estimate. This is essentially just a weighted average of doubly robust scores with weightings given by the kernel distance according to the forest model. More formally, for CATE estimate $\hat{\tau}(x)$, kernel function (from the final causal forest model) $K(\cdot)$ and doubly robust scores $\hat{\Gamma}$
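A minimal sketch of this plug-in step — a kernel-weighted average of AIPW-style doubly robust scores — where the forest-derived kernel weights, outcome regressions, and propensities are taken as given inputs; names and signatures are illustrative, not the grf implementation.

```python
import numpy as np

def aipw_scores(y, w, mu0, mu1, e):
    """Doubly robust (AIPW) scores from outcome y, binary treatment w,
    outcome regressions mu0/mu1 and propensity e."""
    return (mu1 - mu0
            + w * (y - mu1) / e
            - (1 - w) * (y - mu0) / (1 - e))

def cate_estimate(kernel_weights, scores):
    """CATE at a point x as the kernel-weighted average of the doubly robust scores."""
    kernel_weights = np.asarray(kernel_weights, dtype=float)
    return np.sum(kernel_weights * scores) / np.sum(kernel_weights)
```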
|
The transparency needs for these two kinds of models vary. One can imagine research questions where it is helpful to understand the nuisance models as well as the final model, but for the most part this is not necessary. We still need some amount of transparency over nuisance functions, mostly to diagnose problems in model specification. The goal of nuisance modelling is not to maximise predictive power and get as close to the Bayes error as possible; rather, it is to model the selection effects out of treatment and outcome (Chernozhukov et al., 2018). Checking the distribution of nuisance parameters is useful here: how well do the models fit the data? How well is the overlap assumption met? There is also a range of non-parametric refutation tests to check how well a given set of nuisance models performs (Sharma et al., 2021).
|
The most rudimentary difference in the structure of models is that causal machine learning methods generally involve fitting several models with different purposes, whereas predictive applications typically involve fitting one, or several with the same purpose in an ensemble method (Chernozhukov et al., 2018; Athey et al., 2019). For example, in the case of DML-based methods (including the causal forest), this involves fitting two nuisance models and then employing some other estimator to generate a treatment effect estimate from the residuals of these models.
|
D
|
From Figure 4 and Table 1, it is evident that our approach consistently demonstrates the lowest bias across all metrics compared to other approaches. The data splitting method also manages to achieve relatively low biases but exhibits significantly higher variance. Furthermore, it’s worth noting that the true variance of the data splitting estimator is considerably larger than the standard error estimated from a two-sample t-test. Consequently, this could potentially lead to confidence intervals that underestimate the true level of variability.
|
We further provide information on the bias and standard errors of treatment effect estimators obtained using various methods in Table 1. In each metric, the first column represents the bias in comparison to the true global treatment effect (GTE). The second column displays the standard deviation calculated from the results of the 100 A/B tests. Lastly, the third column showcases the standard error estimates obtained through two-sample t-tests in a single A/B test.
|
where $N_{T}$ represents the number of users in the treatment group and $\text{Finish}_{i}$ and $\text{StayDuration}_{i}$ indicate whether a user finished watching a video and their duration of stay, respectively. While for control users, we averaged the control values based on the control linear fusion formula:
|
6: When a user is assigned to the treatment group, the platform recommends an item based on the treatment algorithm and model, and vice versa.
|
In Table 2, we have calculated the experimentation costs. For treatment users, we computed the average treatment values based on the treatment linear fusion formula, i.e.,
|
D
|
Recall that the game runs until (absolute) time $t=99$. We say that the attack is successful in a given run of the game in which some player attacks if (1) player $\alpha$ initiates an attack by (absolute) time $\hat{t}_{\alpha}\triangleq 49$, (2) player $\beta$ initiates an attack by (absolute) time $\hat{t}_{\beta}\triangleq 99$, and (3) the attack prospect (recall that this is a part of the state of nature) is $1$. (We fix the values of $\hat{t}_{\alpha}$ and $\hat{t}_{\beta}$, and the number of rounds in the game, for concreteness; nothing in our analysis depends on the specific choice of these values.) If neither player initiates an attack, then each player receives utility $0$. If some player initiates an attack, then each player receives utility $1$ if the attack is successful and utility $U<0$ otherwise.
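A small sketch encoding this payoff rule (success requires $\alpha$ to attack by time 49, $\beta$ by time 99, and the attack prospect to equal 1); the function and parameter names, and the particular value of $U$, are illustrative.

```python
def attack_payoff(t_alpha, t_beta, prospect, U=-5.0,
                  deadline_alpha=49, deadline_beta=99):
    """Common payoff in a run of the coordinated-attack game.
    t_alpha / t_beta: attack times (None if the player never attacks)."""
    attacked = t_alpha is not None or t_beta is not None
    if not attacked:
        return 0.0
    success = (t_alpha is not None and t_alpha <= deadline_alpha
               and t_beta is not None and t_beta <= deadline_beta
               and prospect == 1)
    return 1.0 if success else U

print(attack_payoff(t_alpha=30, t_beta=80, prospect=1))   # 1.0: successful attack
print(attack_payoff(t_alpha=60, t_beta=80, prospect=1))   # U: alpha attacked too late
```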
|
The welfare-maximizing equilibria in these three examples (and especially in the latter two) might seem at first glance to be driven by qualitatively different effects. It is therefore unclear how one might generally characterize welfare-maximizing equilibria in a manner that captures all three examples, let alone captures equilibria in all coordinated-attack games. Nonetheless, the main result of this section is a unified characterization of welfare-maximizing equilibria in all coordinated-attack games. Notably, this is achieved by stating the characterization using our new notion of common knowledge.
|
Before we commence our analysis of equilibrium behavior in coordinated-attack games, we demonstrate the diverse structure of equilibria in different coordinated-attack games via several examples. We start with a coordinated-attack game that has an equilibrium of a particularly simple form.
|
We emphasize that equilibrium behavior in coordinated-attack games cannot in general be characterized by common knowledge as traditionally defined. Indeed, recalling the coordinated-attack game from Example 1, we note that by Theorem 1 (see also the discussion that follows the proof of that theorem), common knowledge as traditionally defined of the attack prospect is never attained in that coordinated-attack game. Nonetheless, the welfare-maximizing Nash equilibrium results in a successful attack in every history in which the attack prospect is $1$. Moreover, whenever $\alpha$ initiates an attack in this equilibrium, $\beta$ does not yet even know that the attack prospect is $1$; indeed, $\beta$ only learns the attack prospect after time $\hat{t}_{\alpha}$, i.e., at a time at which it is too late for $\alpha$ to initiate a successful attack.
|
In this Section, we demonstrate the usefulness of our notion of common knowledge for characterizing equilibrium behavior in a family of dynamic Bayesian coordination games that we call coordinated-attack games. In a coordinated-attack game, each player may decide whether and when to initiate an attack, and players are better off initiating attacks only if the state of nature is such that an attack would be successful, and only if each of the players initiates her attack sufficiently early. If one of the players initiates an attack but the other does not do so early enough (or at all), then both players suffer a loss (due to their joint army’s forces diminishing or being wiped out completely). Initiating an attack early carries risk (both regarding the other player’s action and regarding the state of nature), but also has the potential for reward. Our characterization of equilibrium behavior in these games holds regardless of the specific technology by which the players learn about the state of nature and by which they communicate,
|
B
|
The estimation step needs to account for two sources of error: the usual estimation error in obtaining a consistent estimator that corresponds to the IVX instrumentation of the nearly integrated regressors, and the sampling error in generating the forecast of the VaR from the first stage of the procedure. More specifically, this implies that, using a Bahadur representation of the QR-IVX estimator, we need to determine the precise stochastic order of the remainder term when the generated regressor is included in the conditional quantile specification of the model. Although the parameter vector from the first stage, obtained via the IVX filtration, has the usual convergence rate of the IVX estimator, an estimation error carries over into the second-stage estimation procedure, which requires us to consider a suitable correction to the overall variance due to the presence of the generated regressor.
|
Moreover, we expect the stochastic equicontinuity property to still hold regardless of the plug-in estimation approach and the presence of time series nonstationarity.
|
the presence of both generated covariates and persistent regressors, for the joint estimation of the risk measure pair $(\mathsf{VaR},\mathsf{CoVaR})$, remains an open problem. Several studies in the literature develop estimation and inference methods in nonstationary predictive regressions robust to the unknown form of persistence, with a notable approach being the IVX filtration of Phillips and Magdalinos, (2009). Motivated by the aforementioned issues, our research objective is to study the large-sample theory of a doubly IVX corrected estimator in nonstationary quantile predictive regression for the purpose of estimating risk measures. To the best of our knowledge, our approach, which incorporates the time series properties of regressors (nonstationarity) when estimating risk measures in a low dimensional setting (small number of regressors relative to the sample size), is a novel contribution to the literature.
|
We focus on the IVX estimators in each estimation stage, since it is well-known to be robust to the abstract degree of persistence (e.g., see Lee, (2016)). As a result, the doubly corrected IVX estimator (which is the IVX estimator obtained from the second step estimation procedure), is the main parameter of interest in terms of statistical inference and asymptotic theory. Specifically, we establish the asymptotic properties of the doubly corrected IVX estimator which verify the mixed Gaussianity property of the limiting distribution regardless of the degree of persistence in both estimation stages. Moreover, we consider a suitable correction to the expression of the asymptotic variance-covariance matrix in the second estimation step, which is adjusted in order to account for the first-step estimation error that produces the generated regressor under nonstationarity.
|
The estimation step needs to account for two sources of error: the usual estimation error in obtaining a consistent estimator that corresponds to the IVX instrumentation of the nearly integrated regressors, and the sampling error in generating the forecast of the VaR from the first stage of the procedure. More specifically, this implies that, using a Bahadur representation of the QR-IVX estimator, we need to determine the precise stochastic order of the remainder term when the generated regressor is included in the conditional quantile specification of the model. Although the parameter vector from the first stage, obtained via the IVX filtration, has the usual convergence rate of the IVX estimator, an estimation error carries over into the second-stage estimation procedure, which requires us to consider a suitable correction to the overall variance due to the presence of the generated regressor.
|
A
|
Assumption (CCTSB) requires that (CCTS) holds only until the period before each unit is treated, while (CCTSA) requires (CCTS) to hold after treatment begins. Many previous works have distinguished between the parallel trends before treatment versus after treatment, noting that (CCTSB) can be directly tested under assumption (CNAS) while (CCTSA) cannot, since untreated potential outcomes are not observed for units after treatment, and noting that parallel trends holding before treatment is no guarantee of holding after treatment. See Bilinski and Hatfield (2018); Kahn-Lang and Lang (2020); Dette (2020); Sun and Abraham (2021); Ban and Kedagni (2022); Callaway and Sant’Anna (2022); Roth (2022), Henderson and Sperlich (2022, Equation 22), Borusyak et al. (2024); Rambachan and Roth (2023), and Callaway (2023, Section 3.3). See Wooldridge (2021, Section 7.1, Equation 7.4) for an example of how (CCTSB) can be tested under (CNAS).
|
We can contrast (CCTS) with an unconditional or marginal version, which Wooldridge (2021) calls Assumption (CTS).
|
Finally we discuss estimating the average treatment effects marginalized over $\boldsymbol{X}_{i}$. As we showed in Theorem A.5(a) and (b), because the treatment effects are interacted with the covariates centered with respect to their cohort means, under either (CCTS) or (CTS) and (CIUN) we can estimate the average treatment effects using
|
Because (CTS) does not account for covariates, (CCTS) is often thought of as more plausible than (CTS). We generally agree, though in Appendix A.2 we add some nuance by pointing out that (CTS) can hold when (CCTS) does not hold (Theorem A.2). In such a setting it is not possible to estimate conditional average treatment effects using regression (1.5); we show this formally in Theorem A.5(a) in the appendix. Further, it turns out that even marginal average treatment effect estimates from models like FETWFE that rely on (CCTS) will in general be inconsistent.
|
However, (CTS) holding is enough to estimate marginal average treatment effects consistently using many estimators, so it is reasonable to wonder whether regression (1.5) might be able to estimate treatment effects consistently under (CTS) and some additional assumptions. The following assumption will turn out to allow FETWFE to consistently estimate marginal average treatment effects if (CTS) holds, even if (CCTS) does not.
|
A
|
To recap, the $\ell^{2}$ local parameter space is “too small” for the QMLE, but is the suitable local parameter space for the fixed effects approach. This implies that, as explained earlier, QMLE is a better procedure than the fixed effects method.
|
By analyzing the local likelihood ratios, we are able to shed light on the merits of different estimators that are otherwise hard to discern based on the usual asymptotics alone (e.g., limiting distributions).
|
We show that the quasi-maximum likelihood method applied to the system attains the efficiency bound. These results are obtained under an increasing number of incidental parameters. Contrasts with the fixed effects estimators are made. In particular,
|
To have the limit process of the local likelihood ratios reside in a Hilbert space and to directly apply the convolution theorem
|
The preceding corollaries imply that, under normality, we are able to establish the asymptotic efficiency bound in the presence of increasing number of
|
A
|
Since fat tails are scale free, they are naturally analyzed on a log-log scale, where the complementary cumulative distribution function (CCDF) ought to be a straight line with negative slope. LF of the CCDF, however, must be supplemented by a statistical test to determine the likelihood that the points in the tail conform to linearity or whether outliers may be present wheatley2015multiple . In this case we are concerned with the possibility of observing DK, where the ends of the tails shoot upward from LF, or nDK, where the tails’ ends shoot down.
|
This paper is organized as follows. In Section 2 we present the analytical form of the mGB and GB2 distributions and discuss the limiting behaviors of both. In Section 3 we fit HP and HPI with mGB and GB2, as well as fit the tails directly using LF. For each test we conduct a U-test, for which a null hypothesis is formulated, and plot p-values, which reflect on the goodness of fit as well as on whether DK or nDK behavior may be present. We conclude in Section 4 with a discussion of our results.
|
Towards this end, we performed a U-test pisarenko2012robust but we did not limit it to LF alone. We also aimed at describing the entire distribution, not just the tails, and thus performed a U-test on mGB and GB2, which we employed for this purpose. GB2 was used for fitting since it is the most flexible distribution with fat tails, and mGB, while exhibiting similar tails over a wide range of the variable, abruptly terminates at a finite value, which seems appropriate to describe nDK behavior liu2023rethinking . Of particular importance is the fact that both mGB and GB2 arise as steady-state (stationary) distributions of stochastic differential equations that are meant to serve as models of economic exchange dashti2020stochastic ; bouchaud2000wealth ; ma2013distribution .
|
Towards this end we studied a combined multi-year distribution of HPI for the years 2000-2022, which contained 201040 data points. The main result, as seen in Figs. 11 and 12 below, is that the tails of the combined HPI distribution are more aligned with a finite upper limit of HPI and, accordingly, with the mGB distribution. Of course, such an upper limit of the variable does not have to be fixed – it may change as HPI is updated annually.
|
Our interest in house prices and house price indices was motivated by their being proxies for income distributions. Income distributions may be possible to describe by models of economic exchange, some of which can be reduced to stochastic differential equations with well-defined steady-state distributions. One class of such models results in steady-state distributions that belong to the Generalized Beta family of distributions. Therefore we also attempted to fit the entire empirical distributions with the Generalized Beta Prime and modified Generalized Beta distributions: for a particular relationship between scale parameters the former is characterized by a power-law tail, while the latter follows the same power-law dependence, which is subsequently terminated at a finite value of the variable.
|
B
|
Two challenges arise when using SCM with higher frequency data, such as when the outcome is measured every month versus every year. First, because there are more pre-treatment outcomes to balance, achieving excellent pre-treatment fit is typically more challenging. Second, even when excellent pre-treatment fit is possible, higher-frequency observations raise the possibility of bias due to overfitting to noise. A recent review by Abadie and i Bastida (2022) explicitly cautions about such bias from using disaggregated outcomes in SCM. Instead, researchers can first aggregate the outcome series into lower-frequency (e.g., annual) observations, and then estimate SCM weights that minimize the imbalance in these aggregated pre-treatment outcomes. Doing so mechanically improves pre-treatment fit as well.
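A minimal sketch of this pre-aggregation step, collapsing a monthly outcome matrix into annual averages before the SCM weights are estimated on the aggregated pre-treatment series; the array shapes and names are illustrative assumptions.

```python
import numpy as np

def aggregate_outcomes(Y_monthly, months_per_period=12):
    """Aggregate a (units x months) outcome matrix into lower-frequency averages."""
    n_units, n_months = Y_monthly.shape
    n_periods = n_months // months_per_period
    trimmed = Y_monthly[:, :n_periods * months_per_period]
    return trimmed.reshape(n_units, n_periods, months_per_period).mean(axis=2)

# e.g. 20 units observed for 10 years of monthly data -> (20, 10) annual matrix
Y_monthly = np.random.normal(size=(20, 120))
Y_annual = aggregate_outcomes(Y_monthly)
print(Y_annual.shape)  # (20, 10)
```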
|
In this paper, we propose a framework for temporal aggregation for SCM. Adapting recent results from Sun, Ben-Michael and Feller (2023), we first derive finite-sample bounds on the bias for SCM under a linear factor model when using temporally disaggregated versus aggregated outcome series.
|
There are many directions for future work that incorporate recent innovations in panel data methods, including first de-noising (e.g., Amjad, Shah and Shen, 2018) or seasonally adjusting the disaggregated outcome series. We could also explore choosing an optimal level of temporal aggregation for a single SCM objective. Finally, questions about temporal aggregation also arise in event study and other panel data models, suggesting further avenues for fruitful research.
|
Theorem 1 in the appendix formally states high-probability bounds on the bias terms, which we obtain using results from Sun, Ben-Michael and Feller (2023).
|
Two challenges arise when using SCM with higher frequency data, such as when the outcome is measured every month versus every year. First, because there are more pre-treatment outcomes to balance, achieving excellent pre-treatment fit is typically more challenging. Second, even when excellent pre-treatment fit is possible, higher-frequency observations raise the possibility of bias due to overfitting to noise. A recent review by Abadie and i Bastida (2022) explicitly cautions about such bias from using disaggregated outcomes in SCM. Instead, researchers can first aggregate the outcome series into lower-frequency (e.g., annual) observations, and then estimate SCM weights that minimize the imbalance in these aggregated pre-treatment outcomes. Doing so mechanically improves pre-treatment fit as well.
|
A
|
$\mathcal{T}=\{t:\mathcal{X}\to\mathbb{R}\;:\;t\ \text{affine},\ t(x)\leq\gamma(x)\ \text{for all}\ x\in\mathfrak{X}\}.$
|
The structure of the paper is as follows. Section 2 formalizes our model and provides a method for obtaining the asymptotic distribution, or upper bounds thereof, of minimax test statistics. Section 3 shows that, under general conditions, critical values can be obtained for those distributions using the bootstrap. The main focus of the paper is on single hypothesis tests, but Section A of the appendix extends our distributional results uniformly over a class of parameter spaces and underlying probability distributions. Proofs are relegated to Section B in the appendix.
|
It would likely be possible to extend the results of this paper to noncompact settings using bounded entropy conditions and finite sample concentration inequalities coming from empirical process theory (e.g. Chernozhukov et al., (2023), Assumption 3.3 and references therein). However, imposing compactness greatly streamlines the exposition and proofs of this paper, and is congruous with assumptions that are commonly imposed for the purposes of estimation and inference.
|
Lemma 3.5 below shows that this formulation of our inference problem is consistent with the main assumptions of the paper.
|
Lemma 3.3 suggests that a tractable way of estimating e.g. (3.2) is to solve the triple optimization problem:
|
C
|
Partition $\mathbf{X}=(\mathbf{X}_{1},\mathbf{X}_{2})$ where
|
($\mathbf{X}_{1h}$, standardized) and the rest of the covariates
|
suppose $y_{t}$ is the first element of the high-dimensional vector
|
from zero and $\mathbf{X}_{2}$ is the matrix of covariates with the
|
$\mathbf{X}_{1}$ is the matrix of covariates with the corresponding vector of
|
D
|
All code was written in Python using PyTorch and the pref_voting library (pypi.org/project/pref-voting/), version 0.4.42, for all functions dealing with voting. Training and evaluation were parallelized across nine local Apple computers with Apple silicon, the most powerful equipped with an M2 Ultra with 24-core CPU, 76-core GPU, and 128GB of unified memory, running Mac OS 13, as well as up to sixteen cloud instances with Nvidia A6000 or A10 GPUs running Linux Ubuntu 18.04.
|
We begin by discussing our results under the uniform utility model and then turn to the 2D spatial model in Section 4.9.
|
To generate utility profiles for our experiments described below, we first used a standard uniform utility model (see, e.g., (Merrill, 1988, p. 16)): for each voter independently, the utility of each candidate for that voter is drawn independently from the uniform distribution on the $[0,1]$ interval. We then also used a 2D spatial model: each candidate and each voter is independently placed in $\mathbb{R}^{2}$ according to the multivariate normal distribution (as in Merrill (1988)) with no correlation between the two dimensions; then the utility of a candidate for a voter is the square of the Euclidean distance between the candidate and the voter (using the quadratic proximity utility function as in (Samuel Merrill and
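A minimal sketch of this utility-generation step for the 2D spatial model, written with the quadratic proximity utility as the negative squared Euclidean distance (the sign convention is an assumption here, as the sentence is cut off in this excerpt); names are illustrative.

```python
import numpy as np

def spatial_utility_profile(n_voters, n_candidates, seed=0):
    """Draw voters and candidates i.i.d. from a standard 2D normal and return
    a (voters x candidates) utility matrix with quadratic proximity utilities."""
    rng = np.random.default_rng(seed)
    voters = rng.multivariate_normal([0, 0], np.eye(2), size=n_voters)
    candidates = rng.multivariate_normal([0, 0], np.eye(2), size=n_candidates)
    sq_dist = ((voters[:, None, :] - candidates[None, :, :]) ** 2).sum(axis=2)
    return -sq_dist   # quadratic proximity: closer candidates give higher utility

profile = spatial_utility_profile(n_voters=11, n_candidates=4)
print(profile.shape)  # (11, 4)
```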
|
Figure 2. Results using the 2D spatial model. Top: the average profitability of submitted rankings by the best performing MLP with any hidden layer configuration for a given voting method and information type, averaging over 3–6 candidates and 5, 6, 10, 11, 20, and 21 voters. Bottom: the ratio of the average profitability of the MLP’s submitted ranking to that of the ideal manipulator’s submitted ranking.
|
Comparing Figure 1 for the uniform utility model and Figure 2 for the 2D spatial model, the most striking differences are (1) that all voting methods become less profitably manipulable (roughly by one half) under the spatial model and (2) that even the best MLPs could not learn to profitably manipulate against Minimax, Nanson, and Split Cycle under the spatial model. (A natural thought to explain (2) is that Minimax, Nanson, and Split Cycle are Condorcet consistent, and there is a high frequency of Condorcet winners under the 2D spatial model (yet this must be squared with the results for Stable Voting). However, even when there is a Condorcet winner in $\mathbf{P}$ and we are using a Condorcet voting method, a voter may still have an incentive to submit an insincere ranking in order to create a majority cycle, possibly resulting in a different winner. On the other hand, such possibilities are evidently rare and difficult to learn to exploit.) On the other hand, the comparative usefulness, for manipulating against each voting method, of the different types of limited information is largely the same under the spatial model as under the uniform utility model (this is true even for Minimax, Nanson, and Split Cycle, looking at which types of information produce less negative results). We conjecture that these findings about types of limited information are robust across other standard probability models as well.
|
A
|
This table reports marginal effects of panel logistic regressions using subject-level random effects and a cluster-robust VCE estimator at the matched-group level (standard errors in parentheses). The dependent variable is a binary variable that equals 1 if the consumer switched to a new expert in the current round. Undertreated, Overtreated, and Invested_LR are lagged by one round.
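As a rough sketch of this kind of specification, the snippet below fits a pooled logit with cluster-robust standard errors at the group level and reports average marginal effects; the subject-level random effects of the actual panel specification are not modeled here, and the variable names and simulated data are purely illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative data; column names mirror the description above, but the pooled
# logit below is only a stand-in for the random-effects panel logit in the table.
rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "switched":     rng.integers(0, 2, n),   # 1 if the consumer switched expert
    "undertreated": rng.integers(0, 2, n),   # lagged one round
    "overtreated":  rng.integers(0, 2, n),   # lagged one round
    "invested_lr":  rng.integers(0, 2, n),   # lagged one round
    "group":        rng.integers(0, 20, n),  # matched-group id (cluster)
})

model = smf.logit("switched ~ undertreated + overtreated + invested_lr", data=df)
res = model.fit(cov_type="cluster", cov_kwds={"groups": df["group"]}, disp=0)
print(res.get_margeff().summary())   # average marginal effects
```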
|
∗ $p<0.05$, ∗∗ $p<0.01$, ∗∗∗ $p<0.001$
|
∗ $p<0.05$, ∗∗ $p<0.01$, ∗∗∗ $p<0.001$
|
∗ $p<0.05$, ∗∗ $p<0.01$, ∗∗∗ $p<0.001$
|
∗ $p<0.05$, ∗∗ $p<0.01$, ∗∗∗ $p<0.001$
|
A
|
Fig. 13 shows that URLLC support provides lower profits than eMBB support except for very low $c$. The sum of profits decreases only slowly as $c$ increases.
|
Finally, when aggregating the SPs’ profits and the consumer surplus into the social welfare quantity, Fig. 8 shows that the variation of the aggregated profit dominates.
|
Fig. 6 shows that the aggregated surplus of the URLLC-supported users is greater than the surplus of the eMBB-supported users, which is sensible since it is a measure that aggregates the quality of the service, the price, and the number of subscribers. The same figure shows that the total consumer surplus does not exhibit high variation with $\epsilon$, but it peaks when the number of URLLC-supported users is maximum and the AoI is minimum.
|
Fig. 12 shows that the aggregated surplus of the URLLC-supported users is greater than the surplus of the eMBB-supported users. The same figure shows that the total consumer surplus decreases as $c$ increases.
|
Finally, when aggregating the SPs’ profits and the consumer surplus into the social welfare quantity, Fig. 14 shows that the variation of the aggregated consumer surplus dominates.
|
D
|
There is a large body of work deriving maximal inequalities and their derivatives, such as the functional CLT, for dependent data under mixing conditions; cf. [16, 3, 2, 29, 17, 6], and [10] and [22] for reviews. Closest to the present paper is the important work in [17], wherein the authors establish a functional invariance principle in the sense of Donsker for absolutely regular empirical processes. To the best of our knowledge, this and all other existing results in this literature are derived under assumptions implying that the mixing coefficients decay fast enough to zero, e.g. $\sum_{k=1}^{\infty}\beta(k)<\infty$ (see [17]), or $k^{\frac{p}{p-2}}(\log k)^{2\frac{p-1}{p-2}}\beta(k)=o(1)$ for some $p>2$ (see [3]), where $\beta$ is the $\beta$-mixing coefficient defined in [28].
|
Unfortunately, the approach utilized in [17] and related papers cannot be applied to establish maximal inequalities when the aforementioned restrictions on the mixing coefficients do not hold. One of the cornerstones of this approach relies on insights that can be traced back to Dudley in the 1960s ([19]) for Gaussian processes. Dudley’s work states that the “natural” topology to measure the complexity of the class of functions $\mathcal{F}$ is related to the variation of the stochastic process. In [17], the authors use this insight to construct a “natural” norm, which turns out to depend on the $\beta$-mixing coefficients. Unfortunately, without restrictions on the mixing coefficients, this approach is not feasible, as this norm may not even be well-defined.
|
The main result in the paper shows that the $L^1$ norm of $\sup_{f,f_0\in\mathcal{F}}|G_n[f-f_0]|$ is bounded (up to constants) by the aforementioned complexity measure. Moreover, a corollary of this result establishes that when a summability condition, similar to the one in [17], holds, our bound replicates, and in some cases improves (up to constants) upon, the results in the literature. Conversely, when the summability restriction does not hold (i.e., the mixing coefficients do not decay quickly to zero), a bound of the form (1) remains valid. However, in this case, the quantity $\Gamma$ is composed not only of the complexity measure, as in the standard case, but also of a scaling factor that depends on the mixing properties and the sample size. This last result implies that, in this case, the concentration rate is not root-$n$, as in the standard case, but slower and is a function of the mixing rate.
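Written schematically (with assumed notation, since the precise definitions follow the source's equation (1)), the type of bound being discussed is:

```latex
% Schematic maximal inequality: the L^1 norm of the supremum of the empirical
% process is controlled by a complexity/scaling factor \Gamma that, under slow
% mixing, also depends on the mixing coefficients and the sample size.
\mathbb{E}\Bigl[\,\sup_{f,f_0\in\mathcal{F}}\bigl|G_n[f-f_0]\bigr|\,\Bigr]
  \;\le\; C\,\Gamma\bigl(\mathcal{F},\ \text{mixing coefficients},\ n\bigr),
\qquad
G_n[f] \;=\; \frac{1}{\sqrt{n}}\sum_{t=1}^{n}\bigl(f(X_t)-\mathbb{E}[f(X_t)]\bigr).
```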
|
The family of norms proposed in the proof is linked to the dependence structure, which is captured by suitably chosen mixing coefficients. Papers such as [3, 17] use $\beta$-mixing, while some other papers use stronger concepts such as $\phi$-mixing (see [20]). In this paper, however, we use the weaker notion of $\tau$-mixing introduced by [13, 14]. These coefficients not only are typically weaker than the $\beta$-mixing ones, thereby encompassing a wider class of stochastic processes (see [13, 14] and Remark 1 below for a more thorough discussion), but they are also adaptive to the size of the class of functions $\mathcal{F}$, a property not enjoyed by standard mixing coefficients.
|
These results leave open the question of what type of maximal inequality one can obtain in contexts where the mixing coefficients do not satisfy these conditions. Many processes do not satisfy them, either because the data exhibit long-range dependence or long memory and this feature is modeled using a slowly decaying dependence structure, e.g. see [4] in the context of Markov processes; or because the data are described by a so-called infinite-memory chain (see [18]); or simply because the $\beta$-mixing coefficients decay at a slow polynomial rate (see [8]). More generally, there is the open question of how the mixing properties affect the concentration rate of the maximal inequality (1). This paper aims to provide insights into these questions.
|
D
|
In contrast, our LTU framework (which generalizes von Neumann’s TU framework) involves a continuum of contracts.
|
We show that matching problems with LTU are equivalent to two-player games which are a nonzero-sum generalization of von Neumann’s hide-and-seek game.
|
In this section we show how the LTU matching problem can be reframed as a generalization of the two-person game known as hide-and-seek.
|
Interestingly, the method of Scarf (1967) also involves two-person games – although not hide-and-seek.
|
Overall, the combination of these two results created an equivalence between matching problems with TU and zero-sum hide-and-seek games.
|
C
|
Our comparison of the concealed vs. revealed contracts has a direct isomorphism with third degree monopoly price discrimination. Specifically, our concealed setting corresponds to monopoly pricing without price discrimination, with the seller acting as principal and buyer acting as agent. Our revealed setting corresponds to a monopoly seller enacting third degree price discrimination over markets segmented by $X$.
|
Thus, with minor adjustments, we can apply results from the price discrimination literature that characterize the effects of third degree monopoly price discrimination on total welfare. Mirroring Varian [1985]’s seminal work, Lemma 4 shows that total welfare increases only if the quantity of tasks completed also increases in the revealed setting compared to the concealed setting.
|
The question of whether price discrimination increases total welfare has been well studied [Varian, 1985].
|
Our comparison of the concealed vs. revealed contracts has a direct isomorphism with third degree monopoly price discrimination. Specifically, our concealed setting corresponds to monopoly pricing without price discrimination, with the seller acting as principal and buyer acting as agent. Our revealed setting corresponds to a monopoly seller enacting third degree price discrimination over markets segmented by $X$.
|
To give additional sufficient conditions for concealment and revelation beyond the anchored setting with one zero-cost type, we apply an analysis technique similar to that of Aguirre et al. [2010], who analyzed the effects of third degree monopoly price discrimination on total welfare.
|
A
|
Stratified permutation is an instance of permutations with restricted positions (see, e.g., Rosenbaum, 1984; Diaconis
|
as the LHS of the inequality above is the sum of the areas of $n_s$
|
$Z=h_{\pi}\in\mathbb{R}^{n}$, where $\pi\sim\mathcal{U}(\mathbb{S}_{n})$ and $\mathbb{S}_{n}$ is the set of all stratified permutations in this setting.
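A minimal numpy sketch of drawing such a $\pi$ uniformly at random, assuming the strata are given as a label array (the function and variable names are illustrative):

```python
import numpy as np

def stratified_permutation(strata, rng=None):
    # Draw a uniformly random permutation that shuffles indices only within
    # each stratum (a permutation with restricted positions).
    rng = np.random.default_rng(rng)
    strata = np.asarray(strata)
    pi = np.arange(strata.size)
    for s in np.unique(strata):
        idx = np.flatnonzero(strata == s)
        pi[idx] = rng.permutation(idx)   # shuffle within the stratum only
    return pi

# One draw of Z = h_pi with pi uniform over the stratified permutations:
strata = np.array([0, 0, 0, 1, 1, 1])
h = np.array([0.3, -1.2, 0.5, 2.0, 0.1, -0.7])
pi = stratified_permutation(strata, rng=42)
Z = h[pi]
```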
|
Bickel, 2021); thus, the set of such permutations forms a subset of all permutations of observation indices. Chen
|
et al. (2011, Chapter 6) provide normal approximation results for a class of restricted permutations, in contrast with the full set of permutations required in Hoeffding's (1951) CLT, but they do not consider stratified permutations. The present paper fills this gap in the literature by developing a normal approximation for this important class of permutations.
|
C
|
A collection of ranked experiments identifies spectral subdivision $\tilde{C}$ if for any prior $\mu\in\operatorname{int}\Delta$ there exists a decision problem that satisfies the collection and induces $\tilde{C}$, and any decision problem and prior that satisfies the collection induces $\tilde{C}$.
|
There exists a collection of ranked experiments and utility differences that identifies the value of information for the agent.
|
For any spectral subdivision $\tilde{C}$, there exists a collection of ranked experiments that identifies it.
|
By construction, for a decision problem whose value function induces subdivision $C$, at prior $\mu\in\operatorname{int}\Delta$, the DM is indifferent between any experiment $\pi$ possessing $\tilde{\pi}\in\tilde{C}_i$ as a submatrix and any experiment $\tilde{\pi}^{\prime}$ that is the same as $\pi$ except that $\tilde{\pi}$ is replaced by an arbitrary garbling. We term $\tilde{C}_i$ a spectral element, and the set $\tilde{C}\coloneqq\{\tilde{C}_1,\dots,\tilde{C}_t\}$ the spectral subdivision. Importantly, a spectral subdivision is prior-free: it is a set of families of matrices.
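To make the garbling operation concrete: an experiment can be represented as a row-stochastic matrix (states by signals), and a garbling post-multiplies it by another row-stochastic (Markov) matrix. The matrices in this small numpy sketch are illustrative, not taken from the source.

```python
import numpy as np

# An experiment: rows are states, columns are signals; each row sums to 1.
pi = np.array([[0.9, 0.1],
               [0.2, 0.8]])

# A garbling matrix: each signal is remapped to a distribution over new
# signals, independently of the state.
M = np.array([[0.7, 0.3],
              [0.4, 0.6]])

pi_garbled = pi @ M                                # the garbled experiment
assert np.allclose(pi_garbled.sum(axis=1), 1.0)    # still row-stochastic
print(pi_garbled)
```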
|
A collection of ranked experiments identifies spectral subdivision $\tilde{C}$ if for any prior $\mu\in\operatorname{int}\Delta$ there exists a decision problem that satisfies the collection and induces $\tilde{C}$, and any decision problem and prior that satisfies the collection induces $\tilde{C}$.
|
B
|
If $\nu$ has a single preference in its support, we are done. Suppose otherwise that $n\geq 2$, so that $\succ_1$ and $\succ_2$ are in the support of $\nu$. Since $\succ_1$ and $\succ_2$ differ, there exists some pair $(x,y)$ such that $x\succ_1 y$ and $y\succ_2 x$. Since $\nu$ satisfies the single-crossing property, for all $i\geq 2$ we have that $y\succ_i x$. Let $A$ denote the set such that $\succ_1\in L(x,A)$. Since $x\succ_1 y$, $y\in A$ and thus $L(x,A)\cap\{\succ_1,\dots,\succ_n\}=\{\succ_1\}$. The result now follows by induction.
|
It follows from Proposition 4.3 that the supports of SCRUM representations satisfy edge decomposability. Specifically, we know that every subset of a SCRUM support is itself the support of some SCRUM representation. It then follows that the lowest-ranked preference in the support is the unique element of some $L(x,A)$ among preferences in the support. In other words, once the support of a SCRUM representation is pinned down, we can recursively find the probability weights on the preferences by looking at the lowest and then the next-lowest-ranked preferences in the support. We now show through an example that SCRUM supports fail to capture every edge decomposable model.
|
Unlike previously, it is not immediately obvious that the supports of SCRUM representations are edge decomposable. In order to see that they are, note the following.
|
et al. (2017) and Turansick (2022). We show that every set of preferences which can be the support of some distribution over preferences satisfying the conditions of either Apesteguia
|
et al. (2017). SCRUM puts further structure on $X_n$ in that SCRUM assumes that $X_n$ is endowed with some exogenous linear order $\rhd$. We say that a random choice rule $p$ is rationalizable by SCRUM if there exists a distribution over preferences $\nu$ such that the support of $\nu$ can be ordered so that it satisfies the single-crossing property with respect to $\rhd$.
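A minimal sketch of what checking the single-crossing property involves computationally: along an ordering of the preferences, the comparison of each pair of alternatives should switch at most once. The encoding of preferences as tuples from best to worst is an illustrative choice, and the sketch omits the additional requirement that the direction of each switch be tied to the exogenous order $\rhd$.

```python
from itertools import combinations

def prefers(ranking, x, y):
    # True if x is ranked above y in `ranking` (a tuple from best to worst).
    return ranking.index(x) < ranking.index(y)

def switches_at_most_once(ordered_rankings):
    # Weak single-crossing check: for every pair of alternatives, the
    # preference over that pair flips at most once along the ordering.
    alternatives = ordered_rankings[0]
    for x, y in combinations(alternatives, 2):
        pattern = [prefers(r, x, y) for r in ordered_rankings]
        if sum(a != b for a, b in zip(pattern, pattern[1:])) > 1:
            return False
    return True

ok  = [("a", "b", "c"), ("b", "a", "c"), ("b", "c", "a")]   # single-crossing
bad = [("a", "b", "c"), ("b", "a", "c"), ("a", "b", "c")]   # a vs b flips twice
print(switches_at_most_once(ok), switches_at_most_once(bad))  # True False
```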
|
A
|