robench-2024b · Collection · 48 items · Updated

context (string, 100–14.5k) | A (string, 100–4.09k) | B (string, 100–3.15k) | C (string, 100–3.91k) | D (string, 100–4.49k) | label (4 classes)
---|---|---|---|---|---
A special example of $p(\cdot)$-convex risk measures, the so-called OCE, is discussed in the next section. Finally, in Sect. 5, the $p(\cdot)$-convex risk measures are used to study the dual representation of the $p(\cdot)$-dynamic risk measures.
|
4 Optimized Certainty Equivalent on $\mathbf{L^{p(\cdot)}}$
|
3 Convex risk measures on $\mathbf{L^{p(\cdot)}}$
|
In this section, a special class of $p(\cdot)$-convex risk measures, namely the Optimized Certainty Equivalent (OCE), is studied; it will be used as an example of dynamic risk measures in Sect. 5.
|
5 Dynamic risk measures on $\mathbf{L^{p(\cdot)}}$
|
A
|
As we approximated $u$ by $\tilde{u}$ and $\hat{u}$ in Lemma 4.2, we approximate the process $\frac{2Z_{\cdot}^{2}}{\sigma(\cdot,S_{\cdot})S_{\cdot}}$ by $u$. Likewise, the random variable $\frac{1}{\int_{0}^{T}Z_{t}\,dt}$ is approximated by $F$. Compare the following Lemmas 5.3 and 5.4 with Lemmas 4.2, 4.3, and 4.4.
|
Under Assumption 1, for any $p>0$, as $t\rightarrow 0$, we have
|
Under Assumption 1, for any $p>0$, as $t\rightarrow 0$, we have
|
Under Assumption 1, for any $p>0$, as $T\rightarrow 0$, we have
|
Under Assumption 1, for any $p>0$, there exists a positive constant $D_{p}$ depending only on $p$ such that the following inequalities hold.
|
B
|
Moreover, even though our GSA methodology was originally developed for simulation models, it could also be used with Machine Learning methods for functional data [37]. Its role in this context would be to provide a simple yet probabilistically sound way to perform significance testing of input parameters.
|
Figure 5: P-values for the SSP2 - SSP3 transition. In all panels, the x axis represents time (from 2020 to 2090), while the y axis shows the adjusted (full line) and unadjusted (dotted line) p-value functions, ranging from 0 to 1. Rows and colors denote different drivers, while the two columns correspond to Individual and Interaction effects.
|
Looking at the sensitivity indices, as in the previous case, the impacts of income (GDPPC) and energy intensity (END) are the most evident. In the SSP2 to SSP3 case we also observe probably significant time dynamics for the fossil fuel availability (FF) variable. Unlike in the previous case, we also observe that the interaction effects for energy intensity (END) and income (GDPPC) have the same direction.
|
A fundamental tool to understand and explore the complex dynamics that regulate this phenomenon is the use of computer models. In particular, the scientific community has oriented itself towards coupled climate-energy-economy models, also known as Integrated Assessment Models (IAMs). These are pieces of software that integrate climate, energy, land and economic modules to generate predictions about decision variables for a given period (usually, the next century). They belong to two very different paradigms [see e.g. 38]: detailed process models, which have provided major input to climate policy making and to assessment reviews such as those of the IPCC, and benefit-cost models such as the Dynamic Integrated Climate-Economy (DICE) model [20], for which the economics Nobel prize was awarded in 2018. A classic variable of interest in this kind of analysis is the level of future $CO_{2}$ emissions, since these directly affect climatic variables such as global average temperature.
|
Matteo Fontana acknowledges financial support from the European Research Council, ERC grant agreement no. 336155 - project COBHAM ’The role of consumer behaviour and heterogeneity in the integrated assessment of energy and climate policies’. Massimo Tavoni acknowledges financial support from the European Research Council, ERC grant agreement no. 101044703 - project EUNICE. The authors would also like to thank three anonymous reviewers for the insightful comments provided.
|
D
|
The equivalence between the absence of arbitrage opportunities and the existence of a martingale measure, known as the fundamental theorem of asset pricing (FTAP for short), is a core topic in mathematical finance. FTAP results are discussed in classical models under the assumption that the dynamics of risky assets are known precisely; see [49], [22], [25], [27], etc. Nonetheless, model uncertainty, i.e., the risk of using wrong models, cannot be ignored in practice. Since the seminal work of Knight (1921) [48], uncertainty modeling has emerged as an effective tool to address this issue.
|
The pathwise approach, pioneered by [36], makes no assumptions on the dynamics of the underlying assets. Instead, the set of all models which are consistent with the prices of observed vanilla options was investigated and bounds on the prices of exotic derivatives were derived. The approach was applied to barrier options in [13], to forward start options in [38], to variance options in [17], to weighted variance swaps in [24], among others. [23] introduced the concept of model independent
|
of all probability measures on $\Omega_{t}$. In [1], a pathwise version of the first FTAP was given under the existence of a superlinearly growing option. This condition ensures the compactness of the set of martingale measures compatible with option prices. In the parametrization setting, we prove a robust version of the DMW theorem without any condition on $\Omega$ or the existence of other traded options; thus our results cannot be deduced from [12], [1]. Technically, we assume the continuity of the price processes with respect to the uncertain parameters. In addition, the laws of the uncertain price processes in the current setting are not necessarily of product form, see Example 4.2 of [59].
|
The three approaches also differ from a technical point of view. The pathwise approach assumes that there are some traded vanilla options from which marginal distributions of the underlying assets are deduced. Techniques from martingale optimal transport are employed to derive robust bounds for other exotic options. In the quasi-sure approach, one has to work in a “local fashion” where heavy tools from the theory of analytic sets and measurable selections are applied to glue one-period solutions together by dynamic programming. In contrast, the parametrization framework does not require the existence of other options as a part of the modelling, and it allows the use of standard arguments from the classical no-arbitrage pricing theory. Our proof techniques include a new global argument without dynamic programming. The global argument is suitable for continuous-time models and in particular for models with transaction costs, see [19] for more details. Most importantly, we are able to apply the $L^{p}$ theory to reach satisfactory results without restrictive conditions.
|
From the modelling point of view, the parametrization framework differs from the pathwise and quasi-sure approaches in several ways. In the pathwise approach, randomness and filtrations are generated by the canonical process. The quasi-sure approach works with Polish spaces, and filtrations come from the universal completion of Borel sigma fields. In contrast, the parametrization approach does not require any conditions on the state space. It may incorporate different sources of randomness into each price process, and the filtration can be considerably richer than the natural filtration of stock prices. These properties are useful if one wishes to deal with many price processes and more complex payoffs. For example, in a discrete-time quasi-sure setting where European options are available for static trading, [6] showed that, in their language, the superhedging price (hedger’s price) of an American option can be strictly greater than the highest model-based price (Nature’s price). In a similar framework, [40] showed that these
|
A
|
Sadler (2015) note that cascade and diffusion utilities coincide in their binary-state binary-action setting.
|
Ozdaglar (2011) and others. The foundation in our general setting is a novel compactness-continuity argument.
|
Ozdaglar (2011), owes to certain monotonicity that does not extend beyond their binary-binary setting.
|
Ozdaglar (2011) provide a general treatment of observational networks in an otherwise classical setting. But they only allow for binary states and binary actions. They introduce the condition of expanding observations, explaining that this property of the network is necessary for learning. They establish that it is also sufficient for learning with unbounded beliefs. Building on Banerjee and
|
Sadler (2015) note that cascade and diffusion utilities coincide in their binary-state binary-action setting.
|
B
|
Instead of WAP, one could compare maximin protocols in terms of their power over a local (to $\theta=0$) alternative space or focus on admissible maximin protocols. In Appendix C.2, we consider a notion of local power with the property that locally most powerful protocols are also admissible when $\lambda=0$. This notion of local power is inspired by the corresponding notions in Section 4 of Romano
|
Romano (2005b). We show that any globally most powerful protocol is also locally most powerful (and thus admissible if $\lambda=0$) under linearity and normality.
|
Here, we consider the general case where $\lambda\geq 0$ and show that when $\lambda>0$, the planner’s subjective utility from research implies a notion of power. Globally optimal protocols generally depend on both $\lambda$ and the planner’s prior $\pi$. We restrict our attention to the following class of planner’s priors $\Pi$.
|
We consider two notions of optimality: maximin optimality (corresponding to the case where $\lambda=0$) and global optimality (corresponding to the more general case where $\lambda\geq 0$). Accordingly, we say that $r^{*}$ is maximin optimal if
|
Instead of WAP, one could compare maximin protocols in terms of their power over a local (to $\theta=0$) alternative space or focus on admissible maximin protocols. In Appendix C.2, we consider a notion of local power with the property that locally most powerful protocols are also admissible when $\lambda=0$. This notion of local power is inspired by the corresponding notions in Section 4 of Romano
|
A
|
Compared with these previous studies, this study identifies the exogenous shocks that transform the economy from stagnation to growth based on economic history studies and quantitatively examines the magnitude of the shocks.
|
We can incorporate into the model the elements of endogenous growth models, wherein scientists engage in the R&D of manufacturing goods in the non-Malthusian state, and the basic properties of the model would not change.
|
This section analytically investigates the properties of the model, particularly the population dynamics of the Malthusian state and the effect of a sudden increase in land supply.
|
The remainder of this paper is organized as follows: Section 2 introduces the model. Section 3 discusses the analytical properties of the proposed model.
|
As explained in Section 3.2, I model the relief of land constraints, which Pomeranz argues was the cause of the Great Divergence and the Industrial Revolution in Britain, as a sudden increase in $Z$.
|
C
|
For the experiment, we turn our focus to the purely congestive case, using the number of free-riders in a group to describe an efficient structure.
|
The treatment variation was implemented in the second part. In three baseline sessions, consisting of a total of 72 subjects in 18 groups, subjects were told that the second part of the experiment would be exactly the same as the first part, except that subject IDs would be randomly reassigned. In five treatment sessions, consisting of 112 subjects across 28 groups, subjects were also told that they would play the game for another 15 rounds. However, in addition to reassigning ID’s, subjects were also told that they would be shown how much benefit they received in the previous round from each other subject in their group. Although providing subjects with information about the past behavior of others in their group does not change the unique Nash equilibrium, this information treatment facilitates direct reciprocity where the baseline sessions do not. After the two main parts of the experiment were finished, subjects completed a series of questionnaires designed to elicit behavioral characteristics. Questions from this section are shown in Appendix LABEL:app:questions. Sessions lasted no longer than an hour. At the end of the session, subjects were paid privately by check, earning an average of $16.69, including a $10 show-up fee.
|
We estimate this model using the data from our laboratory experiment and present the results of these estimations in Table 2.
|
In this section, we describe the design and procedures of the laboratory experiment in greater detail. The experiment was conducted using undergraduate students in the XS/FS Experimental Social Sciences Laboratory at Florida State University. We collected data from a total of 184 subjects across eight sessions. Subjects were recruited using ORSEE (Greiner, 2015), and played a computerized version of the game programmed using zTree (Fischbacher, 2007). Instructions used in the experiment, including screenshots of the decision screens, are contained in Appendix LABEL:app:instructions.
|
In Section 2, we lay out a simple theoretical framework for the collaborative sharing environment. Section 3 describes the design and procedures for the laboratory experiment testing the effects of different information structures on collaboration patterns. We present and discuss the reduced-form results of the experiment in Section 4. Then, in Section 5 we develop and estimate a structural empirical framework to analyze social preferences using our experimental data, and discuss the results of the estimation and the goodness of fit. This section also presents the three counterfactual simulations designed to measure the value of social preferences for trust and reciprocity. Section 6 concludes.
|
C
|
$\big\{f\in\ell^{\infty}(\mathbb{X},\mathcal{B}(\mathbb{X})):\xi\mapsto\int_{\mathbb{X}}f(x)\,\xi(\operatorname{d\!}x)\text{ is }\mathcal{E}(\Xi)\text{-}\mathcal{B}(\mathbb{R})\text{ measurable}\big\}$, where $\mathcal{E}(\Xi)$ is defined in the initial way, along with the fact that pointwise convergence preserves measurability (cf. [3, Section 4.6, Lemma 4.29]). $\mathcal{E}(\Xi)$ can be defined as the $\sigma$-algebra containing the sets $\{\xi\in\Xi:\int_{\mathbb{X}}f(x)\,\xi(\operatorname{d\!}x)\in B\}$ for all $f\in\ell^{\infty}(\mathbb{X},\mathcal{B}(\mathbb{X}))$ and $B\in\mathcal{B}(\mathbb{R})$.
|
$\big\{f\in\ell^{\infty}(\mathbb{X},\mathcal{B}(\mathbb{X})):\xi\mapsto\int_{\mathbb{X}}f(x)\,\xi(\operatorname{d\!}x)\text{ is }\mathcal{E}(\Xi)\text{-}\mathcal{B}(\mathbb{R})\text{ measurable}\big\}$, where $\mathcal{E}(\Xi)$ is defined in the initial way, along with the fact that pointwise convergence preserves measurability (cf. [3, Section 4.6, Lemma 4.29]). $\mathcal{E}(\Xi)$ can be defined as the $\sigma$-algebra containing the sets $\{\xi\in\Xi:\int_{\mathbb{X}}f(x)\,\xi(\operatorname{d\!}x)\in B\}$ for all $f\in\ell^{\infty}(\mathbb{X},\mathcal{B}(\mathbb{X}))$ and $B\in\mathcal{B}(\mathbb{R})$.
|
$\big\{(x,a)\in\mathbb{X}\times\mathbb{A}:P(t,x,a,\cdot)\in\{\xi\in\Xi:\xi(A)\in B\}\big\}$
|
In view of Lemma B.9, we have $\mathcal{B}(\Xi)=\mathcal{E}(\Xi)$.
|
is the $\sigma$-algebra containing sets of the form $\{\xi\in\Xi:\xi(A)\in B\}$ with $A\in\mathcal{B}(\mathbb{X})$ and $B\in\mathcal{B}([0,1])$. In other words, $\mathcal{E}(\Xi)$ is the smallest $\sigma$-algebra on $\Xi$ such that the mapping $\xi\mapsto\xi(A)$ is $\mathcal{E}(\Xi)$-$\mathcal{B}([0,1])$ measurable for any $A\in\mathcal{B}(\mathbb{X})$. Equivalently (one direction is obvious because $\xi(A)=\int_{\mathbb{X}}\mathbbm{1}_{A}(x)\,\xi(\operatorname{d\!}x)$ for $A\in\mathcal{B}(\mathbb{X})$; the other direction follows from an application of the monotone class theorem for functions (cf. [31, Theorem 5.2.2]) on $\big\{f\in\ell^{\infty}(\mathbb{X},\mathcal{B}(\mathbb{X})):\xi\mapsto\int_{\mathbb{X}}f(x)\,\xi(\operatorname{d\!}x)\text{ is }\mathcal{E}(\Xi)\text{-}\mathcal{B}(\mathbb{R})\text{ measurable}\big\}$),
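For clarity, the two equivalent descriptions of $\mathcal{E}(\Xi)$ used above can be collected in one display (a restatement of the definitions in this appendix, not an addition to them):

$$\mathcal{E}(\Xi)=\sigma\Big(\{\xi\in\Xi:\xi(A)\in B\}:A\in\mathcal{B}(\mathbb{X}),\,B\in\mathcal{B}([0,1])\Big)=\sigma\Big(\{\xi\in\Xi:\textstyle\int_{\mathbb{X}}f(x)\,\xi(\operatorname{d\!}x)\in B\}:f\in\ell^{\infty}(\mathbb{X},\mathcal{B}(\mathbb{X})),\,B\in\mathcal{B}(\mathbb{R})\Big).$$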
|
C
|
Table 3: Analysis of the expected value of including uncertainty (NV: newsvendor model; PF: point forecasts).
|
Table 10 in the Appendix provides additional results on combinations where we apply distributional information for two sources of uncertainty while relying on the expected value for the third. We find that the value of including uncertainty varies between the different model components, and that the sequence in which distributional information is included also matters. For example, we find that including the probability distribution for supply is only beneficial when the retailer also accounts for uncertainty in demand.
|
The lookahead policy allows the retailer to account for uncertainty in the stochastic variables demand, supply, and spoilage in a multi-period setting where the underlying parameters of the probability distributions are assumed to be known. In practice, however, retailers need to adequately estimate these distributions from features such as historical data before they are able to make replenishment order decisions based on probabilistic information. To this end, data collection, data preparation, and data analysis require operational effort and costs for retailers, which need to be taken into account. Thus, we now evaluate the benefit of applying distributional information for each of the different sources of uncertainty (limiting information on the other two sources to point forecasts). The results give us insights into the value of probabilistic information. For each source of uncertainty, we consider two different information settings: (i) the retailer knows only the expected value of the uncertain quantity, and (ii) the retailer knows the full probability distribution.
|
In our analysis, for each information scenario, the retailer optimises the replenishment order quantity in each demand period according to the information available (i.e. expected values or distributions). This allows us to estimate the EVIU, i.e. the cost reduction gained from precise distributional information, for each source of uncertainty as well as for the whole model. Table 3 reports the different settings compared to probabilistic information for all sources of uncertainty, and additionally gives the savings compared to the newsvendor model as well as to the setting of point forecasts. Including distributional information for demand alone already leads to a substantial reduction in total costs relative to point forecasts (-51.6%). To account for the variation in demand, the retailer here increases replenishment order quantities and holds a significantly higher safety stock. Therefore, the average inventory level and amount of spoilage increase more than threefold compared to the situation of point forecasts. However, because of the asymmetric cost structure, savings due to the increased service level exceed the additional expenditures for spoilage and inventory holding. Improvements with respect to costs are also obtained when including only the shelf life’s probability distribution, but with a much smaller effect (-6.8% in total costs compared to point forecasts) due to the low probability of spoilage within the first two sales periods.
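As a rough illustration of the comparison behind Table 3, the sketch below contrasts a single-period order decision based only on the expected demand with one based on the full demand distribution (a classical newsvendor quantile rule). The cost parameters and demand distribution are hypothetical and purely illustrative; they are not taken from the paper's simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, illustrative cost parameters (not the paper's values).
underage_cost = 5.0   # cost per unit of unmet demand (lost sales / service penalty)
overage_cost = 1.0    # cost per unit of leftover stock (holding / spoilage)

# Hypothetical demand scenarios; the retailer may know only the mean or the full distribution.
demand = rng.gamma(shape=4.0, scale=25.0, size=100_000)  # mean = 100

def expected_cost(order_qty: float, demand_samples: np.ndarray) -> float:
    """Average mismatch cost of an order quantity over demand scenarios."""
    shortage = np.maximum(demand_samples - order_qty, 0.0)
    leftover = np.maximum(order_qty - demand_samples, 0.0)
    return float(np.mean(underage_cost * shortage + overage_cost * leftover))

# (i) Point-forecast policy: order the expected demand.
q_point = float(np.mean(demand))

# (ii) Distributional policy: order the critical-ratio quantile (newsvendor solution).
critical_ratio = underage_cost / (underage_cost + overage_cost)
q_dist = float(np.quantile(demand, critical_ratio))

cost_point = expected_cost(q_point, demand)
cost_dist = expected_cost(q_dist, demand)
print(f"point-forecast order {q_point:.1f}: cost {cost_point:.2f}")
print(f"distributional order {q_dist:.1f}: cost {cost_dist:.2f} "
      f"({100*(cost_point-cost_dist)/cost_point:.1f}% lower)")
```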
|
Our simulation study in Chapter 4 suggests that retailers are already able to reduce costs substantially even when accounting only for demand uncertainty. Therefore, we further compare average costs when using the lookahead policy incorporating only information on the demand distribution with the benchmark policy for the SKU mushrooms and every fulfilment centre (Table 7). We find that using only the demand distribution reduces average costs over all fulfilment centres by 22.9%, whereas additionally including distributional information on the shelf life and supply shortages leads to a further cost reduction of only 1.1%. These findings corroborate the results from the simulation study, indicating that the demand distribution is the main source of uncertainty and the most relevant information to incorporate in the replenishment order decision.
|
A
|
Notes: The table reports FE and IV estimates with robust standard errors (in parentheses), including time and country fixed effects. In columns 1 and 2, the dependent variable is the log of CO2 emissions (thousand metric tons of CO2), whereas in column 3 it is external debt. External debt is instrumented by the exposure of country i to international liquidity shocks (see equation 2). *** p<0.01, ** p<0.05, * p<0.1. The sample period is 1991-2015. F first stage is the F statistic for the first stage of the instrumental variables estimates. The Kleibergen-Paap rk LM statistic tests for the relevance of the instruments. The Anderson-Rubin Wald test statistic tests for the statistical significance of the main (beta) coefficient associated with external debt in the presence of potentially weak instruments.
|
Our main research question is: what is the effect of external debt on GHG emissions? We found only a few papers that address this relationship, most of which deal with a single country (Katircioglu and Celebi, 2018; Beşe et al., 2021b; Beşe et al., 2021a; Beşe and Friday, 2022; Bachegour and Qafas, 2023). Our work is more closely aligned with Akam et al. (2021), who use a panel of thirty-three heavily indebted poor countries. However, we expand the scope of analysis by using a wide panel of seventy-eight EMDEs. In addition, unlike the previous literature, we use external instruments to deal with potential endogeneities in the relationship between external debt and GHG emissions. In particular, we exploit the exposure to global push factors of international monetary liquidity (Reinhart and Reinhart, 2009; Forbes and Warnock, 2012; Rey, 2015) as an exogenous variation in external debt.
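To make the instrumental-variables idea concrete, here is a minimal two-stage least squares sketch in which an exposure-weighted liquidity shock instruments external debt in an emissions regression. The variable names and data-generating process are invented for illustration; the paper's actual specification includes country and time fixed effects and robust standard errors, which are omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Illustrative data-generating process (not the paper's data):
liquidity_shock = rng.normal(size=n)          # instrument: exposure to global liquidity
confounder = rng.normal(size=n)               # unobserved factor driving both variables
external_debt = 0.8 * liquidity_shock + confounder + rng.normal(size=n)
log_co2 = 0.5 * external_debt - confounder + rng.normal(size=n)

def two_sls(y, x, z):
    """Two-stage least squares with an intercept and a single endogenous regressor."""
    Z = np.column_stack([np.ones_like(z), z])
    X = np.column_stack([np.ones_like(x), x])
    # First stage: project the endogenous regressor on the instrument.
    x_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    # Second stage: regress the outcome on the fitted values.
    beta = np.linalg.lstsq(x_hat, y, rcond=None)[0]
    return beta[1]

ols_beta = np.linalg.lstsq(
    np.column_stack([np.ones(n), external_debt]), log_co2, rcond=None)[0][1]
iv_beta = two_sls(log_co2, external_debt, liquidity_shock)
print(f"OLS estimate (biased by the confounder): {ols_beta:.2f}")
print(f"2SLS estimate using the liquidity-shock instrument: {iv_beta:.2f}")
```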
|
We find a positive and statistically significant effect of external debt on GHG emissions when we take into account the potential endogeneity problems. A 1 pp. rise in external debt causes, on average, a 0.5% increase in GHG emissions.
|
We contribute to the recent study of the relationship between external debt and GHG emissions with causal evidence in a wide panel of countries. We estimate the impact of external debt on GHG emissions in a panel of 78 EMDEs from 1990 to 2015 and, unlike previous literature, we use external instruments to address potential endogeneity problems. Specifically, we use international liquidity shocks as instrumental variables for external debt.
|
We contribute to the recent study of the relationship between external debt and GHG emissions with causal evidence in a wide panel of countries. We estimate the impact of external debt on GHG emissions in a panel of 78 EMDEs from 1990 to 2015 and, unlike previous literature, we use external instruments to address potential endogeneity problems. Specifically, we use international liquidity shocks as instrumental variables for external debt.
|
C
|
A regime-switching model is very natural given that the history of inflation is a succession of periods of low and high inflation of varying lengths. The idea of using such models is not new, as it was first proposed for US inflation by Evans and Wachtel (1993). We follow Amisano and Fagan (2013) in the use of a regime-switching $AR(1)$ process, except that we do not make the transition probabilities depend on money growth.
|
Table 6: Parameters calibrated on the log-returns of the CPI-U. We refer to Section 3.2.4 for the definitions of these parameters.
|
Table 2: Parameters of the regime-switching $AR(1)$ process and the Gamma random walk. The parameters of the former are inspired by parameters calibrated on real inflation data that we present later, while the parameters of the latter are obtained by moment-matching as described above.
|
Similarly to the previous section, we simulate 10000 one-year paths with a monthly frequency for both calibrated models and check that the distributions of the annual log-returns (i.e. the annual inflation rates) are close to the historical ones. The comparison of the empirical densities (see 14(b)) does not reveal significant deviations from the historical density, and the hypothesis of identical distributions is not rejected by the Kolmogorov-Smirnov test (see Table 4(b)) for either model. Note that the simulated paths all start from 0 and the initial regime for the RSAR(1) process is sampled from the stationary distribution of the Markov chain.
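A minimal sketch of the simulation step described above: monthly log-inflation paths from a two-regime Markov-switching AR(1) process, with the initial regime drawn from the stationary distribution and the monthly returns aggregated to annual log-returns. The numerical parameter values below are placeholders for illustration only; the calibrated values are those reported in the tables.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder regime-switching AR(1) parameters (illustrative, not the calibrated ones).
P = np.array([[0.95, 0.05],      # monthly transition matrix between the two regimes
              [0.10, 0.90]])
mu = np.array([0.0015, 0.0060])  # regime-specific means of monthly log-inflation
phi = np.array([0.30, 0.50])     # regime-specific AR(1) coefficients
sigma = np.array([0.002, 0.006]) # regime-specific innovation standard deviations

# Stationary distribution of the Markov chain, used to draw the initial regime.
eigvals, eigvecs = np.linalg.eig(P.T)
stat = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
stat = stat / stat.sum()

n_paths, n_months = 10_000, 12
x = np.zeros((n_paths, n_months))            # monthly log-returns of the price index
regime = rng.choice(2, size=n_paths, p=stat)
prev = np.zeros(n_paths)                     # paths start from 0, as in the text
for t in range(n_months):
    m, a, s = mu[regime], phi[regime], sigma[regime]
    x[:, t] = m + a * (prev - m) + s * rng.normal(size=n_paths)
    prev = x[:, t]
    # Draw next month's regime from the row of the transition matrix.
    regime = np.where(rng.random(n_paths) < P[regime, 0], 0, 1)

annual_log_return = x.sum(axis=1)            # annual inflation rate per path
print(annual_log_return.mean(), annual_log_return.std())
```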
|
The GRM is calibrated by matching the first three moments of the historical annual log-returns, while the RSAR(1) process is calibrated by log-likelihood maximization. The calibrated parameters are reported in Table 6(b).
|
D
|
Finally, each respondent is asked to indicate what they consider a fair wage for that job description. Their answer R𝑅Ritalic_R is recorded, with values ranging between r=0𝑟0r=0italic_r = 0 and r=50𝑟50r=50italic_r = 50. Improper answers were excluded from the survey and constituted less than 0.5% of the responses.
|
Table 1: Humans vs. AI. These results are reported in graphical form and discussed in Fig. 1. Notice a similar trend but with a downward offset of about $5 for the AI. For the anchors of $50 and $100, the histogram splits into two modes, rendering the mean, median, and standard deviations not representative. The modal analysis for the anchors of $50 and $100 is shown in Fig. 2 and Fig. 4.
|
A trained transformer is a deterministic map, so the collection of tokens produced in response to a certain input string is unchanged if I apply the string repeatedly. Each output position is associated with a logit vector with as many components as there are tokens in the vocabulary (GPT-3 uses sub-word tokenization). Each component of this vector can be interpreted as the log-probability that the output is the token corresponding to that component. The probabilities for all components can be arranged in a vector called the softmax vector. These are the probabilities displayed in the GPT-3 interface when selecting Full Spectrum for the option Show Probabilities. In practice, since for GPT-3 the number of possible tokens is very large, the interface shows only the top-5 or top-6 values and indicates the sum of the probabilities for these top-ranking choices, typically in the 80-100% range.
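The relationship between the logit vector and the displayed probabilities can be made explicit with a small sketch: a softmax over the logits gives a probability for each token in the vocabulary, and the interface then surfaces only the top-ranked entries together with their cumulative probability. The toy vocabulary and logit values below are placeholders, not actual GPT-3 outputs.

```python
import numpy as np

# Toy six-token vocabulary and logits for a single output position (illustrative only;
# GPT-3 uses a sub-word vocabulary with tens of thousands of tokens).
vocab = np.array(["$10", "$12", "$15", "$8", "$20", "$5"])
logits = np.array([2.1, 1.7, 1.3, 0.4, -0.2, -1.0])

# Softmax: exponentiate (after subtracting the max for numerical stability)
# and normalize so the entries sum to one.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Show the top-5 tokens and the total probability mass they cover,
# mimicking the "Show Probabilities: Full Spectrum" display.
top = np.argsort(probs)[::-1][:5]
for idx in top:
    print(f"{vocab[idx]:>4s}: {probs[idx]:.3f}")
print(f"top-5 cumulative probability: {probs[top].sum():.1%}")
```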
|
I demonstrate that the minimum wage functions as an anchor for what Prolific workers consider a fair wage: for numerical values of the minimum wage ranging from $5 to $15, the perceived fair wage shifts towards the minimum wage, thus establishing its role as an anchor (Fig. 1 and Table 1). I replicate this result for a second job description, finding that the effect holds even for jobs where wages are supplemented by tips.
|
A summary of the results for realistic values of the anchor is shown in Fig. 1. The full range is in Fig. 2. I aggregate data into histograms approximating the probabilities of a certain wage $P$ for each job description. For each job description, I compute:
|
D
|
$\hat{\omega}_{i}^{\square}=\omega_{i}^{\square}\Big/\sum_{\mathcal{P}\in\mathcal{M}^{\square}}\omega,\quad\text{s.t.}\ \ \square\in\{1,2\}$
|
Then we perform Top-$K$ filtering guided by $\omega$, i.e., sampling the refined super metapaths with importance factors in the top $K$ for subsequent feature aggregation.
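A minimal sketch of this normalization and Top-K filtering step, with invented importance factors: each super metapath's factor is divided by the sum of factors in its group, and only the K highest-scoring metapaths are kept for the subsequent feature aggregation.

```python
import numpy as np

def normalize_and_topk(importance: np.ndarray, k: int) -> np.ndarray:
    """Normalize importance factors within a group and return the indices of the top-k."""
    omega_hat = importance / importance.sum()        # normalized importance factors
    top_k_idx = np.argsort(omega_hat)[::-1][:k]      # indices of the k largest factors
    return top_k_idx

# Invented importance factors for six super metapaths in one group.
omega = np.array([0.8, 0.1, 2.3, 0.5, 1.7, 0.2])
selected = normalize_and_topk(omega, k=3)
print("selected super metapaths:", selected)         # -> [2 4 0]
```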
|
After grouping the super metapaths, we update the features of the target CA by aggregating the features of other nodes in the metapath, and the final target CA feature is obtained by processing multiple super metapaths of the same group, as illustrated in Fig. 5(c).
|
In order to alleviate information redundancy and feature explosion during metapath feature aggregation, we adjust the importance factor of super metapaths before doing so.
|
After adjusting the importance factor of all the super metapaths, we then perform feature aggregation to update the CA features.
|
D
|
We also assume that, on average, there is no negative selection between discontinued drugs and their (ex-ante) profitability. This assumption is reasonable because the primary reason for discontinuations is negative clinical trial results; see, for example, DiMasi (2013) and Khmelnitskaya (2022).
|
Let us consider implementing the drug buyout scheme at the start of the discovery stage. This policy intervention faces different tradeoffs compared to the intervention after FDA approval. The main difference is that, at the discovery stage, the uncertainty associated with drug development has yet to be resolved, and the development costs are still ahead. In contrast, after FDA approval, all uncertainties are resolved, and R&D costs are sunk.
|
Panel (a) shows the mean of the expected cost of clinical trials and the FDA application and review process (in millions of U.S. dollars) at the time of discovery. The row “All Drugs” refers to all the drugs in our sample, and “Drugs with Complete Path” refers to the sample of drugs for which we observe discovery, FDA application, and FDA approval announcements. Of the 84 such drugs, 29 belong to the Middle 90% and Bottom 95% samples.
|
These milestones inform us of the time it takes for a drug to reach the market from its initial discovery. In some cases, we also have sales data available, which allows us to evaluate the accuracy of our estimates of the drugs’ values. In the rest of the paper, we first summarize the institutional details and the data used in our analysis. Then, we formalize the idea that a firm’s change in market value within a tight window around these milestone announcements can be used to identify the value and cost of the drug under development.
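The identification idea in this paragraph can be sketched as follows: the drug's implied value is read off the abnormal change in the firm's market capitalization inside a tight window around the milestone announcement. The event window, benchmark adjustment, and all numbers below are hypothetical placeholders, not the paper's estimation procedure in full.

```python
import numpy as np

def event_window_value(prices: np.ndarray, shares_outstanding: float,
                       market_returns: np.ndarray, event_idx: int,
                       window: int = 1) -> float:
    """Abnormal change in market value around an announcement (illustrative only).

    prices: daily share prices; market_returns: daily benchmark returns;
    event_idx: index of the announcement day; window: days on each side.
    """
    pre, post = event_idx - window, event_idx + window
    raw_return = prices[post] / prices[pre] - 1.0
    benchmark = np.prod(1.0 + market_returns[pre + 1:post + 1]) - 1.0
    abnormal_return = raw_return - benchmark
    return abnormal_return * prices[pre] * shares_outstanding

# Hypothetical example: the price jumps on an approval announcement at day 5.
prices = np.array([40.0, 40.2, 40.1, 40.3, 40.2, 43.5, 43.4])
market = np.full(len(prices), 0.001)
print(f"implied drug value: ${event_window_value(prices, 50e6, market, 5)/1e6:.0f}M")
```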
|
Even though most scientific experiments are completed at the time of application, additional expenses are still involved in setting up manufacturing capacity, as well as legal and administrative fees. (Footnote 11: The FDA has prepared a set of instructions for drugs to receive approval, which clarifies that the “FDA may approve an NDA or an ANDA only if the methods used in, and the facilities and controls used for, the manufacture, processing, packing, and testing of the drug are found adequate to ensure and preserve its identity, strength, quality, and purity” (Food and Drug Administration, 2010).) Our estimate is, therefore, the total of these different costs.
|
D
|
15 voters in all, with 3 experts: $N=15$, $K=3$. The two treatments
|
With $p=0.7$ and $q$ uniform over $[0.5,0.7]$, we have verified
|
In all experiments, we set $\pi=0.5$, $p=0.7$, and $F(q)$ Uniform
|
Table 1: $p=0.7$, $F(q)$ Uniform over $[0.5,0.7]$
|
Table 2: $p=0.7$, $F(q)$ Uniform over $[0.5,0.7]$
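To give a sense of what such a parameter configuration implies, here is a small Monte Carlo sketch with $N=15$ voters, $K=3$ experts of precision $p=0.7$, and the remaining voters' precisions drawn from $F(q)$ Uniform over $[0.5,0.7]$. Sincere majority voting is used only as an illustrative benchmark here; it is not necessarily the mechanism analyzed in the experiments.

```python
import numpy as np

rng = np.random.default_rng(7)

N, K = 15, 3          # 15 voters in all, 3 of them experts
p_expert = 0.7        # experts are correct with probability p = 0.7
n_sims = 100_000

# Each voter votes for the true state with probability equal to their precision.
precision = np.empty((n_sims, N))
precision[:, :K] = p_expert
precision[:, K:] = rng.uniform(0.5, 0.7, size=(n_sims, N - K))  # F(q) Uniform[0.5, 0.7]

correct_votes = rng.random((n_sims, N)) < precision
majority_correct = correct_votes.sum(axis=1) > N / 2

print(f"P(majority decides correctly) ~ {majority_correct.mean():.3f}")
```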
|
C
|
The applications of quantum algorithms in finance include portfolio optimization [rebentrost2018quantum],
|
We use the following definition to describe the quantum measurement of any arbitrary normalized state characterized by $n$ qubits. For further details, we refer to, e.g., [marinescu2011classical, Chapter 2.5].
|
[chakrabarti2021threshold, doriguello2022quantum, fontanela2021quantum, kubo2022pricing, ramos2021quantum, QC5_Patrick, rebentrost2018quantum, QC4_optionpricing]. We also refer to the monograph [jacquier2022quantum] and surveys [egger2020quantum, jacquieroverview2023, orus2019quantum] for (further) applications of quantum computing in finance.
|
In this paper, we propose a quantum Monte Carlo algorithm to solve high-dimensional Black-Scholes PDEs with correlation and a general payoff function that is continuous and piecewise affine (CPWA), enabling the pricing of the most relevant payoff functions used in finance (see also Section 2.1.2). Our algorithm follows the idea of the quantum Monte Carlo algorithm proposed in [chakrabarti2021threshold, QC5_Patrick, QC4_optionpricing], which first uploads the multivariate log-normal distribution and the payoff function in rotated form and then applies a QAE algorithm to approximately solve the Black-Scholes PDE to price options.
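For concreteness, a continuous piecewise-affine (CPWA) payoff can be written as a combination of maxima and minima of affine functions of the underlying prices; the sketch below evaluates two standard examples (a single-asset call and a capped basket call spread) in that form. It only illustrates the payoff class, not the quantum algorithm itself, and the price scenarios are invented.

```python
import numpy as np

def cpwa_call(s: np.ndarray, strike: float) -> np.ndarray:
    """European call max(S1 - K, 0): the maximum of two affine pieces (0 and S1 - K)."""
    return np.maximum(s[..., 0] - strike, 0.0)

def cpwa_basket_call_spread(s: np.ndarray, weights: np.ndarray,
                            k_low: float, k_high: float) -> np.ndarray:
    """Capped basket call min(max(w.S - K1, 0), K2 - K1), again built from affine pieces."""
    basket = s @ weights
    return np.minimum(np.maximum(basket - k_low, 0.0), k_high - k_low)

# Evaluate on a few illustrative price scenarios for d = 3 correlated assets.
prices = np.array([[95.0, 102.0, 110.0],
                   [105.0, 98.0, 101.0],
                   [120.0, 115.0, 118.0]])
print(cpwa_call(prices, strike=100.0))
print(cpwa_basket_call_spread(prices, np.array([0.4, 0.3, 0.3]), 100.0, 115.0))
```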
|
The applications of quantum algorithms in finance include portfolio optimization [rebentrost2018quantum],
|
B
|
Here, $\tau_{r}$ is defined in Lemma 3.4, and $\zeta_{h}:=\inf\{s\geq 0;\ \sigma_{B}B^{3}_{s}+\mu_{B}s=h\}$ with the convention $\inf\emptyset=+\infty$. From these representations, it follows that $\psi_{r}(r,h)+\psi_{rr}(r,h)>0$ for all $(r,h)\in\mathbb{R}_{+}^{2}$.
|
where the function $\varphi(r,h)\in C^{2}(\mathbb{R}_{+}^{2})$ is given by (3.13) in Lemma 3.3. Then, the function $\psi(r,h)$ is a classical solution to the following Neumann problem with boundary conditions at $r=0$ and $h=0$:
|
Then, the function $l(r,z)$ is a classical solution to the following Neumann problem with boundary condition at $r=0$:
|
By applying Lemma 3.2 and Proposition 3.5, the function $v(r,h,z)$ defined by (3) is a classical solution to the following Neumann problem:
|
Then, the function $u(x,h,z)$ is a classical solution to the following HJB equation with Neumann boundary conditions:
|
C
|
It is clearly visible that closing the first firm already saves 7% of emissions and that one needs to close 7 companies to reach the emissions reduction target of 20%. The expected job loss curve (blue) and the expected output loss curve (green) show large jumps with the third firm being removed, followed by a slowly increasing regime before leveling off after the removal of 81 firms. We further read from the plot that to achieve the target of 20.25% in emission reductions, approximately 32.61% of output and 28.56% of jobs are lost in this strategy.
|
To empirically test our framework, we approximate hypothetical decarbonization efforts with the removal of firms from the Hungarian production network. A firm that is removed from the production network no longer supplies its customers, nor does it place demand with its (former) suppliers in the subsequent time step. It also stops emitting CO2. This hypothetical scenario allows us to quantify the worst-case outcomes, in terms of job and economic output loss, of a strict command-and-control approach towards decarbonization.

In our simulation, a decarbonization strategy is realized as follows. We first rank firms according to four different characteristics: CO2 emissions, number of employees, systemic importance, and CO2 emissions per systemic importance. Then, for each of these four strategies (shown in Fig. 3), firms are cumulatively removed from the production network to assess the effects of the given heuristic. The first data point in Fig. 3 represents the highest ranked firm being removed from the production network. The second data point represents the highest and the second highest ranked firm, according to the respective heuristic, and so forth until all ETS firms are removed from the network. Each set of closed firms reduces the total CO2 output by the combined annual CO2 emissions of the respective firms. The closure of firms initializes a shock in the production network which results in the loss of jobs and economic output. These effects are calculated using the ESRI shock propagation algorithm [28], once in the output-weighted and once in the employment-weighted version. The removal of all 119 Hungarian ETS firms results in 31.7% of job and 38.2% of economic output loss. This is the same for all decarbonization strategies, but the order in which firms are removed from the production network determines the fractions of expected job and output loss on the way to this final value. The shock propagation is deterministic, which means that the same set of closed firms always leads to the same outcomes in terms of CO2 reduction, job and output losses.

The time horizon of our analysis is one year, as we consider the annual emissions of companies and a shock propagation on a production network that is assumed to remain constant. The estimated job and output losses can therefore be considered worst-case estimates that will likely be smaller when applied to the economy in the real world. Employees who lost their jobs would try to find a new employer; some jobs might in fact be easily transferred between firms or even sectors, while highly specialized jobs might be harder to replace, see for example [41], [42]. We do not consider these effects in the present framework, but project the immediate total potential job loss as a consequence of a decarbonization policy imposed by a hypothetical social planner. In addition, firms that lost a supplier or a buyer would try to establish new supply relations. In the present modeling framework this is only captured heuristically by assuming that firms with low market shares within their respective NACE4 industry sector are more easily replaceable and firms with high market shares are more difficult to replace. Explicitly considering rewiring and the reallocation of jobs during the shock propagation remains future work.

In theory, all combinations of removals of the 119 ETS firms would need to be tested to find the truly optimal strategy with respect to maximum CO2 reduction and minimal expected job and output loss. Since this would result in a combinatorial explosion of possibilities, our goal here is to find a satisfying heuristic that allows for acceptable levels of expected job and output loss for a given CO2 reduction target. In total, we test eleven different heuristics for their potential to rapidly reduce CO2 emissions while securing high levels of employment and economic output. The outcomes for the four main decarbonization strategies are displayed in Fig. 3 and discussed in the subsequent section. The remaining strategies are shown and discussed in SI section S2.
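The ranking-and-removal procedure described above can be sketched in a few lines: rank firms by a chosen heuristic, remove them cumulatively, and record the cumulative emission reduction until a target is reached. The firm-level data and the loss proxy below are invented placeholders; in the paper, job and output losses come from the ESRI shock-propagation algorithm on the real production network rather than from the simple firm-level sums used here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented firm-level attributes standing in for the 119 ETS firms.
n_firms = 119
co2 = rng.pareto(1.5, n_firms) + 1.0          # annual CO2 emissions
employees = rng.pareto(1.2, n_firms) + 1.0    # number of employees

def removal_schedule(ranking_key: np.ndarray, target_share: float = 0.20):
    """Cumulatively remove firms in descending order of the ranking key
    until the target share of total CO2 emissions is saved."""
    order = np.argsort(ranking_key)[::-1]
    cum_co2 = np.cumsum(co2[order]) / co2.sum()
    n_removed = int(np.searchsorted(cum_co2, target_share) + 1)
    removed = order[:n_removed]
    return n_removed, employees[removed].sum() / employees.sum()

for name, key in [("largest emitters first", co2),
                  ("fewest employees first", -employees)]:
    n_removed, job_share = removal_schedule(key)
    print(f"{name}: {n_removed} firms closed, "
          f"{job_share:.1%} of (firm-level) jobs affected")
```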
|
The ‘Remove least-employees firms first’ strategy, which aims at minimizing job loss at each individual firm and is shown in Fig. 3B, manages to keep expected job and output loss at low levels for the initially removed firms. But since this strategy focuses on job loss at the individual firm level, it fails to anticipate a highly systemically relevant firm whose closure results in high levels of expected job and output loss. Since CO2 emissions are not explicitly considered in this strategy, emission savings rise only incrementally as additional firms with comparatively low numbers of employees are removed. To reduce CO2 emissions by 17.35%, this strategy puts 32.24% of output and 28.41% of jobs at risk while removing 102 firms from the production network. This strategy therefore fails to secure jobs and economic output while delivering its emission savings.
|
This results in only a gradual increase of expected job and output loss in the beginning, but fails to anticipate the effects of a systemically very important firm which triggers widespread job and output losses. 102 firms need to be closed in this strategy to reach the benchmark.
|
‘Remove least-employees firms first’ strategy that aims at minimum job loss on the individual firm level,
|
B
|
Regarding the representation of deregulation in the power sector, i.e., decoupling transmission and generation expansion decisions, one can pinpoint two generalised strategies in the literature. The first spans investigations aimed at developing an optimal transmission network expansion strategy that accounts for various possible developments of the generation infrastructure. Examples of such a strategy can be found in (Sun et al., 2018; Mortaz and Valenzuela, 2019). Nonetheless, while the burden of formulating exhaustive uncertainty sets appears to be challenging on its own, this modelling strategy prevents generation companies (GenCos) from being dynamic market players capable of making reactive decisions regarding generation levels and capacity expansion.
|
Regarding the representation of deregulation in the power sector, i.e., decoupling transmission and generation expansion decisions, one can pinpoint two generalised strategies in the literature. The first spans investigations aimed at developing an optimal transmission network expansion strategy that accounts for various possible developments of the generation infrastructure. Examples of such a strategy can be found in (Sun et al., 2018; Mortaz and Valenzuela, 2019). Nonetheless, while the burden of formulating exhaustive uncertainty sets appears to be challenging on its own, this modelling strategy prevents generation companies (GenCos) from being dynamic market players capable of making reactive decisions regarding generation levels and capacity expansion.
|
In this paper, we study the impact of the TSO infrastructure expansion decisions in combination with carbon taxes and renewable-driven investment incentives on the optimal generation mix. To examine the impact of renewables-driven policies we propose a novel bi-level modelling assessment to plan optimal transmission infrastructure expansion. At the lower level, we consider a perfectly competitive energy market comprising GenCos who decide optimal generation levels and their own infrastructure expansion strategy. The upper level consists of a TSO who proactively anticipates the aforementioned decisions and decides the optimal transmission capacity expansion plan. To supplement the TSO decisions with other renewable-driven policies, we introduced carbon taxes and renewable capacity investment incentives in the model. Additionally, we accounted for variations in GenCos’ and TSO’s willingness to expand the infrastructure by introducing an upper limit on the generation (GEB) and transmission capacity expansion (TEB) costs. Therefore, as the input parameters for the proposed bi-level model, we considered different values of TEB, GEB, incentives and carbon tax. This paper examined the proposed modelling approach by applying it to a simple, three-node illustrative case study and a more realistic energy system representing Nordic and Baltic countries. The output factors explored in the analysis are the optimal total welfare, the share of VRE in the optimal generation mix and the total amount of energy generated.
|
Another strategy attempts to develop efficient modelling tools to consider the planning of the transmission and generation infrastructure expansion in a coordinated manner. For example, this coordinated modelling approach has been considered in (Moreira et al., 2017; Tian et al., 2020; Zhang et al., 2020). For the modelling assessment proposed in this paper, we consider a decentralised planning strategy to ensure the representation of the reactive position (i.e., acting as price-takers) of GenCos.
|
The proposed model assumes that the TSO takes a leading position and anticipates the generation capacity investment decisions influenced by its transmission system expansion. This assumption leads to the bi-level structure of the proposed model. Such a modelling approach is widely used in energy market planning. As an example, Zhang et al. (2016) exploited a bi-level scheme to consider integrated generation-transmission expansion at the upper level and a modified unit-commitment model with demand response at the lower level. Virasjoki et al. (2020) considered a bi-level structure when formulating the model for optimal energy storage capacity sizing and use planning. In this paper, we reformulate the model proposed in (Virasjoki et al., 2020) to consider a welfare-maximising TSO at the upper level, making decisions on transmission lines instead of energy storage. An analogous strategy was considered by Siddiqui et al. (2019) in their investigation of the indirect influence of the TSO’s decisions as part of an emissions mitigation strategy aligned with different levels of carbon charges in a deregulated industry. Aimed at analytical implications, their paper neglects VRE and demand-associated uncertainty, as well as the heterogeneity of the GenCos, while assuming unlimited generation capacity. These shortcomings are addressed in the current paper by means of introducing VRE intermittency and allowing GenCos to invest in diversified power generation technologies. Furthermore, we account for various investment budget portfolios for the TSO and GenCos to investigate how GenCos’ investment capital availability influences the total VRE share in the optimal generation mix.
|
C
|
$$C_{\alpha}=\begin{cases}\dfrac{1-\alpha}{\Gamma(2-\alpha)\cos\left(\frac{\pi\alpha}{2}\right)}, & \alpha\neq 1,\\[1ex]\dfrac{2}{\pi}, & \alpha=1.\end{cases}$$
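A small helper computing the constant $C_{\alpha}$ above, handling the two branches separately; this is just a direct transcription of the displayed formula.

```python
import math

def stable_constant(alpha: float) -> float:
    """Constant C_alpha for the stable density: (1 - a) / (Gamma(2 - a) * cos(pi a / 2)),
    with the value 2 / pi at a = 1."""
    if math.isclose(alpha, 1.0):
        return 2.0 / math.pi
    return (1.0 - alpha) / (math.gamma(2.0 - alpha) * math.cos(math.pi * alpha / 2.0))

print(stable_constant(0.5), stable_constant(1.0), stable_constant(1.5))
```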
|
be the cumulative distribution function for the stable density $f_{\text{Stable}}$.
|
The density $f_{\text{Stable}}\in C_{b}^{\infty}(\mathbb{R})$ of
|
For stable densities, we therefore suggest setting $C_{3}$ in Theorem
|
of the density is known precisely, i.e., we have to know $C_{3}$
|
C
|
$\pi^{i}=\varphi^{i,*}(\tilde{\mu}^{-i})+\frac{\theta_{i}}{n}\sum_{j\neq i}\pi^{j},\quad i=1,\ldots,n.$
|
$\mu=0.03$, $\sigma=0.2$, $\delta_{1}=1$, $\delta_{2}=2$, $\theta_{1}=0.5$, $\theta_{2}=0.7$, and $\alpha=0.01$. For the specific choice of parameters, we can determine the unique constant Nash equilibrium numerically by maximizing the function from (4.6) for $i=1,2$ and solving the fixed point problem afterwards. The results are summarized in Figure 2. We included the Nash equilibrium in the case of linear price impact ($\gamma=1$) for comparison (dashed horizontal lines).
|
In order to solve the best response problem (3.3), we fix some investor $i$ and assume that the strategies $\pi^{j}$, $j\neq i$, of the other agents are given. Under these conditions we can rewrite the optimization problem (3.3) into a classical portfolio optimization problem in a similar (but not identical) price impact market. Afterwards, the Nash equilibria can be determined using the solution to the classical problem.
|
This paper is organized as follows. In the next section, we introduce the linear price impact financial market. In Section 3, we explicitly solve the problem of maximizing expected exponential utility, which results in the unique constant Nash equilibrium. The argument of the utility function is the difference between an agent’s wealth and a weighted arithmetic mean of the other agents’ wealth. We also examine the influence of the price impact parameter $\alpha$ on the Nash equilibrium and on the stock price attained by inserting the arithmetic mean of the components of the Nash equilibrium. In Section 4, we replace the linear impact of the agents’ arithmetic mean on the stock price process by a nonlinear one. We prove that the problem of maximizing CARA utility is well-posed as long as the influence is sublinear and does not have an optimal solution if the influence is superlinear. In Section 5, we assume that agents use CRRA utility functions (power and logarithmic utility) and insert the product of an agent’s wealth and a weighted geometric mean of the other agents’ wealth into the expected utility criterion. Similar to the CARA case, we are able to explicitly determine the unique constant Nash equilibrium.
|
Note that we can find a unique Nash equilibrium if and only if problem (3.7) and the fixed point problem for $\pi^{i}$, given in terms of the system of equations (3.8), are uniquely solvable.
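The constant Nash equilibrium is obtained from the linear fixed point system $\pi^{i}=\varphi^{i,*}(\tilde{\mu}^{-i})+\frac{\theta_{i}}{n}\sum_{j\neq i}\pi^{j}$ displayed above. Below is a minimal sketch of solving such a system by fixed-point (Jacobi) iteration; the best-response values $\varphi^{i,*}$ are invented placeholders, while $\theta_{1}=0.5$ and $\theta_{2}=0.7$ are taken from the numerical example in the text.

```python
import numpy as np

# Placeholder best-response terms phi^{i,*} and interaction weights theta_i
# for n = 2 agents (illustrative values only).
phi_star = np.array([0.6, 0.9])
theta = np.array([0.5, 0.7])
n = len(phi_star)

pi = np.zeros(n)
for _ in range(1000):
    # pi_i = phi_i^* + (theta_i / n) * sum of the other agents' strategies
    pi_new = phi_star + theta / n * (pi.sum() - pi)
    converged = np.max(np.abs(pi_new - pi)) < 1e-12
    pi = pi_new
    if converged:
        break

print("constant Nash equilibrium:", pi)
```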
|
D
|
This, in turn, means that there is currently no methodology that can adequately reflect the P flows that are necessary before biomass production. In today’s world of unprecedented geopolitical power shifts and increasingly monopolistic commodity supply structures, it is in the vital interest of any country or economy to understand commodity flows on a global scale. How can a country like China, which was once the largest importer of P, be such a relevant exporter of P? Classical analyses fail to answer such questions. In fact, phosphate is a raw material that is essential for all nations. The situation is different for other commodities such as technology metals. The markets may be smaller, but knowing how an industrialized country can be affected by even minute changes in raw material supply is one of the game-changing issues of our time.
|
Trade data is not a useful measure of the flow of P per se. Trade is usually measured as a USD value, not in a meaningful unit that would provide information on the material P content of a traded good. (Since 2006, quantity data (mostly tonnage) has been available for some items in the trade statistics used; for our data set it was, however, not feasible to utilize these data.) A way around this limitation is to forgo the notional amount of the traded goods and to interpret the trade in terms of the shares of globally available P in a specific year. These shares have to closely match the fractions of P that have been mined in a specific year (disregarding changes in stocks and corrections explained in section 2.3) as well as the shares used in each country as fertilizers (disregarding other uses of P).
|
Our approach to P flows therefore aims to use much more detailed trade data as the basis of the analysis \citep[see also][]{chen_p_net}. The novelty of our approach is that we transform and connect these data to other sources in such a way that the results can again be interpreted in terms of the material flow of P, and not just as the monetary value of traded amounts. It is therefore possible, for the first time, to quantify how much P is (in a material sense) transferred between countries as either raw material, preliminary product, or fertilizer with the intended use in agricultural production. This model is meant to show the trade-based first round of global P flows (before biomass production) in greater detail than currently available and can thus serve as the foundation for the analysis of P supply security and resilience.
|
or as country-wise exceedance footprints (p_exceed). With these approaches it is possible to cover most of the countries in the world; however, for the analysis of flows that happen before the production of biomass, the resolution of input-output data cannot deliver satisfactory results, since mineral resources, fertilizers and their intermediary products, as well as manure, cannot be traced in detail. While data based on fertilizer production can remedy this deficit for some regions, it is currently not possible to map P flows globally on this basis.
|
We show that trade data can be used to approximate the flow of mineral resources in a meaningful way when combined with other data sources. Our flow analysis provides a useful foundation for the analysis of global P flows in terms of phosphate rock, fertilizers and related goods before biomass production. As such, it allows to derive valuable information for the analysis of vulnerabilities in countries’ supply relationships, including food security. For this the translation of nominal bilateral trade flows into material flows of P is an important step in terms of accuracy. We provide the information on (a) the origin of P flows, (b) their destinations and approximate material composition and (c) the resulting complex system of dependencies in supply.
|
B
|
$\psi^{f,4}(R) = f_{2}\,(R-g_{1})\,\mathds{1}_{\{R\in(g_{1},g_{2}]\}} + f_{2}\,(g_{2}-g_{1})\,\mathds{1}_{\{R\in(g_{2},\infty)\}}.$
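A minimal numerical rendering of this payoff, under the reading above: a participation rate $f_2$ applied to the return $R$ above the trigger $g_1$, capped at the level $g_2$. The function name and the vectorized form are ours; which leg of the EPS this variant belongs to is not determined by this formula alone.

import numpy as np

def psi_f4(R, f2, g1, g2):
    # psi^{f,4}(R): participation f2 in (R - g1) on (g1, g2], capped at f2 * (g2 - g1) above g2.
    R = np.asarray(R, dtype=float)
    mid = (R > g1) & (R <= g2)      # R in (g1, g2]
    top = R > g2                    # R in (g2, infinity)
    return f2 * (R - g1) * mid + f2 * (g2 - g1) * top

# Example: payoff across a range of index returns with f2 = 0.8, g1 = 0.0, g2 = 0.3.
print(psi_f4([-0.1, 0.1, 0.5], f2=0.8, g1=0.0, g2=0.3))   # [0.0, 0.08, 0.24]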
|
We can see the impact of different characteristics of an EPS, such as participation rate, leg setting, and maturity, on fair premiums. It is widely acknowledged that a typical investor would not be willing to pay an upfront premium, especially when it is substantial. Therefore, we propose to focus on EPS products with a null fair premium at inception. An EPS with a null fair premium still has the participation rate for the protection leg desired by the holder, whereas the participation rate for the fee leg is chosen to ensure that the provider
|
The most typical specifications of the protection leg are analogous to those of the fee leg. A selection of a particular protection leg depends on the buyer’s preferences and thus it would be natural to expect that a broad spectrum of products should be offered by EPS providers.
|
Let us introduce two most practically relevant forms of an EPS, which are called the buffer EPS and the floor EPS. Notice that the proposed terminology for a generic EPS is referring directly to the protection leg, rather than the fee leg for which the choice of a buffer
|
Assuming that a perfect hedge of an EPS is feasible, the provider would be indifferent with respect to the buyer’s choice of the structure of the protection leg. However, in reality only a partial hedging can be attained for more complex cross-currency products and thus some forms of the protection leg are likely to be preferred by providers. For instance, the presence of a floor clause is expected to be appreciated by providers since it provides a lower bound on their downside risk exposure even when the market experiences a catastrophic downturn.
|
B
|
However, the defaulters on larger amounts or with a subsequent harsh default face substantially higher penalties in terms of income and location (see Figures 5 and 8): they move to areas with lower median home values and to zip codes with lower average wages and higher shares of minorities (see Appendix J).
|
What seems to be happening is that there are individuals who are delinquent on smaller amounts, possibly because of uninsurable shocks, who suffer the consequences of such defaults, but substantially less than those who default on larger amounts and seek bankruptcy and other legal reliefs. The latter appear to have overextended their lines of credit, in particular on mortgages (presumably because of location choices), then gone under in their accounts and essentially diverged from their earlier life trajectories. They end up in substantially worse neighborhoods (of different CZs) with median home values that decrease about 4-times as much as those for the lower delinquent amounts/no-harsh default.
|
What seems to be happening is that there are consumers who are delinquent on smaller amounts, possibly because of uninsurable shocks, who suffer the consequences of such defaults, but substantially less than those who default on larger amounts and seek bankruptcy and other legal reliefs. The latter appear to have overextended their lines of credit, in particular on mortgages (presumably because of location choices), then gone under in their accounts and essentially diverged from their earlier life trajectories. They end up in substantially worse neighborhoods (of different CZs) with median home values that decrease about 4-times as much as those for the lower delinquent amounts/no-harsh default.
|
We find that the defaulters on larger amounts or with a subsequent harsh default have substantially higher penalties in terms of income and location, they move to lower median home values areas and to zip codes of lower economic activity.
|
We show that the recovery is slow, painful, and in many respects only partial. In particular, after several years (up to 10), credit scores are still lower by 16 points, incomes never recover and appear to be substantially lower (by about 7,000 USD, or 14% of the 2010 mean), the defaulters live in lower "quality" neighborhoods (as measured by the median house value and other indicators such as proxies for average zip code income), are less likely to own a home, and are more likely to have low credit limits. We find that the negative effects of a soft default are larger for those individuals who are overextended in their credit lines, in particular mortgage lines. Being indebted in a way that is unsustainable for them in the long run, such individuals also have a higher probability of a subsequent harsh default (i.e., Chapter 7, Chapter 13, foreclosure). In addition, they end up in substantially worse neighborhoods, with lower median home values, and these moves are likely to have a substantial effect also on their labor market outlooks.
|
B
|
$\sum_{j}\left(q\cdot s_{j}f_{j}+\sum_{k}\operatorname{sign}(s_{j}-s_{k})\,r_{jk}\right),$
|
to Minimax. As with IRV, each ballot is a ranking of some or all of the candidates. (Footnote: While it is often recommended that equal rankings be allowed under
|
Minimax: Vote sincerely. (Footnote: While a viability-aware strategy was included for Minimax in Wolk et al. (2023),
|
Block Approval: Voters vote for any number of candidates. (Footnote: We use the same sincere strategy as for single-winner Approval Voting.)
|
Approval: Vote for all candidates with $u_{j}\geq EV$.
|
B
|
The two models are then calibrated to three different data sets: the 2018-21 data, the 2021-23 data and the whole 2018-23 data, using Markov Chain Monte Carlo methods. This Bayesian approach to calibration allows a joint estimation of latent factors, taking into account possible interdependencies, and also avoids the need to make strong a priori assumptions such as setting thresholds for jump sizes (cf. [15]). For each of these data sets, we provide model parameters along with simulations of the spot price and an assessment of model adequacy through posterior predictive checking.
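As a hedged illustration of the calibration machinery (not the paper's full 3-/4-factor model with jumps), the sketch below runs a random-walk Metropolis sampler for the parameters of a single mean-reverting Gaussian (OU) factor observed at discrete times. Function names, the flat priors on the transformed parameters, and the tuning constants are our own choices.

import numpy as np

def ou_loglik(x, dt, kappa, mu, sigma):
    # Exact Gaussian transition density of a mean-reverting (OU) factor on an equidistant grid.
    a = np.exp(-kappa * dt)
    mean = mu + (x[:-1] - mu) * a
    var = sigma ** 2 * (1.0 - a ** 2) / (2.0 * kappa)
    resid = x[1:] - mean
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + resid ** 2 / var)

def metropolis_ou(x, dt, n_iter=20000, step=0.05, seed=0):
    # Random-walk Metropolis on (log kappa, mu, log sigma) with flat priors on that scale.
    rng = np.random.default_rng(seed)
    theta = np.array([0.0, float(np.mean(x)), np.log(np.std(np.diff(x)) / np.sqrt(dt))])
    loglik = lambda th: ou_loglik(x, dt, np.exp(th[0]), th[1], np.exp(th[2]))
    current = loglik(theta)
    draws = np.empty((n_iter, 3))
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(3)
        cand = loglik(prop)
        if np.log(rng.uniform()) < cand - current:
            theta, current = prop, cand
        draws[i] = [np.exp(theta[0]), theta[1], np.exp(theta[2])]   # (kappa, mu, sigma)
    return draws

# Usage on a (hypothetical) series of daily log spot prices `log_prices`:
#   draws = metropolis_ou(log_prices, dt=1/365)
#   posterior summaries after burn-in: draws[5000:].mean(axis=0)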
|
We calibrate the 3-factor model and the 4-factor model to the spot-price data in the time interval 2018-2021. We start with an overview of the posterior properties of the model parameters obtained from the MCMC procedure described in Section 3.5. Later in this section, we present a more detailed analysis of our calibration results.
|
The paper is structured in the following way: In Section 1 we give a non-exhaustive overview of the literature on electricity spot price models and their calibration. Table 1 provides a direct comparison of the characteristics of some of these models. Section 2 introduces the 4-factor model, which is an extension of the model of [9]. In Section 3, we give a detailed description of the MCMC procedure, which is used for the calibration of the 4-factor model. Section 4 contains the calculation of the $p$-values, which are crucial to assess model adequacy. Finally, Section 5 provides the model parameters obtained from the MCMC algorithm together with simulations and a posterior predictive check for each model in each of the respective time periods. To conclude, we present interpretations of our results and discuss suggestions for future research.
|
We calibrate the 3-factor model and the 4-factor model to the spot-price data in the time interval 2021-2023. We start with an overview of the posterior properties of the model parameters obtained from the MCMC procedure described in Section 3.5. Later in this section, we present a more detailed analysis of our calibration results.
|
We calibrate the 3-factor model and the 4-factor model with changepoint to the spot-price data in the whole time interval 2018-2023. We start with an overview of the posterior properties of the model parameters obtained from the MCMC procedure described in Section 3.5. Later in this section, we present a more detailed analysis of our calibration results.
|
B
|
$\{\hat{\mathbf{Y}}^{(t)}\}_{t=kr+1}^{(k+1)r} \leftarrow$ Predict on $\widetilde{\mathcal{D}}_{\text{test}}^{k}$ defined by Eq. (14b); compute the test loss by Eq. (19);
|
Figure 4. Overview of DoubleAdapt with a data adapter $DA$ and a model adapter $MA$. The parameters are shown in red.
|
7: Update data adapter $DA$ and model adapter $MA$:
|
Figure 4 depicts the overview of our DoubleAdapt framework, which consists of three key components: the forecast model $F$ with parameters $\theta$, the model adapter $MA$ with parameters $\phi$, and the data adapter $DA$ with parameters $\psi$.
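The following sketch is only a first-order stand-in for this architecture, on synthetic data: a ridge regression plays the role of the forecast model $F$, an affine feature transform plays the data adapter $DA$ (parameters $\psi$), a weight offset plays the model adapter $MA$ (parameters $\phi$), and both adapters are updated from the query-set gradient while ignoring the dependence of the fitted model on $\psi$. It is therefore not the authors' bi-level meta-learning objective, only an illustration of the roles of the two adapters.

import numpy as np

rng = np.random.default_rng(0)

def fit_ridge(X, y, lam=1e-2):
    # Forecast model F: ridge regression as a stand-in for the base learner.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Hypothetical stream of tasks: each task has a support (train) and a query (test) window,
# with a slowly drifting data-generating process to mimic distribution shift.
tasks = []
for k in range(50):
    X = rng.standard_normal((64, 5))
    w_true = np.array([0.5, -0.2, 0.1, 0.0, 0.3]) + 0.05 * k
    y = X @ w_true + 0.1 * rng.standard_normal(64)
    tasks.append((X[:48], y[:48], X[48:], y[48:]))

psi_scale, psi_shift = np.ones(5), np.zeros(5)   # data adapter DA (psi): affine transform
phi = np.zeros(5)                                # model adapter MA (phi): weight offset
lr = 0.01

for X_tr, y_tr, X_te, y_te in tasks:
    Xa_tr = X_tr * psi_scale + psi_shift          # adapt the support data
    theta = fit_ridge(Xa_tr, y_tr) + phi          # adapt the fitted model
    Xa_te = X_te * psi_scale + psi_shift
    err = Xa_te @ theta - y_te                    # query-set prediction error
    # First-order updates of the adapters from the query-set gradient.
    phi -= lr * Xa_te.T @ err / len(err)
    psi_shift -= lr * (err[:, None] * theta[None, :]).mean(axis=0)
    psi_scale -= lr * (err[:, None] * (X_te * theta[None, :])).mean(axis=0)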
|
(Figure 4 diagram annotations: $+MA$, $+H$, $+H^{-1}$.)
|
B
|
Table 4 repeats the analysis presented in Table 3, Columns (3) and (6), but with emotions assessed separately for messages containing earnings or trading-related information ("Finance") (Columns 2 and 6) and those conveying other information ("Chat") (Columns 1 and 5). Next, I also contrast messages containing original information ("Original") (Columns 3 and 7) and those disseminating existing information ("Dissemination") (Columns 4 and 8). (Footnote: To keep the sample consistent and the point estimates comparable, I restrict the sample to IPOs that have at least one post belonging to each of the four categories.)
|
Notes: This table presents the relationship between investor enthusiasm and two stylized facts regarding initial public offering (IPO) returns. Columns 1-4 depict the first day return, calculated as the difference between the closing and the IPO price, divided by the IPO price. Columns 5-7 illustrate the 12-month industry adjusted return, computed from three months to 12 months post-IPO. I employ the Fama-French 48-industry classification. First Day Return-90,-1 corresponds to the average first-day return of recent IPOs from 90 days before the IPO up till the day before. Columns (3), (4), (6), and (7) incorporate year and industry fixed effects. I also report the standardized effects of my main independent variable, investor enthusiasm, on my dependent variables. Robust standard errors are reported in parentheses. ∗ $p<0.10$, ∗∗ $p<0.05$, ∗∗∗ $p<0.01$. Continuous variables are winsorized at the 0.5% and 99.5% levels to mitigate the impact of outliers.
|
Notes: This table presents the relationship between investor emotions, investor types and two stylized facts regarding initial public offering (IPO) returns. In Panel (a) the dependent variable is the first day return, which is computed as the difference between the closing and the IPO price, divided by the IPO price. In Panel (b), the dependent variable is the 12-month industry adjusted return, which is computed from three months after the IPO until 12 months after the IPO (Columns 5-8). Controls, along with year and industry fixed effects, are used in each column. The industry classification used is the Fama-French 48-industry classification. Robust standard errors are reported in parentheses. ∗ $p<0.10$, ∗∗ $p<0.05$, ∗∗∗ $p<0.01$. Continuous variables are winsorized at the 0.5% and 99.5% levels to mitigate the impact of outliers.
|
Notes: This table presents the relationship between investor emotions, information content and two stylized facts regarding initial public offering (IPO) returns. The first dependent variable is the first day return, which is computed as the difference between the closing and the IPO price, divided by the IPO price. This is shown in Columns 1-4. The second dependent variable is the 12-month industry adjusted return, which is computed from three months after the IPO until 12 months after the IPO (Columns 5-8). Controls, along with year and industry fixed effects, are used in each column. The industry classification used is the Fama-French 48-industry classification. I also report the standardized effects of my main independent variable, investor enthusiasm, on my dependent variables. Robust standard errors are reported in parentheses. ∗ $p<0.10$, ∗∗ $p<0.05$, ∗∗∗ $p<0.01$. Continuous variables are winsorized at the 0.5% and 99.5% levels to mitigate the impact of outliers.
|
Notes: This table presents the correlation between investor emotions, information content and two stylized facts regarding initial public offering (IPO) returns. The first dependent variable is the first day return, which is computed as the difference between the closing and the IPO price, divided by the IPO price. This is shown in Columns 1-4. The second dependent variable is the 12-month industry adjusted return, which is computed from three months after the IPO (Columns 5-8). The industry classification used is the Fama-French 48-industry classification. Robust standard errors are reported in parentheses. ∗ $p<0.10$, ∗∗ $p<0.05$, ∗∗∗ $p<0.01$. Continuous variables are winsorized at the 1% and 99% levels to mitigate the impact of outliers.
|
C
|
In the first use case, we aim to improve the performance of Random Forest methods for churn prediction. We introduce quantum algorithms for Determinantal Point Processes (DPP) sampling [16], and develop a method of DPP sampling to enhance Random Forest models. We evaluate our model on the churn dataset using classical DPP sampling algorithms and perform experiments on a scaled-down version of the dataset using quantum algorithms. Our results demonstrate that, in the classical setting, the proposed algorithms outperform the baseline Random Forest in precision, efficiency, and bottom line, and also offer a precise understanding of how quantum computing can impact this kind of problem in the future. The quantum algorithm run on an IBM quantum processor gives similar results as the classical DPP on small batch dimensions but falters as the dimensions grow bigger due to hardware noise.
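As an illustration of how a DPP kernel can be used to pick diverse training subsets for the trees, here is a greedy log-determinant (MAP) selection sketch rather than an exact DPP sampler; the RBF kernel, the sizes, and the function names are arbitrary choices of ours, not the paper's algorithm.

import numpy as np

def greedy_dpp_map(L, k):
    # Greedily add the item that maximizes log det(L_S): a standard MAP approximation
    # to a k-DPP that promotes diversity of the selected rows.
    n = L.shape[0]
    selected = []
    for _ in range(k):
        best_item, best_logdet = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            idx = selected + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_logdet:
                best_item, best_logdet = i, logdet
        if best_item is None:
            break
        selected.append(best_item)
    return selected

# Example: an RBF similarity kernel over data points; the selected (diverse) rows
# could then be used to train one tree of the ensemble.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 8))
L = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
rows_for_tree = greedy_dpp_map(L, k=20)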
|
In our work, we use quantum neural networks with orthogonal and compound layers. Although these neural networks roughly match the general VQC construction, they produce well-defined linear algebraic operations, which not only makes them much more interpretable but gives us the ability to analyze their complexity and scalability. Because we understand the actions of these layers precisely, we are able to identify instances for which we can design efficient classical simulators, allowing us to classically train and test the models on real-scale datasets.
|
In this work, we have explored the potential of quantum machine learning methods in improving forecasting in finance, with a focus on two specific use cases within the Itaú business: churn prediction and credit risk assessment. Our results demonstrate that the proposed algorithms, which leverage quantum ideas, can effectively enhance the performance of Random Forest and neural network models, achieving better accuracy and training with fewer parameters.
|
In the second use case, we aim to explore the performance of neural network models for credit risk assessment by incorporating ideas from quantum compound neural networks [17]. We start by using quantum orthogonal neural networks [17], which add the property of orthogonality for the trained model weights to avoid redundancy in the learned features [18]. These orthogonal layers, which can be trained efficiently on a classical computer, are the simplest case of what we call compound neural networks, which explore an exponential space in a structured way. For our use case, we design compound neural network architectures that are appropriate for financial data. We evaluate their performance on a real-world dataset and show that the quantum compound neural network models both have far fewer parameters and achieve better accuracy and generalization than classical fully-connected neural networks.
|
In [17], an improved method of constructing orthogonal neural networks using quantum ideas was developed. We describe it below in brief.
|
C
|
It should be emphasized that RV is agnostic with respect to gains or losses in stock returns. Nonetheless, it has been habitual that large gains and losses occur at around the same time. Here we wish to address the question of whether the largest values of RV fall on the power-law tail of the RV distribution. As is well known, the largest upheavals in the stock market happened on, and close to, the Black Monday, which was a precursor to the Savings and Loan crisis, the Tech Bubble, the Financial Crisis and the COVID Pandemic. Plotted on a log-log scale, power-law tails of a distribution show as a straight line. If the largest RV fall on the straight line they can be classified as Black Swans (BS). If, however, they show statistically significant deviations upward or downward from this straight line, they can be classified as Dragon Kings (DK) sornette2009 ; sornette2012dragon or negative Dragon Kings (nDK) respectively pisarenko2012robust .
|
For large $n$ we also observe that mGB approximates the tail end better than GB2 – consistent with smaller KS values in Fig. 15 and a smaller number of nDK. However, neither approximates the preceding portion of the tail well, as indicated by the "potential" DK. This has to do with the fact that neither of the distributions appears as a solution of a first-principle model describing average RV. Finally, in the first plot in Fig. 15, we observe that after roughly 5 - 7 days the slope of the GB2 tail saturates, consistent with the correlation range of daily RV dashti2021realized. The slope of LF, on the other hand, increases with $n$. However, neither is consistent with a naive assumption of the distribution having the same slope as that of the daily RV.
|
With the above in mind, we first address Figs. 4 – 13. According to Figs. 4 and 5, daily RV appears to be the closest to being commensurate with the Black Swan behavior, as both LF and GB2 approximate the tail of the distribution better than mGB and LF does not point to the existence of either DK, $p<0.05$, or nDK, $p>0.95$. The $n=1$ behavior undergoes a dramatic change with the increase of $n$, as seen in Figs. 6 – 13, where we observe that, first, the "potential" DK, $p<0.05$, develop at the earlier portions of the tails, only to terminate in nDK at the tail ends.
|
It should be emphasized that RV is agnostic with respect to gains or losses in stock returns. Nonetheless, it has been habitual that large gains and losses occur at around the same time. Here we wish to address the question of whether the largest values of RV fall on the power-law tail of the RV distribution. As is well known, the largest upheavals in the stock market happened on, and close to, the Black Monday, which was a precursor to the Savings and Loan crisis, the Tech Bubble, the Financial Crisis and the COVID Pandemic. Plotted on a log-log scale, power-law tails of a distribution show as a straight line. If the largest RV fall on the straight line they can be classified as Black Swans (BS). If, however, they show statistically significant deviations upward or downward from this straight line, they can be classified as Dragon Kings (DK) sornette2009 ; sornette2012dragon or negative Dragon Kings (nDK) respectively pisarenko2012robust .
|
The main result of this paper is that the largest values of RV are in fact nDK. We find that daily returns are the closest to the BS behavior. However, with the increase of $n$ we observe the development of "potential" DK with statistically significant deviations upward from the straight line. This trend terminates with the data points returning to the straight line and then abruptly plunging into nDK territory.
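A rough diagnostic in the spirit of this discussion (not the formal p-value test used for the DK/nDK classification) fits a straight line to the log-log tail of the empirical survival function and inspects the sign of the deviations of the most extreme points; tail fraction, holdout size, and the synthetic data are arbitrary.

import numpy as np

def tail_line_fit(rv, tail_frac=0.05, holdout=10):
    # Fit a line to the log-log tail of the empirical survival function, excluding the
    # `holdout` largest observations, then report their deviations from the fit (log units).
    # Positive deviations would point towards Dragon Kings, negative towards nDK.
    x = np.sort(np.asarray(rv, dtype=float))
    n = len(x)
    ccdf = 1.0 - np.arange(1, n + 1) / (n + 1.0)
    fit_sl = slice(int(n * (1.0 - tail_frac)), n - holdout)
    slope, intercept = np.polyfit(np.log(x[fit_sl]), np.log(ccdf[fit_sl]), 1)
    extremes = slice(n - holdout, n)
    deviation = np.log(ccdf[extremes]) - (slope * np.log(x[extremes]) + intercept)
    return slope, deviation

# Example with a synthetic Pareto-tailed "realized variance" sample:
rng = np.random.default_rng(2)
rv = rng.pareto(3.0, size=20000) + 1.0
slope, dev = tail_line_fit(rv)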
|
D
|
Feng et al. (2021) establish the convergence to equilibrium of learning algorithms in first- and second-price auctions, as well as multi-slot VCG mechanisms. Our results in §4.1 provide an empirical counterpart to their theoretical results, but also add nuance as to the speed of convergence of different algorithms in realistic-sized auctions. Hartline et al. (2015) establish the convergence of no-regret learning to coarse Bayes correlated equilibrium in general games with incomplete information; we leverage their results in §3.3.
|
The second branch instead takes the perspective of a single bidder who uses learning algorithms to guide her bidding process. Weed et al. (2016) focus on second-price auctions for a single good, and assume that the valuation can vary either stochastically or adversarially in each auction. In a similar environment, Balseiro et al. (2018) and Han et al. (2020) study contextual learning in first-price auctions, where the context is provided by the bidder’s value. For auctions in which the bidder must learn her own value (as is often the case in the settings we consider), Feng et al. (2018) proposes an improved version of the EXP3 algorithm that attains a tighter regret bound. There is also a considerable literature that studies optimal bidding with budget and/or ROI constraints using reinforcement-learning: e.g., Wu et al. (2018), Ghosh et al. (2020), and references therein, and Deng et al. (2023). Golrezaei et al. (2021) also studies the interaction between a seller and a single, budget- and ROI-constrained buyer.
|
Nekipelov et al. (2015) proposes techniques for estimating agents’ valuations in generalized second-price auctions, which stands in contrast to our method that directly utilizes agents’ learning algorithms and is independent of the specific auction format. In a different direction,
|
Feng et al. (2021) establish the convergence to equilibrium of learning algorithms in first- and second-price auctions, as well as multi-slot VCG mechanisms. Our results in §4.1 provide an empirical counterpart to their theoretical results, but also add nuance as to the speed of convergence of different algorithms in realistic-sized auctions. Hartline et al. (2015) establish the convergence of no-regret learning to coarse Bayes correlated equilibrium in general games with incomplete information; we leverage their results in §3.3.
|
To the best of our knowledge, the closest papers to our own are Kanmaz and Surer (2020), Elzayn et al. (2022), Banchio and Skrzypacz (2022), and Jeunen et al. (2022). The first reports on experiments using a multi-agent reinforcement-learning model in simple sequential (English) auctions for a single object, with a restricted bid space. Our analysis focuses on simultaneous bidding in scenarios that are representative of actual online ad auctions. The second focuses on position (multi-slot) auctions and, among other results, reports on experiments using no-regret learning (specifically, the Hedge algorithm we also use) under standard generalized second-price and Vickrey-Clarke-Groves pricing rules. Our analysis is complementary in that we allow for different targeting clauses and more complex pricing rules such as "soft floors". The third studies the emergence of spontaneous collusion in standard first- and second-price auctions under the $Q$-learning algorithm (Watkins, 1989). The fourth describes a simulation environment similar to ours that is mainly intended to help train sophisticated bidding algorithms for advertisers. We differ in that we allow for bids broadly targeting multiple queries, and focus on learning algorithms that allow us to model auctions with a large number of bidders; in addition, we demonstrate how to infer values from observed bids.
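For concreteness, a minimal version of the Hedge (multiplicative-weights) update over a discretized bid grid might look as follows; the counterfactual payoff function here is a toy second-price stand-in, not the simulator described in the surrounding text, and all names and constants are ours.

import numpy as np

def hedge_bidder(payoff_fn, bid_grid, n_rounds=1000, eta=0.1, seed=0):
    # Hedge requires full feedback: payoff_fn(bid_grid, t) returns the counterfactual
    # payoff of every bid in the grid at round t.
    rng = np.random.default_rng(seed)
    w = np.ones(len(bid_grid))
    chosen = []
    for t in range(n_rounds):
        p = w / w.sum()
        chosen.append(bid_grid[rng.choice(len(bid_grid), p=p)])
        payoffs = payoff_fn(bid_grid, t)
        w *= np.exp(eta * payoffs)          # reward-based multiplicative update
    return np.array(chosen), w / w.sum()

# Toy environment: value 1.0, a uniformly random rival bid, second-price payment.
def toy_payoffs(grid, t, rng=np.random.default_rng(3)):
    rival = rng.uniform(0.0, 1.0)
    return np.where(grid > rival, 1.0 - rival, 0.0)

bids, final_mix = hedge_bidder(toy_payoffs, np.linspace(0.0, 1.0, 21))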
|
B
|
Our work is related to and extends various strands of the literature, which we briefly summarise below. Prior to G&M’s research, the timing of contributions and the level of funds raised had received considerable attention in the theoretical literature. Varian (1994) shows that, under appropriate assumptions, a sequential contribution mechanism elicits lower contributions than a simultaneous contribution mechanism. The crux of this result lies in the set-up of the model, where a first mover may enjoy a first-mover advantage and free-ride. On the other hand, Cartwright and Patel (2010) using a sequential public goods game with exogenous ordering, show that agents early enough in the sequence would want to contribute, if they believe that imitation from others is quite likely. In the context of fundraising, Romano and Yildirim (2001) examine the conditions under which a charity prefers to announce contributions in the form of a sequential-move game, while Vesterlund (2003) shows that an announcement strategy of past contributions, not only helps worthwhile organisations to reveal their type, but it also helps the fundraiser reduce the free-rider problem, a result that Potters et al. (2005) confirm experimentally.
|
Andreoni et al. 2002; Coats et al. 2009; Gächter et al. 2010; Figuieres et al. 2012; Teyssier 2012 in public goods games without a threshold). The vast majority of the aforementioned studies conclude that the sequential protocol is significantly more effective in solving the public goods problem, compared to the simultaneous protocol, and that the effect of information on contribution is dramatic (Erev and Rapoport, 1990). Suleiman et al. (1994) highlight two properties that define the sequential protocol: (1) information regarding the position in the sequence, and (2) information about the actions of preceding players. Therefore, there may be various information structures that underlie social dilemmas or the provision of a public good in real life, and relaxing or modifying these structures can lead to more realistic protocols of play. Most of the previous literature has focused on the comparison between the simultaneous and the sequential mechanism, while in terms of available information, the most commonly employed information structure is either full information on past decisions or no information at all. Both features appear to be closer to the reality that characterises the provision of public goods. In our experiment, rather than comparing the effectiveness of different contribution mechanisms, our focus is on the effect of the available information provided to the subjects, as well as the effect of positional awareness on the contribution choices. Table 1 summarises the experimental design features of the studies closest to ours.
|
In this model, individuals have to make decisions sequentially, without knowing their position in the sequence (position uncertainty), but are aware of the decisions of some of their predecessors by observing a sample of past play. In the presence of position certainty, those placed in the early positions of the sequence would want to contribute, in an effort to induce some of the other group members to co-operate (Rapoport and Erev, 1994), while late players, would want to free-ride on the contributions of the early players. Nevertheless, if the agents are unaware of their position in the sequence, they would condition their choice on the average payoff, from all potential positions, and they would be inclined to contribute so as to induce the potential successor to do so as well. G&M show that full contribution can occur in equilibrium, where given appropriate values of the parameters of the game (i.e. return from contributions), it is predicted that there exists an equilibrium where all agents contribute.
|
A similar result regarding the superiority of the sequential mechanism has also been established in the literature on general public goods, particularly in the context of common pool resource games. (Footnote: Sequential mechanisms have also been analysed in give-some and take-some social dilemma games; for example, see Tung and Budescu (2013).) This literature has identified significant ordering effects, even in the case where later subjects in a sequence could not observe past decisions (see Rapoport et al. 1993; Budescu et al. 1995; Suleiman et al. 1996; Rapoport 1997). The model we test makes sharp predictions regarding the behaviour of the agents for each of the potential positions in the sequence, subject to the available size and content of the sample of past decisions. Our experimental design allows us to manipulate the content of this sample, in terms of the number of past contributions, and observe the role of this information in this kind of ordering effect. While the main theoretical prediction in step-level public goods games is that the players at the early positions of the sequence will free-ride, the framework we are exploring predicts that players at the beginning of the sequence will contribute instead, to incentivise their successors. This is closely related to the leading-by-example literature, based on the linear public goods game (see Gächter et al. 2012; Levati et al. 2007; Potters et al. 2007; Güth et al. 2007; Figuieres et al. 2012; Sutter and Rivas 2014; Préget et al. 2016, in the context of public goods games, or Moxnes and Van der Heijden 2003, in a public bad experiment). They all find robust evidence of first movers contributing more than later movers, and of later movers' contributions being positively correlated with the first movers' contributions, indicating reciprocal motives.
|
The early and recent experimental literature has provided substantial evidence on the superiority of a sequential contribution mechanism compared to a simultaneous one (see Erev and Rapoport 1990; Rapoport and Erev 1994; Rapoport 1997, in step-level public goods games; and
|
D
|
The financial market is marked by the participation of a diverse range of investors, each with their unique attitudes and investment strategies. In order to capture this diversity, we build upon existing concepts and introduce the multi-SSQW, which corresponds to a multitude of investors. This approach allows us to design a quantum model that simulates various investor sentiments. Such a quantum model aids in simulating trader sentiment within the financial market, enabling us to predict the distribution of short-term financial prices. The scheme for simulating the distribution of short-term financial prices is introduced with a practical approach that finds patterns in complex data and maps them to a multi-SSQW dynamic system; the architecture is shown below in Fig. (4). In the multi-SSQW framework, the U3 gate sets the initial quantum state, encoding the market's overall sentiment. Subsequent unitary operations, represented by two U3 gates for each investor, model individual investor biases towards specific assets. The evolution operator, $\hat{W}$, is decomposed into two halves: first moving with $\hat{C}_{\theta_{1}}$ and incrementing ($\hat{S}_{+}$), and then applying $\hat{C}_{\theta_{2}}$ before decrementing ($\hat{S}_{-}$). These coin operators, $\hat{C}_{\theta_{k}}$, and the shift operators, $\hat{S}_{\pm}$, collectively govern the walker's direction and position, with each $\hat{W}$ operator reflecting the nuances of investor sentiment towards a financial asset within a quantum simulation model. Figure 4 illustrates the mechanics of a quantum system designed to simulate and analyze financial markets, capturing the dynamics of investor behavior and market fluctuations.
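A bare-bones single-walker split-step evolution can be written down in a few lines. The shift and coin conventions, the initial coin state, and the angles below are one common choice and are assumptions of this sketch rather than the paper's exact circuit.

import numpy as np

def coin(theta):
    # Coin operator C_theta: a real rotation acting on the two-dimensional coin space.
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def split_step(psi, theta1, theta2):
    # One split step W = S_- C_{theta2} S_+ C_{theta1} on a ring (periodic boundaries).
    # psi has shape (n_positions, 2); columns 0/1 hold the "down"/"up" coin amplitudes.
    psi = psi @ coin(theta1).T
    psi[:, 1] = np.roll(psi[:, 1], 1)      # S_+: move the "up" component one site to the right
    psi = psi @ coin(theta2).T
    psi[:, 0] = np.roll(psi[:, 0], -1)     # S_-: move the "down" component one site to the left
    return psi

n, steps = 201, 60
psi = np.zeros((n, 2), dtype=complex)
psi[n // 2] = np.array([1.0, 1.0j]) / np.sqrt(2.0)   # initial coin state (U3-style preparation)
for _ in range(steps):
    psi = split_step(psi, theta1=0.3, theta2=1.1)
prob = (np.abs(psi) ** 2).sum(axis=1)                # position distribution of the walker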
|
meaningful results. Using a multi-SSQW quantum circuit to simulate financial stock distributions can be seen as employing well-orchestrated circuits. The multi-SSQW approach is designed to navigate complex quantum state spaces efficiently, aiming for rapid exploration and convergence to a desirable state or solution through the controlled evolution of multiple walkers. The advantages of multi-SSQW, such as its rapid modeling and prediction capacity, make it highly beneficial for financial simulations in fast-paced and volatile financial markets.
|
The probability distributions in position space of DTQW [33, 34, 35], shown in Fig.(2), do not resemble the probability distributions in everyday life. In the marketplace, prices are typically determined by the interaction of buyers and sellers. The price of a good or service in the market is established through the agreement between investors. Sentiment-induced buying and selling is an important determinant of stock price variation. The shaping of short-term financial market prices predominantly hinges on the sentiment of investor[38, 39], broadly classified into optimism and pessimism. Investors who are optimistic play a proactive role in investing, which creates an upward push for the stock price. On the contrary, pessimistic investors, who decide to sell and withdraw from the market, generate a downward pull on the stock price. Inspired by the free market economy, we introduce the split-step quantum walk(SSQW)[40, 41] that can be regarded as a financial simulation.
|
In this section, we demonstrate the ability of multi-SSQW to function as an effective financial simulator. It is capable of accurately modeling intricate financial systems and providing reliable simulations. One of the highlights of this approach is its inherent capability to exhibit convergence and provide rapid results, which makes it a powerful tool in financial analytics and modeling. The boxplots reflect simulation outcomes that measure the multi-SSQW’s fidelity in financial market modeling. They suggest that as the diversity of quantum walkers —representative of market participants— increases, the precision of the simulations improves, marked by lower MSE and KL divergence values. This relationship underscores the robustness of the multi-SSQW approach in capturing the complex dynamics of financial markets, providing a compelling tool for analysts. The methodology’s reliable convergence indicates its utility in producing accurate market simulations, affirming its value in enhancing financial analysis and forecasting. In machine learning, integrating more parameters can boost accuracy without imposing a substantial computational load. Crucially, the method demonstrates steady convergence, highlighting this approach’s efficacy.
|
We have extended the concept of SSQW to multi-SSQW, employing multiple walkers to represent investors with diverse investment strategies in the market. In modeling the intrinsic uncertainty in financial markets, we showcase the efficacy of our purpose-built multi-SSQW quantum algorithm and circuitry through its application in replicating the price distribution in real-world stock markets.
|
D
|
More generally, chasing past performance in financial decisions is a form of success-based imitation.
|
In contrast to our five experts, in Apesteguia et al. (2020) subjects could choose among 80 leaders and in Holzmeister et al. (2022) there was no choice.
|
The present study extends the design of Apesteguia et al. (2020) by varying the complexity of the underlying task and the information investors receive about the experts. When our investors do not have access to information on experts' decision quality, we confirm that a substantial fraction of subjects chooses to delegate to experts with previously high earnings. (Footnote: Other studies besides Apesteguia et al. (2020) finding an important role for previous earnings in the choice of "experts" include Huck et al. (1999), Offerman et al. (2002), Apesteguia et al. (2010) and Huber et al. (2010).)
|
Of course, depending on the link between an action’s current earnings and future earnings, such imitation may actually decrease payoffs (see e.g. Vega-Redondo, 1997; Huck et al., 1999; Offerman et al., 2002) as well as possibly increase them (see e.g. Schlag, 1998; Apesteguia et al., 2018).
|
See e.g. Pelster and Hofmann (2018) and Apesteguia et al. (2020) for discussion of the scope and operational details of such platforms.
|
C
|
We are ready to state our main result in the second part of the paper. Here, to simplify the exposition and to obtain sharp numerical results, we fix $(\alpha,\beta)=(0.75,0.5)$ and $(\bar{x},\bar{y})=(4,2)$. A similar analysis (as follows) can be done for any $\alpha,\beta,\bar{x},\bar{y}$ (our choice of the parameters is completely ad hoc and our method is quite generic). We obtain
|
For $\lambda$ values as in Theorem 6.2 (satisfying the SC), we obtain a fairly smooth relation between $\lambda$ and the ergodic sums of $f$ as in Figure 15 (using $5000$ terms to estimate the ergodic sums). Extending Theorem 6.2 (and Figure 15) using the naive estimates of the ergodic sum (that is, $\sum_{k=0}^{9999}f^{k}(s)$) for $\lambda$ that does not satisfy (SC), we obtain Theorem 1.4 (and Figure 16).
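The naive estimate referred to here is just a Birkhoff sum along the orbit. A sketch is given below, using the quadratic family mentioned in the surrounding discussion as a stand-in for the map $f$, whose definition is not repeated in this excerpt; the starting point and grid of parameter values are arbitrary.

import numpy as np

def birkhoff_sum(T, s, n_terms):
    # Naive estimate of the ergodic (Birkhoff) sum: sum_{k=0}^{n_terms-1} T^k(s).
    total, x = 0.0, s
    for _ in range(n_terms):
        total += x
        x = T(x)
    return total

# Illustration with T_lambda(x) = lambda * x * (1 - x); the map f of the paper
# would be plugged in the same way.
lambdas = np.linspace(2.75, 4.0, 400)
sums = [birkhoff_sum(lambda x, lam=lam: lam * x * (1.0 - x), 0.123, 10000)
        for lam in lambdas]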
|
For $2.75<\lambda<4$, except $\lambda$ values corresponding to the few windows in Figure 12 (and possibly except some $\lambda$ values whose total Lebesgue measure is $0$, see Proposition 6.3 below), there exists a unique acim for $f$. Moreover, for these $\lambda$ values, the ergodic sums of $f$ are as in Figure 15.
|
We need "Lebesgue almost" (or "except a set of measure zero") in Theorems 1.4, 6.2, and Proposition 6.3 since the following (anomalous) examples are known, see [Hofbauer and Keller, 1990] and [Johnson, 1987]: for a quadratic map $T_{\lambda}(x)=\lambda x(1-x)$ (parametrised by $\lambda$), there exists $\lambda$ such that $T_{\lambda}$ does not have an attracting periodic orbit and shows chaotic behaviour, but does not have an acim. We expect that for our $f$ we obtain examples with the same properties (although we have not checked this yet). The point is that we do not need to worry about such anomalous cases, since the $\lambda$ values corresponding to such examples are of Lebesgue measure zero and our approach in this (and the last) section is probabilistic.
|
For $1<\lambda<4$, the ergodic sums of $f$ are as in Figure 1 (possibly except some $\lambda$ values whose total Lebesgue measure is $0$).
|
D
|
When transacting on the Uniswap Labs interface, users are shown a quoted output amount (resp. input amount) for the input amount (resp. output amount) that they entered into the interface, in the form of a quoted average execution price. After seeing the quoted price, users can then decide whether to sign and broadcast the swap transaction.
|
the minimum amount out or the maximum amount in, which is the worst case amount of the output/input asset that the user is willing to receive/spend; and a deadline, specifying a deadline by which the swap must be completed, after which the swap is invalid. Even if a transaction is finalized on the blockchain, the underlying swap may fail due to a violation of the minimum amount out (resp. maximum amount in). For any given block B𝐵Bitalic_B in a blockchain, the trades contained in B𝐵Bitalic_B are defined to be the trades specified by the swaps in B𝐵Bitalic_B that succeed.
|
where $\mathsf{realizedPrice}_{i}$ is the average realized execution price of the swap (the amount of the input asset spent, over the amount of the output asset received), and $\mathsf{quotedPrice}_{i}$ is the decision price shown to the user. (Footnote: Here, slippage is the difference between the realized output amount and the quoted output amount, expressed as a percentage of the realized output amount. Alternatively, we may express slippage as a percentage of the quoted output amount, with no major changes in interpretation.)
|
The slippage tolerance of a swap is defined as the ratio of the quoted amount out over the minimum amount out (resp. the quoted amount in over the maximum amount in), minus 1, expressed in basis points (bps). The price impact of a swap is defined as the ratio of the quoted price over the market mid price minus 1, expressed in bps.
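In code, these bookkeeping definitions amount to a few ratios expressed in basis points. Signs and denominators below follow one consistent convention and may differ in detail from the exact formula used in the surrounding text.

def to_bps(x):
    return 1e4 * x

def slippage_bps(realized_price, quoted_price):
    # Realized average execution price (input spent / output received) vs. the quoted price.
    return to_bps(realized_price / quoted_price - 1.0)

def slippage_tolerance_bps(quoted_amount_out, minimum_amount_out):
    # Ratio of the quoted amount out over the minimum amount out, minus 1, in bps.
    return to_bps(quoted_amount_out / minimum_amount_out - 1.0)

def price_impact_bps(quoted_price, mid_price):
    # Ratio of the quoted price over the market mid price, minus 1, in bps.
    return to_bps(quoted_price / mid_price - 1.0)

# Example: a swap quoted at 1.002 against a mid of 1.000, executed at 1.0035,
# with a minimum amount out roughly 50 bps below the quoted amount out.
print(price_impact_bps(1.002, 1.000))        # ~20 bps of price impact
print(slippage_bps(1.0035, 1.002))           # ~15 bps of realized slippage
print(slippage_tolerance_bps(100.0, 99.5))   # ~50 bps of slippage tolerance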
|
The price impact of a swap is defined as the ratio of the quoted price over the market mid price minus 1, expressed in bps, and directly measures market depth. We assume that the quoted price incorporates LP fees and the expected liquidity consumption of the swap, as in the case with the Uniswap Labs interface.
|
C
|
In addition to FMA data, we collect additional public information documented on their websites. Our aim is to categorize VASPs by their service offering. We construct categorical variables that indicate whether the VASP offers custody services, facilitates payments, allows users to exchange cryptoassets, implements a trading platform, or offers consulting or investment services. We consider 21 VASPs for which we could gather sufficient information. Data for each (anonymized) VASP are reported in A.
|
Whilst the sample is small and the features are few, to ensure consistency and objectivity in categorizing VASPs we exploit an unsupervised learning method.
|
We implement two approaches to extract on-chain VASP-related information for the UTXO-based and the account-based DLTs. The entities that operate on the Bitcoin blockchain interact with each other as a set of pseudo-anonymous addresses. We exploit known address clustering heuristics (Androulaki et al., 2013; Ron & Shamir, 2013; Meiklejohn et al., 2016) to associate addresses controlled by the same entity. (Footnote: New addresses can be created in each transaction; however, if they are re-used across transactions, they can be linked and identified as belonging to the same entity.) Furthermore, we exploit a collection of public tagpacks, i.e., attribution tags that associate addresses with real-world actors, to filter the clusters associated with any of the VASPs considered in our study. We expanded the dataset by conducting manual transactions with the VASPs in our sample (further details are discussed in A, where we also report a list of the addresses used). We identified 88 addresses and their corresponding clusters associated with four different VASPs.
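A minimal version of the multi-input (co-spending) heuristic with a union-find structure is sketched below. The transactions and the attribution tag are hypothetical, and the change-address heuristics discussed in the cited works are omitted.

class UnionFind:
    def __init__(self):
        self.parent = {}
    def find(self, a):
        self.parent.setdefault(a, a)
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]   # path halving
            a = self.parent[a]
        return a
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def cluster_addresses(transactions):
    # Multi-input heuristic: all input addresses of one transaction are assumed
    # to be controlled by the same entity.
    uf = UnionFind()
    for tx in transactions:
        inputs = tx["inputs"]
        uf.find(inputs[0])                 # register single-input transactions too
        for addr in inputs[1:]:
            uf.union(inputs[0], addr)
    clusters = {}
    for addr in uf.parent:
        clusters.setdefault(uf.find(addr), set()).add(addr)
    return clusters

# Toy example with hypothetical transactions and one tagged address ("a1").
txs = [{"inputs": ["a1", "a2"]}, {"inputs": ["a2", "a3"]}, {"inputs": ["b1"]}]
clusters = cluster_addresses(txs)
tagged = {root: addrs for root, addrs in clusters.items() if "a1" in addrs}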
|
Similarly to VASP-2, on-chain activity is higher than the value reported on the balance sheet after 2020. As expected, the amount of cryptoasset holdings is small, as the VASP is non-custodial, and exceeds 100K EUR only after 2021. All reported assets are ether: the absence of stablecoins is expected, as this VASP trades bitcoin, ether, and a few other cryptoassets. However, we could not identify bitcoin flows from or to their wallets in the time frame we considered. To identify the addresses associated with this VASP, we relied on manual transactions: re-identification attacks are a possible strategy to collect attribution tags. While this strategy is effective for Ethereum accounts, the Bitcoin addresses we gathered identify the VASP activity dating back to November 2022 only, thus outside of the time frame we considered.
|
VASP-5 is the last we analyze; values are shown in Figure 9. This VASP bases its services on the purchase and sale of bitcoins. For this VASP, using both attribution tags in the TagPack database mentioned above and re-identification strategies, we could only gather information for a few months in between 2014 and 2017 and after 2021. The results are consistent only for the years 2015 and 2016, when the VASP held very small amounts of cryptoassets, if compared to the subsequent years.
|
A
|
In convex multi-objective optimization one usually focuses on weakly Pareto optimal points (or weakly $\epsilon$-Pareto optimal points) since they can equivalently be characterized as solutions to the weighted sum scalarization. That is, a point $x^{*}\in\mathbb{X}$ is weakly Pareto optimal for the convex problem (2) if and only if $x^{*}$ is a solution to the scalar problem $\min_{x\in\mathbb{X}}\sum_{i=1}^{m}w_{i}g_{i}(x)$ for some $w\in\mathbb{R}^{m}_{+}\setminus\{0\}$ (Corollary 5.29 of [9]). However, as stated in Remark 3.4, the concept of weak Pareto optimality is not meaningful for our problem (3) and we have to work with Pareto optimal points instead. For Pareto optimal points there is not a one-to-one correspondence to solutions of weighted sum scalarizations, but only the following implication: a point $x^{*}\in\mathbb{X}$ is Pareto optimal for a convex problem (2) if $x^{*}$ is a solution to the scalar problem $\min_{x\in\mathbb{X}}\sum_{i=1}^{m}w_{i}g_{i}(x)$ for some $w\in\mathbb{R}^{m}_{++}$ (Theorem 5.18(b) of [9]). The absence of an equivalent characterization of the set of Pareto optimal points through scalarizations will make it impossible to solve our problem in full generality for convex games.
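The one-directional implication can still be used constructively: sweeping strictly positive weights and solving the scalarized problem produces points that are guaranteed to be Pareto optimal. A toy unconstrained sketch with illustrative objectives (not the game of this paper) follows; the weight grid and the solver choice are arbitrary.

import numpy as np
from scipy.optimize import minimize

# Two smooth convex objectives of a small illustrative multi-objective problem.
def g1(x): return (x[0] - 1.0) ** 2 + x[1] ** 2
def g2(x): return x[0] ** 2 + (x[1] - 2.0) ** 2

def weighted_sum_solution(w, x0=np.zeros(2)):
    # Strictly positive weights: every minimizer is Pareto optimal for (g1, g2).
    res = minimize(lambda x: w[0] * g1(x) + w[1] * g2(x), x0, method="BFGS")
    return res.x

# Sweeping the weights traces out candidate Pareto optimal points.
weights = [(t, 1.0 - t) for t in np.linspace(0.05, 0.95, 19)]
pareto_candidates = np.array([weighted_sum_solution(np.array(w)) for w in weights])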
|
However, despite this issue we will be able to compute a set which contains the set of all Pareto optimal points of the convex problem (3) (and is included in the set of all $\epsilon$-Pareto optimal points) if we make additional assumptions on the structure of the constraint set $\mathbb{X}$. We will consider two different structures of $\mathbb{X}$, one in Assumption 4.2 and one in Remark 4.13.
|
The goal of this paper is to introduce a method which approximates the set of all Nash equilibria of a convex game. Hence, $\epsilon$-approximate solution concepts are considered for both Nash equilibria and Pareto optimality. Similar to the characterizations proven in [6], the set of $\epsilon$-Nash equilibria of any $N$-player game can be characterized as the intersection of the $\epsilon$-Pareto optimal points of $N$ multi-objective problems for any $\epsilon>0$. For convex games these multi-objective problems are convex. In general, the set of Pareto optimal points as well as the set of $\epsilon$-Pareto optimal points are not finitely generated and thus cannot be computed exactly. However, due to the Lipschitz continuity of the convex cost functions, and by making additional assumptions on the structure of the convex constraint set, we will (for each of the $N$ specific convex problems) be able to compute a finitely generated set which, on the one hand, contains all Pareto optimal points and, on the other hand, is a subset of the $\epsilon$-Pareto optimal points of this problem for some specific $\epsilon>0$. As a consequence, taking the intersection over these $N$ sets yields a finitely generated set $X$ which contains the set of true Nash equilibria $\operatorname{NE}(f,\mathbb{X})$ of the convex game while being contained in the set of $\epsilon$-approximate Nash equilibria $\epsilon\operatorname{NE}(f,\mathbb{X})$, i.e.
|
As mentioned above in issue (i), we try to cover the set of all Pareto optimal points and therefore make additional assumptions on the structure of the constraint set $\mathbb{X}$. In the following, we will consider constraint sets $\mathbb{X}$ that are polytopes, whereas in Remark 4.13 we consider the case where each player $i$ has an independent convex constraint set $\mathbb{X}_{i}\subseteq\mathcal{X}_{i}$. Problem (iii) can be handled by a small modification of the algorithm in [14], which is possible because of the particular structure of the objective function of problem (3): it is linear in all but the last component, and the directions $\bar{c}_{i}$ are zero in these linear components.
|
In the case of linear games the set of all Pareto optimal points of problem (3) can be computed exactly, and Theorem 3.1 can be used to numerically compute the set of all Nash equilibria of such games, see [6]. If the game is not linear, approximations need to be considered. In the following, we will therefore relate the set of $\epsilon$-approximate Nash equilibria to the $\epsilon$-Pareto optimal points of problem (3). To do so we will fix the directions $\bar{c}_{i}=(0,\dots,0,1)^{\top}\in\mathbb{R}^{m_{i}}_{+}\setminus\{0\}$ for all $i\in\{1,\dots,N\}$. The choice of this direction ensures that no $\epsilon$-deviation is allowed in the other players' strategies, so for each player $i$ the strategy $x_{-i}$ of the other players stays fixed, while an $\epsilon$-deviation is allowed for the objective $f_{i}(x)$ of player $i$.
|
A
|
Different answers have been given to these limitations. Some authors suggest introducing price spikes via jump-diffusion processes [28] [26] [33], while others explore multi-factor jump-diffusion models [46] or alternative distributions for the residuals [16]. The next sections will concentrate on these proposals and on their estimation power.
|
We begin our analysis by exploring various marginal models for the spot energy price and daily temperature. In particular, we dive deep into a large literature on energy and commodity modeling [29] [58]. First, we examine mean-reverting diffusion models. Pioneering models by Gibson and Schwartz [34], Schwartz [53], and Lucia and Schwartz [45] propose two- or three-factor Gaussian diffusion dynamics to model commodity assets. However, the presence of non-Gaussian behaviors, including spikes, jumps, and heavy tails, has led to a refinement of these initial models. One proposal is to extend mean-reverting diffusion processes to Levy noises. Thus, compound Poisson processes have been studied by Geman and Roncoroni [33], Cartea and Figueroa [26], and Meyer-Brandis and Tankov [46]. A second widespread proposal is to move to multi-factor models with Brownian [45] [8] or Levy increments [12] [18]. Finally, Benth and Benth explore the relevance of mean-reverting diffusion processes with Normal Inverse Gaussian (NIG) increments [16]. We compare these models and consider different estimation and process characterization challenges for day-ahead auction market clearing prices [57] for the French and Northern Italian electricity markets. Finally, we propose to model the daily day-ahead log spot prices with mean-reverting processes with NIG increments.
|
Several papers have studied the possibility to consider non-Gaussian increments. A particularly popular one is to combine Brownian motion with a compound Poisson process [28] that would capture the price spikes usually observed in energy prices, extending (5) as follows
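A hedged simulation sketch of such a mean-reverting jump diffusion for the log spot price is given below (Euler discretization, Gaussian jump sizes as an illustrative choice; all parameter values are arbitrary and not estimates from the data discussed here).

import numpy as np

def simulate_mr_jump_diffusion(n_steps, dt, kappa, mu, sigma, jump_intensity,
                               jump_mean, jump_std, x0=0.0, seed=0):
    # dX_t = kappa (mu - X_t) dt + sigma dW_t + dJ_t, with J a compound Poisson process.
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for t in range(n_steps):
        n_jumps = rng.poisson(jump_intensity * dt)
        jump = rng.normal(jump_mean, jump_std, size=n_jumps).sum()
        x[t + 1] = (x[t] + kappa * (mu - x[t]) * dt
                    + sigma * np.sqrt(dt) * rng.standard_normal()
                    + jump)
    return x

# Daily grid over one year, with on average about 8 spikes per year.
log_price = simulate_mr_jump_diffusion(365, 1 / 365, kappa=100.0, mu=4.0, sigma=1.5,
                                       jump_intensity=8.0, jump_mean=1.0, jump_std=0.5,
                                       x0=4.0)
spot = np.exp(log_price)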
|
Taking inspiration from Schwartz's models, several papers have explored the possibility of combining multi-factor models with Lévy processes [18] [46]. The adaptation of (5) to a multi-factor model with $n$ factors takes the following form:
|
Multi-factor models with non-Gaussian increments represent another popular alternative for modeling erratic dynamics. Two-factor and three-factor models with Gaussian increments were developed by Schwartz through different collaborations [53] [52] [34] [45]. The underlying idea is that spot prices could be driven by a long-term and a short-term dynamics, so that the spot price would integrate long-run terms and potential circumstantial tensions on the energy market. The estimation of such models can be performed through classic Kalman filtering. However, they do not address the poor fit of the residuals to a normal distribution, as can be observed in the right panel of Figure 5.
|
B
|
The other possibility is that trading volume is accurate but the reported open interest is incorrect. Market participants do observe the changes in open interest and attempt to infer how informed investors are being positioned in the market. The general heuristic is that if open interest is rising and the price is increasing then the aggressors, presumably informed investors, are the buyers. If on the other hand open interest is rising and the price is falling, the aggressors would be the sellers. If, however, as the price increases open interest decreases, this presumably implies shorts covering (and implies the same for the longs when price is dropping). This heuristic may or may not be valid, but its validity is less important than whether market participants pay attention to it and whether some trade according to it. Given that exchanges generate revenue by extracting fees on traded volume, it is conceivable that they could potentially generate false signals, when the market has none to offer, by modulating open interest artificially, with the expectation that this would incentivize market participants to trade more. If that is what actually takes place, it would require those signals to be as clear and large in magnitude as possible, such that all market participants notice them. In this case we would expect $\mathbb{E}_{SP}[X_{TV} \mid X_{TV}>0]$ to be inflated, similar to what we see for ByBit, OKX, and Binance in Table 4 and Table 5 for all sub-periods. Although such activity would be almost impossible to prove beyond reasonable doubt it is instructive to look at the $X_{TV}$ on the tick-by-tick level during an eventful day, e.g., a significant market decline. On August 17, 2023 the Bitcoin price crashed more than 10% during the US trading session. In Figure 1, we see this event as observed on ByBit, and in Figure 2 we see the same day as it unfolded on Kraken. In both plots the red lines represent the excess total variation observed, with the left axis measuring the magnitude of the excess in USD. The right axis represents the price, and the blue line is the last traded price at the time of the open interest update. Comparing the two figures it is immediately evident that on ByBit there seem to be almost no time intervals where $X_{TV}=0$, while on Kraken this condition holds almost for the entirety of the day. The few spikes in $X_{TV}$ on Kraken could conceivably be explained by delayed reporting of liquidations or open interest, as most of them are localized in time intervals with sharp price fluctuations.
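The exact construction of $X_{TV}$ is not reproduced in this excerpt; the sketch below shows one plausible formalisation, assuming that within each time bucket the total variation of open interest cannot exceed the reported traded volume, so that any surplus is flagged as excess. The column handling and the one-minute bucket are assumptions.

```python
import pandas as pd

def excess_total_variation(open_interest: pd.Series, trade_volume: pd.Series,
                           freq: str = "1min") -> pd.Series:
    """Hypothetical reconstruction of an excess-total-variation measure.
    Both inputs are assumed to carry a DatetimeIndex."""
    tv_oi = open_interest.diff().abs().resample(freq).sum()   # total variation of open interest per bucket
    vol = trade_volume.resample(freq).sum()                   # reported traded volume per bucket
    return (tv_oi - vol).clip(lower=0.0)                      # positive part = movement the volume cannot explain
```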
|
In this work we consider the most liquid Bitcoin perpetual swaps on seven of the top cryptocurrency exchanges.
|
We find that trading volume cannot be reconciled with the reported changes in open interest for the majority of these exchanges. It is unclear whether this is due to delayed or unreported trading volume or due to incorrectly reported open interest. In our view, the most likely scenario is that both are true, perhaps, however, not to the same degree on every exchange. Although we could not perfectly reconcile these quantities for any of the exchanges in question, we find that there are discernible differences in behavior across these exchanges. The discrepancies on ByBit and OKX are so frequent and large in magnitude that these two exchanges merit a category of their own. On these exchanges we could not reconcile trading volume with reported open interest in any time period, with the implied trading volume being in the range of hundreds of billions over and above the reported trading volume, assuming the open interest is the quantity that is correct. If in fact, however, the trading volume is the more accurately reported quantity, this would imply that the open interest on these exchanges is almost completely fabricated. This could perhaps be explained by certain incentive structures baked into the scenario: leading market participants to believe that informed investors are taking large positions in these markets (as implied by the large change in open interest) could, depending on the participants' prior positioning, lead to panic or fear of missing out on potential profits, thereby increasing trading volume, and profit for the exchange. Given that volatility and trading volumes in Bitcoin and other cryptocurrencies have been trending lower in 2023 we believe that the latter is a more plausible explanation. Figures 1 and 2 also seem to point in that direction.
|
Perpetual swaps were introduced by BitMEX in 2016 [Hayes,]. They are futures contracts with no expiry. These contracts allow for high leverage, with most cryptocurrency exchanges offering leverage in the range of 100x–125x and some recent platforms allowing up to 1000x(!) leverage (reference intentionally omitted, as trading with such high leverage, especially in volatile markets like cryptocurrencies, is ill-advised: 1000x leverage implies, in the best case, that the liquidation price is 10 basis points away from the entry price, and more realistically 5 basis points considering fees). Interestingly, a reduction in allowed leverage can, in retrospect, be a sign that an exchange is in distress [Reynolds,]. These contracts are designed to track an underlying exchange rate, e.g. BTC/USD, such that speculators can gain exposure to that underlying while holding a collateral of their choice (usually USDT). The first perpetual swap was what is commonly referred to today as an inverse perpetual. In inverse perpetual contracts, profits and losses as well as margin are paid in the base asset, e.g., BTC for the BTC/USD inverse perpetual, while the price of the contract is quoted in units of the quote asset. Except for their use as an instrument for speculation, this kind of perpetual swap was originally also used as a tool to hedge exposure to the underlying. This can be achieved by opening a short position in the contract with 1x leverage. As the inverse perpetual is coin margined, every change in the price of the underlying is offset by the short position in the inverse perpetual, which results in a stable equity curve when denominated in units of the quote asset. Before USDT was accepted as the de facto stablecoin in cryptocurrency trading, this was the primary mechanism for traders to hedge their Bitcoin exposure.
|
Our datasets are comprised of tick-by-tick trades, block trades, liquidations, and open interest as reported by the APIs of the respective exchanges mentioned in Table 1. We limit our attention to Bitcoin linear perpetuals quoted in USDT (https://tether.to/en/) and inverse perpetuals quoted in USD, as these are the most liquid derivatives. We focus on two periods: i) 2023/01/01 to 2023/01/31 (period 1), which is the beginning of the year and is usually a period of naturally higher trading volume, and ii) 2023/07/01 to 2023/09/30 (period 2), containing most of the summer months of 2023 and September, the most recent month prior to this work, which enables us to see if our observations are still pertinent in recent data. The infrastructure as well as the collected data are proprietary; however, in the interest of encouraging reproduction of this work, we offer a few suggestions on free and open source resources that can help in that respect (please see the Appendix for further details).
|
A
|
$(F(w),L(w))=\left(\dfrac{\int_{0}^{w}\rho(y)\,\mathrm{d}y}{\int_{0}^{\infty}\rho(y)\,\mathrm{d}y},\;\dfrac{\int_{0}^{w}y\,\rho(y)\,\mathrm{d}y}{\int_{0}^{\infty}y\,\rho(y)\,\mathrm{d}y}\right)$
|
The Gini coefficient is $0$ for a density concentrated on mean wealth (that is, for a wealth-egalitarian society) whereas it approaches its upper limit of $1$ as the wealth is concentrated into an ever-vanishing proportion of the population. See [5, 19] for a discussion of the nonstandard properties of wealth distributions that maximize the Gini coefficient under the dynamics of Eq. 2.
|
The Gini coefficient also has a geometric interpretation when the Lorenz curve is used to represent a distribution of wealth. If the population were to all have the mean wealth, then the Lorenz curve would be the identity and correspond to an egalitarian society. As a population moves toward total oligarchy, the Lorenz curve is pushed into the bottom right corner of the unit square. The Gini coefficient can be equivalently defined as twice the area between the diagonal and the Lorenz curve of a distribution of wealth; this varies between zero and unity. For the functions $F$ and $L$ defined above in terms of $\rho$, we have
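Since this definition is purely geometric, it can be checked numerically; the sketch below builds an empirical Lorenz curve from wealth samples and computes the Gini coefficient as twice the area between the diagonal and the curve (the Pareto sample is only an illustrative input).

```python
import numpy as np

def lorenz_and_gini(wealth):
    """Empirical Lorenz curve and Gini coefficient (twice the area between
    the diagonal and the Lorenz curve) for non-negative wealth samples."""
    w = np.sort(np.asarray(wealth, dtype=float))
    F = np.arange(1, w.size + 1) / w.size            # cumulative population share
    L = np.cumsum(w) / w.sum()                       # cumulative wealth share
    F = np.concatenate(([0.0], F))                   # start the curve at the origin
    L = np.concatenate(([0.0], L))
    gini = 2.0 * np.trapz(F - L, F)                  # twice the area between diagonal and curve
    return F, L, gini

rng = np.random.default_rng(1)
_, _, g = lorenz_and_gini(rng.pareto(2.5, size=100_000))   # illustrative heavy-tailed sample
print(round(g, 3))
```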
|
We introduced a variant of the Yard-Sale Model for which the Gini coefficient of economic inequality monotonically increases under the resulting continuum dynamics yet the rate of change in time of the Gini coefficient permits an upper bound. The way in which this bound holds is similar to the entropy – entropy production bounds for nonlinear Fokker-Planck equations. In the econophysics case, the twin results of Corollary 4.3 and Theorem 4.5 may be interpreted as the adage wealth begets wealth but with the constraint that the accumulation of wealth into a small portion of society begins to limit how quickly more can be extracted from the poor.
|
A Lorenz curve represents a distribution of wealth in the unit square, $[0,1]\times[0,1]$, by plotting on the abscissa the fraction of a population with wealth less than $w$ and the fraction of total wealth held by this subset of the population on the ordinate. More precisely, the Lorenz curve is a $w$-parameterized plot of
|
B
|
Table 4 and Figure 8 display the pricing results up to 100 dimensions. It is clear that the LSM does not perform as well as the DKLs in high-dimensional MJD. When $d\geq 60$, the pricing errors of LSM are greater than 5% and even reach 20% in 100 dimensions. In contrast, the maximum error of DKL200 is lower than 1% in most of the scenarios. This result shows that the DKL can be seen as a competitive alternative to LSM for pricing high-dimensional American options under MJD.
|
A common approach to mitigate the curse of dimensionality is the regression-based Monte Carlo method, which involves simulating numerous paths and then estimating the continuation value through cross-sectional regression to obtain optimal stopping rules. [1] first used spline regression to estimate the continuation value of an option. Inspired by his work, [2] and [3] further developed this idea by employing least-squares regression. Presently, the Least Squares Method (LSM) proposed by Longstaff and Schwartz has become one of the most successful methods for pricing American options and is widely used in the industry. In recent years, machine learning methods have been considered as potential alternative approaches for estimating the continuation value. Examples include kernel ridge regression [4, 5], support vector regression [6], neural networks [7, 8], regression trees [9], and Gaussian process regression [10, 11, 12]. In subsequent content, we refer to algorithms that share the same framework as LSM but may utilize different regression methods as Longstaff-Schwartz algorithms. Besides estimating the continuation value, machine learning has also been employed to directly estimate the optimal stopping time [13] and to solve high-dimensional free boundary PDEs for pricing American options [14].
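For concreteness, a minimal version of the Longstaff-Schwartz procedure for a Bermudan put under geometric Brownian motion is sketched below; the polynomial basis, parameters, and payoff are illustrative choices, not those of the cited papers.

```python
import numpy as np

def lsm_bermudan_put(s0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                     n_steps=50, n_paths=100_000, deg=3, seed=0):
    """Least-Squares Monte Carlo (Longstaff-Schwartz) for a Bermudan put under GBM.
    Continuation values are estimated by polynomial regression on in-the-money paths."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    s = s0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
    cash_flow = np.maximum(K - s[:, -1], 0.0)                  # payoff at maturity
    for t in range(n_steps - 2, -1, -1):
        cash_flow *= np.exp(-r * dt)                           # discount one step back
        payoff = np.maximum(K - s[:, t], 0.0)
        itm = payoff > 0
        if itm.sum() > deg + 1:
            coef = np.polyfit(s[itm, t], cash_flow[itm], deg)  # regression of the continuation value
            continuation = np.polyval(coef, s[itm, t])
            exercise = payoff[itm] > continuation
            idx = np.where(itm)[0][exercise]
            cash_flow[idx] = payoff[itm][exercise]             # exercise now on these paths
    return np.exp(-r * dt) * cash_flow.mean()

print(round(lsm_bermudan_put(), 3))
```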
|
Figure 5 illustrates the pricing error and computational time of DKL methods with various numbers of inducing points in 2-dimensional and 50-dimensional cases. It is noteworthy that there are no significant increases in computation time as the dimensions increase, leading to the conclusion that DKL models are not susceptible to the curse of dimensionality. Additionally, an increase in the number of inducing points may lead to higher computation time and improved accuracy. Table 2 presents the results of the GPR with conventional RBF kernel. Together with Figure 5, we notice that DKL models with a sufficient number of inducing points ($M\geq 40$) are both faster and more accurate than GPR. Furthermore, the pricing errors of GPR exceed 5% in the 50-dimensional case, suggesting the potential necessity of incorporating deep kernel learning into the Longstaff-Schwartz algorithm. Table 3 illustrates the influence of the feature extractor structure on pricing accuracy. According to Table 3, the performance of the model with the ($d$-31250-2) feature extractor is inferior to that of narrower feature extractors. This may be attributed to the heightened risk of overfitting in wider neural networks when estimating the continuation value, as a larger number of parameters requires training. The data also suggests that employing a neural network with a deep architecture as a feature extractor can significantly enhance accuracy. Figure 6 summarizes the behavior of the DKL models in both the 5-dimensional and 50-dimensional cases while varying the number of training iterations. It is observable that the pricing error remains relatively small in the $d=5$ case after 1250 iterations, whereas the $d=50$ case requires a greater number of iterations to converge.
|
In this work, we will apply a deep learning approach based on Gaussian process regression (GPR) to the high-dimensional American option pricing problem. The GPR is a non-parametric Bayesian machine learning method that provides a flexible solution to regression problems. Previous studies have applied GPR to directly learn the derivatives pricing function [15] and subsequently compute the Greeks analytically [16, 17]. This paper focuses on the adoption of GPR to estimate the continuation value of American options. [10] initially integrated GPR with regression-based Monte Carlo methods, testing its efficacy on Bermudan options across up to five dimensions. [11] further explored the performance of GPR in high-dimensional scenarios through numerous numerical experiments. They also introduced a modified method, the GPR Monte Carlo Control Variate method, which employs the European option price as the control variate. Their method adopts GPR and a one-step Monte Carlo simulation at each time step to estimate the continuation value for a predetermined set of stock prices. In contrast, our study applies a Gaussian-based method within the Longstaff-Schwartz framework, requiring only a global set of paths and potentially reducing simulation costs. Nonetheless, direct integration of GPR with the Longstaff-Schwartz algorithm presents several challenges. First, GPR's computational cost is substantial when dealing with large training sets, which are generally necessary to achieve a reliable approximation of the continuation value in high-dimensional cases. Second, GPR may struggle to accurately estimate the continuation value in high-dimensional scenarios, and we will present a numerical experiment to illustrate this phenomenon in Section 5.
|
Valuing an American option involves an optimal stopping problem, typically addressed through backward dynamic programming. A key idea is the estimation of the continuation value of the option at each step. While least-squares regression is commonly employed for this purpose, it encounters challenges in high dimensions, including the lack of an objective way to choose basis functions and high computational and storage costs due to the necessity of calculating the inverses of large matrices. These issues have prompted us to replace it with a deep kernel learning model. The numerical experiments show that the proposed approach outperforms the least-squares method in high-dimensional settings and does not require specific selection of hyper-parameters in different scenarios. Additionally, it maintains a stable computational cost despite increasing dimensions. Therefore, this method holds promise as an effective solution for mitigating the curse of dimensionality.
|
D
|
$S(z,y)=\int_{0}^{y-z}\ell(s)\,\mathrm{d}s.$
|
The optimal coupling of the MK minimisation problem induced by the scoring function given in (14) is the comonotonic coupling.
|
The optimal coupling of the MK minimisation problem induced by the score given in (11) is the comonotonic coupling.
|
The optimal coupling of the MK minimisation problem induced by any consistent generalised piecewise linear score is the comonotonic coupling.
|
The optimal coupling of the MK minimisation problem induced by any consistent scoring function for the entropic risk measure is the comonotonic coupling.
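The scores in (11) and (14) are not reproduced in this excerpt, so the following sketch uses the squared-error score (a consistent scoring function for the mean) as a stand-in and checks numerically that the comonotonic (sorted-quantile) coupling attains a lower average score than a random pairing of the same marginals.

```python
import numpy as np

rng = np.random.default_rng(7)
z = rng.lognormal(0.0, 0.6, size=50_000)          # one marginal
y = rng.lognormal(0.2, 0.8, size=50_000)          # the other marginal

def avg_score(z, y):
    return np.mean((z - y) ** 2)                  # stand-in consistent score (squared error)

comonotonic = avg_score(np.sort(z), np.sort(y))   # couple equal quantiles
shuffled = avg_score(z, rng.permutation(y))       # a non-optimal coupling of the same marginals
print(comonotonic < shuffled)                     # expected: True
```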
|
A
|
$U_{i}(x^{i},\overline{x}^{(i)}):=-\exp\left\{-\frac{1}{\delta_{i}}\left(1-\frac{\theta_{i}}{n}\right)x^{i}+\frac{\theta_{i}}{\delta_{i}}\overline{x}^{(i)}\right\}.$
|
In this section, we consider the $n$-agent games. The market model is the same as in [23] and each agent invests in their own specific stock or in a common riskless
|
For the $n$-agent games, we define, for each agent $i=1,\cdots,n$, the type vector
|
Now we formulate the representative agent's optimization problem. Note that this is a mean field game with common noise $B$, so conditional expectations given $B$ will be involved. As argued in [11, 22], conditionally on the Brownian motion $B$, we can get some kind of law of large numbers and asymptotic independence between the agents as $n\rightarrow\infty$, which suggests that the average wealth $\overline{X}_{t}$ and consumption $\overline{c}_{t}$ should be $\mathbb{F}^{B}$-adapted processes. Then, the expected payoff of the representative agent is
|
Each agent derives a reward from their discounted inter-temporal consumption and final wealth; to be specific, for agent $i$, the expected payoff is
|
D
|
The fees paid by non-atomic arbitrage transactions repeatedly exceed current block rewards on the Ethereum PoS consensus layer. For instance, their value exceeds the current consensus layer block reward by more than a factor of 10 in 15,360 blocks during our data collection period, and we further note that their value measured in fees can be seen as a lower bound for the profit that can be extracted.
|
Non-atomic arbitrage opportunities existed ever since the launch of DEXes, as these naturally arise when you have two markets quoting prices for the same assets. However, Ethereum’s transition from Proof-of-Work (PoW) to Proof-of-Stake (PoS) in September 2022 marked a watershed moment, due to the changes in block building. On Ethereum PoS time is divided into slots with each slot being assigned to a single known validator, i.e., the PoS equivalent of a miner, who is responsible for proposing a block and extending the chain. While validators are assigned a slot, around 90% of blocks are no longer built by the validator themselves, but are instead outsourced through the novel proposer-builder separation (PBS) to a small set of specialized builders [3]. These builders bid to have their assembled block selected by the validator. The acquired block building right is valuable partly due to users paying tips for block inclusion, but, more importantly, due to maximal extractable value (MEV), i.e., the value that can be extracted by strategically ordering, including or excluding transactions. Trades exploiting non-atomic arbitrage opportunities make up a part of MEV as arbitrageurs want to be the first to exploit this opportunity, and therefore are keen to be included at the top of the block [4].
|
To provide a better understanding of non-atomic arbitrage, we go through a case study of block 18,360,789 – a block with a significant price change in the lead-up to the block, i.e., the time between the previous block proposal and the block proposal itself. In Figure 4, we plot the time and the value of bids from the builders. Note that we highlight bids from rsyncbuilder (shown in red) and beaverbuild (shown in blue), as these two builders were previously identified as having integrated searchers that perform these non-atomic arbitrage trades [4]. The bids from all remaining builders are shown in yellow. We further indicate the time of the bid corresponding to the block that was chosen by the proposer, i.e., the relays deliver the highest bid to the proposer when the proposer requests it. Finally, we also plot the relative price change of ETH-USDT and BTC-USDT on Binance.com during the same time. Recall that USDT is a stablecoin pegged to the $.
|
Coming back to Figure 4, we can observe that around five seconds after the price starts to change on Binance.com the bids start to increase. At this point, the price difference appears to be big enough for non-atomic arbitrage to be profitable. Further, we find that bids from the builders that are associated with non-atomic arbitrage transactions are higher than those from the rest and that the bids continue to increase as the ETH and BTC prices on Binance.com increase. The block chosen by the proposer was built by rsyncbuilder and its bid, i.e., the value received by the proposer, was a staggering 10.32 ETH (approximately $23,000 at the time of this writing). In Table I, we take an in-depth look at the non-atomic arbitrage trades we identified with our heuristics, which we will introduce in the following Section 5. In the block, there were 26 transactions with volume exceeding $6 million that we identified as performing non-atomic arbitrage trades. Remarkably, all but three of these transactions were by the rsyncsearcher3 – a searcher we identified to be linked to the rsyncbuilder that won the block (cf. Section 6.3). Additionally, we highlight that the rsyncsearchers paid a remarkable 11.15 ETH (i.e., more than the PBS bid) for their transactions. Thus, the rsyncbuilder built a high-value block largely attributed to fees paid by trades we labeled as non-atomic arbitrage by its own integrated searcher.
|
Previous works [6, 18] have demonstrated that MEV (i.e., high-value transactions) presents a risk to the consensus layer in PoW. To be exact, the consensus is vulnerable to time-bandit attacks, as it can be rational for the block proposer to fork the blockchain to exploit MEV in previous blocks themselves. Re-orgs, required by time-bandit attacks, have become harder in Ethereum PoS [30], but regardless such high-value transactions present a challenge to the consensus layers. For instance, an entity controlling a significant proportion of the staking power could purposefully withhold attestations for blocks preceding its turn as block proposer to increase the chance of a possible re-org during its turn as proposer being successful. Note that the biggest staking pool currently controls around one-third of the staking power [31] and that the losses from missed attestations are minimal.
|
D
|
In summary, current research on LLMs in financial applications aligns with and reinforces the methodologies underpinning each component of the proposed system. However, to the best of our knowledge, the presented approach is distinct both in its design and evaluation methodology, as it leverages multi-modal financial data, instead of merely news or news with historical prices, to deliver actionable and interpretable investment recommendations for the analyzed stocks while outperforming high-performing ETFs. Unlike traditional methods that lean heavily on quantitative analysis, where sentiment indicators are used as features of predictive models, MarketSenseAI emphasizes language understanding and reasoning to generate investment insights after processing numerical and text data. This approach allows for the provision of detailed, AI-generated explanations for each recommendation, enhancing the interpretability and trustworthiness of the investment decisions. What is more, the evaluation considers transaction costs and the number of trades, highlighting MarketSenseAI's applicability in real-world settings.
|
The model’s output, structured in a concise format, includes a decision (”buy”, ”sell”, or ”hold”) along with a clear, step-by-step explanation of the reasoning behind this choice. The terms ”buy” and ”sell” are defined within the context of portfolio positioning (long and short positions, respectively), while ”hold” indicates no inclusion in the portfolio’s composition regarding the specific stock.
|
MarketSenseAI’s architectural framework, depicted in Figure 1, merges four core components responsible for data inputs with a fifth component to facilitate the final recommendation (i.e., buy, hold, or sell). This component synthesizes all the information and provides a concise explanation for the respective decision. Each component is built upon OpenAI’s API and employs the GPT-4 model (OpenAI, 2023a), utilizing zero-shot prompting and in-context learning to execute distinct tasks (Dong et al., 2022).
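A minimal sketch of the kind of zero-shot call such a component might issue through OpenAI's Python client is shown below; the prompt wording, temperature, and helper name are illustrative assumptions, not the authors' actual prompts or configuration.

```python
from openai import OpenAI  # requires the OPENAI_API_KEY environment variable

client = OpenAI()

def run_component(task_instructions: str, payload: str, model: str = "gpt-4") -> str:
    """Zero-shot call: component-specific instructions go in the system message,
    the raw input data (news, fundamentals, ...) in the user message."""
    response = client.chat.completions.create(
        model=model,
        temperature=0.2,
        messages=[
            {"role": "system", "content": task_instructions},
            {"role": "user", "content": payload},
        ],
    )
    return response.choices[0].message.content
```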
|
The implementation of MarketSenseAI was executed using Python 3.11, leveraging the LangChain framework (Chase, 2022) for prompt construction and utilizing OpenAI’s API for accessing the GPT-4 model. Each component of MarketSenseAI, as outlined in Section 3, functions independently, running as a standalone script. The outputs from these components are systematically stored in a datastore, ensuring organized and efficient data management. As MarketSenseAI accesses GPT-4 through OpenAI’s API, the operational aspects fall under OpenAI’s operational jurisdiction. This arrangement allows our framework to leverage the capabilities of GPT-4 efficiently, ensuring consistent processing times per stock, while offloading the computational and hardware management responsibilities to OpenAI’s robust infrastructure.
|
The signal generation component, as the final stage in the MarketSenseAI pipeline (Figure 1), integrates the textual outputs from the news, fundamentals, price dynamics, and macroeconomic analysis components. This process results in a comprehensive investment recommendation for a specific stock, paired with a detailed rationale.
|
B
|
The proof of this quadrature discretisation result follows from an application of Theorem 2.1 in Grzelak (2022a). In B, we analogously obtain the pdf of the randomised component process $Y^{\vartheta}_{j}(t)$ in this way.
|
To address the limitation highlighted earlier, we now introduce a local volatility model for the composite process that is fully defined within the well-established framework of stochastic processes with deterministic parameters, while maintaining the marginal distributions obtained from the quadrature discretization of the randomised composite process.
|
that has no added layer of randomisation. We show that the solution of this SDE has a probability density function of the same shape as is obtained from the randomised composite process $X^{\bm{\vartheta}}(t)$, which implies a parametrisation of the SDE such that the marginal densities of the local volatility model and the randomised composite process coincide up to the quadrature discretisation error of the randomised model.
|
The discretization achieved through Gauss quadrature is significant for the applicability of this technique, as it links the random composite process to a finite number of concrete conditional component processes. This is valuable in applications like the process calibration, where each conditional process may benefit from an analytical form allowing rapid calibration.
|
The main result of this section is the stochastic switching equivalent to 3.6, an SDE of local-volatility type which may be defined in the framework of the probability space $(\bar{\Omega},\bar{\mathcal{F}},\bar{\mathbb{P}})$. The solution of this SDE exhibits marginal distributions that align with those of the quadrature discretisation of the randomised composite process.
|
C
|
The dashed lines correspond to the convexity calculated in the Hull-White model using Eqs. (E.1) and (E.2). This model is calibrated to the ATM implied volatility of a caplet with the same contract duration.
|
We also compare the 3M SOFR futures convexity of Eq. (5.8) with the 3M Eurodollar convexity, where both include the effects of option smile and skew. For more details about the calculation of the Eurodollar convexity, see Appendix D.
|
Fig. 2 shows that the impact of not correctly modelling the option smile and skew on the futures convexity may be around 20% for short maturities.
|
the impact of correctly capturing market skew and smile in convexity by comparing with the convexity extracted using a Hull-White model calibrated to the at-the-money
|
Finally, we show the difference between the convexity of 3M SOFR futures and the convexity of the Eurodollar future in Fig. 3.
|
B
|
Not_Invested $\times$ High-Ability $\times$ $\mathbf{P^{e}}$
|
This table reports marginal effects of panel logistic regressions using subject-level random effects and a cluster-robust VCE estimator at the matched group level (standard errors in parentheses). The dependent variable is a binary variable that equals 1 if the consumer experiences undertreatment.
|
This table reports marginal effects of panel logistic regressions using subject-level random effects and a cluster–robust VCE estimator at the matched group level (standard errors in parentheses). The dependent variable is a binary variable that equals 1 if the consumer switched to a new expert in the current round. Undertreated, Overtreated and Invested_LR are lagged variables (one round).
|
This table reports marginal effects of panel logistic regressions using subject-level random effects and a cluster-robust VCE estimator at the matched group level (standard errors in parentheses). The dependent variable is a binary variable that equals 1 if the consumer experiences overtreatment.
|
This table reports results of panel ordered logistic regressions using subject-level random effects and a cluster-robust VCE estimator at the matched group level (standard errors in parentheses). The dependent variable is an ordinal variable that captures the number of consumers (0 - 3) who approached the expert in the current round. Undertreated and Overtreated are lagged variables (one round).
|
D
|
Trader: Individuals or entities engaged in the purchase and sale of perpetual contracts. These traders furnish collateral to maintain and manage their positions through the trading of such contracts.
|
Matching Module: This module is entrusted with storing, correlating, and executing purchase and sale orders of contracts.
|
Risk Control Module: This module is vital for assessing and supervising the position of every trader account, contingent on the orders that have been executed. Its role is pivotal in ascertaining that the provided collateral is sufficient to offset potential deficits. Furthermore, in specific scenarios, it assumes control of a trader’s position and proceeds with its liquidation.
|
Custody Module: This module is responsible for ensuring the security of assets across all trader accounts. It consistently updates and retains the latest balance details and facilitates both deposit and withdrawal operations initiated by traders.
|
The Oracle Pricing Model(Fig. 6) and VAMM Model (Fig. 7) gravitate toward a greater degree of decentralization. Both models leverage smart contracts for order matching, with Liquidity Providers assuming the role of direct counterparties to traders, engendering an indirect mode of trade execution among traders. Nevertheless, it is imperative to acknowledge that both models still rely upon centralized constituents. The involvement of Oracles and Keepers in the Risk Control Module remains a requisite for effecting the liquidation process. Moreover, the Oracle Pricing Model hinges upon Oracles for the determination of precise trade prices.
|
C
|
(iii) The Pensions Regulator’s interventions, arguing for higher levels of prudence, have specifically referenced the high SfS and TP liabilities, as reported by USS, see Section 5.1, suggesting that these invite regulatory concern.
|
Turning next to the USS 2023 discussion of the funding ratio condition as reproduced in Appendix A.2: USS makes clear for the first time in any consultation material that their SfS modelling ‘comfortably passes’ the benefit payment condition but that the funding ratio condition is not quite passed. Any reasonable reading would suggest that the funding ratio condition dominates and therefore sets the SfS liabilities for the 2023 valuation. The same USS discussion also makes clear that the funding ratio is highly sensitive to the input assumptions and particularly binding in the early years of the simulation.
|
USS public definitions of SfS made no, or minimal, reference to the funding ratio condition until 2023. Stakeholders that did reference SfS (including JEP, UUK and UCL) did not mention the funding ratio condition, see Appendix A.2.
|
The funding ratio condition is shown in general to dominate over the benefit payment condition, and by a significant margin. This strongly indicates that the funding ratio condition is setting the SfS liabilities (USS indicate the funding ratio also dominates the 2023 valuation, Sec. 3.3 and App. A.2).
|
The funding ratio condition as described in Section 3.3 does not measure the ability to pay pension benefits. It is also clear from Section 3.2 that the funding ratio condition dominates in setting the SfS liabilities. This means the funding ratio condition obscures the other SfS condition, the benefit payment condition.
|
B
|
We calculate the GA at a confidence level of $q=99.9\%$ over a time horizon of $T=1$ year.
|
For each of the inputs, we compute the GA according to each of the three approaches for the confidence level $q=0.999$. Results are summarized in Table 4.3 and Figure 4.2. As can be observed, the prediction error for the first order GA approximation from (A.3) is on average about twice as large as for the NN-based GA. This documents that the NN GA significantly outperforms the analytic GA with respect to the approximation error. When considering only the very small portfolios with less than 25 obligors as shown in Table 4.4, the effect is comparable. Overall, the results show that even for high quantiles such as $q=0.999$ the NN can be efficiently trained to accurately predict the GA for small and concentrated portfolios.
|
We compute the NN GA and the analytic approximation GA in both the actuarial CreditRisk+ model and the MtM approach and calculate the percentage error with respect to the exact GA obtained by MC simulations with IS.
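As a generic illustration of this comparison (not the CreditRisk+ or MtM specification used in the paper), the sketch below estimates a granularity adjustment by plain Monte Carlo in a one-factor Gaussian threshold model, taking the GA as the gap between the simulated portfolio VaR and the VaR of the systematic, infinitely granular loss; no importance sampling is applied and all parameters are illustrative.

```python
import numpy as np
from scipy.stats import norm

def granularity_adjustment(pd_, lgd, exposure, rho=0.2, q=0.999, n_sims=100_000, seed=0):
    """GA ~ VaR_q(L) - VaR_q(E[L | systematic factor]) in a one-factor Gaussian model."""
    rng = np.random.default_rng(seed)
    thresh = norm.ppf(pd_)                                        # default thresholds
    s = rng.standard_normal(n_sims)[:, None]                      # systematic factor
    eps = rng.standard_normal((n_sims, len(pd_)))                 # idiosyncratic factors
    defaults = np.sqrt(rho) * s + np.sqrt(1.0 - rho) * eps < thresh
    loss = (defaults * lgd * exposure).sum(axis=1)                # exact (finite) portfolio loss
    cond_pd = norm.cdf((thresh - np.sqrt(rho) * s) / np.sqrt(1.0 - rho))
    systematic_loss = (cond_pd * lgd * exposure).sum(axis=1)      # infinitely granular counterpart
    return np.quantile(loss, q) - np.quantile(systematic_loss, q)

pds = np.full(50, 0.01); lgds = np.full(50, 0.45); expos = np.full(50, 1.0)
print(granularity_adjustment(pds, lgds, expos))
```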
|
First we investigate the effect of reducing the number of obligors by gradually deleting obligors from the originally sampled portfolio, depicted in Figure 5.1 (a) for the actuarial and in Figure 5.2 (a) for the MtM approach. We observe that with an increasing number of obligors the GA tends to decrease while this relation is not necessarily monotone since adding single obligors with large exposure or high PDs might also increase the GA as can be clearly observed in the MtM case. Figure 5.1 (a) further shows that the approximate GA in the actuarial approach overestimates the true GA (approximated by the NN) significantly which is also in line with the results in Figure 4.1. In the MtM approach, it highly depends on the choice of the portfolio whether the approx. GA over- or underestimates the true GA with overestimation occurring more frequently than underestimations (compare Figure 5.2 (a) and also Figure 4.2).
|
The GAs for the described MDB portfolios are reported in Tables 5.2 and 5.3. Our results for the percentage error between the NN GA and the exact MC GA show that the NN approach is highly accurate for both the actuarial CreditRisk+ model and the MtM approach. Comparing the results to the percentage error of the approximate GAs documents that our NN GA clearly outperforms the respective analytical approximations. For the CreditRisk+ model the percentage error for the approximate GA is up to 75 times higher than the one for the NN GA for the case ${\rm ELGD}=45\%$ and even more for ${\rm ELGD}=10\%$. The approximate GA for the MtM setting is more accurate in general but still performs worse than the NN GA, with a percentage error of up to 14 times higher than the one for the NN GA.
|
B
|
Another notable advance in LLMs is the development of a multi-agent framework. Park et al. (2023)[3] suggest a novel mode of interaction among LLMs that mimics human collaborative dynamics. This framework allows individual LLMs to specialise in distinct areas of expertise, enabling them to work in concert towards a common goal. The synergy achieved through this collaboration enhances the overall performance of the LLM ecosystem, as evidenced by the specialised and collaborative efforts detailed by Li et al. (2024)[4]. Moreover, the scope of application for the multi-agent framework transcends routine tasks. Boiko et al. (2023)[5] showcase its aptitude in conducting complex scientific research. This capability indicates that when deployed within such a framework, LLMs can effectively support intricate and knowledge-intensive tasks.
|
This proposed multi-agent AI framework offers a comprehensive solution that automates the process of anomaly detection in tabular data, follow-up analysis and reporting. This workflow can not only improve efficiency but also enhance the accuracy and reliability of financial market analysis. By reducing the reliance on manual processes, the framework presents the potential to reduce human error and bias. Furthermore, the rapid processing capabilities of the AI agents could shorten the time from anomaly detection to action, enabling more timely and effective responses to market anomalies.
|
The demonstration of AI in financial market analysis through a multi-agent workflow showcases the potential of emerging technologies to improve data monitoring and anomaly detection. Integrating LLMs with traditional analysis methods could significantly enhance the precision and efficiency of market oversight and decision-making. This approach promises to streamline the review of data, enabling quicker detection of market anomalies and timely information for decision-makers.
|
These recent technology advances offer a pathway to significantly streamline, and potentially automate, the labour-intensive processes of traditional financial market data analysis. This paper introduces a framework designed to replicate and enhance the financial market data validation workflow. By employing a multi-agent AI model, the framework intends to harness the potential of AI to elevate efficiency while maintaining, possibly augmenting, the rigor and thoroughness of established data analysis methodologies. The overarching goal of this initiative is to merge AI’s autonomy with the traditional analysis methods, which can redefine the paradigm of data analysis in financial markets.
|
This advance in AI-driven financial market data analysis suggests a reconfiguration of the data analysis and decision-making landscape. With ongoing advances in AI technology, the future envisages a framework capable of autonomously executing increasingly complex analytical tasks, diminishing the need for human oversight. This evolution towards an AI-centric approach in financial market data analysis is anticipated not only to streamline anomaly detection and review procedures but also to find applicability in various areas requiring complex data analytical capabilities.
|
C
|
Group C - Untrained Group. The third group serves as a control to understand the performance improvement obtained from training. The agents in this group load the random initialized parameters and run simulations without training.
|
Group C - Untrained Group. The third group serves as a control to understand the performance improvement obtained from training. The agents in this group load the random initialized parameters and run simulations without training.
|
Group B - Testing Group. The agents in this group are pre-trained for 10 hours and are used in the simulation without continuing training.
|
For each random seed, we generate the parameters of the neural networks for the Group C agents directly. Each agent in Group C is trained for 10 hours and their parameters become the parameters used for each agent in Group B. The same parameters are used to initialize the agents in Group A. We describe the process in detail as it is similar to a matched pairs testing design that minimizes randomness for comparison purposes. This is important because we only repeat this process for 10 random seeds, that is, 10 simulations. Each simulation takes 20 hours when running all of them in parallel, and the study requires substantial computational resources. This process is illustrated in Figure 2.
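The seeding and parameter-sharing logic can be summarised in a few lines; the sketch below is a schematic with hypothetical parameter shapes and a user-supplied training routine, not the actual agent implementation.

```python
import copy
import numpy as np

def make_agent_groups(seed, train_fn, hours=10):
    """Matched-pairs setup: one random initialisation per seed; Group C keeps the
    untrained parameters, while a trained copy seeds Group B (frozen during the
    simulation) and Group A (continual learning). train_fn is a placeholder."""
    rng = np.random.default_rng(seed)
    init_params = {"w": rng.standard_normal((64, 64)), "b": rng.standard_normal(64)}
    group_c = copy.deepcopy(init_params)                     # untrained control
    trained = train_fn(copy.deepcopy(init_params), hours)    # assumed 10-hour training routine
    group_b = copy.deepcopy(trained)                         # used without further training
    group_a = copy.deepcopy(trained)                         # keeps learning during the simulation
    return group_a, group_b, group_c

# no-op training stand-in, just to show the call pattern
a, b, c = make_agent_groups(seed=0, train_fn=lambda params, hours: params)
```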
|
In this experiment, we only use continual learning Group A MM agents, as they need to adapt to changing market conditions. The LT agents are also from Group A, but their reward function is changed through the evolution of the target buy/sell parameters. Specifically, Figure 9 shows the price process resulting from the activity of these informed LT agents. The four phases separated by dashed red lines are: Sell (0.3/0.4), Buy (0.4/0.35), Balanced buy and sell (0.4/0.4), and a last Buy (0.4/0.3), where the numbers in parentheses indicate (buy fraction/sell fraction). We expect to see the price movement aligning with the target buy/sell parameters. Additionally, similar to the previous study, we collect states in the first and the last phases (i.e., steps 0-10,000 when the price goes down and steps 30,000-36,000 when the price goes up). We feed these states to the MM's policy function at the beginning of the day (Before) and at the end of the trading day (After). Figure 10 shows the distributions of the outputs for the action symmetric and asymmetric tweaks. In both scenarios, when the price is going down and when the price is going up, the action symmetric tweak (top row) gets larger; thus the MM agent becomes more conservative and tends to enlarge the spread. This aligns with the findings in the previous experiment. We can also see that the distribution of the price asymmetric tweak becomes negative when prices are going down and positive when prices are going up. This means the learned agents change their expectations of future prices along with the observed market direction. In contrast, the asymmetric tweak average is close to zero for agents that are not continuously trained. This helps explain why the price tends to come back to pre-flash-sale values for the agents in Group B in Figure 7.
|
C
|
However, in the case of image encoders the classification categories are largely non-overlapping. This is perhaps surprising, but the number of ‘things’ in the world is obviously unfathomably large so it wouldn’t be feasible to assign a probability to each of them. Thus, one encoder might have ‘palace’ as a category and another has ‘church’. Similarly, one encoder might have ‘aviary’ while another has ‘orchard’. Thus if we combine multiple encoders together we get different assessments about the likelihood that the images belong to certain categories. Seemingly important for our dataset is that none of the encoders that we used contain a category for ‘house’. If they all did, we might be back in an environment where each encoder returned essentially the same information – that all images were houses.
|
Given the importance of visual aspects of properties in determining real estate values, housing has been used as an application in the computer science literature to test how deep learning can be used to conduct visual content analysis and scene understanding. Law et al. (2019) develop and train two separate architectures to identify features in Google Street View images and satellite images, respectively, in London, UK. These features are used to predict a property’s desirability, which they use to predict housing prices. Like in our study, they find that information about a location’s desirability can improve the accuracy of predictive house price models. However, unlike Law et al. (2019), we show that these gains can be achieved using multiple well-established, pre-trained encoders that economists are becoming more familiar with, using only a single exterior photo from a property.
|
Given the importance of visual features of housing in purchasing decisions, we assess whether image data can improve housing price predictions that come from standard hedonic analyses. Hedonic pricing models offer a relatively straightforward approach to estimate housing prices using observable characteristics about a house and its location and are widely used in economics. However, the standard hedonic pricing model can overlook important but unobservable attributes that influence how houses are priced.
|
These findings demonstrate how deep learning can be used to extract information from images that is unobservable in traditional data. We apply this to housing, where visual details are particularly relevant since these details influence buyers' perception of a property. However, these subtle visual details are not captured by the characteristics included in traditional housing data.
|
While two houses may be similar across these features, many other characteristics influence the perceived worth of each property. These kinds of differences are observable when potential buyers look at photos of, or visit, the property but are not captured in structured housing data.
|
C
|
In our methodology, we employed a two-step process to analyze the impact of COVID-19 on various industries.
|
The graphs outline the variability of Personal Income (PI) across different industry sectors over time, from the first quarter of 2020 to the second quarter of 2023, in response to the COVID-19 shock. The impacts are measured as deviations from the forecasted PI without the pandemic. The key findings include:
|
This study employed time-series analysis to examine the variability in Personal Income (PI) across various industry sectors during COVID-19. To capture the pre-pandemic trends and isolate the impact of COVID-19, ARIMA models were fitted to data up to 2019 Q4 (end of the pre-COVID period) for each industry. This approach allowed for a baseline against which to compare the pandemic period's deviations.
|
Next, we calculated the impact of the pandemic on each industry by comparing these forecasted values with the actual data from 2020 Q1 to 2023 Q2. This comparison was made for each quarter, allowing us to assess both the immediate and extended impacts of the pandemic. The impact was quantified as the difference between the forecasted (expected without COVID-19) and actual PI values across all 13 sectors.
|
First, we forecasted Personal Income (PI) for each industry using ARIMA models, projecting 14 quarters ahead from the first quarter of 2020, based on data up to the end of 2019. This forecasting created a baseline to compare against the actual PI observed during the pandemic. We selected a specific window of the time series data, from 2020 Q1 to 2023 Q2, to focus on the period most likely to be impacted by COVID-19. This window includes both the onset of the pandemic and subsequent quarters, allowing for an analysis of both immediate and longer-term effects.
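A compact version of this two-step procedure, assuming a quarterly PeriodIndex and an illustrative ARIMA order, could look as follows.

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

def covid_impact(pi_series: pd.Series, order=(1, 1, 1), horizon=14) -> pd.Series:
    """Fit ARIMA on pre-2020 data and return actual PI minus the counterfactual forecast.
    Assumes a quarterly PeriodIndex; the ARIMA order is an illustrative choice."""
    pre_covid = pi_series[:"2019Q4"]
    actual = pi_series["2020Q1":].iloc[:horizon]
    fitted = ARIMA(pre_covid.to_numpy(), order=order).fit()
    baseline = fitted.forecast(steps=len(actual))           # no-pandemic baseline path
    return pd.Series(actual.to_numpy() - baseline, index=actual.index, name="impact")
```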
|
D
|
Suppose that there is a pool of identically distributed extremely heavy-tailed losses (i.e., infinite mean), possibly statistically dependent.
|
Each agent (e.g., a reinsurance provider) needs to decide whether and how to diversify in this pool.
|
Moreover, if $u^{*}<a/2k$, i.e., the optimal position of each external agent is very small compared with the total position of each loss in the market, the loss $X_{i}$ for each $i\in[n]$ has to be shared by one internal agent and $k$ external agents to achieve an equilibrium.
|
This is related to the question raised in the Introduction: By Proposition 6, as long as the agent’s risk preference is monotone, an agent should not diversify, under the setting of this section.
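A quick numerical illustration of why diversification can be harmful with infinite-mean losses: for i.i.d. Pareto-type losses with tail index below one, the value-at-risk of an equally weighted mix typically exceeds that of a single undiversified loss (parameters are illustrative).

```python
import numpy as np

rng = np.random.default_rng(3)
n_sims, q, alpha = 1_000_000, 0.95, 0.8                    # tail index < 1: infinite mean
x1 = (1.0 - rng.random(n_sims)) ** (-1.0 / alpha)          # Pareto-type losses
x2 = (1.0 - rng.random(n_sims)) ** (-1.0 / alpha)

var_single = np.quantile(x1, q)                            # keep one loss in full
var_diversified = np.quantile(0.5 * (x1 + x2), q)          # split equally across two losses
print(var_diversified > var_single)                        # typically True: diversification hurts
```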
|
For instance, in the context of reinsurance, $h(x)=x\wedge c$ for some threshold $c\in\mathbb{R}$ corresponds to an excess-of-loss reinsurance coverage; see e.g., OECD (2018).
|
A
|
We take the average of these category-specific stress indicators to compile a comprehensive stress index that reflects the overall market conditions.
|
Finally, we scale the resulting average to fall between 0 and 1 by applying the cumulative distribution function (norm.cdf) to the computed stress index, which normalizes our final index value.
|
We then apply a statistical method called z-scoring to this 10-day sentiment average, which helps us understand how strongly the news is leaning compared to the norm.
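A minimal sketch of this step is given below; the rolling baseline used for the z-score and the window lengths are assumptions, since the excerpt does not specify them.

```python
import pandas as pd
from scipy.stats import norm

def sentiment_stress_signal(daily_sentiment: pd.Series, window=10, lookback=252) -> pd.Series:
    """10-day average sentiment, z-scored against its rolling history and mapped to [0, 1]
    with the normal CDF; the rolling baseline and lengths are assumptions."""
    avg = daily_sentiment.rolling(window).mean()
    z = (avg - avg.rolling(lookback).mean()) / avg.rolling(lookback).std()
    return pd.Series(norm.cdf(z), index=daily_sentiment.index, name="stress_signal")
```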
|
If we focus our analysis on the Sharpe ratio, we can also notice that the strategy based on the Stress index alone always comes second, indicating that the signals emitted by the stress index seem quite robust and more effective than those using the VIX index. Regarding turnover, which measures the frequency of trading within the portfolio, we observe notable differences across strategies. For instance, in the S&P 500 (Table 2), the ’SI+News’ strategy exhibits the highest turnover rate at 13.4, indicating a more active trading approach. This contrasts with strategies like ’SI’ and ’Dynamic SI News’, which have lower turnover rates, suggesting a more passive strategy. The buy and hold strategy has by definition no turnover, as we hold the position forever. It is interesting to note that the Stress Index based strategy is effectively more moderate in terms of turning over the position compared to the VIX based strategy.
|
Because the stress index final result is a number between 0 and 1 thanks to the cumulative distribution function of the normal distribution, we directly get a stress index signal.
|
A
|
Fractional Hot Deck Imputation (FHDI): In this approach (Song et al., 2020), each missing value is replaced with a set of weighted imputed values; the missing value of the recipient unit is replaced by similar values from a donor unit, and the donor-unit values are assigned fractional weights in this prediction.
|
Machine Learning models were deployed by many researchers, including (Leo et al., 2019), for banking risk management. The authors of (Mai et al., 2019) use deep learning models for the same purpose. Other authors (Smiti and Soui, 2020) use deep learning with borderline SMOTE, focusing on imperfect classification. The authors in (Zięba et al., 2016) use ensemble boosted trees for bankruptcy prediction. Similar work has been done by the authors in (Zakaryazad and Duman, 2016) for fraud detection and direct marketing using Artificial Neural Networks. The authors in (Wang et al., 2017) use autoencoder techniques and neural networks with dropout, and compare them with the existing proposed models. The authors in (Aniceto et al., 2020) use logistic regression as the benchmark model for comparing the results of different machine-learning techniques; this model, which describes the relationship between dependent and independent variables and performs predictive analysis, can be used for classification tasks. The authors of (Chen et al., 2016) use the k-nearest neighbors algorithm (k-NN), a non-parametric machine learning method, for the classification of bankruptcy vs. non-bankruptcy and achieved good classification accuracy (Filletti and Grech, 2020). The authors in (Leo et al., 2019) use a decision tree as a classifier for better prediction by allocating weights to it, yielding decisions that are easy to interpret. It is considered a non-parametric algorithm because the tree grows to match the complexity of the classification problem (Bellovary et al., 2007); here the most relevant feature acts as the root node, and the next most relevant features form its children. The authors of (Zięba et al., 2016) combine multiple decision trees into random forests (random decision forests), an ensemble learning method used for classification and regression tasks, and report very high accuracy. The authors of (Pawełek et al., 2019) and (Kumar and Ravi, 2007) use a gradient-boosting algorithm to predict the bankruptcy of Polish companies: it is first used to remove outliers from the dataset and then to predict bankruptcy, and the authors indicate that removing the outliers with gradient boosting makes it possible to increase the prediction rate. The authors of (Mai et al., 2019) use neural networks and find that they outperform all existing machine learning models in accuracy. Just as each neuron in our brain performs a simple task while together they control complex and challenging cognitive functions (Jouzbarkand et al., 2013), each artificial neuron can be related mathematically to a logistic regression, and therefore the overall artificial neural network can be considered as multiple logistic regression classifiers attached to each other (Mai et al., 2019).
|
Fig. 5 shows the error in the prediction of individual values with different methods. As can be observed from the figure, the proposed granular prediction method yields consistently low error over all the years. The performance of FHDI and the autoencoder is equally good in most cases. Note, however, that FHDI needs to repeat the regression process several times with different weights for a single value, while the autoencoder must learn an encoder-decoder architecture with a high-level representation. Compared to these methods, the proposed approach produces a prediction using only a very small segment of the dataset (in these experiments we considered $\delta = 5 \ll d = 64$ and $\eta = 7 \ll N \cong 10{,}000$) and a single regression per value, exploiting the merits of granulation.
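The following loose sketch illustrates one plausible reading of the granular idea, not the paper's exact procedure: select the $\delta$ columns most correlated with the target column, gather the $\eta$ complete rows closest to the incomplete row on those columns, and fit a single regression on that small granule. It assumes that only the target column is missing for the row being imputed.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def granular_impute(X: np.ndarray, row: int, col: int,
                    delta: int = 5, eta: int = 7) -> float:
    """Impute X[row, col] from a small contextual granule (illustrative sketch)."""
    complete = ~np.isnan(X[:, col])
    # Pick the delta columns most correlated (in absolute value) with the target column.
    corr = np.array([abs(np.corrcoef(X[complete, j], X[complete, col])[0, 1])
                     if j != col else -1.0
                     for j in range(X.shape[1])])
    feats = np.argsort(corr)[-delta:]
    # Gather the eta complete rows closest to the incomplete row on those columns.
    dist = np.linalg.norm(X[complete][:, feats] - X[row, feats], axis=1)
    granule = np.where(complete)[0][np.argsort(dist)[:eta]]
    # A single regression fitted on the granule predicts the missing value.
    model = LinearRegression().fit(X[granule][:, feats], X[granule, col])
    return float(model.predict(X[row, feats].reshape(1, -1))[0])
```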
|
The overall method defined here for bankruptcy prediction has proven effective over all five years of the Polish dataset. The newly formulated data imputation technique with contextual granules has been compared with three other popular methods and resulted in higher or almost equal accuracy, even compared to autoencoder-based estimators. Moreover, this imputation method remained robust when tested with increasing rates of missing values, and hence has proven its reliability. The effectiveness of the entire pipeline has also been demonstrated through the impacts of feature reduction and data balancing. The end-to-end pipeline designed here achieves accuracies above 90% for the prediction of bankruptcy in most cases. However, the proposed data imputation method could be verified with other high-dimensional datasets, and its prediction accuracy with categorical data could be checked. This imputation method may not be very efficient once the impurity exceeds 50%, since more than half of the database may need to be scanned while forming the granules around each missing entry, making it computationally demanding. Further, the pipeline designed here could also be validated with other bankruptcy datasets.
|
Autoencoder: Autoencoders have become popular nowadays for missing value imputation (Gjorshoska et al., 2022). Here the autoencoder approximates the missing values by learning a higher-level representation of its input.
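A minimal PyTorch-style sketch of this idea is shown below: a small dense autoencoder reconstructs (zero-filled) inputs, and its reconstruction is used to fill in only the missing entries. The architecture sizes are arbitrary assumptions, not those of the cited work, and the training loop is omitted.

```python
import torch
import torch.nn as nn

class ImputationAutoencoder(nn.Module):
    """Minimal dense autoencoder (assumed 64 input features)."""
    def __init__(self, n_features: int = 64, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                     nn.Linear(32, latent_dim), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                     nn.Linear(32, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def impute(model: ImputationAutoencoder, x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Replace missing entries (mask == 1) with the autoencoder reconstruction."""
    with torch.no_grad():
        recon = model(torch.nan_to_num(x))   # zero-fill NaNs before encoding
    return torch.where(mask.bool(), recon, x)
```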
|
D
|
The main fields that can be found in a transaction are: bank, account, transaction date, amount, relative balance, and several text fields (description, reference, payer and payee).
|
The information used to build the training dataset consists of the 3 months of banking transactions prior to the signature date of 4763 loans given between 2017 and 2023, together with daily account balances and financial product information for the same period. With this information, more than 350 variables are generated for each customer-loan pair, many of which are obtained through an automatic feature engineering process using featuretools, an open-source Python framework which uses a technique called deep feature synthesis [14] to compute features for relational datasets.
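As an illustration of this kind of automatic feature engineering, the sketch below builds a tiny featuretools EntitySet with hypothetical loan and transaction tables and runs deep feature synthesis over it. Table and column names are made up, and call signatures may differ slightly across featuretools versions.

```python
import featuretools as ft
import pandas as pd

# Hypothetical tables: one row per customer-loan pair, and raw transactions.
loans = pd.DataFrame({"loan_id": [1, 2],
                      "signature_date": pd.to_datetime(["2022-01-15", "2022-02-01"])})
transactions = pd.DataFrame({"tx_id": range(4), "loan_id": [1, 1, 2, 2],
                             "amount": [-30.0, 1200.0, -75.5, 900.0],
                             "tx_date": pd.to_datetime(["2021-11-01", "2021-12-01",
                                                        "2021-11-20", "2021-12-20"])})

es = ft.EntitySet(id="banking")
es = es.add_dataframe(dataframe_name="loans", dataframe=loans,
                      index="loan_id", time_index="signature_date")
es = es.add_dataframe(dataframe_name="transactions", dataframe=transactions,
                      index="tx_id", time_index="tx_date")
es = es.add_relationship("loans", "loan_id", "transactions", "loan_id")

# Deep feature synthesis: per-loan aggregations of transactions (SUM, MEAN, COUNT, ...).
feature_matrix, feature_defs = ft.dfs(entityset=es, target_dataframe_name="loans",
                                      agg_primitives=["sum", "mean", "count"])
```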
|
The loan application process begins with the customer specifying the characteristics of his/her desired loan, and continues with the declaration of certain personal data, including both socio-demographic and professional information. Then, the customer is required to aggregate their bank accounts, which provides bank movements and financial product data to the process. Once the accounts are successfully aggregated, a classifier model is triggered which associates each transaction with one of 70 categories and can also detect the company or commerce associated with the bank transaction. The categorisation of transactions provides us with fundamental information to be used in our feature extraction process, through which we obtain more than 350 features to feed Fintonic’s risk model, called the FinScore model.
|
The risk model of Wanna, Fintonic’s financial institution, is a binary classification model trained to predict a customer’s probability of default in the next 12 months based on their last 3 months of aggregated banking information. To train the model, we used loans from Wanna’s loan history that satisfy one of the following conditions: the loan has been fully repaid, or the customer has defaulted, defined as a loan with a receipt unpaid for more than 3 months. Thus, we define the binary target of the model as 1 in case of default, and 0 otherwise.
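A toy pandas version of this labelling rule, with hypothetical column names, might look as follows; loans that are neither fully repaid nor in default are left unlabelled and excluded from training.

```python
import pandas as pd

def label_default(loans: pd.DataFrame) -> pd.Series:
    """1 = default (a receipt unpaid for more than 3 months), 0 = fully repaid, <NA> = excluded."""
    target = pd.Series(pd.NA, index=loans.index, dtype="Int64")
    target[loans["fully_repaid"]] = 0
    target[loans["months_receipt_unpaid"] > 3] = 1
    return target
```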
|
It is important to note that the client can add bank accounts of which he/she is not necessarily the account holder, which is a problem since the loan sanction should be based only on those accounts of which he/she is the account holder. For this reason, we have a service to check this beforehand, obtaining the features only for the set of accounts with a positive result. We also rely on third-party data services to complement our gathered data and to comply with the Spanish regulatory framework.
|
B
|
In conclusion, we find that the graph embedding method works better in separating building blocks associated with the same protocol in comparison with FFC.
|
Table 2: Clustering results on building blocks with combinations of node features and building block labels. The best results for each target label are highlighted through gray shading, indicating that the Signature Group node feature produced the optimal clustering outcome evaluated by both target labels.
|
We evaluate the clustering performance by computing the homogeneity, completeness, V-measure, and purity over both target labels and the four node features defined in Section 3.
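For reference, these four scores can be computed as sketched below with scikit-learn (homogeneity, completeness and V-measure directly, purity from the contingency matrix); the variable names are placeholders.

```python
from sklearn.metrics import homogeneity_completeness_v_measure
from sklearn.metrics.cluster import contingency_matrix

def clustering_scores(labels_true, labels_pred):
    """Return homogeneity, completeness, V-measure and purity for one clustering."""
    h, c, v = homogeneity_completeness_v_measure(labels_true, labels_pred)
    cm = contingency_matrix(labels_true, labels_pred)
    purity = cm.max(axis=0).sum() / cm.sum()   # dominant true class per cluster
    return {"homogeneity": h, "completeness": c, "v_measure": v, "purity": purity}
```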
|
We observe higher values for the clustering evaluated on the protocol target labels, compared to the financial functionalities.
|
We note that the information used for the building block target label Financial Functionality Category differs from that used as node feature for the Signatures Selectors and the Signatures Group; indeed, the former uses information from the name of the invoked function only, while the latter two use data from all functions of a contract, including their arguments. Moreover, for the target label Protocol, we have labels for all 10,000 building blocks. For the Financial Functionality Category label, instead, not all building blocks contained one of the regular expressions defined in Table 1; these were categorized as ‘other’ and excluded from the evaluation.
|
A
|
$$\tilde{\mathbf{h}}_{i}^{(l)}(t)=\mathrm{ReLU}\Bigg(\sum_{j\in n_{i}([0,t])}\mathbf{W}_{1}^{(l)}\Big(\mathbf{h}_{j}^{(l-1)}(t)\,\big\|\,\mathbf{e}_{ij}\,\big\|\,\boldsymbol{\phi}(t-t_{j})\Big)\Bigg)$$
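A minimal PyTorch sketch of this aggregation is given below, assuming precomputed neighbour embeddings, edge features and interaction times; the time encoding $\boldsymbol{\phi}$ is reduced to a simple learned cosine mapping, which is only a stand-in for the encoder actually used in TGN.

```python
import torch
import torch.nn as nn

class TimeEncoder(nn.Module):
    """Map a time delta to a time_dim-dimensional vector phi(t - t_j)."""
    def __init__(self, time_dim: int):
        super().__init__()
        self.lin = nn.Linear(1, time_dim)

    def forward(self, dt: torch.Tensor) -> torch.Tensor:
        return torch.cos(self.lin(dt.unsqueeze(-1)))

class SumAggregator(nn.Module):
    """h~_i(t) = ReLU( sum_j W1 [h_j(t) || e_ij || phi(t - t_j)] )."""
    def __init__(self, node_dim: int, edge_dim: int, time_dim: int, out_dim: int):
        super().__init__()
        self.time_enc = TimeEncoder(time_dim)
        self.W1 = nn.Linear(node_dim + edge_dim + time_dim, out_dim)

    def forward(self, h_neighbors, e_ij, t, t_j):
        phi = self.time_enc(t - t_j)                       # [num_neighbors, time_dim]
        msg = torch.cat([h_neighbors, e_ij, phi], dim=-1)  # concatenation ||
        return torch.relu(self.W1(msg).sum(dim=0))         # sum over the temporal neighborhood
```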
|
We use the resulting embeddings for a downstream task, node classification. We build our model within the TGN framework, excluding the memory module; this was due to the very large size of the data used in our experiments, which led to out-of-memory errors. In the graph embedding module, temporal embeddings for a dynamic graph are generated, specifically creating embeddings for each node at time step $t$. We build various functions $f$ to effectively learn the connectivity between nodes. A node embedding can be represented as:
|
AllSetTransformer (Chien et al. 2021) comprises two multiset functions with SetTransformer (Lee et al. 2019) for aggregating node or hyperedge embeddings.
|
In addition to the two graph embedding methods proposed in (Rossi et al. 2020), we further experimented with two additional methods to explore the effectiveness of various graph embedding techniques.
|
Furthermore, we investigate various graph embedding modules within the TGN framework. While variants exist within TGN, the results consistently affirm the model’s ability to achieve remarkable performance in the anomaly detection task. This contributes to a deeper understanding of the factors influencing TGNs’ effectiveness in handling the complexity of emergency contact interactions.
|
C
|
The self-stated purpose of much of this literature is to offer managerial insight, yet identifying antecedents to turnover is only the first step in designing programs to reduce turnover. Our study is the first to examine an explicit retention program, and it leverages a comparatively rich dataset on drivers’ self-reported concerns to uncover paradoxical correlations that highlight the importance of interest alignment and job-fit in trucking.
|
To understand the interesting pattern, we borrow the unfolding model of labor turnover (Lee & Mitchell, 1994), where “shocks” cause employees to re-examine their current employment relationships. From the perspective of the employer, these shocks can be “positive” (reducing turnover), or “negative” (increasing turnover). Scare quotes are necessary because the ultimate effect of shocks is moderated by the extent to which an employee is embedded within a firm, that is, how the shock itself highlights existing job fit, relational linkages, and sacrifices (Lee & Mitchell, 1994; Mitchell et al., 2001; Lee et al., 2004). Harmful events with respect to other firm or employee objectives can increase retention. For example, an equipment failure is bad news, both for the firm and the driver. Drivers are paid piece-rate, so that inoperative equipment reduces their ability to earn. The firm, as residual claimant in this context, has similar incentives towards maintaining equipment at a high-level. When equipment fails, it highlights an important area of mutual interest alignment between the driver and the motor carrier organization, paradoxically increasing retention.
|
With truck drivers less tied to their jobs through relational linkages, they are more sensitive to job “shocks”, random and often unexpected events that cause employees to re-examine their current employment relationship (Lee & Mitchell, 1994; Mitchell et al., 2001; Lee et al., 2004). Additionally, deprived of relational options for embedding employees within an organization, managers may be tempted to treat or prevent the shock, rather than addressing the underlying job-fit characteristics which the shock highlights. For example, recent studies have found that shocks such as pay (Conroy et al., 2022) or schedule (Bergman et al., 2023) variance increase turnover. A naive policy would be to defend against these shocks, for example by reducing pay frequency so that workers experience fewer of these negative events. However, while this is sometimes the only option for managers, it is the equivalent of palliative care, failing to treat the fundamental job-fit misalignments the shock highlights.
|
Truckers experience a variety of shocks in the course of their duties which may trigger reassessment of current employment, including traffic congestion, equipment failures, detention during loading and unloading, variation in pay, and so on. The effect of the shock, however, is moderated by embeddedness and, in particular, by what the shock reveals about job-fit. The ultimate effect of a shock, with respect to retention, is predictable based on interest alignment between firm and employee.
|
We use the unfolding model of labor turnover (Lee et al., 2004, 1996) as our primary theoretical scaffolding for hypothesis development. In this framework, labor turnover proceeds along four possible pathways, which, for density of exposition, we present out of numeric order. It can occur because of evolving job dissatisfaction by the worker (path 4) or be precipitated by shocks, unexpected events that cause employees to re-evaluate their current employment relationships (paths 1-3). Path 2 is a “push” driven path without a pre-planned response and without a specific job alternative in mind. An example might be being assigned to a new sales region when a worker hasn’t had this experience previously. The shock triggers a re-evaluation of the current employment relationship, which can result in exit. Path 1 is a “script-driven” pathway, where an event occurs that triggers a planned and immediate exit. Examples include surpassing a savings target or being assigned to a new sales territory (when a worker has prior experience with this event), either of which could trigger a worker to leave employment. Path 3 encompasses “pull” shocks where a specific job alternative enters consideration, as when an employee becomes aware of another job possibility. For our purposes, the usefulness of the unfolding model is not in making precise distinctions between shock pathways 1-3, but in understanding what different types of exogenous shocks reveal about retaining employees.
|
D
|
We do not draw the graphs in this case as they are precisely the same as in Figure 2, except that now each edge or loop carries a possibly different weight (not represented in Figure 2).
|
Table 4 depicts the top twenty portfolios ranked by their annualised expected return (ER) for the A2 cases (positive part of the correlation matrix), Table 6 for the A3 cases (subplus function of the negative elements of the correlation matrix) and Table 8 for the
|
In Tables 5, 7 and 9 we see a similar picture to that in Table 3. Table 5 depicts the top twenty portfolios ranked by their annualised Sharpe Ratios for the A2 cases (positive part of the correlation matrix), Table 7 for the A3 cases (subplus function of the negative elements of the correlation matrix) and Table 9 for the
|
For this method, we filter important information by constructing a minimal spanning tree (MST), starting from the correlation matrix. MSTs are a class of graphs that connect all vertices by placing edges between the most correlated pairs without forming any cycles, so they tend to retain only significant correlations. Analyzing the tree structure as a representation of the market can provide insights into the stability and state of the market and help predict how shocks will propagate through a network. We use Prim’s algorithm for computing the MST from the correlation matrix, as suggested in [11] after a comparison with other equivalent algorithms.
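A compact sketch of this step is shown below; it uses the common distance transform $d_{ij}=\sqrt{2(1-c_{ij})}$, so that strongly correlated pairs become short edges (the exact transform used in [11] may differ), together with networkx’s Prim implementation.

```python
import numpy as np
import networkx as nx

def correlation_mst(corr: np.ndarray, tickers: list) -> nx.Graph:
    """Build a minimum spanning tree from a correlation matrix with Prim's algorithm."""
    dist = np.sqrt(2.0 * (1.0 - corr))   # highly correlated pairs -> short distances
    g = nx.Graph()
    n = len(tickers)
    for i in range(n):
        for j in range(i + 1, n):
            g.add_edge(tickers[i], tickers[j], weight=dist[i, j])
    # Keep only the n-1 edges of the MST.
    return nx.minimum_spanning_tree(g, algorithm="prim")
```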
|
A possible next preprocessing step, that we take as optional in order to analyse its effect, is what is called shrinkage [12] of the correlation matrix C𝐶Citalic_C. This consists in constructing the covariance matrix from the correlation matrix; then, a linear combination of the covariance matrix and a matrix coming from a single-index model is computed; finally, one divides by the standard deviations again. For more details, see [12]. In the following, we refer to the correlation matrix constructed as described above (with or without shrinkage) as C𝐶Citalic_C.
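The sketch below illustrates the shrinkage step under stated assumptions: the single-index target is built from an equal-weighted market proxy and the shrinkage intensity is left as a free parameter, whereas [12] estimates it from the data.

```python
import numpy as np

def shrink_correlation(returns: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Illustrative linear shrinkage of a correlation matrix.

    Shrinks the sample covariance toward a single-index (market) covariance,
    then rescales back to a correlation matrix. `returns` has shape (T, n_assets).
    """
    S = np.cov(returns, rowvar=False)                 # sample covariance
    market = returns.mean(axis=1)                     # equal-weighted market proxy
    var_m = market.var(ddof=1)
    beta = np.array([np.cov(returns[:, k], market)[0, 1] / var_m
                     for k in range(returns.shape[1])])
    resid_var = np.diag(S) - beta ** 2 * var_m
    F = np.outer(beta, beta) * var_m + np.diag(resid_var)   # single-index covariance
    sigma = alpha * F + (1.0 - alpha) * S                    # shrunk covariance
    d = np.sqrt(np.diag(sigma))
    return sigma / np.outer(d, d)                            # back to a correlation matrix
```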
|
C
|
Table 12: Macro performance and training and testing times using selected textual features and most relevant temporal features from the combinatorial analysis.
|
The effect of numerical and temporal features became more apparent when we checked the behaviour by class. Table 10 shows the results of the first experiment in that case. Note that precision and recall were very asymmetric between past and future (∼10% precision asymmetry with the svc classifier, ∼19% recall asymmetry with the nn classifier). In addition, the precision of both classifiers was barely above 75% for future.
|
Table 2 shows the training and testing complexity of the Machine Learning algorithms we used in our analysis for $c$ target classes, $f$ features, $i$ algorithm instances (where applicable) and $s$ dataset samples. For the specific case of the nn algorithm, $m$ represents the number of neurons, and $l$ its layers. The dt and rf algorithms have logarithmic training complexity (Witten et al., 2016; Hassine et al., 2019), that is, less than svc (Vapnik, 2000) (which, however, has a low classification response time compared to other alternatives when using a linear kernel, as in our work). Overall, nn has the highest training and testing complexity (Han et al., 2012; Witten et al., 2016).
|
In this field, nlp techniques have been successfully applied to noise removal and feature extraction (Sun et al., 2014; Liu, 2015; Fisher et al., 2016; Xing et al., 2018) from financial reports such as news (Zhang & Skiena, 2010; Alanyali et al., 2013; Atkins et al., 2018), micro-blogging comments (Sun et al., 2014; Fisher et al., 2016; Rickett, 2016; Wang, 2017; Xing et al., 2018) and social media (Ioanăs & Stoica, 2014; Sun et al., 2016). These techniques have often been combined with Machine Learning algorithms (Huang et al., 2012; Prollochs et al., 2015), which can be divided into supervised (based on manual annotations) (Alanyali et al., 2013; Prollochs et al., 2015) and unsupervised approaches (Huang et al., 2012; Prollochs et al., 2015).
|
Table 12 shows that, with this second selection, we attained well over 80% precision and recall with the svc classifier, which takes considerably less time to train than the nn. Furthermore, Table 13 shows the precision and recall of the svc classifier by class. Note that all metrics exceeded 80% as pursued, a level that is comparable to, and even exceeds, that of other Machine Learning financial applications in the literature (Zhu et al., 2017; Atkins et al., 2018; Zhu et al., 2019; Dridi et al., 2019; De Arriba-Pérez et al., 2020).
|
D
|