robench-2024b
text_with_holes (string, length 196–5.41k) | text_candidates (string, length 70–1.23k) | A (6 classes) | B (6 classes) | C (6 classes) | D (6 classes) | label (4 classes)
---|---|---|---|---|---|---|
The seminal work of Artzner et al. (1999) has bestowed upon the field of risk assessment a set of four pivotal axioms that stand as the cornerstones of coherence for any reputable risk measure. Building upon this foundational framework, Föllmer and Schied (2002), in tandem with the pioneering efforts of Frittelli and Rosazza-Gianin (2002), expanded the purview of risk measures. <|MaskedSetence|> <|MaskedSetence|> Shushi and Yao (2020) proposed two multivariate risk measures based on conditional expectation and derived explicit formulae for exponential dispersion models. Zuo and Yin (2022) considered the multivariate tail covariance for generalized skew-elliptical distributions. Cai et al. (2022) defined a new multivariate conditional Value-at-Risk
risk measure based on the minimization of the expectation of a multivariate loss function. While these advancements have introduced sophisticated risk measures, it is important to highlight that their theoretical foundation frequently exists within a static framework. <|MaskedSetence|> Dynamic risk measures represent a sophisticated and evolving field within risk management, extending the analysis beyond static frameworks to account for temporal changes in risk. Unlike traditional static risk measures that provide a snapshot assessment, dynamic risk measures recognize the fluid nature of financial markets and aim to capture how risk evolves over time. Introduced by Riedel (2004), dynamic coherent risk measures offer a framework that allows for a more nuanced understanding of risk dynamics. This advancement enables a comprehensive assessment of risk in the context of changing market conditions and evolving investment portfolios. Additionally, the introduction of dynamic convex risk measures by Detlefsen and Scandolo (2005) further enriched the field, providing insights into the time consistency properties of risk measures over different time horizons.
|
**A**: The conventional depiction of these theories operates within a fixed temporal frame, offering a foundational understanding of risk.
Over the past two decades, not only has the study of static risk measures flourished, but also dynamic theories of risk measurement have developed into a thriving and mathematically refined area of research.
**B**: They introduced the broader class of convex risk measures by dropping one of the coherency axioms.
**C**: Song and Yan (2009) gave an overview of representation theorems for various static risk measures.
|
ACB
|
BCA
|
BCA
|
BCA
|
Selection 3
|
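For reference, a minimal statement of the four coherence axioms of Artzner et al. (1999) that the passage above refers to, in one standard textbook formulation (sign conventions vary across references); the convex class of Föllmer-Schied and Frittelli-Rosazza Gianin keeps monotonicity and cash invariance and relaxes the last two axioms to convexity.

```latex
% Coherence axioms for a risk measure \rho acting on positions X, Y (standard formulation):
\begin{align*}
&\text{Monotonicity:}         && X \le Y \;\Rightarrow\; \rho(X) \ge \rho(Y),\\
&\text{Cash invariance:}      && \rho(X + m) = \rho(X) - m, \quad m \in \mathbb{R},\\
&\text{Positive homogeneity:} && \rho(\lambda X) = \lambda\,\rho(X), \quad \lambda \ge 0,\\
&\text{Subadditivity:}        && \rho(X + Y) \le \rho(X) + \rho(Y).\\
&\text{Convexity (replaces the last two in the convex class):}
  && \rho\bigl(\lambda X + (1-\lambda)Y\bigr) \le \lambda\,\rho(X) + (1-\lambda)\,\rho(Y), \quad \lambda \in [0,1].
\end{align*}
```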
<|MaskedSetence|> Numerical analysis of the Asian option was conducted in Geman and Yor (1993); Linetsky (2004); Broadie et al. (1999); Boyle and Potapchik (2008). <|MaskedSetence|> <|MaskedSetence|> We expect our analysis to help overcome the numerical inefficiency in the short-maturity regime.
.
|
**A**:
Our study is of practical interest because existing numerical methods have proven to be less efficient in the case of short maturity or low volatility.
**B**: However, as pointed out in Fu et al.
**C**: (1999); Vecer (2002), such methods are either problematic in the short-maturity regime or computationally expensive.
|
ABC
|
ABC
|
ACB
|
ABC
|
Selection 1
|
A very natural framework to tackle this specific issue is Functional Data Analysis (FDA) [29], the branch of statistics that deals with studying data points that come in the shape of continuous functions over some kind of domain. <|MaskedSetence|> <|MaskedSetence|> [11] proposes a similar approach, without specifying a fixed functional basis, and introduces an innovative functional pick-and-freeze method for estimation. <|MaskedSetence|> In all the cited works on GSA techniques for functional outputs, uncertainty is not explicitly explored. A very sound framework for the GSA of stochastic models with scalar outputs is provided in [2].
|
**A**: [9] instead use a Bayesian framework, based on adaptive splines, to extract also in this case non-time-varying indices.
**B**: This approach is thus not capable of detecting the presence of time variations in impacts, nor does it address the issue of statistical significance of impacts.
**C**: FDA is a niche yet established area in the statistical literature, with many applied and methodological publications in all domains of knowledge, including spatial and space-time FDA [7, 16, 13, 19, 19, 12], coastal engineering [21], environmental studies [3, 18], transportation science [27] and epidemiology [32].
Methodologies for GSA that are able to deal with functional outputs are present in the literature: [14] propose non-time-varying sensitivity indices for models with functional outputs, based on a PCA expansion of the data.
|
CBA
|
CBA
|
CBA
|
CBA
|
Selection 1
|
The pathwise approach, pioneered by [36], makes no assumptions on the dynamics of the underlying assets. Instead, the set of all models which are consistent with the prices of observed vanilla options was investigated and bounds on the prices of exotic derivatives were derived. The approach was applied to barrier options in [13], to forward start options in [38], to variance options in [17], to weighted variance swaps in [24], among others. <|MaskedSetence|> A notion of weak arbitrage was discussed in [21] to deal with the case of infinitely many given options. In discrete time, [28] proved a duality result for a class of continuous payoffs in a specific topological setup. Using the theory of Monge–Kantorovich mass transport, [9] established superhedging dualities for exotic options. <|MaskedSetence|> <|MaskedSetence|> [16] proved a superhedging
duality theorem, characterized.
|
**A**: [23] introduced the concept of model independent
arbitrage and characterized three different situations that a set of option prices would fall into: absence of arbitrage, model-independent arbitrage, or weak forms of model-dependent arbitrage.
**B**: Pathwise versions of FTAP were given in [60] for a one-period market model and in [1] for a continuous time model where a superlinearly growing option is traded.
**C**: In discrete time markets, [15], [14] proved versions of FTAP by investigating different notions of arbitrage and using different sets of admissible scenarios.
|
ACB
|
ABC
|
ABC
|
ABC
|
Selection 3
|
Our leading application of excludability is to preferences with single-crossing differences (SCD). <|MaskedSetence|> <|MaskedSetence|> By contrast, DUB appears to be a new condition on information structures, although Milgrom (1979) utilizes a related property in the context of auction theory. <|MaskedSetence|> It requires that for any state ω and any prior that puts positive probability on ω, there exist both: (i)
signals that make one arbitrarily certain that the state is at least ω; and (ii).
|
**A**: Like SCD, DUB is formulated for a (totally) ordered state space.
**B**: SCD is a familiar property (Milgrom and
Shannon, 1994) that is widely assumed in economics: it captures settings in which there are no preference reversals as the state increases.
**C**: Here we show that learning obtains when the information structure satisfies
directionally unbounded beliefs (DUB).
|
CBA
|
CBA
|
CBA
|
CBA
|
Selection 2
|
<|MaskedSetence|> <|MaskedSetence|> Here we briefly describe extensions to two settings with successively more scope for interaction between hypotheses. <|MaskedSetence|> In the second we allow for arbitrary economic interactions between treatments—for example, complementary treatments—as well as threshold payoff functions. We summarize our results here and refer to Appendix C.1 for a detailed discussion.
.
|
**A**: As a result, treatments interacted only via the research cost function.
**B**:
Section 3.2 imposed linearity on the research payoff function and welfare.
**C**: In the first, we continue to assume no economic interactions between treatments but allow for interactions in the researcher’s payoff function through threshold effects.
|
BAC
|
ACB
|
BAC
|
BAC
|
Selection 3
|
<|MaskedSetence|> This final figure surpasses Britain’s total crop and pasture land combined. <|MaskedSetence|> If we add cotton, sugar, and timber circa 1830, we have somewhere between 25,000,000 and 30,000,000 ghost acres, exceeding even the contribution of coal by a healthy margin. <|MaskedSetence|> 276)
Based on this calculation, I set the land supply Z after the relief of land constraints to
.
|
**A**: (p.
**B**:
…[R]aising enough sheep to replace the yarn made with Britain’s New World cotton imports would have required staggering quantities of land: almost 9,000,000 acres in 1815, using ratios from model farms, and over 23,000,000 acres in 1830.
**C**: It also surpasses Anthony Wrigley’s estimate that matching the annual energy output of Britain’s coal industry circa 1815 would have required that the country magically receive 15,000,000 additional acres of forest.
|
BCA
|
ABC
|
BCA
|
BCA
|
Selection 4
|
<|MaskedSetence|> This includes both questions about positive reciprocity (e.g. <|MaskedSetence|> <|MaskedSetence|> At the onset of the treatment, they also shift more weight toward direct reciprocity. However, this shift toward direct reciprocity is potentially offset by a decrease in altruism (measured by additional weight placed on the costs of contributing) coupled with a strong decrease in generalized reciprocity. This suggests that individuals who have a high overall reciprocity attribute use new information to discriminate between collaborators as a mechanism for punishment.
.
|
**A**:
The characteristic that we describe as overall reciprocity consists of positive weights on the answers to all of the questions in the reciprocity questionnaire.
**B**: “If someone does me a favor, I am prepared to return it”), as well as negative reciprocity (“If someone puts me in a difficult position, I will do the same to them”).
**C**: Estimates of the interaction between this characteristic and the behavioral utility terms suggest that these individuals are more altruistic in the baseline and behave more in line with generalized reciprocity.
|
ABC
|
ABC
|
CBA
|
ABC
|
Selection 4
|
Our approach to formulating risk-averse MDPs is grounded in the understanding that law-invariant convex risk measures can be interpreted as functionals defined on the space of probabilities over ℝ. <|MaskedSetence|> [61, 1, 26, 32]). <|MaskedSetence|> In a similar vein, we explore DRMs at the level of distributions, conceptualizing them as nested compositions of state-dependent law-invariant convex risk measures. It is important to emphasize that previous studies on DRMs at the distributional level, such as in [61, 6], primarily focused on static one-step risk measures in characterization or construction. Our approach, which allows the risk measures to vary according to the state, introduces additional complexity. A significant advantage of this distributional-level construction is that it automatically ensures MDPs with identical distributions are treated as equivalent in terms of risk when assessed under the proposed DRMs. Moreover, it seamlessly integrates latent costs and random actions through the concept of regular conditional distributions. Furthermore, our framework provides an appropriate foundation for balancing various assumptions, including weakly continuous transition kernels, while still ensuring the attainment of the optimal outcome. This, in turn, allows for greater flexibility in risk-averse modeling. It’s important to note that, although the construction above may seem like a straightforward alteration of existing frameworks, it involves some unique technical aspects that have not been previously discussed. <|MaskedSetence|>
|
**A**: For simplicity, we consider bounded costs, which allows for conditional risk mappings that contain essential supremum as a major ingredient – a feature that is often omitted otherwise.
The main contributions of this paper can be summarized as follows:.
**B**: This perspective has been effectively employed in various contexts to further the development of risk measure theory (cf.
**C**: Notably, [61] is a seminal contribution that systematically investigates static risk measures and DRMs from a distributional standpoint.
|
BCA
|
BCA
|
BCA
|
CBA
|
Selection 1
|
<|MaskedSetence|> Retailers therefore have to deal with two key challenges that make it difficult to integrate available information when determining optimal replenishment orders (Fildes et al., 2019). First, the underlying probability distributions or their parameters need to be estimated from historical data, and perhaps features, using suitable prediction methods. Second, retailers need to adequately incorporate those forecasts and other results on the various sources of uncertainty from this first phase of analysis into the decision-making process (Raafat, 1991; Silver et al., 1998). Due to the online nature of e-grocery retailing, there is comprehensive data pertaining to customer behaviour available. In our case, this opens the path to using approaches from descriptive and predictive analytics to solve the first task. The utilisation of insights generated by these two steps in the decision-making process is referred to as prescriptive analytics (see Lepenioti et al., 2020, for a literature review).
The existing literature emphasises the value of new types of data available in e-grocery compared to brick-and-mortar retailing, such as information on unbiased customer preferences given by uncensored demand data (Ulrich et al., 2021) or orders by customers for future demand periods known in advance (Siawsolit and Gaukler, 2021). <|MaskedSetence|> However, recent data-driven approaches for inventory management mostly focus on modelling uncertainty in customer demand only, making restrictive assumptions concerning other sources of uncertainty. <|MaskedSetence|> In this paper, we address these limitations by proposing a flexible multi-period inventory management framework that explicitly enables us to consider perishable goods with a stochastic shelf life of multiple periods. While approaches based on a multi-period newsvendor setting assume a fixed shelf life (see e.g. Kim et al., 2015) or backordering (see e.g. Zhang et al., 2020), our lost sales model can represent shelf life either via a fixed value or using a probability distribution, and additionally allows us to incorporate the risk of potential delivery shortfalls. Thus, we are able to take into account all relevant stochastic variables, namely demand, shelf lives of SKUs, and delivery shortfalls, using suitable probability distributions to allow for a data-driven inventory management process meeting the requirements of a real-world e-grocery retailing business case.
|
**A**:
This task becomes even more challenging as the distributions of the various uncertain variables impacting the stochastic inventory process are typically unknown to the decision maker.
**B**: For example, Xu et al. (2021) consider shelf lives to be fixed at a single demand period as within the newsvendor model.
**C**: Indeed, these data enhance the quality of demand distribution estimations that need to be taken into account when determining replenishment order quantities.
|
ACB
|
ACB
|
ACB
|
ACB
|
Selection 3
|
•
Financial and Trade Globalization Indexes. <|MaskedSetence|> According to Shahbaz et al. (2013, 2015) and Shahbaz et al. (2017b), globalization could increase pollution if trade and capital flows induce an economic expansion (Dinda, 2008; Sirag et al., 2018; Phong, 2019) and, especially, if investments are directed to emission-intensive production. However, if globalization is associated with the diffusion of efficient and environmentally friendly technologies and institutions, it could negatively affect GHG emissions (Runge, 1994; Wheeler, 2000; Jayadevappa and Chhatre, 2000; Liddle, 2001; Cole, 2006). Furthermore, the “Pollution Haven” Hypothesis suggests a process of relocation of polluting industries from developed economies with tight environmental regulation to countries beginning the industrialization phase and with few environmental regulations (Copeland and Taylor, 2004). In consequence, trade would be positively associated with GHG emissions in low and middle-income countries but negatively related in high-income countries (Zilio and Caraballo, 2014; Allard et al., 2018; Sánchez and Caballero, 2019). Managi (2004) finds that trade liberalization is positively associated with carbon emissions in a panel of sixty-three developed and developing countries over the 1960-1999 period. Notwithstanding, previous literature also finds non-significant results for the trade openness variable (Lee et al., 2009; You et al., 2015). Recent works deal with other measures of globalization. <|MaskedSetence|> Furthermore, Lee and Min (2014) find that the KOF index negatively affects carbon emissions for a panel of 255 countries from 1980 to 2011. <|MaskedSetence|>
|
**A**: However, Shahbaz et al. (2017b) and Shahbaz et al. (2017a) find a positive relationship for Japan and twenty-five developed economies, respectively.
.
**B**: Shahbaz et al. (2013) and Shahbaz et al. (2015) find that the KOF globalization index is negatively correlated with CO2 emissions in Turkey and India, respectively.
**C**: The expected sign is ambiguous for both variables as predicted by theory (Grossman and Krueger, , 1991).
|
ACB
|
CBA
|
CBA
|
CBA
|
Selection 3
|
<|MaskedSetence|> <|MaskedSetence|> The obtained p-values (see Table 5(b)) show that the GRM model is rejected at any standard level while the RSAR(1) process is not. This is particularly interesting given that the p-values of the two-sample Kolmogorov-Smirnov test reported in Table 4(b) are very close. Moreover, this result is in line with the empirical observation that the inflation dynamics exhibits roughly two regimes in the inflation data set (see Figure 14): a regime of low inflation (e.g. <|MaskedSetence|> between 1972 and 1982).
.
|
**A**: between 1982 and 2021) and a regime of high inflation (e.g.
**B**:
Again, we complete this preliminary verification with the signature-based validation test.
**C**: We implement the same steps described in the previous section (here, we obtain m = 72 historical paths) except that we work with the log-signature, which leads to higher statistical powers on synthetic data.
|
BCA
|
BCA
|
BCA
|
BAC
|
Selection 2
|
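As a side illustration of the two-sample Kolmogorov-Smirnov test mentioned in the passage above, here is a minimal sketch using scipy; the arrays are hypothetical stand-ins and do not reproduce the authors' signature-based validation statistic or their m = 72 historical paths.

```python
# Minimal two-sample KS test sketch (hypothetical data, not the paper's inflation paths).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
historical = rng.normal(0.02, 0.010, size=72)   # e.g. 72 functionals of historical paths (assumed)
simulated = rng.normal(0.02, 0.012, size=500)   # functionals of model-generated paths (assumed)

res = ks_2samp(historical, simulated)
print(f"KS statistic = {res.statistic:.3f}, p-value = {res.pvalue:.3f}")
# A small p-value would reject the null that both samples come from the same distribution.
```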
4.1 Limitations of the Experiments
In our human subject experiments, we attempted to control for spurious variables by running the test at the same time of the day (6PM PST) and restricting the poll to adult respondents. <|MaskedSetence|> I reserve their analysis for future projects. Unlike other aggregate analyses where GPT-3 can be used at least as a coarse proxy, intersectional analysis cannot be conducted by using an LLM since the user has no visibility into how the LLM is trained and how attributes of individuals who contributed to the training data affect the trained model. An additional dimension to explore is the type of job. <|MaskedSetence|> <|MaskedSetence|> I leave this extension for future investigation as well.
.
|
**A**: However, there are many dimensions that, if changed, may affect the outcome, including age, level of instruction, geographic location, income and current profession, political inclination, awareness of the minimum wage in their location, and others.
**B**: I have chosen the most common jobs in the US for these experiments, which happen to be jobs for which minimum wage considerations apply.
**C**: I have not tested the effect for jobs that are widely understood as being compensated at a level far above the minimum wage such as physicians, airline pilots, nurses, college professors, etc.
|
ABC
|
ABC
|
ABC
|
ACB
|
Selection 1
|
To illustrate the impact of FDA approvals on pharmaceutical companies, we consider the case of ChemoCentryx, which was developing ANCA-associated vasculitis therapy. On October 8, 2021, the company announced that the FDA approved its application. <|MaskedSetence|> Last accessed November 14, 2023. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: Figure 1-(a) shows that the market responded positively to this news, and its stock price increased sharply, resulting in a rise in the company’s market value.[3] Click here for ChemoCentryx’s (archived) announcement document.
**B**: These abnormal returns reflect the market’s revised expectation of the firm’s future earnings from selling the drug now that it has received FDA approval.
Figure 1: Examples of Drug Development Announcements.
**C**: Because it was the only relevant news on that day, we can attribute this increase to the abnormal returns associated with the announcement.
|
BAC
|
ACB
|
ACB
|
ACB
|
Selection 2
|
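The passage above attributes the announcement-day price jump to abnormal returns. Below is a generic market-model event-study sketch of that logic; the return series, estimation window, and announcement-day values are all hypothetical and are not the authors' data or estimates.

```python
# Market-model abnormal return on an announcement day: actual return minus the return
# predicted by alpha + beta * market return estimated on a pre-event window. Hypothetical data.
import numpy as np

rng = np.random.default_rng(0)
mkt_est = rng.normal(0.0005, 0.01, 250)                          # estimation-window market returns
stock_est = 0.0002 + 1.2 * mkt_est + rng.normal(0, 0.015, 250)   # stock returns over the same window

beta, alpha = np.polyfit(mkt_est, stock_est, 1)                  # market-model regression
mkt_event, stock_event = 0.002, 0.45                             # announcement-day returns (assumed)
abnormal_return = stock_event - (alpha + beta * mkt_event)
print(f"abnormal return on the announcement day: {abnormal_return:.2%}")
```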
<|MaskedSetence|> To that end, we first introduce quantum circuits that can perform arithmetic operations on two’s complement numbers representing signed dyadic rational numbers, together with its complexity analysis. This allows us to provide a rigorous error and complexity analysis when uploading first a truncated and discretized approximation of the multivariate log-normal distribution and then uploading an approximation of the CPWA payoff function in rotated form, where the approximation consists of truncation as well as the rounding of the coefficients of the CPWA payoff function. <|MaskedSetence|> In particular, we prove that the computational complexity of our algorithm only grows polynomially in the space dimension d of the Black-Scholes PDE and in the (reciprocal of the) accuracy level ε. Moreover, we show that for payoff functions which are bounded, our algorithm indeed has a speed-up compared to classical Monte Carlo methods.
To the best of our knowledge, this is the first work in the literature which
provides a rigorous mathematical error and complexity analysis for a quantum Monte Carlo algorithm which approximately solves high-dimensional PDEs. <|MaskedSetence|>
|
**A**: We refer to Remark 2.22 for a detailed discussion of the complexity analysis.
.
**B**:
Our main contribution lies in a rigorous error analysis as well as complexity analysis of our algorithm.
**C**: This together with a rigorous error and complexity analysis when applying the modified iterative quantum amplitude estimation algorithm [fukuzawa2022modified] allows us to control the output error of our algorithm to be bounded by the pre-specified accuracy level ε ∈ (0, 1), while bounding its computational complexity; we refer to Theorem 1 for the precise statement of our main result.
|
ABC
|
BCA
|
BCA
|
BCA
|
Selection 3
|
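The passage above works with CPWA (continuous piecewise affine) payoff functions. As a hedged illustration, the sketch below evaluates a payoff written as a sum of maxima of affine functions (general CPWA representations allow signed sums); the weights, biases, and the basket-call example are hypothetical and are not taken from the paper.

```python
# Evaluate a CPWA payoff of the form f(s) = sum_k max_i (a_{k,i} . s + b_{k,i}).
import numpy as np

def cpwa_payoff(s, weights, biases):
    """weights[k] is a matrix of affine slopes, biases[k] the matching intercepts."""
    s = np.asarray(s, dtype=float)
    total = 0.0
    for w_k, b_k in zip(weights, biases):
        total += np.max(np.asarray(w_k) @ s + np.asarray(b_k))  # one max-of-affine term
    return total

# Simplest example: a basket call max(mean(S) - K, 0) on d = 3 assets with strike K = 100.
weights = [[[1/3, 1/3, 1/3], [0.0, 0.0, 0.0]]]
biases = [[-100.0, 0.0]]
print(cpwa_payoff([95.0, 110.0, 105.0], weights, biases))  # (310/3 - 100) ≈ 3.33
```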
<|MaskedSetence|> In Section 2, we introduce the auxiliary state processes with reflections and derive the associated HJB equation with two Neumann boundary conditions for the auxiliary stochastic control problem. In Section 3, we address the solvability of the dual PDE problem by verifying a separation form of the solution and the probabilistic representations, the homogenization of Neumann boundary conditions and the stochastic flow analysis. <|MaskedSetence|> It is also verified therein that the expected total capital injection is bounded. <|MaskedSetence|>
|
**A**: The verification theorem on the optimal feedback control is presented in Section 4 together with the technical proofs on the strength of stochastic flow analysis and estimations of the optimal control.
**B**:
The rest of the paper is organized as follows.
**C**: Finally, the proof of an auxiliary lemma is reported in Appendix A.
.
|
BAC
|
ACB
|
BAC
|
BAC
|
Selection 3
|
Differentiating risky from non-risky emitters requires studying their role in the production network. This result is especially relevant to consider in climate-economic models such as IAMs. One avenue to do this is to integrate the presented firm-level production network approach with an existing macroeconomic model. <|MaskedSetence|> <|MaskedSetence|> Alternatively, modifying a suitable macroeconomic agent-based model could relax assumptions of I-O CGE models. <|MaskedSetence|>
|
**A**: Several models use an Input-Output Computable General Equilibrium (I-O CGE) approach to study decarbonization at the sectoral level.
**B**: Candidates include the E3ME [11] or DSK [12] models, designed for firm-level dynamics but initialized from sectoral data.
.
**C**: These models could be modified to allow the use of firm-level production networks instead of sector-level input-output tables, which, in principle, use the same mathematical formalism.
|
ACB
|
CBA
|
ACB
|
ACB
|
Selection 4
|
<|MaskedSetence|> Additionally, the increase of the VRE share among the energy sources may jeopardise sufficient power quality (i.e., requirements for uninterrupted power supply and stable conditions of voltage and current), energy systems stability, power balance and efficient transmission and distribution of power (Sinsel et al., 2020). However, existing transmission systems design is not capable of coping with significant levels of renewable penetration (Moreira et al., 2017). Consequentially, renewable-driven expansion of generation requires new approaches for transmission network planning.
The surveys of the existing transmission expansion planning literature by Hemmati et al. <|MaskedSetence|> <|MaskedSetence|> Mathematical optimisation has been extensively applied as a solution method primarily in an academic context as it eradicates the risk of suboptimality of the solution (Lumbreras and Ramos, 2016). However, one should take into account the trade-off between the computational feasibility of solving the problem to optimality and its scale, which in turn is augmented as the modelling detail level and the size of the network modelled increase.
.
|
**A**: However, in liberalised electricity markets, such as those found in EU countries, the UK, and North America, renewable energy revenues are insufficient to provide an adequate return to VRE capacity for private investors (Haar, 2020).
**B**: (2013); Lumbreras and Ramos (2016); Niharika et al.
**C**: (2016) suggest the decisions regarding the structure of the power market, the level of detail on the operation of the system and the solution method for the problem to be amongst the key factors defining the distinct approaches.
|
BAC
|
ABC
|
ABC
|
ABC
|
Selection 4
|
We contribute to two strands of literature. <|MaskedSetence|> Strategic interaction in portfolio optimization problems has been motivated for example by [10] and [31] through competition between agents. Since then, portfolio choice problems including strategic interaction between investors have been widely studied. The competitive feature is usually modeled through a relative performance metric. <|MaskedSetence|> [5] consider two agents in a continuous-time model which includes stocks following geometric Brownian motions. They use power utility functions and maximize the ratio of the two investors’ wealth. [20] also consider stocks driven by geometric Brownian motions and n agents maximizing a weighted difference of their own wealth and the arithmetic mean of the other agents’ wealth. Structurally similar objective functions including the arithmetic mean have been used by [6]. There, the unique Nash equilibrium for n agents is derived in a very general financial market using the unique solution to some auxiliary classical portfolio optimization problem. <|MaskedSetence|> They derive the unique constant Nash equilibrium using both the arithmetic mean under CARA utility and the geometric mean under CRRA utility. Later, their work has been extended by [34] to consumption-investment problems including relative concerns. In a similar asset specialization market with bounded market coefficients, [22, 25] find a one-to-one correspondence between Nash equilibria and suitable systems of FBSDE’s for agents applying power utilities to the multiplicative relative performance metric in order to find optimal investment (and consumption) strategies. [16, 17] use forward utilities of both CARA and CRRA type with and without consumption. More general financial markets (including e.g. stochastic volatility and incomplete information) were, for example, used in [33], [24] and [28].
.
|
**A**: [35] consider the case of asset specialization for n agents.
**B**: More specifically, either the additive relative performance metric, introduced by [19, 20], or the multiplicative performance metric, introduced by [4], are included into the utility function.
**C**: The first one is the literature on strategic interaction between agents.
|
CBA
|
CBA
|
CBA
|
CBA
|
Selection 4
|
Hence, when one compares the figures of phosphate rock mining and P fertilizer use (and averages these over a few years) one can come to the conclusion that about 70% of mined phosphate rock (in the following abbreviated as PR) ends up as fertilizer. Considering losses in the production processes of fertilizer and tendential under-reporting in fertilizer use, we know that the actual share is in fact higher. <|MaskedSetence|> For example, we know that about 90% of processed mined phosphate is used in a chemical wet process and mostly converted to phosphoric acid, out of which about 82% is used to make fertilizer. Considering further that 15% of P fertilizers are not made from phosphoric acid, we can approximate a lower bound of 80% of total P used in fertilizer (see herman_processing). <|MaskedSetence|> This means that the flow of P used for other purposes (e.g. via phosphorus compounds, see also shinh) is handled as if it flows in the same way. <|MaskedSetence|>
|
**A**: Other studies estimate a higher fraction of P fertilizer use; at the upper end of the spectrum, [fao04] estimates that 90% of mined PR is used by the fertilizer industry.[3] These differences can partly be explained by slightly varying approximations of the shares of fertilizer production processes, as well as by differences in the accounting for animal feed supplements (∼7%).
In any case, we note that in our analysis we make the simplifying assumption that the entire PR production (and no other source) is used as fertilizer.
**B**: Therefore, studies have looked at the technical processes that are involved in fertilizer production.
**C**: We also implicitly assume that mining and use take place in the same year, and we neglect the effect of changes in stocks.
.
|
BAC
|
BAC
|
BAC
|
BAC
|
Selection 1
|
<|MaskedSetence|> We henceforth focus on the accumulation funds since they enable the provision of highly tailored investment options, accommodating diverse individual beliefs and investment preferences. <|MaskedSetence|> In particular, 1.00 trillion AUD (around 40% of the total) was invested in MySuper, which is a default option offered by large APRA-regulated super funds. From the perspective of asset classes, 53.3% of the total was invested in equities, with a breakdown of 21.9% in Australian listed equities, 26.4% in international listed equities, and 5.1% in unlisted equities. <|MaskedSetence|> Property and infrastructure accounted for 15.6% of the total, while other asset classes, encompassing hedge funds and commodities, represented 2.2% of the overall investment portfolio. This nuanced investment allocation strategy underscores the unique nature of the Australian superannuation scheme, catering to members’ varied expectations and preferences.
|
**A**: Accumulation account holders have the flexibility to allocate their investments across various asset classes, including equities, cash investments, property, infrastructure, as well as other assets such as hedge funds and commodities, which also means that their risk-taking capacity is virtually unlimited.
As of September 2023, Australians have collectively accumulated 3.56 trillion AUD, out of which 2.47 trillion AUD is held in 1368 APRA-regulated super funds and 0.89 trillion AUD in ATO-regulated self-managed super funds; this makes Australia the fourth largest holder of pension fund assets worldwide.
**B**: Fixed income and cash investments constituted 28.8% of total investments, distributed as 20.3% in fixed income securities (bonds) and 8.5% in cash (bank bills).
**C**:
In contrast to other pension systems, Australian super funds distinguish themselves by establishing personal accounts for their members with two main types of super funds: defined benefit super funds with shared investment risks and limited risk-taking capacity ([10]), which are now mostly closed to new members, and nowadays prevailing accumulation super funds, which grow with contributions and idiosyncratic investment returns.
|
CAB
|
CAB
|
CAB
|
CBA
|
Selection 3
|
A second strand of the literature aims at assessing what happens to individual life trajectories after a default. This literature essentially focused on the impact of a harsh default, i.e. <|MaskedSetence|> Our work sheds some light on the short and medium term consequences of a soft default, an event that is substantially more likely (e.g. 1.5 versus 1 percent in 2010).
To this second strand of the literature belong, for example, Collinson et al. [2023], who investigate the impact of eviction on low income households in terms of homelessness, health status, labor market outcomes, and long term residential instability. <|MaskedSetence|> They find that the reform hindered an important channel of financial relief. Diamond et al. [2020] analyze the negative impact of foreclosures on foreclosed-upon homeowners. They find that foreclosure causes housing instability, reduced homeownership, and financial distress. <|MaskedSetence|>
|
**A**: Finally, Indarte [2022] analyzes the costs and benefits of household debt relief policies..
**B**: Similarly, Currie and Tekin [2015] show that foreclosure causes an increase in unscheduled and preventable hospital visits.
Albanesi and Nosal [2018] investigate the impact of the 2005 bankruptcy reform, which made it more difficult for individuals to declare either Chapter 13 or Chapter 7.
**C**: either Chapter 7 or Chapter 13 declarations or a foreclosure.
|
CBA
|
CBA
|
CBA
|
CAB
|
Selection 1
|
In these models the (log-)spot price is described as a superposition of two latent stochastic processes: The long term behavior is modeled as an arithmetic Brownian motion, the short term behavior as an OU-process. <|MaskedSetence|> <|MaskedSetence|> In the long run, the process always fluctuates around a deterministic mean value. The calibration procedure is mainly based on likelihood estimation.
The model of Benth, Kallsen and Meyer-Brandis [3] incorporates spikes and allows mean reversion to a stochastic base level by modeling the price as a sum of several OU-processes, some of which are driven by pure jump processes. A method to calibrate such a model was suggested by Tankov and Meyer-Brandis [15], who investigate a superposition of two OU-processes, one driven by a Levy jump process, the other driven by a Brownian motion.
To calibrate their model, they came up with the so-called hard thresholding technique, where first the mean reversion parameters are estimated from the autocorrelation function and then maximum likelihood methods are applied to filter out the spikes path. <|MaskedSetence|> The downside of calibrating the mean reversion rates separately is that some parameter interdependencies are being neglected.
In [4] the models proposed in [8] and [3] as well as the one-factor mean-reversion jump-diffusion model of [5] are calibrated to German spot price data adopting different calibration techniques. A comparison of the resulting model properties shows that the.
|
**A**: The same approach is used by Hinderks and Wagner [11] for their two-factor model.
**B**: Since both components are Gaussian, the model can be calibrated using Kalman filter techniques.
**C**: However, it turns out that Gaussian processes cannot appropriately describe the spikes, which frequently occur in observed spot price data.
The one factor log-price model of Geman and Roncoroni [8] generates the characteristic spikes by making the jump direction and intensity level-dependent: High price levels lead to high jump intensity and downward jumps are more likely, whereas if the price is low, jumps are rare and upward-directed.
|
BCA
|
BCA
|
BCA
|
BCA
|
Selection 2
|
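A minimal simulation sketch of the two-factor structure described above (log-spot as the sum of an arithmetic Brownian motion and an OU component) is given below; all parameter values are hypothetical and chosen only for illustration, and this is not the calibrated model of any of the cited papers. As the passage notes, with both components Gaussian such a model could then be calibrated with Kalman filter techniques.

```python
# Euler simulation of log-spot = long-term ABM + short-term OU component (illustrative parameters).
import numpy as np

def simulate_two_factor_log_spot(n_steps=1000, dt=1/365,
                                 mu=0.02, sigma_L=0.15,      # long-term ABM drift / volatility (assumed)
                                 kappa=8.0, sigma_S=0.6,     # OU mean-reversion speed / volatility (assumed)
                                 x0=4.0, seed=0):
    rng = np.random.default_rng(seed)
    L = np.empty(n_steps + 1); S = np.empty(n_steps + 1)
    L[0], S[0] = x0, 0.0
    for t in range(n_steps):
        dW_L, dW_S = rng.standard_normal(2) * np.sqrt(dt)
        L[t + 1] = L[t] + mu * dt + sigma_L * dW_L            # arithmetic Brownian motion
        S[t + 1] = S[t] - kappa * S[t] * dt + sigma_S * dW_S  # Ornstein-Uhlenbeck (Euler step)
    return L + S                                              # latent factors sum to the log-spot

print(simulate_two_factor_log_spot()[:5])
```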
<|MaskedSetence|> IL methods show superior efficiency compared with RR methods in either offline training or online training. <|MaskedSetence|> Thus the meta-learning methods are not much slower than naïve IL, and the excess time cost is small enough to omit. It is practical for model selection, hyperparameter tuning, and retraining algorithm selection. <|MaskedSetence|>
|
**A**:
Table 3 compares the total time costs in offline training and online training of different methods.
**B**: Moreover, the low time cost of DoubleAdapt in offline training paves the way for collaboration with RR, e.g., periodically retraining the meta-learners once a year.
.
**C**: As we adopt the first-order approximation of MAML, we avoid the expensive computation of Hessian matrices.
|
ACB
|
ACB
|
ACB
|
CBA
|
Selection 2
|
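The passage above notes that the first-order approximation of MAML avoids computing Hessian matrices. The toy sketch below illustrates that point on hypothetical 1-D linear-regression tasks: the outer update simply reuses the gradient evaluated at the adapted parameters instead of differentiating through the inner step. It is a generic illustration, not the DoubleAdapt implementation.

```python
# First-order MAML (FOMAML) on toy 1-D linear-regression tasks with analytic gradients.
import numpy as np

rng = np.random.default_rng(0)

def task_batch(slope, n=20):
    x = rng.uniform(-1.0, 1.0, n)
    return x, slope * x

def grad(w, x, y):
    # d/dw of mean((w*x - y)^2)
    return 2.0 * np.mean(x * (w * x - y))

w_meta, inner_lr, outer_lr = 0.0, 0.1, 0.05
for step in range(500):
    meta_grad = 0.0
    for _ in range(4):                               # a few sampled tasks per meta-update
        slope = rng.uniform(0.5, 2.0)
        x_s, y_s = task_batch(slope)                 # support set
        x_q, y_q = task_batch(slope)                 # query set
        w_adapted = w_meta - inner_lr * grad(w_meta, x_s, y_s)   # inner adaptation step
        meta_grad += grad(w_adapted, x_q, y_q)       # first-order: no backprop through w_adapted
    w_meta -= outer_lr * meta_grad / 4.0
print(w_meta)  # settles near the middle of the task slope range
```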
I use EmTract to generate emotion variables by quantifying the content of each message and then average the results across all messages from ninety days before the IPO up till the market opening on the day of the IPO. <|MaskedSetence|> I then use a predictive regression to test whether emotions predict first-day returns. I take several steps to mitigate estimation concerns, such as ruling out reactive emotions by looking at the impact of emotions before the IPO. I also tackle mis-attribution by using an additional emotion model for robustness checks.
My analysis focuses on the role of investor emotions in explaining two stylized facts about IPO returns. I document two main findings. First, I find that IPOs with high levels of pre-IPO investor enthusiasm tend to have a significantly higher first-day return of 29.73%, compared to IPOs with lower levels of pre-IPO investor enthusiasm, which have an average first-day return of 17.59%. <|MaskedSetence|> However, this initial enthusiasm may be misplaced, as IPOs with high pre-IPO investor enthusiasm demonstrate a much lower average long-run industry-adjusted return of -8.22%, compared to IPOs with lower pre-IPO investor enthusiasm, which have an average long-run industry-adjusted return of -0.14%. <|MaskedSetence|>
|
**A**: In a regression setting that controls for IPO characteristics, such as offer amount and 1-month past industry returns, I show that a standard deviation increase in pre-IPO enthusiasm translates into a 14.30% standard deviation increase in first-day returns.
**B**: I also distinguish between different types of messages by using two classification schemes; the first isolates messages conveying information related to earnings, firm fundamentals, or stock trading from general chat, and the second separates messages conveying original information from those disseminating existing information.
**C**: Even after controlling for first-day return, IPO characteristics and past industry returns, I find that a standard deviation increase in pre-IPO enthusiasm translates into a 12.74% standard deviation decrease in 12-month industry-adjusted returns..
|
BCA
|
BAC
|
BAC
|
BAC
|
Selection 2
|
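The passage above reports standardized regression effects (a one-standard-deviation change in pre-IPO enthusiasm mapped to a standard-deviation change in returns). The sketch below shows the mechanics of such a standardized predictive regression on a hypothetical data frame; the column names and simulated data are assumptions, not the paper's variables.

```python
# Standardized OLS slope: z-score all variables, regress, read off the coefficient of interest.
import numpy as np
import pandas as pd

def standardized_slope(df, y_col, x_col, controls):
    z = (df - df.mean()) / df.std(ddof=0)            # z-score every variable
    X = np.column_stack([np.ones(len(z))] + [z[c].to_numpy() for c in [x_col] + controls])
    beta, *_ = np.linalg.lstsq(X, z[y_col].to_numpy(), rcond=None)
    return beta[1]                                   # coefficient on the standardized regressor

rng = np.random.default_rng(0)
n = 300
enthusiasm = rng.normal(size=n)
df = pd.DataFrame({                                  # hypothetical columns
    "enthusiasm": enthusiasm,
    "log_offer_amount": rng.normal(size=n),
    "industry_ret_1m": rng.normal(size=n),
    "first_day_return": 0.15 * enthusiasm + rng.normal(size=n),
})
print(standardized_slope(df, "first_day_return", "enthusiasm",
                         ["log_offer_amount", "industry_ret_1m"]))
```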
Figure 8: Quantum hardware-ready procedure for DPP sampling.
We repeated this process for a number of trees and estimated the F1[2] We chose the F1 score as the evaluation metric for two reasons. Firstly, a single decision tree, unlike the random forest, does not provide an estimate of the likelihood, which is required for the computation of ROC-AUC as we had used before. Secondly, we had an imbalanced dataset and thus needed a metric that balanced precision and recall. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
|
**A**: score for every tree.
**B**: The F1 score is particularly effective when used in these scenarios.
**C**: We then compared the results for different sampling methods: uniform sampling, quantum DPP sampling using a simulator, and quantum DPP sampling using a quantum processor..
|
BAC
|
CAB
|
BAC
|
BAC
|
Selection 4
|
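A small, self-contained illustration of the F1 score that the passage above adopts for single decision trees on an imbalanced dataset; the label vectors are hypothetical.

```python
# F1 as the harmonic mean of precision and recall on a small imbalanced example.
from sklearn.metrics import f1_score, precision_score, recall_score

y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]   # imbalanced ground truth (hypothetical)
y_pred = [0, 0, 0, 0, 0, 0, 1, 0, 1, 0]   # a tree's predictions (hypothetical)

print("precision:", precision_score(y_true, y_pred))  # 1 TP out of 2 predicted positives -> 0.5
print("recall:   ", recall_score(y_true, y_pred))     # 1 TP out of 2 actual positives    -> 0.5
print("F1:       ", f1_score(y_true, y_pred))         # harmonic mean of the two          -> 0.5
```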
<|MaskedSetence|> 2 with the time series of RV from 1970 to 2021, including expanded views of the aforementioned periods of market upheavals. In Sec. 3 we give analytical expressions of the two distribution functions used to fit the entire RV distribution: modified Generalized Beta (mGB), which is discussed in great detail in a companion paper liu2023rethinking, and Generalized Beta Prime (GB2), which is essentially a limiting case of mGB and is chosen because it has power-law tails. mGB is chosen because it exhibits a long stretch of power-law dependence before dropping off and terminating at a finite value of the variable, thus mimicking the nDK behavior of RV liu2023rethinking. Additionally, both mGB and GB2 emerge as steady-state distributions of a stochastic differential equation for stochastic volatility liu2023rethinking. <|MaskedSetence|> <|MaskedSetence|> Towards this end we also use a linear fit (LF) of the tails. For all three fits, we provide confidence intervals janczura2012black and, more importantly, the results of a U-test pisarenko2012robust, which evaluates a p-value for the null hypothesis that a data point comes from a fitting distribution pisarenko2012robust. Sec. 5 is a discussion of results obtained in Sec. 4.
.
|
**A**: 4 we describe fits of RV with mGB and GB2 and give a detailed description of the tails, specifically in regards to possible DK/nDK.
**B**:
To gain further insight into this phenomenon, we start in Sec.
**C**: In Sec.
|
CAB
|
BCA
|
BCA
|
BCA
|
Selection 3
|
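For reference, one standard parameterization of the GB2 (generalized beta prime) density mentioned above, with its power-law tail made explicit; the exact parameterization used in liu2023rethinking may differ in notation.

```latex
% GB2 density in a common parameterization (a, b, p, q > 0):
f_{\mathrm{GB2}}(x;\,a,b,p,q) \;=\; \frac{a\,(x/b)^{ap-1}}{b\,B(p,q)\,\bigl[1+(x/b)^{a}\bigr]^{p+q}},
\qquad x > 0,
% with a power-law right tail
f_{\mathrm{GB2}}(x) \;\propto\; x^{-aq-1} \quad (x \to \infty),
% which is the power-law behavior the passage attributes to GB2.
```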
The results in Table 7 show that, even without optimizing hard floors, there is no benefit to soft-floors, whether high or low; this is true, even considering soft floors that are not covered by the analysis in Zeithammer (2019).[10] In a first-price auction without a reserve price, no regular bidder would bid above 0.5. Even with a higher reserve price, no regular bidder would bid above 1.0. Thus, soft floors above 1.0 are not covered by the results in Zeithammer (2019). In fact, soft-floors lead to lower revenues than standard reserve prices. <|MaskedSetence|> <|MaskedSetence|> This may be undesirable even if it does maximize ad revenues. <|MaskedSetence|> We exclude reserve prices between 1.0 and 1.8, as they yield lower revenue than reserve prices at 1.8 or below 1.0. RP = 0 is the same as SFRP = 0.
|
**A**: One feature of the above parameterization is that, since the stronger bidders have a high valuation and at least one of them appears with high probability (0.75), it is optimal for the seller to target that bidder only, by fixing a high reserve price of 1.8.
**B**: This means that the regular bidders are excluded from participation.
**C**: Yet even without completely shutting the low-valuation bidders out of the market, a moderate standard reserve price (0.6 in this case) yields higher revenues than any soft-floor, or no floor.
Table 7: Ad revenue: soft-floor reserve price (SFRP) vs standard reserve price (RP).
|
ABC
|
ABC
|
ABC
|
ACB
|
Selection 2
|
We find that while participants’ behaviour is in line with the theoretical predictions, there is still a large part of behaviour that the model cannot account for. Using the Strategy Frequency Estimation Method (Dal Bó and Fréchette, 2011; Fudenberg et al., 2012), we allow for the presence of various behavioural types in our subject population, and we estimate the proportion of each type in our data (see Fischbacher et al. <|MaskedSetence|> 2016). <|MaskedSetence|> <|MaskedSetence|> We find that around 25% of the subjects behave according to the G&M model, the vast majority behaves in a conditional co-operating or altruistic way, and a non-significant proportion free rides. From a mechanism design point of view, we find that introducing uncertainty regarding the position, along with a constrained sample of previous actions (i.e. only what the immediate precedent player), maximises the public good provision.
Our work is related to and extends various strands of the literature, which we briefly summarise below. Prior to G&M’s research, the timing of contributions and the level of funds raised had received considerable attention in the theoretical literature. Varian (1994) shows that, under appropriate assumptions, a sequential contribution mechanism elicits lower contributions than a simultaneous contribution mechanism. The crux of this result lies in the set-up of the model, where a first mover may enjoy a first-mover advantage and free-ride. On the other hand, Cartwright and Patel (2010) using a sequential public goods game with exogenous ordering, show that agents early enough in the sequence would want to contribute, if they believe that imitation from others is quite likely. In the context of fundraising, Romano and Yildirim (2001) examine the conditions under which a charity prefers to announce contributions in the form of a sequential-move game, while Vesterlund (2003) shows that an announcement strategy of past contributions, not only helps worthwhile organisations to reveal their type, but it also helps the fundraiser reduce the free-rider problem, a result that Potters et al. (2005) confirm experimentally..
|
**A**: 2001; Bardsley and Moffatt 2007; Thöni and Volk 2018; Katuščák and Miklánek 2023; Préget et al.
**B**: Additionally, we investigate whether subjects align with the predictions of the G&M model (G&M type).
**C**: On top of G&M agents, we classify subjects to free-riders (who never contribute), altruists (who always contribute no matter their position), and conditional co-operators (who always contribute if they are in position 1 and contribute if at least one other person in the sample has contributed when they are in positions
2-4).
|
ABC
|
ACB
|
ACB
|
ACB
|
Selection 4
|
In traditional finance, the random walk theory is a prevalent model. <|MaskedSetence|> In other words, the future price of a stock is independent of its past prices, making it impossible to predict a stock’s future trajectory based on historical data alone. This idea forms the basis of the Efficient Market Hypothesis[5], stating that all available information is already incorporated into the stock’s current price and changes to that price will only be triggered by unforeseen events. Stock prices are the product of an ongoing interplay of buying and selling transactions from all participants in the stock market. <|MaskedSetence|> Therefore, the random walk theory of stock prices doesn’t mean prices are entirely chaotic, but rather that they evolve based on the aggregate of numerous decisions made by market participants, often in response to new information.
Given that uncertainty is a fundamental characteristic shared by both finance and quantum mechanics, it is promising to employ quantum principles for the simulation of financial markets.
The potential applications of quantum computing in the financial sector[6, 7] are incredibly vast. More recent work has focused on the quantum algorithm for amplitude estimation[8] and on Monte Carlo methods for the pricing of financial derivatives[9, 10, 11, 12, 13, 14]. Ref. [8] builds upon Grover’s quantum search method to improve the likelihood of identifying desired outcomes in quantum algorithms without needing to know the success probabilities in advance. Ref. [9] presents a quantum algorithm for Monte Carlo pricing of financial derivatives, demonstrating how quantum superposition and circuits can implement payoff functions and extract prices through quantum measurements. Ref. [11] details a method for option pricing using quantum computing. This method leverages amplitude estimation to achieve a quadratic speed increase over traditional Monte Carlo methods, showcasing significant advancements in quantum algorithm applications and financial modeling techniques. Furthermore, Ref. [10, 12, 13, 14] explore various aspects, from implementing quantum computational finance and option pricing to leveraging quantum advantage in market risk assessment and stochastic differential equations. <|MaskedSetence|>
|
**A**: This dynamic process reflects the collective sentiment, beliefs, and actions of all these market participants.
**B**: This theory, first postulated by the French mathematician Louis Bachelier[4], posits that the trajectory of stock prices is essentially random.
**C**: Each study contributes to the broader understanding of how quantum algorithms can offer a more efficient, accurate and comprehensive approach to financial simulations, surpassing traditional computational methods and providing new insights into quantum finance’s potential..
|
BAC
|
ACB
|
BAC
|
BAC
|
Selection 3
|
In contrast to our work, they offer no information on earnings or decision quality, nor do their investors choose among experts.
Their investors indicate i) whether they want to delegate their decision to their expert, ii) their maximum willingness to pay for delegation and iii) how much risk they want the expert to take on their behalf. Holzmeister et al. (2022) find that investors delegate most frequently to the algorithm, and least frequently to experts with fixed preferences. Moreover, delegation is positively correlated with general trust and blame shifting tendencies as elicited in a survey. (Our experiment uses observed behavior to confirm the importance of blame shifting.) Moreover, Holzmeister et al. <|MaskedSetence|> (2022), Stefan et al. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: (2022) identify a significant problem with risk communication in the sense that while finance professionals in general take into account the client’s desired risk level, the constructed portfolios show considerable overlap across the different requested risk levels.
By contrast, we find that the fraction of delegation does not increase as we move from a task where there is no scope for risk communication to one where risk communication matters.
**B**: This suggests that while the increasing risk tolerance motive may matter in the decision of how much risk investors ask the expert to take, that motive does not appear to explain why investors choose to delegate in the first place.
.
**C**: (2022) find that investors ask the expert to take more risk than they believe themselves to have taken, consistent with the increasing risk tolerance motive.[3] Analysing the same data-set as Holzmeister et al.
|
ACB
|
CAB
|
CAB
|
CAB
|
Selection 2
|
<|MaskedSetence|> <|MaskedSetence|> If the reader is familiar with ergodic theory, skip Subsection 5.1. <|MaskedSetence|> Note that our strategy (philosophy) in this and the next sections stems from [Lyubich, 2012] and [Shen and van Strien, 2014] (these are quite readable expository articles on recent developments of unimodal dynamics). We stress that a deep result by Avila et al. (Proposition 6.3) theoretically supports our argument.
.
|
**A**: Our basic references for ergodic theory are classical [Collet and Eckmann, 1980], [Day, 1998], and [W. de Melo, 1993].
**B**: Here, we give a quick review of ergodic theory.
**C**:
Our (numerical/theoretical) argument in this and the next sections uses ergodic theory.
|
CBA
|
CAB
|
CBA
|
CBA
|
Selection 1
|
<|MaskedSetence|> <|MaskedSetence|> It is, therefore, important to audit this trust assumption. Our reordering slippage provides a way for the public to monitor builders’ behavior, without the need to acquire private data.
As Table 6 shows, we do not find conclusive evidence that any of the top 5 builders (by private transaction count) are misbehaving, at least from a cursory examination. <|MaskedSetence|> Investigating the validity of trust assumptions required by the MEV ecosystem remains an important open question..
|
**A**: This may suggest that the penalties associated with breaking users’ trust are large enough to incentivize builders not to defect.
**B**:
When participating in the mev-boost ecosystem, searchers and users of private RPCs must trust that builders do not frontrun or ‘unpack’ the mev-boost bundles that are sent to builders.
**C**: While in traditional markets, it is the regulator that audits the intermediaries, in DeFi this trust relies on incentives (or even on goodwill).
|
BCA
|
BCA
|
BAC
|
BCA
|
Selection 1
|
<|MaskedSetence|> We construct an unbalanced panel dataset[13] i.e., time observations are different for different VASPs. <|MaskedSetence|> Ultimately, in our empirical analysis, we use the data of four Austrian VASPs for which we can identify on-chain and off-chain data. Our variable of interest is a firm-level measure of crypto asset holdings. Some firms describe their crypto asset holdings as explicit balance-sheet items; for other firms that aggregate them with other items we construct a variable that approximates the corresponding crypto asset holdings from their described asset items. <|MaskedSetence|> The variable crypto asset holdings in the form of red markers in Figure 6, Figure 7, Figure 8 and Figure 9 represents those balance-sheet items.
.
|
**A**: starting from 2014 to 2021.
**B**:
We collect balance-sheet data for 17 Austrian VASPs through the Austrian Commercial Register.
**C**: The balance sheet does not allow us to distinguish between cryptoasset holdings such as ether and bitcoin.
|
BCA
|
BAC
|
BAC
|
BAC
|
Selection 3
|
<|MaskedSetence|> These financial instruments are based on an underlying weather index and trigger a claim depending on the value of the index at maturity, similar to other financial market derivatives. These instruments experienced significant success in the early 2000s, reaching $45 billion in notional volume traded in the market in 2006 according to the World Risk Management Association [4]. Mainly dominated by temperature-based derivatives, up to 95% of the market, the weather market remained illiquid with small volumes traded in the standardized open market and most of the volume traded OTC [56]. <|MaskedSetence|> This corresponded to a general slowdown of financial markets, but also, according to Pérez-González and Yun [50], to the birth of new hybrid derivatives that could combine both volumetric and price risk. These new products, also called quantos, were indexed to two underlying parameters, one proxying the volumetric risk, typically a weather parameter, and one proxying the price risk, typically the spot price of electricity, gas or oil. These double-indexed products already existed in the market for other financial assets (foreign exchange, bonds, commodities) [7] [36]. They are technically challenging because they require a convincing model of the joint distribution of the underlyings. <|MaskedSetence|>
|
**A**:
Weather derivatives emerged in the 1990s as a response to this need for risk transfer.
**B**: It also led to extensive research into the modeling of weather derivatives and best pricing methodologies [37] [17] [11] [1] [23] [22] [21].
By 2008, the weather market experienced a significant slowdown, with trading volumes declining to $11.8 billion in 2011 [5].
**C**: Our analysis will focus on finding a model to price temperature and spot electricity price quantos..
|
ABC
|
ABC
|
ABC
|
ABC
|
Selection 3
|
<|MaskedSetence|> It is unclear whether this is due to delayed or unreported trading volume or due to incorrectly reported open interest. In our view, the most likely scenario is that both are true, perhaps, however, not to the same degree on every exchange. Although we could not perfectly reconcile these quantities for any of the exchanges in question, we find that there are discernible differences in behavior across these exchanges. The discrepancies on ByBit and OKX are so frequent and large in magnitude that these two exchanges merit a category of their own. On these exchanges we could not reconcile trading volume with reported open interest in any time period, with the implied trading volume being in the range of hundreds of billions over and above the reported trading volume, assuming the open interest is the quantity that is correct. If in fact, however, the trading volume is the more accurately reported quantity, this would imply that the open interest on these exchanges is almost completely fabricated. This could perhaps be explained by certain incentive structures baked into the scenario: leading market participants to believe that informed investors are taking large positions in these markets (as implied by the large change in open interest) could—depending on the participants’ prior positioning—lead to panic or fear of missing out on potential profits, thereby increasing trading volume, and profit for the exchange. Given that volatility and trading volumes in Bitcoin and other cryptocurrencies have been trending lower in 2023, we believe that the latter is a more plausible explanation. <|MaskedSetence|> Although we could not reconcile the changes in open interest with trading volume, the frequency and magnitude of the discrepancies is such that it leaves room for some relatively more benign explanation (see Section 5). <|MaskedSetence|> For these exchanges we could reconcile changes in open interest with trading volume on almost all sub-periods (see Tables 4 and 5).
.
|
**A**: We find that trading volume cannot be reconciled with the reported changes in open interest for the majority of these exchanges.
**B**: The last group of exchanges is formed by Kraken and HTX, who have the lowest number of discrepancies.
**C**: Figures 1 and 2 also seem to point in that direction.
Binance, Deribit and BitMEX form, conceptually, another cluster of exchanges.
|
ACB
|
ABC
|
ACB
|
ACB
|
Selection 4
|
The paper is organized as follows. The variant of the Yard-Sale Model on which the present paper focuses is motivated and defined in Section 2. The Gini coefficient is briefly reviewed in Section 3 with particular focus on its invariance under a normalization of the equations of motion. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Plots and descriptions of the numerical method are included. The asymptotics of the modified system when a redistributive tax is incorporated are derived in Section 5 and shown to match the classical Yard-Sale Model with taxation.
.
|
**A**: In Section 4 it is proven both that the Gini coefficient increases monotonically in time under the induced dynamics and that its rate of increase may be bounded.
**B**: This result is then re-stated for a more general class of evolutionary models.
**C**: The evolutionary, integro-differential PDE are numerically solved to demonstrate the bound holding in experiment.
|
ABC
|
ACB
|
ABC
|
ABC
|
Selection 4
|
A common approach to mitigate the curse of dimensionality is the regression-based Monte Carlo method, which involves simulating numerous paths and then estimating the continuation value through cross-sectional regression to obtain optimal stopping rules. [1] first used spline regression to estimate the continuation value of an option. Inspired by his work, [2] and [3] further developed this idea by employing least-squares regression. Presently, the Least Squares Method (LSM) proposed by Longstaff and Schwartz has become one of the most successful methods for pricing American options and is widely used in the industry. In recent years, machine learning methods have been considered as potential alternative approaches for estimating the continuation value. Examples include kernel ridge regression [4, 5], support vector regression [6], neural networks [7, 8], regression trees [9], and Gaussian process regression [10, 11, 12]. In subsequent content, we refer to algorithms that share the same framework as LSM but may utilize different regression methods as Longstaff-Schwartz algorithms. Besides estimating the continuation value, machine learning has also been employed to directly estimate the optimal stopping time [13] and to solve high-dimensional free boundary PDEs for pricing American options [14].
In this work, we will apply a deep learning approach based on Gaussian process regression (GPR) to the high-dimensional American option pricing problem. The GPR is a non-parametric Bayesian machine learning method that provides a flexible solution to regression problems. Previous studies have applied GPR to directly learn the derivatives pricing function [15] and subsequently compute the Greeks analytically [16, 17]. This paper focuses on the adoption of GPR to estimate the continuation value of American options. <|MaskedSetence|> <|MaskedSetence|> They also introduced a modified method, the GPR Monte Carlo Control Variate method, which employs the European option price as the control variate. <|MaskedSetence|> In contrast, our study applies a Gaussian-based method within the Longstaff-Schwartz framework, requiring only a global set of paths and potentially reducing simulation costs. Nonetheless, direct integration of GPR with the Longstaff-Schwartz algorithm presents several challenges. First, GPR’s computational cost is substantial when dealing with large training sets, which are generally necessary to achieve a reliable approximation of the continuation value in high dimensional cases. Second, GPR may struggle to accurately estimate the continuation value in high-dimensional scenarios, and we will present a numerical experiment to illustrate this phenomenon in Section 5..
|
**A**: Their method adopts GPR and a one-step Monte Carlo simulation at each time step to estimate the continuation value for a predetermined set of stock prices.
**B**: [11] further explored the performance of GPR in high-dimensional scenarios through numerous numerical experiments.
**C**: [10] initially integrated GPR with the regression-based Monte Carlo methods, and testing its efficacy on Bermudan options across up to five dimensions.
|
CBA
|
CBA
|
ACB
|
CBA
|
Selection 1
|
<|MaskedSetence|> <|MaskedSetence|> The most well-known family of scoring functions are the Bregman divergences that elicit the mean, where a functional is called elicitable if it is a minimiser of an expected score, see Definition 2.2. Other elicitable functionals are quantiles, expectiles, and shortfall risk measures; tools used in risk management. Scoring functions are by nature asymmetric, making them ideal candidates for asymmetric cost functions in the Monge-Kantorovich OT problem. Indeed, we propose novel asymmetric Monge-Kantorovich (MK) divergences where the OT cost functions are statistical scoring functions. As a Bregman divergence elicits the mean and gives rise to a BW divergence, our new MK divergences can be seen as generalisations of BW divergences, and thus the Wasserstein distance. In addition to scoring functions that elicit the mean, we study scoring functions that elicit the quantile, the expectile, and law-invariant convex risk measures. <|MaskedSetence|> Furthermore, as an elicitable functional possesses infinitely many scoring functions, and thus gives rise to infinitely many MK divergences, the comonotonic optimal coupling is typically simultaneously optimal. Using the celebrated Osband’s principle in statistics, we propose ways to create novel MK divergences that are attained by the anti- or comonotonic coupling. Furthermore, we prove that MK divergences induced by any law-invariant elicitable coherent risk measure are attained by the comonotonic coupling. Finally, we provide two applications to robust stochastic optimisation. First, we derive sharp bounds on distortion risk measures when admissible distributions belong to a BW-ball around a reference distribution, thus significantly generalising recent results of [3], who solve this problem for the special case of a Wasserstein ball. Second, we find the cheapest payoff (reflecting terminal wealth) under the constraint that its distribution lies within a BW-ball around a benchmark distribution.
This paper is organised as follows. Section 2 introduces the MK divergences after reviewing the statistical concepts elicitability and scoring functions and the relevant topics in OT. Section 3 is devoted to MK divergences induced by elicitable risk functionals such as the quantile, expectile, and shortfall risk measure. We find that for distributions on the real line the majority of the new MK divergences are attained by the comonotonic coupling. Applications of the new divergences to risk measure bounds, significantly generalising recent results by [3], and portfolio management are provided in Section 4.
.
|
**A**: The study of elicitability is a fast growing field in statistics and at its core are scoring functions that incentivise truthful predictions and allow for forecast comparison, model comparison (backtesting), and model calibration [17, 12].
**B**: In sensitivity analysis, scoring functions are utilised for defining sensitivity measures which quantify the sensitivity of an elicitable risk measure to perturbations in the model’s input factors [13].
**C**: Interestingly, we find that most of the introduced MK divergences are attained by the comonotonic coupling.
|
ABC
|
ABC
|
BCA
|
ABC
|
Selection 4
|
Another research direction with fruitful outcomes is time-inconsistent control problem, where the Bellman optimality principle does not hold.
There are many important problems in mathematical finance and economics incurring time-inconsistency, for example, the mean-variance selection problem and the investment-consumption problem with non-exponential discounting. The main approaches to handle time-inconsistency are to search for, instead of optimal strategies, time-consistent equilibrium strategies within a game-theoretic framework. Ekeland and Lazrak [14] and Ekeland and Pirvu [15] introduce the precise definition of the equilibrium strategy in continuous-time setting for the first time. Björk et al. [5] derive an extended HJB equation to determine the equilibrium strategy in a Markovian setting. Yong [30] introduces the so-called equilibrium HJB equation to construct the equilibrium strategy in a multi-person differential game framework with a hierarchical structure. <|MaskedSetence|> In contrast to the aforementioned literature, Hu et al. <|MaskedSetence|> <|MaskedSetence|> Some recent studies devoted to the open-loop equilibrium concept can be found in [2, 3, 29, 18]. Specially, Alia et al. [3], closely related to our paper, study a time-inconsistent investment-consumption problem under a general discount function, and obtain an explicit representation of the equilibrium strategies for some special utility functions, which is different from most of existing literature on the time-inconsistent investment-consumption problem, where the feedback equilibrium strategies are derived via several complicated nonlocal ODEs; see, e.g., [26, 6].
.
|
**A**: The solution concepts considered in [5, 30] are closed-loop equilibrium strategies and the methods to handle time-inconsistency are extensions of the classical dynamic programming approaches.
**B**: The open-loop equilibrium control is characterized by a flow of FBSDEs, which is deduced by a duality method in the spirit of Peng’s stochastic maximum principle.
**C**: [20] introduce the concept of open-loop equilibrium control by using a spike variation formulation, which is different from the closed-loop equilibrium concepts.
|
ACB
|
ACB
|
CBA
|
ACB
|
Selection 1
|
<|MaskedSetence|> Section 3.1), PBS relay data (cf. <|MaskedSetence|> Section 3.3), and cryptocurrency price data (cf. Section 3.4). <|MaskedSetence|> Thus, our data set covers the entire history of the Ethereum PoS up until 31 October 2023.
.
|
**A**:
To measure the prevalence and impact of non-atomic arbitrage trades on the Ethereum ecosystem, we collect four different types of data.
Namely, we collect Ethereum blockchain (cf.
**B**: All data we collect from block 15,537,393, i.e., the block of the merge on Ethereum on 15 September 2022, to block 18,473,542, i.e., the last block on 31 October 2023.
**C**: Section 3.2), Ethereum network data (cf.
|
ACB
|
ACB
|
ACB
|
ACB
|
Selection 2
|
We posit that the state-of-the-art LLM, GPT-4, possesses the requisite capacity to weigh and reason upon these different categories of data, as evidenced by its demonstrated proficiency in complex financial reasoning tasks (Callanan et al., 2023). In its operation, the GPT-4 model is prompted to adopt the role of an expert financial analyst. <|MaskedSetence|> By applying this technique, MarketSenseAI can effectively analyze and synthesize news, company fundamentals, stock performance data, and macroeconomic factors that could influence the given stock, thus providing reasoned and structured insights into stock selection. This approach is particularly useful in complex domains like finance, where the ability to navigate through multifaceted data and reason like an expert is crucial. <|MaskedSetence|> <|MaskedSetence|> The prompt structure is as follows:
.
|
**A**: This approach employs a Chain of Thought methodology (Wei et al., 2022), guiding the model through a logical, multi-step reasoning process that reflects an expert financial analyst’s thinking pattern.
**B**: Concurrently, in-context learning is employed to dynamically adjust the analysis based on current financial situations and evolving market data (Dong et al., 2022).
**C**: This dual strategy allows MarketSenseAI to provide deep insights that adapt in changing market conditions and investors’ preferences, representing a significant advancement in AI-driven financial analysis.
|
CAB
|
ABC
|
ABC
|
ABC
|
Selection 2
|
<|MaskedSetence|> We obtain error bounds for its density approximation compared to the randomised model as well as its characteristic function.
The extension to regime switches at stochastic times is the subject of Section 4. After enhancing the underlying probabilistic framework to allow for the stochastic switching times, we follow the previously established procedures of Sections 2 and 3 in constructing composite processes, transforming these into local volatility models and obtaining their characteristic function. Here, we distinguish between two types of stochastic switching, involving a fixed and a random number of switches between regimes.
In Section 5, we propose a Markov-modulated randomised framework in which the regime switches are driven by an underlying Markov chain and obtain the characteristic function of the underlying process.
Numerical results are showcased in Section 6, featuring trajectories of both the local volatility and stochastic switching models. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: In Section 3, the local volatility model which circumvents potential issues with the randomisation formulation is constructed.
**B**: Furthermore, we illustrate a financial application by solving the pricing problem of a European option with an underlying that is modelled using the proposed local volatility models.
**C**: The results are summarised in Section 7 and additional proofs are given in the appendix.
.
|
ABC
|
CBA
|
ABC
|
ABC
|
Selection 1
|
<|MaskedSetence|> There are two treatments. In Skill, subjects learn that experts can pay 10 Coins each round to increase their diagnostic precision to 90%. In Algorithm, experts can pay 10 Coins each round to rent an algorithmic decision aid that increases the expert’s maximum diagnostic precision to 90% if used correctly. For the algorithmic decision aid, consumers also learn that experts are not forced to use the system, but can choose to ignore it. All subjects know that consumers pay for the investment by automatically paying 10 Coins more per treatment if they choose to approach an investing expert. <|MaskedSetence|> Experts first decide whether they want to invest, then choose their price vector, and proceed to diagnosis and treatment. During the diagnosis, investments allow experts to utilize four input numbers, and those in Algorithm can forego the decision aid with the click of a button. <|MaskedSetence|> Otherwise, nothing changes. Upon completing the credence goods experiment, subjects proceed to a short post-experimental questionnaire and answer a battery of demographic questions as well as a question about their risk attitudes (Dohmen et al., 2011).
[Game timeline: Nature draws problem h → Experts make investment decision d → Experts set prices P → Consumers observe d and P → Consumers choose Expert or σ → Experts receive diagnostic signal k → Experts choose HQT or LQT → Payoffs are realized.]
|
**A**: Consumers observe each expert’s investment decision.
**B**: Then, subjects complete another 15 rounds.
**C**:
After 10 rounds, all subjects are informed that experts now have the opportunity to invest into their diagnostic precision (Figure 1).
|
CBA
|
ACB
|
CBA
|
CBA
|
Selection 4
|
We investigate the distinct behaviors of traders on exchanges within different models by examining the relationship between the price volatility of the underlying asset, i.e., Bitcoin, and various trading activities, i.e., trading volume, open interest, liquidation, and leverage. Our study encompasses prominent exchanges of LOB Model such as Binance (https://www.binance.com/en) and Bybit (https://www.bybit.com), alongside exchanges of Oracle Pricing Model like GMX (https://gmx.io) and GNS (https://gains.trade), and Perpetual Protocol V2 (https://perp.com) which employs VAMM Model. <|MaskedSetence|> While the effects of trading activity on price volatility in exchanges of LOB Model reflect the market depth and can be explained by Kyle’s model [13], the VAMM Model (Perpetual Protocol V2) introduces a nuanced dynamic in the impact of open interest, varying between long and short positions. This asymmetry is attributable to the VAMM’s price formation mechanism, where the rate of change in an asset’s relative price inversely correlates with its abundance in the liquidity pool. Consequently, market depth increases with rising open interests in short positions, as the underlying asset accumulates in the liquidity pool, and the reverse holds true for long positions. While in exchanges of LOB and VAMM Model traders’ trading activity help form the price, traders in exchanges of Oracle Pricing Model (GMX and GNS) accept the price offered by the Oracle and thus act as pure price takers. Therefore, trading activity should be interpreted as purely traders’ reaction to the price change of the underlying asset. When the price become more volatile, trading volumes increase while open interests decrease, with change in long and short positions different. These empirical estimations can be explained by the predictions based on Shalen’s dispersion of beliefs model (1993) [18], which address the asymmetry of information traders can access. <|MaskedSetence|> Akyildirim et al. [1] studied the impact of Bitcoin futures on the cryptocurrency market, especially the introduction of CME and CBOE futures contracts in December 2017. Alexander et al. <|MaskedSetence|> Hung et al. [12] identifies substantial pricing effects and breakpoints in market efficiency, indicating the dominant role of Bitcoin futures in price discovery compared to spot markets. As a special case of future contracts, perpetual future contracts are scarcely investigated, although with much higher trading volume. Besides the theoretical discussion on the arbitrage between perpetual future markets and spot markets [11], there lacks enough empirical works examining perpetual future traders’ behavior, especially in DEXs. After the pioneering work by Soska et al. [20], which conducted the first analysis on the trader profile for perpetual future contracts in BitMEX, i.e., a CEX, Alexander et al. [4] constructed the optimal hedging strategy with empirical corroboration.
.
|
**A**: We also find that uninformed traders tend to overreact more to positive news than negative, evidenced by more of the long positions accumulated.
Existing literature evaluates cryptocurrency future contracts mainly in terms of their relationship with the spot market.
**B**: [2] found that BitMEX derivatives lead the price discovery process over Bitcoin spot markets.
**C**: The empirical evidence shows that, for exchanges of LOB Model (Binance and Bybit), price volatility is positively related with trading volumes but negatively related with open interests, consistent with findings in the traditional future markets.
|
CAB
|
CBA
|
CAB
|
CAB
|
Selection 3
|
Section 3 demonstrated that a little-known or understood ‘funding ratio condition’, contained within the USS self-sufficiency(SfS) definition, is strongly influenced by the gilt yield and produces unnecessarily high self-sufficiency liabilities. Further, this condition does not predict the ability to pay benefits. Preliminary independent analysis, presented in the same section, strengthens these conclusions.
The high level of the SfS liabilities then sets the post-retirement discount rate at a low level. This is because the post-retirement discount rate is chosen to be equal, or very close, to the SfS discount rate. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> The Target Reliance exhibits only a small Amber window, and so.
|
**A**: This leads to the 97-99% correlation of the post-retirement discount rate with the gilt yield, as seen in Figure 5.
Section 4 considered the USS metrics of Actual and Target Reliance.
**B**: Target Reliance is defined as SfS liabilities minus TP liabilities plus the USS estimated cost of moving to SfS.
**C**: The Target Reliance status of Red, Amber or Green aims to measure whether the cost of moving to self-sufficiency could be borne by employers.
|
CAB
|
ABC
|
ABC
|
ABC
|
Selection 2
|
Next, we study the impact of varying the credit quality or the relative weights of individual loans in the portfolio on the GA. <|MaskedSetence|> We depict the composition of these portfolios ordered by an increasing exposure share together with the assigned PD in Figures 5.1 (b) and 5.2 (b), respectively.
To study the effect of a change in the credit quality of an individual borrower on the name concentration risk of the credit portfolio, we consider a single notch downgrade for each of the obligors in the respective portfolios. <|MaskedSetence|> <|MaskedSetence|> We show these effects in Figure 5.1 (c) and 5.2 (c).
.
|
**A**: Therefore, to consider an average-sized portfolio, we construct a portfolio consisting of only 50 obligors sampled according to our input distributions for the actuarial and the MtM case.
**B**: It can be observed that the effect on the GA is the more pronounced the larger the exposure of the respective obligor is, i.e., if a borrower with a large relative weight gets downgraded, this increases the name concentration of the portfolio and therefore leads to a rising GA.
**C**: The opposite effect occurs when the rating of the borrower improves.
|
ACB
|
ABC
|
ABC
|
ABC
|
Selection 4
|
Management Discussion: Upon the consolidation of expert analyses into a summary report, this document is forwarded to a panel of management agents. <|MaskedSetence|> Mirroring real-world organizational dynamics, these management agents are engineered to adopt high-level perspectives, contrasting with the detail-oriented focus of the data expert agents. <|MaskedSetence|> Through their deliberations, the management agents exchange views, debate interpretations, and evaluate the implications of the findings. <|MaskedSetence|>
|
**A**: This design ensures that strategic insights and broader contexts are considered in the decision-making process.
**B**: These agents, each focusing on distinct areas, engage in a review and discussion of the report’s findings.
**C**: At the end of the discussion, these agents reach a conclusion on the next course of action.
.
|
BAC
|
BAC
|
BAC
|
ABC
|
Selection 2
|
In our simulation, the ECN is gone. <|MaskedSetence|> When the simulator replays market activity there is a single predetermined price path. Asset prices may diverge slightly but they will all come back to the real history. In the original work, all agents use a shared policy which they learn collaboratively. <|MaskedSetence|> In our case, the shared policy creates behavior that is quite correlated despite the fact that each agent has its own set of hyper-parameters as well as reward structure. <|MaskedSetence|> Only in a realistic market can we try to understand RL agents’ behavior. We present the specific formulations for both MM and LT agents in the following subsection..
|
**A**: This allows us to diverge from the one reality problem inherent in financial simulators.
**B**: We believe this approach works and it is relevant in their case, precisely due to the fixed flow ECN.
**C**: This is why in our implementation each agent has its own policy function.
Since our system does not have a stream of orders guiding the evolution of asset prices, we have to establish the realism of the simulated market activity.
|
BCA
|
ABC
|
ABC
|
ABC
|
Selection 3
|
We also extend the work of Zhao et al. (2019), who used deep learning models to process interior and exterior photos of houses from listings in Australia to come up with a measure of how aesthetically pleasing the photos were. This measure was then used as an input in a model that used multiple machine learning methods to predict housing prices.
This discussion provides an outline for how our current work differs from and contributes to the existing literature. The first contribution is related to our methodology. <|MaskedSetence|> In Ludwig and Mullainathan (2024), predictions come from a convolutional neural network. <|MaskedSetence|> (2019), predictions come from both ordinary least squares and a convoluted model that augments a linear model with the predictions of desirability from a neural network. <|MaskedSetence|> We take a broader approach to compare the predictions from each of the models considered in the literature: ordinary least squares, neural networks, and a convoluted model that uses predictions from a neural network in ordinary least squares.
.
|
**A**: In Law et al.
**B**: Existing research that uses deep learning models has so far primarily focused on using one or two prediction methods.
**C**: This was based on work by Peterson and Flanagan (2009), Ahmed and Moustafa (2016), which compare the results of predictions from standard linear hedonic price models to predictions from neural networks.
|
BAC
|
BAC
|
BAC
|
ABC
|
Selection 1
|
Given the recent emergence of the COVID-19 pandemic, the body of literature related to this topic is relatively limited. However, many scholars have studied the phenomenon from various aspects to restore the world’s economic status and the health status of people. <|MaskedSetence|> As a result, the economic and health consequences of COVID-19 were more damaging and ubiquitous than those of the other two outbreaks. Reference (6) disentangled these economic and health effects, and they argued that health should be prioritized over material well-being. <|MaskedSetence|> Reference (8) reported their results in terms of both health and economic outcomes. Reference (9) examined the nexus between job loss and mental health. They found that the effects of being unemployed are different for every individual. <|MaskedSetence|>
|
**A**: Some were more anxious and stressed than others (9)..
**B**: This section summarizes such studies.
There were two more main coronaviruses (SARS-CoV in 2002 and MERS-CoV in 2012) before COVID-19, but they could not spread as much as the COVID-19 pandemic did.
**C**: They provided an example to support their claim pointing out the fact that the expenditure on health does not say anything about the outcomes on health (21, 6).
|
BCA
|
BCA
|
CAB
|
BCA
|
Selection 4
|
From a risk management point of view, the message from Sections 6.1 and 6.2 is clear. If a careful statistical analysis leads to statistical models in the realm of infinite means, then the risk manager at the helm should take a step back and question to what extent classical diversification arguments can be applied. <|MaskedSetence|> <|MaskedSetence|> Of course, the discussion concerning the practical relevance of infinite mean models remains. <|MaskedSetence|> From a methodological point of view, we expect that the results from Sections 4 and 5.2 carry over to the above heterogeneous setting.
.
|
**A**: When such underlying models are methodologically possible, then one should think carefully about the applicability of standard risk management arguments; this brings us back to Weitzman’s Dismal Theorem as discussed towards the end of Section 1.
**B**: As a consequence, it is advised to hold on to only one such super-Pareto risk.
**C**: Though we mathematically analyzed the case of identically distributed losses, we conjecture that these results hold more widely in the heterogeneous case.
|
CBA
|
CBA
|
CBA
|
BAC
|
Selection 3
|
Figure 4 illustrates the performance of a news-based strategy against a naive long-only strategy within the S&P 500 universe. <|MaskedSetence|> In Figure 5, we observe a similar analysis within the NASDAQ universe. <|MaskedSetence|> Figure 6 extends this comparison to the Major Equities Markets, offering a broader view of the strategy’s applicability in a global context.
Figures 7, 8, and 9 delve into strategies that amalgamate news with stress index data. <|MaskedSetence|>
|
**A**: This figure highlights the temporal evolution of the strategy’s effectiveness in comparison to the benchmark.
**B**: The subplot is particularly insightful for understanding the allocation differences underlined by the news-based strategy.
**C**: These figures offer an intriguing perspective on the synergistic effects of combining these two data sources across different market universes..
|
ABC
|
ABC
|
ACB
|
ABC
|
Selection 2
|
<|MaskedSetence|> <|MaskedSetence|> 6. It can be observed from the figure that the performance of the proposed Granular Semantic method is almost consistent and as good as the other methods with low impurity (< 10%). Once the impurity increases, the proposed method performed better in all cases compared to the other methods. <|MaskedSetence|>
|
**A**: To verify the reliability and robustness of the proposed method in comparison with the existing data imputation methods we performed the study by varying the amount of injected impurities.
**B**: It proves the utility of considering the semantics of the feature and dropping the missing values while forming the granules.
Figure 6: Variation in average error with increasing impurity over all the five year’s data
.
**C**: The results of it is shown in Fig.
|
ACB
|
ACB
|
ACB
|
ACB
|
Selection 2
|
<|MaskedSetence|> Such scenarios are common in the financial technology sector, including Neobanks and FinTech companies. By examining the specific case of Fintonic’s loan and fraud model, we illustrate the potential advantages and early-stage applicability of QML for this particular scenario. <|MaskedSetence|> <|MaskedSetence|> Our initial findings reveal that SQS not only exhibits a superior ability to identify patterns from a minimal dataset but also demonstrates enhanced performance compared to data-intensive algorithms like XGBoost, in line with previous results in the literature [7] that indicate superior generalization capability considering fewer datapoints. Such advancements position SQS as a valuable asset in the highly competitive landscape of FinTech and Neobanking, suggesting its potential to redefine industry standards through its efficient data processing and analytical prowess.
.
|
**A**:
In this work we propose an end-to-end model composition algorithm, focusing on the development and integration of efficient quantum kernels to address the limitations of classical models, particularly for unbalanced datasets with a small number of samples.
**B**: We introduce a novel method named Systemic Quantum Score (SQS) that leverages evolutionary algorithms [3] for efficient Quantum Kernel design.
**C**: This innovative approach is designed to surpass the capabilities of traditional classical models, particularly within the demanding context of the Finance sector.
|
ABC
|
ABC
|
ABC
|
ABC
|
Selection 4
|
<|MaskedSetence|> <|MaskedSetence|> However, the application of TGN for GAD in the finance domain is still not well-established.
Addressing this gap, our research presents a comprehensive framework for utilizing TGN in the financial domain, specifically for anomaly detection. <|MaskedSetence|> Our findings reveal that TGN, with its ability to learn from dynamic edges, shows remarkable performance, demonstrating its potential as a powerful tool in financial fraud detection. This research not only advances the field of GAD in finance but also opens new avenues for the application of dynamic graph models in real-world scenarios.
.
|
**A**: TGN can learn from a graph that evolves over time, making it potentially suitable for applications like real-time fraud detection in financial transactions.
**B**:
TGN has emerged as a promising model capable of capturing these dynamic changes in nodes and edges effectively.
**C**: We experimented with various models within this framework, comparing their performance against traditional static GNN models and hypergraph neural network baselines.
|
BAC
|
CBA
|
BAC
|
BAC
|
Selection 4
|
As a class, truck drivers are much less embedded within a job than workers in other occupations because external embeddedness considerations rarely fix workers to a particular job. For example, a truck driver does not necessarily need to move locations to take a job with a different company, even if that company is located far away. <|MaskedSetence|> Drivers work remotely, making them less beholden to relational links that are important in other occupations. <|MaskedSetence|> Truck driver embeddedness relies more narrowly on considerations of job fit, resulting in more sensitivity to shocks.
Truckers experience a variety of shocks in the course of their duties which may trigger reassessment of current employment including traffic congestion, equipment failures, detention during loading and unloading, variation in pay, and so on. <|MaskedSetence|> The ultimate effect of a shock, with respect to retention, is predictable based on interest alignment between firm and employee.
.
|
**A**: Additionally, skills are highly transferable across firms so that there is comparatively little sacrifice involved in changing jobs.
**B**: Truckers are also less internally embedded.
**C**: The effect of the shock, however, is moderated by embeddedness and, in particular, by what the shock reveals about job-fit.
|
BAC
|
CAB
|
BAC
|
BAC
|
Selection 3
|
<|MaskedSetence|> <|MaskedSetence|> We have compared, and tested on historical data from the stocks included in the S&P 500 index, various centrality measures (possibly depending on a parameter α) as well as possible associated tweaks such as how to exactly construct the underlying adjacency matrix, whether to look at central or peripheral nodes, and how to precisely use the centrality measure in building the portfolio. We note that, although perhaps not as thoroughly, the latter two aspects are studied already in [16], whereas the former seems to be investigated here for the first time.
We propose the use of a much wider range of centrality measures, including the fairly new NBTW and the use of subgraph centralities (Exponential, Katz and NBTW) which perform excellently on our tests. In addition, we propose the use of a plethora of transformations of correlations matrices and alternative formulations for the adjacency matrices. <|MaskedSetence|>
|
**A**: We also utilize a simple threshold instead of other more sophisticated filtering techniques and we depict the most frequently used threshold values.
.
**B**: In this paper, we aim to fill these gaps by proposing a more systematic comparison of portfolio selection strategies purely based on graph centrality, including a study of potentially different behaviours when a parameter, such as Katz’s α, varies within its range of possible values.
**C**: Moreover, we include in our analysis more recent and mathematically sophisticated measures that have been shown to be qualitatively different than its predecessors, such as NBTW centrality [2, 3, 10].
|
CAB
|
BCA
|
BCA
|
BCA
|
Selection 2
|
<|MaskedSetence|> <|MaskedSetence|> However, the latter is computationally expensive due to the configuration time of the different hidden layers for each feature in the model. <|MaskedSetence|> In any case, the improvement of the svc classifier in Table 9 over the rule-based baseline was at least 8% for all metrics.
The effect of numerical and temporal features became more apparent when we checked the behaviour by class. Table 10 shows the results of the first experiment in that case. Note that precision and recall were very asymmetric between past and future (∼10% precision asymmetry with the svc classifier, ∼19% recall asymmetry with the nn classifier). In addition, the precision of both classifiers was barely above 75% for future..
|
**A**: By comparing these results with those of the second experiment (Table 9), we observe a ∼4% improvement in accuracy for the nn classifier thanks to the numerical and temporal features.
**B**: The best classification model was svc followed by nn.
**C**:
Table 8 shows the results obtained with n-grams.
|
CBA
|
CBA
|
ACB
|
CBA
|
Selection 1
|
We evaluated inter-annotator agreement using two well-known state-of-the-art metrics: Alpha-reliability and accuracy.
Table 11 shows the coincidence matrix of relevance across all annotators. <|MaskedSetence|> <|MaskedSetence|> The mean values were 0.552 and 0.861, respectively. <|MaskedSetence|> Inter-agreement accuracy was very high, at over 80%..
|
**A**: The two components in the diagonal show the number of news sentences on which all the annotators agreed, while the other two components show the cases on which at least one annotator disagreed.
**B**: Previous works have considered an Alpha-reliability value above 0.41 to be acceptable [66, 67, 68, 69].
**C**: Tables 12 and 13 show the Alpha-reliability and accuracy coefficients by pairs of annotators.
|
CAB
|
ACB
|
ACB
|
ACB
|
Selection 2
|
Derakhshan & Beigy [4] introduced the LDA-POS method, which leverages part-of-speech information in the LDA model to separate words based on their part-of-speech tags. This model demonstrated notable results on two datasets, one in English and the other in Persian. The study utilized a comment dataset from Yahoo Finance covering the period from 2012 to 2013 and data from the Sahamyab website in Iran spanning from 2016 to 2018. <|MaskedSetence|> It was compared to the Bag-of-Words (BOW) model and the sentiment polarity model. Experimental results demonstrated that, on both the validation and independent test sets, the LDA-POS method achieved higher average prediction accuracy compared to the BOW model. The prediction accuracy of the LDA-POS method on English and Persian datasets was comparable, suggesting its suitability for different languages. While sentiment analysis contributed to improved prediction accuracy, the LDA-POS method outperformed both the sole use of sentiment polarity and the BOW method. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: The LDA-POS method combines the LDA topic model with part-of-speech tagging for comment analysis.
**B**: However, the obtained accuracy was not significantly high, with values of 56.24% and 55.33%, respectively [4].
**C**: This could be attributed to the relatively small size of the dataset, which may have limited the model’s ability to capture sufficient patterns and generalize effectively.
.
|
ABC
|
ABC
|
BCA
|
ABC
|
Selection 2
|
These bounds were more recently enforced by additional information on the dependence structure, which led to the creation of improved Fréchet–Hoeffding bounds and the pricing of multi-asset options in the presence of additional information on the dependence structure, see e.g. <|MaskedSetence|> [22].
The setting of dependence uncertainty is closely linked with optimal transport theory, and its tools have also been used in order to derive bounds for multi-asset option prices, see e.g. <|MaskedSetence|> [3] for a formulation in the presence of additional information on the joint distribution.
More recently, Aquino and Bernard [1], Eckstein and Kupper [11], and Eckstein et al. <|MaskedSetence|>
|
**A**: Tankov [24], Lux and Papapantoleon [17], and Puccetti et al.
**B**: Bartl et al.
**C**: [12] have translated the model-free superhedging problem into an optimization problem over classes of functions, and used neural networks and the stochastic gradient descent algorithm for the computation of the bounds.
.
|
ABC
|
ABC
|
ABC
|
ABC
|
Selection 1
|
In recent years, the rapid progress in manufacturing of quantum processing units (QPUs) and the development of hybrid quantum-classical workflows, not only for universal quantum computers (Abrams and Lloyd, 1998; Farhi et al., 2014; Zheng, 2021; Brandhofer et al., 2022; Chen et al., 2023), but also for quantum annealers (Elsokkary et al., 2017; Orús et al., 2019; Cohen et al., 2020a, b; Phillipson and Bhatia, 2021; Grant et al., 2021; Romero et al., 2023; Palmer et al., 2022; Jacquier et al., 2022), has re-ignited interest in this type of problems. Meanwhile, quantum annealers have been shown to provide quantum advantage for certain classically intractable problems (King et al., 2023) and seem to provide a promising platform for solving quadratic binary optimization and integer quadratic optimization, even in the presence of hard and weak constraints. <|MaskedSetence|> For a broader review of quantum computing applications in finance, see refs. Orús et al., 2019; Jacquier et al., 2022; Herman et al., 2023.
Recently, awareness of environmental, social and governance (ESG) aspects of investing has grown among private and institutional investors alike. <|MaskedSetence|> The trend toward more ESG awareness is likely to get further amplified by regulatory updates on international- and national level. See for example ref. Bruno and Lagasio, 2021 for an overview of ESG regulation in the banking sector across Europe. In January 2023, European authorities agreed on a European implementation of the internationally developed Basel III update that will result in updated capital requirements regulation (CRR) and capital requirements directive (CRD), including requirements on ESG awareness and inclusion into risk management. <|MaskedSetence|> The inclusion of ESG risk as an additional risk factor besides historical covariance into the Markowitz framework (see eq. 1) is actively being investigated (Pedersen et al., 2021; Utz et al., 2014; Chen et al., 2021; López Prol and Kim, 2022)..
|
**A**: Where up to now integrating ESG constraints in investment decisions has been up to individual preferences, it can be expected to become a required standard in the near future in the EU.
**B**: Based on these prospects, portfolio optimization is a natural application for quantum computing in finance, and in particular quantum annealers.
**C**: A growing number of financial products caters to the growing demand and incorporates ESG aspects into the product design.
|
BCA
|
BCA
|
BCA
|
BCA
|
Selection 2
|
Figure 1 illustrates how karma works out to the benefit of everyone at hand of an example involving three intersection encounters. Let’s start with the encounter in the top center. <|MaskedSetence|> As a result, the blue car’s karma account goes up by 4 to 13. Let’s move along clockwise. Blue now happens to have high urgency and bids 4, thus outbidding and getting priority over orange whose karma account is nine and bids 3. <|MaskedSetence|> Let’s move along clockwise. <|MaskedSetence|> Thus the circle closes, et cetera, et cetera.
.
|
**A**: Orange now has high urgency and bids 4, thus outbidding low-urgency lila who has 5 karma left and bids 1.
**B**: Now the orange karma goes up by 4 to 13.
**C**: The high-urgency lila car has a current karma account of 9 and bids 4, thus outbidding the low-urgency blue car whose karma account is also 9 but bids 2.
|
CBA
|
CBA
|
CBA
|
ABC
|
Selection 3
|
Our framework is characterized by multiple time periods, variation in treatment timing, and staggered adoption (once a tweet is treated, it remains treated in the following periods). To estimate the average treatment effects in this difference in difference (DiD) setup, we use the Callaway and Sant’Anna, (2021) estimators. For all tweets, we analyze the diffusion of quotes, replies, and retweets by analyzing the volume of virality per one hour interval from the publication of the tweet up to 36 hours after the publication. We use doubly robust DID estimators with bootstrapped standard errors and no anticipation. <|MaskedSetence|> In the main specification, we focus on tweets in English but we also present our results when we use tweets in all languages in Appendix A.
We compare the diffusion of tweets with a visible Community Note rated just above the publication threshold (between 0.40 and 0.43) with tweets having no visible Note and rated just below the publication threshold (between 0.37 and 0.40). The control group is composed of 711 tweets and the treated group of 575 tweets. <|MaskedSetence|> <|MaskedSetence|> We also add as a control the sentiment of the tweet computed using Vader (Hutto and Gilbert,, 2014) and the topic of the tweet identified through a Latent Dirichlet Allocation (LDA) (Blei et al.,, 2003). The nine topics that we identify with the LDA (see Appendix B) are related to the most controversial topics on social media, including politics, the Covid-19, the war between Russia and Ukraine, and the war between Israel and Hamas..
|
**A**: To maintain data integrity and minimize the influence of tweets with low virality, we remove all tweets that gathered less than one retweet per hour and notes that have changed status more than 2 times.
**B**: We consider a large number of covariates in our model to control for potential confounding factors or sources of variation.
**C**: We use the log of the number of followers of the user who posted the tweet, a dummy variable to capture if the tweet contains an image, an url, a mention or a hashtag, and day-of-the week and time of the day dummies.
|
ABC
|
CAB
|
ABC
|
ABC
|
Selection 4
|
Our paper delivers such a quantification. Particularly, we estimate the level of trade barriers imposed by the Iron Curtain, and how they fluctuated over time. <|MaskedSetence|> (2020) to simulate the trade and welfare effects of a counterfactual world without the Iron Curtain. <|MaskedSetence|> The IMF’s DOTS database, for example, which is one of the main sources of bilateral trade data in the postwar period, does not include trade flows involving East Germany and the USSR, either as exporters or importers, for many years.
We overcome this problem by collecting information from several editions of the statistical yearbooks of East Germany and the statistical reviews of foreign trade of the Soviet Union. <|MaskedSetence|> Where the DOTS database contains data for specific years, we cross-check our approach to ensure that our methodology closely matches the reported information..
|
**A**: A major challenge is the lack of complete historical data on bilateral trade flows for important countries belonging to the Eastern bloc.
**B**: In a second step, we use our estimates and a state-of-the-art quantitative trade model that belongs to the class of “universal gravity” models described by Allen et al.
**C**: We use exactly the same methodology as the IMF to incorporate these additional observations into the DOTS database.
|
BAC
|
BAC
|
CAB
|
BAC
|
Selection 2
|
Another contribution of my paper is to the body of research examining social media’s impact on capital markets. <|MaskedSetence|> (2014), who demonstrate that firms can diminish information asymmetry among investors by using Twitter to widely distribute news, press releases, and other disclosures. <|MaskedSetence|> (2018) report that a significant portion of S&P 1500 firms have established a corporate presence on social media platforms like Facebook and Twitter. Additionally, there’s a growing body of work, including research on StockTwits and Twitter, that investigates how investors’ engagement with various online platforms, from search engines and financial websites to forums, influences market dynamics. This line of inquiry has yielded mixed results on the predictive power of online information for future earnings and stock returns. For instance, Da et al. (2011) use Google search volume as an indicator of investors’ information demand, finding that increased searches foreshadow short-term stock price increases followed by a reversal within a year. Drake et al. (2012) observe that the correlation between returns and earnings weakens when Google searches spike before earnings announcements. <|MaskedSetence|> Chen et al. (2014) show that user-generated content on the Seeking Alpha platform can forecast earnings and long-term stock returns post-report publication. The literature also delves into how social media activity around earnings announcements affects investor behavior, with Curtis et al. (2014) finding a connection between social media buzz and heightened sensitivity to earnings news and surprises, and Cookson and Niessner (2020) indicating that while StockTwits discourse may not directly impact market movements, the platform’s message disagreements are a reliable predictor of unusual trading volumes. (Additional studies by Vamossy (2023) and Vamossy (2021).)
.
|
**A**: Jung et al.
**B**: Studies by Antweiler and Frank (2004) and Das and Chen (2007) link the volume of message board posts to stock return volatility but not to the returns themselves.
**C**: The relevance of social media in this domain is underscored by studies such as Blankespoor et al.
|
CAB
|
CAB
|
BAC
|
CAB
|
Selection 2
|
Nonetheless, we investigate the endogenous adoption of anti-discrimination laws at the state level. <|MaskedSetence|> It is possible that controlling for state-level polling is not the best way to capture sentiment toward LGB workers since it is possible to discriminate against people based on sexual orientation and still support their right to marry. <|MaskedSetence|> <|MaskedSetence|> Specifically, we estimate this equation:
.
|
**A**: However, it seems plausible that the changes in state-level support for same-sex marriage are highly correlated with changes in sentiment toward LGB workers such that it will suffice for a suitable proxy.
**B**: To better get at the question of endogenous adoption of laws, we create an event-study plot showing how state laws change support for same-sex marriages.
**C**: Specifically, we use state-level polling information on support for same-sex marriage as a proxy for general sentiment toward LGB workers.
|
CAB
|
BCA
|
CAB
|
CAB
|
Selection 1
|
In this paper, we have explored the implementation of subscription offers within the e-grocery sector to address inventory challenges. As subscription offers typically involve price reductions, we have outlined a three-step procedure ensuring that offering price reductions positively impacts profitability. <|MaskedSetence|> Throughout our analysis, we have considered varying purchase probabilities for different products, as well as the diverse costs incurred by retailers when inaccurately anticipating inventory levels. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: Our approach involves first calculating the expected planning costs of uncertainty, assessing the value of advanced demand information, and then determining the appropriate level of price reduction.
**B**: Our findings reveal several insights:
•.
**C**: Additionally, we recognise that not all customers will be willing to subscribe to offers.
|
ACB
|
ACB
|
CAB
|
ACB
|
Selection 2
|
The literature on the social cost of carbon is dominated by Western scholars. <|MaskedSetence|> We find that Western scholars advocate a higher social cost of carbon than would national representatives. The social cost of carbon may fall further if we weigh the results by the number of people.
Having extensively surveyed the literature on the social cost of carbon (Tol, 2023, 2024a), we are fairly sure that this is the first paper to try and establish a representative social cost of carbon. It should not be the last. The Falk data are indicators of individual impatience and risk-taking that we assumed to be proportional to the presumably social pure rate of time preference and elasticity of intertemporal substitution according to Drupp. These studies are, in our opinion, the best available but not without flaws. Connecting two disparate datasets is never easy, but we here have a mismatch of both what is measured and where. In the appendix, we also find somewhat different results for the Hofstede data and the literature review. <|MaskedSetence|> We abstract from uncertainty and inequity, and so dodge the question whether the inverse of the rate of intertemporal substitution equals the rate of risk aversion and the pure rate of inequity aversion (Agneman et al., 2024, Anthoff and Emmerling, 2019, Ha-Duong and Treich, 2004, Saelen et al., 2009, Tol, 2010). Neither the Drupp nor the Falk data allow us to make this distinction. We explored only a small part of the parameter and model space of the social cost of carbon: We use a model with a single region and a single sector; we abstract from the impact of climate change on economic growth; we ignore uncertainty, ambiguity, and stochasticity; we omit fat tails and tipping points; and so on. <|MaskedSetence|>
|
**A**: The analysis here should therefore be repeated when better data become available.
**B**: More research is therefore needed but—since none of these extensions affects the relationship between the social cost of carbon on the one hand and the pure rate of time preference and the elasticity of intertemporal substitution on the other—we are confident that this would not detract from our key finding: The ethical values assumed by experts systematically deviate from the world population so that published estimates of the social cost of carbon are unrepresentative and too high.
.
**C**: We calibrate the pure rate of time preference and the rate of risk aversion to representative data for 76 countries.
|
CAB
|
CAB
|
CAB
|
BAC
|
Selection 3
|
<|MaskedSetence|> The CFL (Courant-Friedrichs-Lewy) convergence condition for the associated finite difference scheme is determined and written explicitly. For the proper calibration, our model can avoid overpricing options at the money, and underpricing options at the ends, either deep in the money (when the option has what is also called an intrinsic value, i.e. the real value of the option, that is to say the profit that could be made in the event of immediate exercise; it means that the value is at a favorable strike price relative to the prevailing market price of the underlying asset; yet, this does not mean that the trader will be making profit, since the expense of buying and the commission prices have also to be considered), or deep out of the money, that is, when the option has what is also called an extrinsic value, i.e. <|MaskedSetence|> In such a case, the Delta, i.e. <|MaskedSetence|> This work opens the way to an empirical investigation and an inverse problem of the probability measure μ.
.
|
**A**: a value at a strike price higher than the market price of the underlying asset.
**B**: the Greek which quantifies the risk, is less than 50..
**C**: Special treatment is given to the self-similar case by writing an explicit formula, which enables computation of the solution.
|
CAB
|
CAB
|
CAB
|
ABC
|
Selection 2
|
<|MaskedSetence|> <|MaskedSetence|> In Step 2 we decide the number of Horizontal Components and their h % weight to their relative Virtual Positions. In Step 3 we decide the number of Vertical Components and their v % weight to their relative Horizontal Components. <|MaskedSetence|>
.
|
**A**: Finally, in Step 4 we design the Waterfall Configuration by connecting each x Cost and y Note Position to their Vertical Components.
**B**:
Figure 1: Synoptic representation of the 4 steps required to Designing the Positions.
**C**: In Step 1 we compute the Virtual Positions.
|
BCA
|
BCA
|
BCA
|
BCA
|
Selection 2
|
The choice of the possible best target for a deal is made in a complex evolving landscape of partners and competitors, involving a huge effort in terms of time and human capabilities. In this paper, we propose an automatized, machine learning-inspired approach to quantify the closeness between two firms in terms of their patenting activities, and we test this and other measures in an out-of-sample forecast exercise. <|MaskedSetence|> In order to build a quantitative measure of the similarity between companies, we draw inspiration from the Economic Complexity framework Hidalgo and Hausmann (2009). In particular, our investigation centers on the concept of "Relatedness" (Hidalgo et al. (2018, 2007); Zaccaria et al. (2014)), which in our study serves as a measure of the similarity between two firms based on the technological sectors found in their patents. Our similarity metric allows us to compare and contrast the patent portfolios of acquiring and target companies, enabling a deeper understanding of the technological dynamics at play in these strategic transactions.
Similarity metrics, such as cosine similarity, are the key to constructing collaborative filtering Schafer et al. (2007), which is a widely employed technique in recommender systems and link prediction exercises. Recently, in the field of unweighted bipartite networks, a novel metric known as Sapling Similarity Albora et al. (2023) has been introduced. <|MaskedSetence|> <|MaskedSetence|> First, MASS correctly considers weighted bipartite networks, which is the context of our firm-technology network. Second,.
|
**A**: In this study, we have modified the Sapling Similarity to predict M&A events, introducing the MASS approach.
**B**: This metric has demonstrated superior performance in link prediction and recommendation tasks compared to existing metrics in the literature.
**C**: Equipped with this tool, decision-makers can assess to what extent to exploit a technology sector a firm already masters or explore new innovation possibilities.
|
CBA
|
CBA
|
CBA
|
BCA
|
Selection 1
|
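The row above contrasts cosine-type similarity with the Sapling/MASS metrics for a weighted firm-technology network. Below is a minimal, hedged sketch of the baseline idea only (cosine similarity between firms' weighted technology-sector vectors), not the MASS algorithm itself; the matrix and counts are made up:

```python
import numpy as np

def cosine_similarity_matrix(W: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between rows of a weighted
    firm-by-technology matrix W (firms x technology classes).
    Baseline illustration only, not the MASS metric."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    norms[norms == 0] = 1.0          # avoid division by zero for empty portfolios
    unit = W / norms
    return unit @ unit.T

# Toy example: 3 firms, 4 technology classes (patent counts are hypothetical).
W = np.array([[5, 0, 2, 0],
              [4, 1, 0, 0],
              [0, 3, 0, 6]], dtype=float)
S = cosine_similarity_matrix(W)
print(np.round(S, 2))                # S[i, j] close to 1 => similar patent portfolios
```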
•
Stop words removal. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> We consider days of the week and months of the year as stop words. However, we keep elements such as no ‘not’, sí ‘yes’, muy ‘very’ and poco ‘few’, since they help to interpret the evolution of the assets..
|
**A**: We also remove urls and retweet (rt) tags.
**B**: Meaningless words such as determiners and prepositions (list available at https://www.ranks.nl/stopwords/spanish, August 2020)
**C**: are removed from the text.
|
BAC
|
BCA
|
BCA
|
BCA
|
Selection 2
|
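As a minimal sketch of the preprocessing described in the row above (Spanish stop-word removal that keeps negations/intensifiers, plus stripping URLs and rt tags); the stop-word set here is a tiny illustrative subset, not the ranks.nl list cited in the excerpt:

```python
import re

# Tiny illustrative subset of Spanish stop words (the excerpt uses the ranks.nl list).
STOP_WORDS = {"el", "la", "los", "las", "de", "del", "en", "y", "o", "que",
              "lunes", "martes", "enero", "febrero"}   # incl. weekdays/months
KEEP = {"no", "sí", "muy", "poco"}                     # kept on purpose

def preprocess(tweet: str) -> list[str]:
    text = re.sub(r"https?://\S+", " ", tweet.lower())  # drop urls
    text = re.sub(r"\brt\b", " ", text)                 # drop retweet tags
    tokens = re.findall(r"[a-záéíóúñ]+", text)
    return [t for t in tokens if t in KEEP or t not in STOP_WORDS]

print(preprocess("rt El mercado no sube muy rápido https://t.co/xyz"))
# ['mercado', 'no', 'sube', 'muy', 'rápido']
```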
<|MaskedSetence|> <|MaskedSetence|> This section outlines the format of the data input and details each module. In brief, the Earnings Conference Call Encoder, Time-Series Data Encoder, and Relevant News Encoder are utilized to extract features from various data types. These features are fused, and the fused data undergoes processing and modeling, after which it is fed into the Multi-Task Prediction Block. <|MaskedSetence|>
|
**A**: The framework comprises four main modules: 1) Earnings Conference Call Encoder; 2) Time-Series Encoder; 3) Relevant News Encoder; and 4) Multi-Task Prediction Block.
**B**: 2 RiskLabs Framework
Figure 1 illustrates the RiskLabs Framework, designed to handle multiple data types surrounding the financial information including audio, text, and time-series from different sources.
**C**: This block is responsible for forecasting both volatility for difference interval and VaR (Value at Risk) values.
.
|
CBA
|
BAC
|
BAC
|
BAC
|
Selection 2
|
Second, our analysis focuses solely on the price impact of the speculator’s disclosure and does not model the price impact of the speculator’s trading. <|MaskedSetence|> Therefore, we present our analysis, specifically the unraveling result in Proposition 1, as a benchmark. An interesting avenue for future research is to examine whether and how the speculator’s disclosure incentives will be affected by the price impact of trading.
Third, we assume the disclosure by other information sources is exogenous and do not model their strategic incentives. <|MaskedSetence|> <|MaskedSetence|> Future research may examine this question by considering a setting where the speculator’s signal has a single dimension..
|
**A**: For example, when responding to activist short-sellers, target firms may not voluntarily issue additional information that worsens the market’s belief.
**B**: Due to the multi-dimensional signal structure, the joint strategic disclosure decisions by speculators and target firms are too complex to solve in our model.
**C**: Incorporating the speculator’s multi-dimensional information into a trading game, such as Kyle (1985) and Glosten and Milgrom (1985), would render our model intractable.
|
CAB
|
BCA
|
CAB
|
CAB
|
Selection 4
|
I find that women at lower rungs of the leadership ladder are substantially less likely to apply for promotions relative to their male counterparts. <|MaskedSetence|> <|MaskedSetence|> Column 2 shows that when using increases in direct reports as a measure for promotions, women are 30.2% (p=0.000) less likely to apply. Similarly, Column 3 documents an application gap of 15.4% (p=0.000) with respect to promotions defined by increases in managerial autonomy over hours and business decisions. I do not find any meaningful differences when only using the reporting distance to the CEO as a measure for higher-level positions (Column 4). A key difference between reporting distance and the other measures of authority is that reporting distance is not necessarily an inherent characteristic of the position. An employee’s reporting distance can change, for example when a firm shifts its focus on specific products or processes, without any changes to the employee’s position, suggesting that it may be a less crucial prerequisite for application decisions. <|MaskedSetence|>
|
**A**: Similar patterns result when separately using the measures of job authority to identify higher-level positions.
**B**: Using my preferred approach for identifying promotions based on the combined measure of job authority, I find a gender application gap of 27.4% (p=0.000, Column 1 of Table 2).
**C**: The finding that there are large gender differences by direct reports and managerial autonomy, but not reporting distance to the CEO, motivates a deeper investigation into the features of leadership that appeal differentially by gender, which I return to in Section 6.
.
|
BAC
|
BAC
|
CBA
|
BAC
|
Selection 2
|
<|MaskedSetence|> (2022) have integrated all the
above risk measures in one, and call them conditional distortion risk
measure. <|MaskedSetence|> This is intrinsically different from the classical law-invariant risk measures in Föllmer and Schied (2016). <|MaskedSetence|> We call it law-invariant factor risk measure.
.
|
**A**: Recently, Dhaene
et al.
**B**: The Co-risk measures actually rely on the conditional distribution of the risk on the event of systemic risk, which is determined by the joint distribution of the risk and the systemic risk.
**C**: In this paper, we will follow the same idea as Co-risk measures to consider the factor risk measures solely depending on the joint distribution of the risk and the factors.
|
BCA
|
ABC
|
ABC
|
ABC
|
Selection 4
|
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> The second is the multi-class stacking ensemble described in the same section. Since both were implemented in streaming mode, they were progressively tested and trained by sequentially using each sample from the experimental data set to test the model (i.e., to predict) and then to train the model (i.e., for a partial fit). Performance metrics are obtained as their incremental averages. In particular, we employed the EvaluatePrequential library (available at https://scikit-multiflow.readthedocs.io/en/stable/api/generated/skmultiflow.evaluation.EvaluatePrequential.html, January 2023).
|
**A**:
4.5 Streaming classification
In this section, we evaluate the final performance of our system to detect financial opportunities and precautions.
**B**: The first is a single-stage scheme as a baseline, using the classifiers in Section 3.3.
**C**: The results were computed using two different streaming approaches.
|
ACB
|
ACB
|
ACB
|
ACB
|
Selection 2
|
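The row above describes prequential (test-then-train) evaluation with scikit-multiflow's EvaluatePrequential. A minimal, hedged setup is sketched below, assuming a recent scikit-multiflow release; the synthetic stream, the Hoeffding tree, and the metric choices are stand-ins, not the excerpt's actual classifiers or data:

```python
import numpy as np
from skmultiflow.data import DataStream
from skmultiflow.trees import HoeffdingTreeClassifier
from skmultiflow.evaluation import EvaluatePrequential

# Synthetic stand-in for the experimental data set (features + binary labels).
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
stream = DataStream(X, y)

model = HoeffdingTreeClassifier()          # stand-in incremental classifier

# Each sample is first used to test (predict), then to train (partial fit);
# metrics are reported as incremental averages, as in the excerpt.
evaluator = EvaluatePrequential(max_samples=5000,
                                pretrain_size=200,
                                metrics=['accuracy', 'f1'])
evaluator.evaluate(stream=stream, model=model, model_names=['HT'])
```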
III.1.4 k-Nearest Neighbor
The k-Nearest Neighbor (k-NN) algorithm stands out in the machine learning landscape for its simplicity and non-parametric nature, contrasting sharply with the complexity of SVM with RBF kernel and the linearity of Linear Regression [cover1967nearest]. <|MaskedSetence|> <|MaskedSetence|> Additionally, the performance heavily depends on the choice of k and the distance metric, which can significantly affect its accuracy. Unlike previous models, k-NN does not optimize a specific function for learning; instead, it directly uses the training data for prediction, minimizing an implicit cost function related to the distance between the query instance and its nearest neighbors, thereby determining the best fit for prediction. <|MaskedSetence|> The hyperparameter we tune in the model selection is k.
.
|
**A**: Despite its simplicity, k-NN’s effectiveness is contingent upon a careful balance between the choice of k and the distance metric, ensuring adequate performance while highlighting its intuitive approach to machine learning prediction outcomes.
**B**: k-NN operates on the principle of feature similarity, predicting the outcome for a new instance based on the majority vote or average of its k closest neighbors in the feature space.
**C**: This straightforward approach eliminates the need for parameter estimation, presenting an advantage in terms of simplicity and interpretability.
|
BCA
|
BCA
|
BCA
|
ACB
|
Selection 2
|
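As a minimal sketch of the k-NN setup described in the row above, with k selected by cross-validated grid search; the synthetic data, candidate k values, and scaling step are assumptions, since the excerpt does not specify them:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in data (the excerpt's actual features are not shown).
rng = np.random.default_rng(42)
X = rng.normal(size=(400, 5))
y = (X[:, 0] - 0.5 * X[:, 2] > 0).astype(int)

# Distance-based models usually benefit from feature scaling.
pipe = make_pipeline(StandardScaler(), KNeighborsClassifier())
grid = GridSearchCV(pipe,
                    param_grid={"kneighborsclassifier__n_neighbors": [1, 3, 5, 7, 9, 15]},
                    cv=5, scoring="accuracy")
grid.fit(X, y)
print("best k:", grid.best_params_, "cv accuracy:", round(grid.best_score_, 3))
```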
While it is an undoubted fact that large financial institutions invest to obtain private information, even when trading in markets that are not thin, theoretical studies that justify the positive relation between better information and higher gains from trading are scarce. <|MaskedSetence|> Under a competitive market setting, the fact that private information has positive value if an investor acts strategically was pointed out back in Hirshleifer (1971), while similar positive effect of private signal on traders’ welfare is shown in a competitive model with a continuum of traders in Morris and Shin (2002).
Under non-competitive market settings, information acquisition has been relatively recently studied in Vives (2011), Rostek and Weretka (2012) and Vives (2014), where insiders with correlated noisy signals are considered. An extension of these papers to a two-stage model has been recently developed in Nezafat and Schroder (2023). Therein, at the first stage, one type of traders can choose the precision of the private signal that they will get before trading. This situation is comparable to our model when the rest of the non-noise traders are uninformed (no private signal). In contrast to our model however, both the insider and uninformed traders are assumed strategic, which essentially implies that the market is thin despite the presence of noise traders. <|MaskedSetence|> We show that the existence of zero-information equilibria is not possible if the uninformed traders are price takers. In this case, the insider’s ex-ante welfare is always increasing with respect to signal’s precision, which means that the private information does have positive value for the insider. <|MaskedSetence|>
|
**A**: In addition, we show that under Pareto-allocated initial endowments, private information has positive value for uninformed traders too (even though they are assumed as price-takers).
.
**B**: Strategic uninformed traders, together with specific conditions on noise traders’ demand, lead to an equilibrium at which private information is welfare-deteriorating.
**C**: We highlight that a key factor which always leads to this positive relationship is the insider’s market power in a market with a mass of small uninformed traders and noisy liquidity providers.
|
CBA
|
CBA
|
CBA
|
BCA
|
Selection 3
|
V CONCLUDING REMARKS
Financial Portfolio optimisation has been studied for a few decades yet is still a very challenging and significant task for investors to balance investment returns and risks under different financial market conditions. <|MaskedSetence|> <|MaskedSetence|> In this work, a multi-agent and self-adaptive portfolio optimisation framework integrated with attention mechanisms and time series namely the MASAAT is proposed in which multiple trading agents are introduced to analyse price data from various perspectives to help reduce the biased trading actions. In addition to the conventional price series, the directional changes-based data are considered to record the significant price changes in different levels of granularity for filtering any plausible noise in financial markets. Furthermore, the attention-based cross-sectional analysis and temporal analysis in each agent are adopted to capture the correlations between assets and time points within the observation period in terms of different viewpoints, followed by a spatial-temporal fusion module attempting to fuse the learnt information. <|MaskedSetence|> The empirical results on three challenging data sets of DJIA, S&P 500, and CSI 300 market indexes reveal the strong capability of the proposed MASAAT framework to balance the overall returns and portfolios risks against the state-of-the-art approaches..
|
**A**: However, due to the conventional price series involving a lot of noise, the trend patterns may not be easy to discover by most of the existing methods under the highly turbulent financial market.
**B**: There are many studies trying to use various deep or reinforcement learning approaches such as convolution-based, recurrent-based, and graph-based neural networks to capture the spatial and temporal information of assets in a portfolio.
**C**: Lastly, the portfolios suggested by all agents will be further merged to produce a newly ensemble portfolio so as to quickly respond to the current financial environment.
|
BAC
|
CBA
|
BAC
|
BAC
|
Selection 3
|
<|MaskedSetence|> The concentrated liquidity approach implemented in v3 makes the actual calculation a little more complicated, and is expanded upon in Section 2. However, the general idea of a CPMM is that swapping token A for token B within the pool raises the price of token B with respect to token A due to its relative scarcity after the transaction (and vice versa). Therefore, the exchange rate within the pool is wholly controlled by the transactions that are executed within it, rather than by a central market maker matching buy and sell orders as in a limit-order-book-style market. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: This can result in a discrepancy between the exchange rate within a pool and the more widely available market exchange rate.
**B**: The market exchange rate can be thought of as the exchange rate for a theoretical “infinite-liquidity” pool, or as the rate obtained in a large centralized exchange like Coinbase where rates are carefully managed.
.
**C**:
Uniswap uses a constant product market maker (CPMM) to execute trades, where the essential idea is that the product of the reserves for the two currencies in a pool should remain constant.
|
CAB
|
CAB
|
CAB
|
ABC
|
Selection 3
|
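The row above describes Uniswap's constant product rule (reserve_x · reserve_y = k). A minimal, hedged sketch of a swap under that rule follows; the 0.3% fee and the reserve numbers are illustrative assumptions, and v3 concentrated liquidity is deliberately ignored:

```python
def cpmm_swap_out(reserve_in: float, reserve_out: float,
                  amount_in: float, fee: float = 0.003) -> float:
    """Amount of the output token received when swapping `amount_in`
    of the input token against a constant-product pool x*y = k.
    Illustrative only: ignores v3 concentrated liquidity and ticks."""
    effective_in = amount_in * (1.0 - fee)          # fee stays in the pool
    k = reserve_in * reserve_out                    # invariant before the swap
    new_reserve_out = k / (reserve_in + effective_in)
    return reserve_out - new_reserve_out

x, y = 1_000.0, 2_000_000.0       # token A and token B reserves (made up)
print("spot price (B per A):", y / x)
dy = cpmm_swap_out(x, y, amount_in=50.0)
print("received B:", round(dy, 2))
print("execution price (B per A):", round(dy / 50.0, 2))
# Swapping A for B raises the pool price of B relative to A, as the excerpt explains.
```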
In this example we use the same database as before and calculate the welfare gain that Spain would have experienced if it had had the border thickness of its synthetically constructed counterfactual. <|MaskedSetence|> We use a trade elasticity of 4 and three different values for the supply elasticity: 0, 1, and 2. <|MaskedSetence|> <|MaskedSetence|> Although the identity and number of exporters/importers can vary across years, it is important that the data are square in each year (i.e., that there are the same exporters as importers, and no missing values)..
|
**A**: A trade elasticity of 4 and a supply elasticity of zero corresponds to the original simulations by Campos et al.
**B**: (2023).
To avoid having to use a loop to simulate the model for each year, we make use of the fact that the ge_gravity2 command can be used with the by prefix.
**C**: We perform the calculation for several years.
|
CAB
|
CAB
|
CAB
|
CBA
|
Selection 3
|
Third, the problem of conflicting biases also arises in the field of behavioural interventions. <|MaskedSetence|> One of the challenges mentioned in this area of research is the fact that these interventions normally target one bias at a time, when indeed multiple biases could be at stake (Kahneman et al.,, 2021). <|MaskedSetence|> Section 3 deals with the experimental design and presents the belief-measuring tool. In section 4, I bring the experimental data and the theory together and compare two regression models, which differ in the amount of biases they incorporate. <|MaskedSetence|>
|
**A**: Once again, having a method to distinguish between different conflicting biases could come in handy in order to focus the intervention on those biases which end up being more common, or drive inference the most.
The paper continues as follows: Section 2 explains the theoretical framework, the updating setting and progressively introduces different biases into the model.
**B**: Boosting is a psychological technique aimed at de-biasing individuals when they suffer some kind of cognitive bias.
**C**: Section 5 discusses the results of the experiment, and section 6 concludes with some final remarks..
|
BAC
|
BAC
|
BAC
|
ACB
|
Selection 3
|
Conversely, a contrasting picture emerges when examining domains that have consistently been a part of the lowest harmonic centrality clusters. <|MaskedSetence|> Their genesis in more recent times often means they are still in the nascent stages of brand development and market penetration. These entities tend to be geographically situated in regional or remote areas, focusing on localized markets and specific niches. <|MaskedSetence|> <|MaskedSetence|> Characteristics of organisations whose domain stayed in the same cluster for six years from 2018 to 2023.
|
**A**: These domains are commonly associated with younger, newly established enterprises.
**B**: This dichotomy between the high and low harmonic centrality clusters underscores the diverse spectrum of enterprises and their varying stages of growth, market presence, and digital influence, revealing the multifaceted nature of the economic and digital landscape.
Table 4.
**C**: The nature of their business often results in smaller-scale firms mirrored in their digital presence through low-traffic websites that cater to a more limited, often specialized audience.
|
ACB
|
ACB
|
CBA
|
ACB
|
Selection 1
|
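The row above tracks domains by harmonic-centrality cluster. A minimal sketch of computing harmonic centrality on a toy domain graph with networkx and bucketing nodes into rough "low/high" groups follows; the graph, domain names, and median split are all assumptions, not the excerpt's clustering method:

```python
import networkx as nx

# Toy directed link graph between domains (edges are made up).
G = nx.DiGraph([
    ("hub.com", "shop-a.com"), ("hub.com", "shop-b.com"),
    ("news.com", "hub.com"), ("shop-a.com", "hub.com"),
    ("local-cafe.au", "shop-a.com"),
])

# Harmonic centrality of v: sum of 1/d(u, v) over nodes u that can reach v.
hc = nx.harmonic_centrality(G)

# Crude two-way split into "low" and "high" clusters by the median score.
median = sorted(hc.values())[len(hc) // 2]
clusters = {d: ("high" if score >= median else "low") for d, score in hc.items()}
for domain, label in sorted(clusters.items()):
    print(f"{domain:15s} {hc[domain]:.2f}  {label}")
```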
6. A Discussion of the Mathematical Contribution
The results presented here on the axiomatization of allocation mechanisms are based on applications of Hahn-Banach type-extension results (Theorems 1 and 2). These results provide conditions for extending linear operators on ordered vector spaces. We study some implications of these extension results for envelope representations of sublinear operators. As in the standard real-valued setting, we show that sublinear operators can be written as the upper envelopes of their dominated linear operators. Our results abstract from any topological assumptions, and they are purely order-theoretic and algebraic in nature. The level of generality adopted makes our results applicable to all order-theoretic contexts where sublinear operators play a role. In particular, we retrieve such envelope representation results within Dedekind complete Riesz spaces, that include a large amount of widely used ordered vector spaces. <|MaskedSetence|> Among such restrictions, the main ones are monotonicity and normalization, that are both meaningful in economics and finance. <|MaskedSetence|> <|MaskedSetence|> To the best of our knowledge, the generalization of our results toward convex operators remains an open question.
.
|
**A**: Monotonicity is a non-satiation assumption of the type “more is better”, while normalization says that the value of certain outcomes should remain unchanged after any evaluation.
**B**: This motivates the mathematical analysis of such properties, that is provided in the Appendices.
**C**: The provided representation results are then refined, adding further restrictions on the behavior of the sublinear operators, and they highlight a classic feature: the dominated linear operators often inherit the same properties of the dominating sublinear operator.
|
ACB
|
CAB
|
CAB
|
CAB
|
Selection 3
|
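The row above states that sublinear operators are upper envelopes of their dominated linear operators. In the scalar special case this is the familiar Hahn-Banach envelope representation below (a hedged restatement for real-valued functionals, not the paper's operator-valued theorem):

```latex
% Scalar envelope representation of a sublinear functional \rho on a real vector space X;
% the operator-valued statement in the excerpt generalises this via Hahn-Banach-type extensions.
\[
  \rho(x) \;=\; \max\Big\{\, \ell(x) \;:\; \ell : X \to \mathbb{R} \ \text{linear},\;
  \ell(z) \le \rho(z) \ \text{for all } z \in X \,\Big\},
  \qquad x \in X .
\]
```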
Recommender systems were initially a niche application of a broader, earlier form of AI, Expert Systems, which came to prominence in the 1980s. <|MaskedSetence|> One of the first academic works by Lui and Lee of Hong Kong University in 1997 considered a system that was an “intelligent business advisor system for stock investment” and was widely implemented and offered a list of features for analysing and picking stocks, based on user preferences which the authors noted needed to be supplied by the end investor. RS shot to prominence more broadly with the Netflix Prize, a competition with USD $1m in prize money for a solution to produce the best collaborative filtering algorithm for Netflix, then an online DVD rental platform. <|MaskedSetence|> Since then, many online consumer tools in areas such as music (Spotify), shopping (Amazon) and news media have continued to advance the field. <|MaskedSetence|>
|
**A**: The academic field proliferated following the publication of the ”Handbook of Applied Expert Systems” in 1997.
**B**: The prize was finally won in 2009.
**C**: Extensive academic work exists in the field of applied RS as early as the 2000s, with meta review studies being published as early as 2016 covering over 100 papers [16].
2 Common Types of RS AI and their mainstream adoption.
|
ABC
|
ABC
|
CAB
|
ABC
|
Selection 4
|
For this work, we employ a sophisticated form of ANN inspired by the developments proposed in [17], where the so-called Differential Machine Learning concept was introduced. The general idea behind it is to enhance the approximation power of an ANN by incorporating the information of the labels’ differentials (when available or easily computed). Thus, henceforth, we denote the here employed ANN as Differential Artificial Neural Network (DANN). Also in [17], aiming to gain efficiency in the DANN training phase, the authors propose the use of the so-called sampled payoffs as labels, instead of ground truth prices. In the common pricing context, this means to generate a single Monte Carlo path of the underlying model variable and consider the highly noisy price computed with it as the label to be employed in the training phase. Then, an entire training set (with thousands or millions of samples) can be generated at the cost of a classical Monte Carlo simulation-based pricing method. Here, we thus perform a Monte Carlo simulation of the considered model with a suitable time discretization (depending on the product at hand) and compute the corresponding prices/cash flows to be used as labels. Unlike the original approach, where the authors generate all the sampled payoffs under the same distribution (i.e. with the same model parameters), in this work we take that idea a step further. <|MaskedSetence|> Then, the DANN trained with these labels is able to learn the derivative prices for a wide set of market configurations, those defined in the ranges of the training set. <|MaskedSetence|> Moreover, the presented approach can be somehow generalised (as we do in this work), so that each sampled payoff is computed by averaging a few Monte Carlo realisations which share the same distribution.
On top of the aforementioned approach, we introduce a novel strategy which intends to deepen the idea of providing more available information to the ANN with the aim of improving its training performance and therefore producing more accurate estimations at similar computational cost. This new development consists of incorporating related financial products to be estimated by the ANN whose ground truth is “easy” to obtain. Then, besides the output/label of the ANN that represents the value of interest, additional outputs/labels are considered.
.
|
**A**: Ideally, these aside financial products must have a strong connection with the original product (depending on the same model and market parameters).
**B**: Thus, with the goal of covering most of the market situations (represented by the model parameters), we simulate each of the Monte Carlo paths with a different set of parameters, such that, every single sample represents a very noisy price for that particular setting.
**C**: This makes a great difference with respect to any other “classical” methodologies where, in order to obtain several prices, the corresponding algorithm needs to be repetitively executed (multiplying the computational cost).
|
BCA
|
ABC
|
BCA
|
BCA
|
Selection 3
|
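The row above combines noisy sampled payoffs with their pathwise differentials as training labels. A minimal, hedged PyTorch sketch of that kind of loss (value MSE plus derivative MSE obtained via autograd) on a toy one-dimensional problem follows; the toy payoff, network size, and loss weight are all assumptions, not the paper's model or products:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "sampled payoff": a noisy function of the input x, together with a
# pathwise differential label dy/dx. Both are stand-ins for Monte Carlo payoffs.
x = torch.linspace(-2.0, 2.0, 512).unsqueeze(1)
y = torch.sin(2.0 * x) + 0.1 * torch.randn_like(x)   # noisy value labels
dydx = 2.0 * torch.cos(2.0 * x)                      # differential labels

net = nn.Sequential(nn.Linear(1, 64), nn.Softplus(),
                    nn.Linear(64, 64), nn.Softplus(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
lam = 1.0                                            # weight of the differential term

for step in range(2000):
    xb = x.clone().requires_grad_(True)
    pred = net(xb)
    # Network sensitivities d(pred)/dx obtained by automatic differentiation.
    grad = torch.autograd.grad(pred.sum(), xb, create_graph=True)[0]
    loss = nn.functional.mse_loss(pred, y) + lam * nn.functional.mse_loss(grad, dydx)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final combined loss:", float(loss))
```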
The proposed challenge translates into a new problem, because the underlying networks present systems of different dimensions, i.e. <|MaskedSetence|> When given two networks, determining how similar they are typically involves quantifying their structural, topological, or functional similarities. Several methods and metrics have been developed to address this problem: Graph Invariants, Network Measures, Graph Matching Algorithms, Information-Theoretic Methods, Network Alignment, Machine Learning Approaches. <|MaskedSetence|> When we consider networks defined on the same set of nodes, the comparison becomes straightforward since there’s no need to align nodes between the two networks. <|MaskedSetence|>
|
**A**: Choosing an appropriate method depends on the specific characteristics of the networks and the research question at hand.
Indeed, approaches to network comparison can be roughly divided into two groups based on whether they consider or require two graphs defined on the same set of nodes.
**B**: networks with non-aligned nodes.
Comparing and identifying similarities between networks can indeed be a challenging problem.
**C**: For example, the cases of comparison of SRNs with the same number of nodes (aligned) have already been addressed by [9]..
|
BAC
|
BCA
|
BAC
|
BAC
|
Selection 4
|
An alternative approach is the risk-sensitive control (which can be traced back to Jacobson, 1973), and it later becomes popular, particularly in financial asset management, e.g., in Bielecki and Pliska, (1999). <|MaskedSetence|> <|MaskedSetence|> From this perspective, an RL agent often encounters the similar situation, in which the agent lacks information about the environment, and hence, has difficulty formulating a probabilistic model to quantify the associated risk. Therefore, it is natural to consider the risk sensitivity in the RL setting.
In this paper, I consider the risk-sensitive objective in the exponential form that is used in Bielecki and Pliska, (1999) and study this problem from RL perspective, i.e., in a data-driven and model-free (up to a controlled diffusion process) approach. <|MaskedSetence|> It is noticeable that, in this paper, the entropy regularization term is added inside the exponential form. This is motivated by regarding the regularizer as an extra source of reward for exploration, and hence, it should be treated similarly as the reward. Such choice is also made in Enders et al., (2024), who numerically document the improvement over the counterpart without entropy regularization. However, the benefits of having entropy regularization have not been theoretically investigated, even for simple cases..
|
**A**: Specifically, I adopt the entropy-regularized continuous-time RL framework proposed in Wang et al., (2020) and aim to find a stochastic policy that maximizes the entropy-regularized risk-sensitive objective.
**B**: In contrast to solely considering the expectation, risk-sensitive objective accounts for the whole distribution of the accumulated reward.
Moreover, the risk-sensitive objective function in the exponential form is well-known to be closely related to the robustness within a family of distributions measured by the Kullback–Leibler (KL) divergence, which is also known as the robust control problems (Hansen and Sargent, 2001).
**C**: Such uncertainty on the distribution of a random variable (not just its realization whose uncertainty can be statistically quantified) is often regarded as the Knightian uncertainty or ambiguity, which often occurs due to the lack of knowledge or historical data (LeRoy and Singell Jr, 1987).
|
BCA
|
BCA
|
BCA
|
BAC
|
Selection 2
|
One of the current theories to explain why larger population sizes give rise to higher rates of cultural complexity is Joseph Henrich’s “collective brain hypothesis” [muthukrishna2016innovation]. This theory builds on recent observations showing that humans are super-imitators (for a review, see [Laland2017]). This unique cognitive feature, unmatched across the animal kingdom, allows innovations to spread and pass down across generations without deterioration [henrich2004demography].
How do these multi-disciplinary perspectives enrich our understanding of agglomeration economies in cities? Complexity scientists ought to be in the position to synthesize the key ideas and propose new ones. We end this chapter with some final conclusions and questions for further thought.
.
|
**A**: The cultural brain hypothesis suggests that as culture evolves and expands, it exerts evolutionary forces that adjust sociality, imitation, and transmission variance to cope with the increased complexity of tools, practices, beliefs, and behaviors.
**B**: As a result of this unique ability to mimic our close peers, cultures can accumulate an ever-expanding repertory of practices, conventions, tools, and know-how.
**C**: The argument is very different from those of economists, complexity scientists, or sociologists that we have reviewed so far in that Henrich’s explanation is rooted in evolutionary theory (the field itself is called “cultural evolution”).
|
CBA
|
CBA
|
CBA
|
CAB
|
Selection 3
|
When borrowers consider potential projects, they want to know not only what the most likely outcome is, but also how likely it is that they will earn a high return or experience a significant loss. <|MaskedSetence|> <|MaskedSetence|> As far as I am aware, the current literature on joint liability problems only focuses on the expected return or expected profit. <|MaskedSetence|> However, focusing only on the expected return can be limiting, as it does not account for the risks. Therefore, in the following we incorporate the risk (variance) associated with expected profit into our utility function and conduct further analysis of mean-variance utility in the context of joint liability problems. The Mean-Variance Utility function for an individual farmer can be incorporated in his/her ex-ante expected profit function in Equation 4.1, where P (a random variable) represents the expected profit and γ represents the risk aversion parameter of this farmer.
.
|
**A**: The variability of the profit, or the range of possible returns, is an important consideration.
**B**: This means that they concentrate solely on the central point of an individual’s profit without taking into account the distribution of potential profits or the variability of these returns.
**C**: If the potential variability of returns is too high or the risk of significant losses is too large, the borrower may decide not to invest in the venture.
|
ACB
|
ACB
|
ACB
|
ACB
|
Selection 2
|
Note: For these figures the mean submission quality is used as measure of skill.
Panel A (Sabotage). Relative change in probability of rating 0-stars when competing compared to not competing across skill levels. Outer rugs show distribution of data. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|>
|
**A**: Relative change in probability of rating 5-stars when rating own submission compared to submissions by others of same skill (error band is 95% confidence interval).
**B**:
.
**C**: There are 4,512 (1,091) observations for source (target) skill ≥ 3.5.
Panel B (Self-promotion).
|
CAB
|
CAB
|
CBA
|
CAB
|
Selection 2
|
<|MaskedSetence|> We calibrate our benchmark model without GMT to match basic data on international profit shifting. We set the total pre-tax profits of MNEs (Π) to 2,590 billion USD, as reported in Tørsløv et al., (2023). The coverage rate of the GMT is set at ϕ = 0.9, in accordance with OECD, 2020b and our own calculations from the ORBIS database (see Table B.1 in the appendix). <|MaskedSetence|> <|MaskedSetence|> Among the non-targeted moments, our calibration arrives at a share of shifted profits of 33.3%, which closely matches the share of shifted profits in the data (37.4%), as reported in Tørsløv et al., (2023). Our model somewhat underestimates the haven tax rate and the aggregate revenue loss in the non-haven countries, as compared to their corresponding values in the data..
|
**A**:
4 Quantitative implications
In this section we analyze a calibrated version of our model, and explore its quantitative revenue effects.
**B**: Table 2 reports calibration results.
**C**: We then calculate the cost parameter of profit shifting, δ, to exactly match the GDP-weighted average of effective tax rates in non-haven countries of 18.6%, as reported in Tørsløv et al., (2023).
|
ACB
|
ACB
|
ACB
|
ACB
|
Selection 2
|
We define LCM at pair-, node-, and system-levels, offering different granularities for analysis. Pair-level LCM between two firms demonstrates the first firm’s position when the second firm experiences a shock. <|MaskedSetence|> System-level LCM aggregates multiple scenarios to portray overall centrality changes under equally probable situations. <|MaskedSetence|> We derive the asymptotic distribution of LCM through its Taylor expansion and form hypotheses for testing – specifically focusing on whether LCM significantly differs from zero. <|MaskedSetence|> In this simulated setup, the true value of LCM consistently falls within the confidence intervals of the theoretical distribution at a 95% confidence level.
.
|
**A**: Node-level LCM of a firm is the total resulting shock when it is stressed.
**B**: We also allow the association of weights with node- and system-level centralities.
Our approach involves statistical analysis to assess the robustness of the centrality measure.
**C**: We validate our framework using simulated data, demonstrating the theoretical distribution’s compatibility with empirical observations.
|
ABC
|
ABC
|
ABC
|
BAC
|
Selection 1
|
Constructing equilibria for N-player games in continuous-time and space is a challenging problem. The theory of mean-field games, developed independently by [38] and [42], provides approximation results for equilibria of symmetric games with finite players. Indeed, it is typically possible to prove that mean-field equilibria define ε-equilibria for the related N-player games. <|MaskedSetence|> In the mean-field game, the representative agent reacts to the long-term average of the distribution of the population, which is represented by a scalar parameter θ. <|MaskedSetence|> <|MaskedSetence|> Among these, we highlight the derivation of novel first-order conditions for optimality in ergodic singular stochastic control problems (see Lemma 4.3 below), which are of independent interest, as well as the probabilistic representation of the Lagrange multiplier employed in the analysis of the mean-field central planner control problem (see Lemma 5.2)..
|
**A**: In this paper, we introduce the mean-field version of the previously described stochastic game and explicitly construct the mean-field Nash equilibrium and the solution to the mean-field central planner control problem.
We also determine sufficient conditions for the existence of coarse correlated equilibria (based on suitable recommendations of the moderator).
**B**: The stationary one-dimensional setting of the mean-field game and control problem allows for explicit characterizations of the equilibria (see also [9, 16, 17, 22] and references therein in the context of singular/impulse control games).
Our contributions.
**C**: Despite the specific setting in which the game is formulated (geometric Brownian dynamics and profit function of power type), the analysis reveals a rich structure of the solution while also requiring technical results and arguments.
|
ABC
|
ABC
|
ABC
|
BAC
|
Selection 1
|
Mid-to long-term stock price overreaction was first noticed and proposed by DeBondt & Thaler (1985)[1] in the US stock market. <|MaskedSetence|> Furthermore, for more than 40 years, the short-term return reversal has been a recognized phenomenon. According to Jegadeesh (1990)[2], a reversal strategy that buys and sells stocks based on their previous month’s returns and holds them for a month produced profits of roughly 2% per month between 1934 and 1987.
Some research attributed the inefficiency of the stock market due to the irrationality of retail investors. <|MaskedSetence|> Due to institutional investors’ higher herding behavior and feedback trading tendencies than individual investors, Dennis & Strickland (2002)[3] showed that equities having a bigger fraction of fund holdings in the market tended to have higher returns. And groups of individual investors often create stock price swings, whereas institutional investors help to stabilize prices. <|MaskedSetence|>
|
**A**: They proposed momentum trading and reversal trading, and show that using the latter during an overreaction will lead to a positive stock return.
**B**: Generally speaking, institutional investors have more resources, more expertise, and easier access to transaction data than retail investors do.
**C**: Similar results were found in follow-up studies by Barber, Odean, and Zhu (2009)[4].
.
|
ABC
|
CAB
|
ABC
|
ABC
|
Selection 3
|
Hence, the most striking conclusion of this section is that many large co-jumps are in fact explained by endogenous dynamics and propagate across stocks, rather than being due to impactful external news. <|MaskedSetence|> This crash triggered a price drop in other US stocks. <|MaskedSetence|> <|MaskedSetence|> As a result, the stocks involved in the co-jump should all share the same profile around the jump, as in Fig. 24(a) for example..
|
**A**: A (in)famous example of such propagation is the flash crash of May 6th 2010, where the S&Pmini crashed in less than 30min, due to a sell algorithm set with an excessively high execution rate.
**B**: induced by news.
**C**: Here, our results suggest that this synchronization phenomenon is not such a rare event and actually happens quite often [gerig2012high; bormetti2015modelling].
This finding is further supported by examining the correlation of the individual jump time-series composing a co-jump.
Naively, one would expect large co-jumps to be exogenous, i.e.
|
ACB
|
CBA
|
ACB
|
ACB
|
Selection 3
|
The set of SSD efficient portfolios is generally very large, and investors need to decide how to select a portfolio in which to invest
from within this set. <|MaskedSetence|> Hodder et al. (2015) proposed ways to assign values to these parameters with the goal of helping investors select a single portfolio out of the efficient set.
Bruni et al. <|MaskedSetence|> <|MaskedSetence|>
|
**A**: They proposed.
**B**: The formulation from Post and Kopa (2013) may be used to find different SSD-efficient portfolios depending on how some parameters are specified.
**C**: (2017, 2012) developed an alternative approach for SD-based enhanced indexation.
|
BCA
|
BCA
|
BCA
|
BCA
|
Selection 2
|