Dataset columns:

| Column | Type | Details |
| --- | --- | --- |
| text_with_holes | string | lengths 196 to 5.41k |
| text_candidates | string | lengths 70 to 1.23k |
| A | string | 6 classes |
| B | string | 6 classes |
| C | string | 6 classes |
| D | string | 6 classes |
| label | string | 4 classes |
Traditional option pricing models, such as the Black-Scholes model, see [BS73], or the Merton model, see [merton1973theory], have been cornerstones in understanding financial derivatives. These models provide key insights for risk management and decision-making processes, allowing traders and investors to estimate the value of options under a set of simplified assumptions. However, they typically assume constant drift rates and volatility, an assumption that fails to capture the inherent unpredictability of the market and its response to external shocks such as news, economic changes, or geopolitical events. Over the last two decades, it has become clear that the assumption that the underlying asset’s price behaves like a geometric Brownian motion with constant drift and constant volatility cannot explain the market prices of options with different strike prices and maturities. Merton [merton1976option] proposed adding jumps to the behavior of asset prices, which has led to active research into models with jumps. <|MaskedSetence|> Another generalization involves stochastic volatility, as explored by Heston [heston1993closed], among others. These developments aim to capture more accurately the complex dynamics of financial markets. To incorporate these complexities into practical option pricing, several numerical methods have been developed. For instance, the binomial model has been enhanced by various researchers, including Cox et al. [cox1979option], Hull and White [hull1988use], and others, to capture the early exercise feature of options. However, these methods can be computationally intensive and memory-consuming. Similarly, Monte Carlo simulation approaches have been successful in generalizing option pricing [caflisch2004monte, fu2001pricing], although they have found limited use in scenarios involving early exercise [boyle1977options, broadie1997pricing]. <|MaskedSetence|> <|MaskedSetence|> The value of the American option is computed by averaging the discounted cash flows from these simulated paths. Further, the optimal exercise strategy is determined by a least-squares regression approach, which estimates the continuation values (expected future payoffs if the option is not exercised) at each potential exercise date. By regressing the continuation values against a set of basis functions of the underlying asset price, the algorithm approximates the decision to hold or exercise the option. This paper introduces a novel approach by employing a generalised stochastic hybrid system for the pricing of American options, integrating continuous dynamics with jump processes to capture more realistic market fluctuations. Specifically, we use Piecewise Diffusion Markov Processes (PDifMPs), a type of generalised stochastic hybrid system, to model the asset price dynamics. PDifMPs combine the continuous evolution of asset prices with discrete jumps, allowing for sudden changes in market conditions, which are often triggered by unexpected events such as economic announcements or geopolitical developments.
**A**: This method uses Monte Carlo simulations to generate multiple potential future paths of the underlying asset’s price. **B**: Several models have been proposed, including those by Kou [kou2002jump] or Toivanen [toivanen2008numerical], which assume a log-double exponential distribution of jump sizes, and the Carr-Geman-Madan-Yor (CGMY) model [carr2002fine], which treats the asset price as a Lévy process with possibly infinite jump activity. **C**: Furthermore, PDE methods have been notably advanced, using linear complementarity, front tracking, and front fixing methods to solve the free boundary problems associated with American options, see [brennan1978finite, jaillet1990variational, arregui2020pde]. A widely used method for pricing American options is the Longstaff-Schwartz (LS) algorithm [LongSchw01].
Options: A = BCA, B = BCA, C = BCA, D = ACB. Label: Selection 2.
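The Longstaff-Schwartz regression step described in this passage can be made concrete. The sketch below prices an American put by backward induction over simulated paths; the GBM dynamics, the quadratic polynomial basis, and all parameter values are illustrative assumptions, not choices taken from the cited papers.

```python
import numpy as np

def ls_american_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                    n_steps=50, n_paths=50_000, seed=0):
    """Longstaff-Schwartz estimate for an American put (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Simulate GBM paths of the underlying (assumed dynamics).
    z = rng.standard_normal((n_paths, n_steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    cashflow = np.maximum(K - S[:, -1], 0.0)   # exercise value at maturity
    for t in range(n_steps - 2, -1, -1):
        cashflow *= np.exp(-r * dt)            # discount one step back
        itm = K - S[:, t] > 0                  # regress on in-the-money paths only
        if itm.any():
            X = S[itm, t]
            # Quadratic basis: continuation value ~ a + b*S + c*S^2.
            coeffs = np.polyfit(X, cashflow[itm], 2)
            continuation = np.polyval(coeffs, X)
            exercise = K - X
            ex_now = exercise > continuation   # early exercise is optimal here
            idx = np.where(itm)[0][ex_now]
            cashflow[idx] = exercise[ex_now]
    return np.exp(-r * dt) * cashflow.mean()

print(f"LS American put estimate: {ls_american_put():.3f}")
```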
The foundation of this simulation is domain-specific structured data, including orders, order batches, and the Limit Order Book (LOB) (Gould et al., 2013). <|MaskedSetence|> We propose that the generation of orders, order batches, and LOBs will play a role in financial markets similar to that of language modeling in the digital world. <|MaskedSetence|> The key objectives are: (1) Scaling Law Evaluation: Assessing LMM’s scalability in financial markets to demonstrate its potential. <|MaskedSetence|> (3) Controlled Generation and Market Impact: Evaluating the trade-off between controlled generation and market impact from injected orders, highlighting our domain-specific design. (4) Downstream Applications: Showcasing potential capabilities of applications built on MarS.
**A**: In financial markets, certain information, such as gaming behaviors, transactional trends, and high-resolution market dynamics, is better captured through trading orders and batches rather than text. **B**: We term our generative market model the Large Market Model (LMM), aiming to replicate the success of recent large language models. In this paper, we introduce a financial Market Simulation engine (MarS) powered by LMM, addressing the domain-specific requirements of modelling orders’ market impact and of controllable generation with high realism. **C**: (2) Realism Assessment: Determining if MarS simulations are realistic enough for financial downstream tasks, indicating practical applicability.
Options: A = ABC, B = ABC, C = ABC, D = CAB. Label: Selection 3.
Apple’s revenue for Q2 FY23 is expected to be similar to that of Q1 FY23, with a negative year-over-year impact of nearly 4 percentage points due to foreign exchange. Services revenue growth is also expected to be similar to Q1 FY23, while facing macroeconomic headwinds in areas such as digital advertising and mobile gaming. <|MaskedSetence|> <|MaskedSetence|> Despite the challenges, Apple continues to see strong growth in its installed base of over 2 billion active devices and growing customer engagement with its services. <|MaskedSetence|>
**A**: Gross margin is expected to be between 44% and 44.5%. **B**: The company expects to continue to manage for the long term and invest in innovation and product development, while closely managing spend. **C**: The company also plans to return $90 billion to shareholders through share repurchases and dividends, maintaining its goal of getting to net cash neutral over time.
Options: A = BAC, B = ABC, C = ABC, D = ABC. Label: Selection 2.
The experimental results are presented in Figure 3 and Table 3. Overall, regardless of the communication method employed, the introduction of transaction attribute similarity graphs improved model performance. Notably, the TF-IDF-based attribute similarity graph yielded significant improvements, increasing the F1-Scores by approximately 2.88%, 2.98%, and 2.35% for MulDiGraph, B4E, and SPN, respectively. The B-Acc metric also saw improvements of about 2.45%, 2.13%, and 2.19% across these models. This enhanced performance can be attributed to the nature of Ethereum transaction scenarios, where phishing accounts and their transaction records appear frequently in phishing scams but constitute a small proportion of total transaction information. <|MaskedSetence|> PMI can be understood as a pre-clustering of corpus information, where strongly correlated words are grouped together. <|MaskedSetence|> <|MaskedSetence|> This is because higher θ values filter out edges for most words, reducing the generalizability of similarity information.
**A**: Consequently, words associated with phishing in transaction language have higher TF-IDF values, effectively capturing key information about phishing accounts. NPMI-based methods focus on co-occurrence probabilities. **B**: In the context of normal Ethereum account transactions, most transaction records are similar, leading PMI to focus more on the semantics of normal account transactions. **C**: Therefore, compared to TF-IDF information, PMI provides limited similarity features for phishing accounts. Furthermore, the improvements brought by attribute similarity graphs are more pronounced when the threshold θ is relatively low, between 0 and 0.4.
Options: A = ABC, B = ABC, C = ABC, D = CBA. Label: Selection 1.
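The TF-IDF-based attribute similarity graph and the threshold θ discussed in this passage can be sketched as follows. The toy "transaction language" documents and the edge rule are assumptions made for illustration; they are not the paper's actual pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical "transaction language": one document per account, built by
# concatenating tokens derived from that account's transaction records.
account_docs = [
    "transfer exchange deposit transfer",
    "phish drain mixer transfer phish",
    "deposit stake exchange deposit",
]

theta = 0.2  # similarity threshold; the experiments above vary it in [0, 0.4]

tfidf = TfidfVectorizer().fit_transform(account_docs)  # accounts x vocabulary
sim = cosine_similarity(tfidf)                         # pairwise similarities

# Keep an edge (i, j) only when similarity exceeds theta; a higher theta
# filters out edges for most accounts, the effect discussed in the passage.
edges = [(i, j, round(sim[i, j], 3))
         for i in range(sim.shape[0])
         for j in range(i + 1, sim.shape[0])
         if sim[i, j] > theta]
print(edges)
```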
While the specification charts summarize a large amount of information in a compact form, it can be difficult to detect the extent to which the diamonds indicating EO products form a pattern or are patternless. To dig further into coefficient ordering, we present a series of figures which represent the order (ranking) of coefficients within a country, across models, by EO source using bumpline functions (Naqvi, 2024). These are presented for total seasonal rainfall in Figure 12 and mean daily temperature in Figure 13. Results for days without rain and GDD are functionally similar and so we present figures for those variables in the Appendix. Within each country panel, the x-axis (each column) presents a different outcome variable (value or quantity) and a different econometric specification (weather only, weather with FE, and weather with fixed effects and inputs). The y-axis (each row) represents the ranking of the coefficient size for each EO source within each regression. They are ordered from one to six, with one representing the largest coefficient and six representing the smallest coefficient. These results give greater insight into the changes in ordinality across specifications and across countries. We first consider the findings from total seasonal rainfall (see Figure 12). Going country by country: in Ethiopia, we see that in the weather only specification, CPC and TAMSAT have large coefficients while ERA5 has the smallest. This ordering changes once FE are included, with ERA5 producing the largest coefficients and TAMSAT and CPC producing the smallest. <|MaskedSetence|> Rather, there appear to be trends based on the dependent variable, with ARC2 and TAMSAT producing large coefficients when the value of total farm production is the dependent variable and small coefficients when the dependent variable is quantity of maize yield. Thus, what drives heterogeneity in results in Ethiopia differs from what drives it in Malawi. Results in Niger appear almost chaotic, with the order of EO products jumping around with any change in specification or dependent variable. It is difficult to identify any trends other than a lack of trends. Next, we consider Nigeria, which does not demonstrate a specification- or dependent variable-based pattern, but generally has ARC2, CPC, and ERA5 ordered in the last three places with MERRA-2, CHIRPS, and TAMSAT in the top three places. Compared to the previous countries, the ordering of EO products in Nigeria appears robust, suggesting one would get similar results regardless of which EO product one chose. This can be confirmed by referencing the specification charts in Figure 8. A similar result holds in Tanzania, where we observe that ARC2 generally falls into the fourth position and TAMSAT into the sixth. <|MaskedSetence|> In the weather only specifications, MERRA-2 and ERA5 produce the largest coefficients while ARC2 and TAMSAT produce the smallest. <|MaskedSetence|> Overall, based on the findings for total seasonal rainfall, we can draw some general trends about ordering or re-ordering of coefficients based on EO products within countries, but we cannot draw trends across countries. Which EO products produce the largest or smallest coefficients varies from country to country, suggesting that how well an EO product captures the truth about weather depends on the geography and climate of where it is looking.
**A**: This order reverses when FEs are included. **B**: In Malawi, the same trend does not hold. **C**: Finally, we consider Uganda, which somewhat resembles Ethiopia.
Options: A = BCA, B = BCA, C = BCA, D = CBA. Label: Selection 2.
Figure 2A shows the gas consumption in Austria across consumer types according to the energy balance. Households and small consumers represent only about 19.5% of total gas consumption. The industrial sectors contribute approximately 44.1% of annual gas consumption. Combined heat and power (CHP), together with electricity, accounts for an additional 26.3%. <|MaskedSetence|> <|MaskedSetence|> Note that 28% of industrial gas consumption is used for room heat. <|MaskedSetence|>
**A**: The remaining 10.1% is distributed among the service and public sector, pipeline transport, and heating plants. **B**: Fig. 2B zooms into industrial gas usage. **C**: Gas shortages for room heat will likely have less severe consequences on economic production than shortages in process heat (accounting for 62% of industry gas consumption) or non-energetic use, e.g., as a chemical reactant (10%).
Options: A = CAB, B = ABC, C = ABC, D = ABC. Label: Selection 4.
<|MaskedSetence|> We refer to these as near-ATM option contracts. <|MaskedSetence|> Since contracts with large or minimal times to maturity have significantly low trading volumes, we focus on studying contracts with time-to-maturity values between 3 and 45 days. Train/Test Split: To develop a predictive model using XGBoost supervised learning, we partitioned the dataset into two segments, allocating roughly 80% for training and 20% for evaluating the trained models. <|MaskedSetence|> This avoids information leakage, as explained in [20]. Specifically, the training dataset spans 56 months, covering the period from January 01, 2015, to August 31, 2019.
**A**: The split is such that the training set includes data dated before the oldest data of the test set. **B**: It is observed that numerous near-ATM contracts are traded daily, with varying times to maturity. **C**: We retain datapoints only for option contracts that are near at-the-money (ATM), specifically those where the ratio of the strike price to the spot price is between 96% and 104%.
Options: A = CBA, B = CBA, C = CBA, D = ABC. Label: Selection 2.
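The near-ATM filter (strike/spot in [0.96, 1.04]), the 3-45 day maturity window, and the chronological train/test split described in this passage can be expressed compactly. The column names and sample rows below are hypothetical.

```python
import pandas as pd

# Hypothetical option-chain frame; column names are illustrative only.
df = pd.DataFrame({
    "date": pd.to_datetime(["2015-03-02", "2018-06-11", "2019-10-04"]),
    "strike": [100.0, 103.0, 98.0],
    "spot": [101.2, 100.5, 99.0],
    "dte": [10, 30, 20],   # days to expiry
})

# Near-ATM filter: strike/spot in [0.96, 1.04]; maturities of 3-45 days.
moneyness = df["strike"] / df["spot"]
df = df[moneyness.between(0.96, 1.04) & df["dte"].between(3, 45)]

# Chronological split: everything on or before the cutoff trains the model,
# so no test-period information leaks into training.
cutoff = pd.Timestamp("2019-08-31")
train, test = df[df["date"] <= cutoff], df[df["date"] > cutoff]
print(len(train), len(test))
```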
Implementation Details. All the experiments are conducted using PyTorch (Paszke et al. <|MaskedSetence|> <|MaskedSetence|> The lookback window is chosen from {16, 32, 64, 128, 256} and the batch size is chosen from {16, 32, 64}. <|MaskedSetence|> Unless otherwise specified, we use LLaMA3-8B (https://huggingface.co/meta-llama/Meta-Llama-3-8B) as the default base LLM and use MSE loss for model optimization. Each experiment was repeated 3 times and the average performance was reported.
**A**: We set the number of training epochs as 10. **B**: 2019) on NVIDIA A100 GPUs. **C**: We employ the Adam optimizer (Kingma and Ba 2015) with an initial learning rate of 1e-3, and we selected the best hyperparameters based on the IC performance in the validation stage.
Options: A = BCA, B = ABC, C = BCA, D = BCA. Label: Selection 3.
This section elaborates on how to apply our model to real-world trading systems and conduct strategy updates and transactions in the real algorithmic trading platform of EMoney Inc. Our model is trained every month, predicting trading data for each day after the end of trading. Based on the predictions, we employ different strategies for trading within the first half hour of the next trading day’s opening. The strategies we utilize are based on the combined optimization of CSI 300 and CSI 500, each utilizing stock pools from CSI 300 and CSI 500 respectively. <|MaskedSetence|> In Figures LABEL:figure4_1 and LABEL:figure4_2, the red curve represents the absolute returns of the model, i.e., the actual returns of our model. The blue curve represents the returns of the Shanghai CSI 300 and CSI 500 indices, reflecting the overall market returns. <|MaskedSetence|> Over a year, all strategies of our model significantly outperform the market. <|MaskedSetence|> It can be observed from the figure that our model maintains a very low drawdown rate over a long period, reaching only about 5% in the worst case. This indicates that our model not only pursues returns but also possesses the ability to prudently manage risks, making it outstanding in real trading markets.
**A**: Additionally, the second part of Figure LABEL:figure4 shows the excess return drawdown rate, which reflects the model’s good risk management capabilities by measuring the extent to which excess returns decline from their peak to their lowest point. **B**: The yellow curve represents the excess returns, i.e., the additional returns obtained relative to the market index by our model. **C**: Figure LABEL:figure4 illustrates the effects of different strategies.
Options: A = CBA, B = BAC, C = CBA, D = CBA. Label: Selection 1.
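The drawdown metric mentioned in this passage, the decline of a return curve from its running peak, can be computed with a generic sketch like the one below; this is illustrative, not EMoney Inc.'s production code.

```python
import numpy as np

def max_drawdown(cum_returns):
    """Largest peak-to-trough decline of a (positive) cumulative-return curve."""
    cum = np.asarray(cum_returns, dtype=float)
    running_peak = np.maximum.accumulate(cum)   # best level seen so far
    drawdown = (running_peak - cum) / running_peak
    return drawdown.max()

# Example: a curve that rises to 1.20 and dips to 1.14 has a 5% drawdown,
# in line with the worst case reported for the model above.
print(max_drawdown([1.00, 1.08, 1.20, 1.14, 1.22]))  # -> 0.05
```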
<|MaskedSetence|> When the global economy is expanding, the copper price tends to rise and vice versa. <|MaskedSetence|> While global stock markets have been in recession for the last few years during the pandemic, the copper price soared in 2020. The first reason for the unusual performance is that copper prices are settled in the US dollar, which fell sharply during that time. Second, production in major copper exporters such as Chile and Peru has declined due to the COVID-19 pandemic. <|MaskedSetence|>
**A**: As copper is associated with many industries, it is often regarded as a leading indicator of the world economy. **B**: Thirdly, having long been the world’s largest copper importer, China did well in the early stages of the pandemic and maintained sound economic conditions. **C**: In recent years, the prices of copper and its derivatives have been fluctuating.
Options: A = ACB, B = ACB, C = ACB, D = ACB. Label: Selection 2.
As copper is associated with many industries, it is often regarded as a leading indicator of the world economy. <|MaskedSetence|> In recent years, the prices of copper and its derivatives have been fluctuating. While global stock markets have been in recession for the last few years during the pandemic, the copper price soared in 2020. <|MaskedSetence|> Second, production in major copper exporters such as Chile and Peru has declined due to the COVID-19 pandemic. <|MaskedSetence|>
**A**: Thirdly, having long been the world’s largest copper importer, China did well in the early stages of the pandemic and maintained sound economic conditions. **B**: The first reason for the unusual performance is that copper prices are settled in the US dollar, which fell sharply during that time. **C**: When the global economy is expanding, the copper price tends to rise and vice versa.
Options: A = CBA, B = CBA, C = CBA, D = CBA. Label: Selection 4.
The system sends a prompt to all participants (each representing a separate ChatGPT-4 session) outlining the double auction rules and providing the current round information. Participants must confirm their understanding of the rules. At the start of each new round, only the information about the previous transaction is updated, while the rest of the message remains the same. Price Posting and Matching: This stage involves selecting a random session using a number generator to simulate spontaneous price-posting behavior. <|MaskedSetence|> <|MaskedSetence|> A deal occurs when a seller’s price is less than or equal to the buyer’s price, or vice versa. <|MaskedSetence|>
**A**: Three types of prompts are sent, depending on whether a buyer posts a price, a seller posts a price, or both do. **B**: The deal price is added to the transaction history and shared in subsequent rounds. **C**: If a participant decides to post a price, their response is recorded.
Options: A = ACB, B = ACB, C = CBA, D = ACB. Label: Selection 1.
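A minimal sketch of the price-posting and matching rule this passage describes. The data structures and the choice to settle at the posted ask are assumptions, since the passage does not fix a deal-price convention.

```python
import random

def post_and_match(bids, asks, buyer_price=None, seller_price=None):
    """One price-posting round of the simulated double auction.

    A deal occurs when the best ask is at or below the best bid, mirroring
    the matching rule described above; settling at the ask is an assumption.
    """
    deals = []
    if buyer_price is not None:
        bids.append(buyer_price)
    if seller_price is not None:
        asks.append(seller_price)
    if bids and asks and min(asks) <= max(bids):
        deal = min(asks)            # assumption: trade at the posted ask
        bids.remove(max(bids))
        asks.remove(deal)
        deals.append(deal)          # shared with participants next round
    return deals

# Randomly pick which side posts this round, as in the simulation.
side = random.choice(["buyer", "seller", "both"])

bids, asks = [9.5], [10.5]
print(post_and_match(bids, asks, buyer_price=10.6))  # crosses the ask -> [10.5]
```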
3.2 10-K statements. 10-K statements are financial filings that publicly traded companies submit annually to the U.S. <|MaskedSetence|> They contain information such as companies’ financial statements, risk factors, and executive compensation. 10-K statements will later be used to build a cybersecurity risk measure. <|MaskedSetence|> <|MaskedSetence|> Each line of the index file corresponds to a 10-K and is structured as follows:
**A**: Securities and Exchange Commission (SEC). **B**: The index files from the SEC’s Edgar archives (https://www.sec.gov/Archives/edgar/full-index/) are used to download and structure the 10-K. **C**: These index files contain information about all the documents filed by all firms for a specific quarter.
Options: A = ABC, B = ABC, C = ABC, D = CAB. Label: Selection 1.
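A sketch of filtering 10-K entries from a downloaded index file. Since the excerpt cuts off before showing the line layout, the pipe-delimited format assumed below (CIK|Company Name|Form Type|Date Filed|Filename) follows the archive's master.idx convention and should be checked against the actual files.

```python
def ten_k_entries(index_path):
    """Yield 10-K rows from an Edgar full-index file (assumed master.idx layout)."""
    with open(index_path, encoding="latin-1") as f:
        for line in f:
            parts = line.rstrip("\n").split("|")
            # Header and separator lines fail these checks and are skipped.
            if len(parts) == 5 and parts[2] == "10-K":
                cik, company, form, date_filed, filename = parts
                yield {
                    "cik": cik,
                    "company": company,
                    "date": date_filed,
                    "url": "https://www.sec.gov/Archives/" + filename,
                }

for entry in ten_k_entries("master.idx"):  # hypothetical local file
    print(entry)
```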
The critical role of an active longevity-linked capital market in ensuring the long-term viability of the global retirement system has been highlighted by Blake et al. <|MaskedSetence|> Börger et al. (2023) examine the impact of market saturation on longevity risk transactions, suggesting that capital market deals may become more profitable as the reinsurance sector reaches its capacity. <|MaskedSetence|> For instance, Biffis et al. (2016) analyze the cost of collateralization in longevity swaps, while Chen et al. (2022) propose a collective longevity swap that theoretically provides indemnity longevity reinsurance to buyers and more stable cash flows to sellers. Comprehensive overviews of the development of the longevity-linked capital market and existing (both successful and failed) capital market transactions are available in Blake et al. (2023). <|MaskedSetence|>
**A**: There is also considerable discussion about the relationship between capital market solutions and traditional reinsurance products. **B**: Our paper contributes to this literature by examining the development of the longevity risk transfer market from the perspective of differing risk aversions among participating parties. **C**: (2013).
Options: A = CAB, B = CAB, C = BCA, D = CAB. Label: Selection 2.
We formalized the question of choosing optimal processing capacities for claims handling. <|MaskedSetence|> We studied this trade-off aiming at minimizing claims and claims processing costs. <|MaskedSetence|> This variant describes a specific mechanism to work off a backlog, and it considers a specific super-imposed cost inflation factor for late claims processing. In this regard, there are many alternative ways to model these backlog cost items. <|MaskedSetence|>
**A**: On the one hand, the claims handling capacity needs to be limited because any insurance company has only finite financial resources available. On the other hand, the capacity should be sufficiently large because long processing delays (and large backlogs) also generate various costs. **B**: Our choice is a realistic one that is still fairly well tractable, and the final intractable step was solved by a recurrent neural network approximation. **C**: This problem has several features from queueing theory, but there are also some significant differences because claims are labeled by occurrence periods and arising expenses need to be allocated to occurrence periods to have a consistent and appropriate cost analysis of an insurance portfolio. We formalized these questions and we solved a variant of this optimal cost and capacity problem.
Options: A = ACB, B = CAB, C = ACB, D = ACB. Label: Selection 1.
<|MaskedSetence|> <|MaskedSetence|> (Manzoni, 2002) also uses autoregressive models that test both ARCH and GARCH models. The author uses FTSE returns, the term spread, the US dollar to sterling exchange rate and previous credit spread values to describe CS. In (Avramov et al., 2007), the authors explain 67% of the variation of credit spreads using two types of indicators: common factors, used by all entities, which include equity market return and changes in 5Y government rates; and company-level factors that include stock momentum or change in equity volatility. An interesting feature of their process is that they include in the set of common variables a dummy variable that is 1 when the US Federal funds rate increases and 0 otherwise. They find that this variable is significant only for the highest-rated counterparties where expansionary Fed policies (i.e., decreases in the rate value) reduce credit spreads. One reason for the popularity of factorial models is probably the so-called "credit spread puzzle", which states that the structural approach to credit risk and corporate bond pricing (like Merton, (Merton, 1974)) underestimates the credit spread for investment-grade counterparties. <|MaskedSetence|> One key feature of the authors' work is that instead of relying solely on the historical default rate for a specific maturity and credit rating as a proxy for default probability at that exact maturity and rating, they opt to utilize a wide cross section of default rates at different maturities and ratings. The primary characteristic of their approach that enables them to consolidate default rate data across various credit ratings and maturity periods is the underlying assumption that companies, regardless of their credit ratings and the maturity of their bonds, will still adhere to a common default threshold or boundary. An emphasis will therefore be put on back-testing the model and its calibration.
**A**: In (Feldhütter and Schaefer, 2018), the authors introduce a new calibration methodology for the Black-Cox model, leading to more precise estimates of investment-grade default probabilities. **B**: (Davies, 2008), for instance, conducts a study using 85 years of corporate bond data to find determinants of credit spreads. **C**: The author uses a self-exciting threshold model with inflation characterizing the threshold, and the factors used are the levels series for the spread, US treasury bill (T-bill), equity and industrial production variables.
Options: A = BCA, B = BCA, C = CAB, D = BCA. Label: Selection 1.
<|MaskedSetence|> (Michaud, 1989) showed that mean-variance optimization can maximize the effect of input parameter estimation errors, which can lead to inferior results compared to an equally weighted portfolio. Other studies, such as (Best and Grauer, 1991a, b; Kallberg and Ziemba, 1984), have discussed the importance of input parameter settings in the MVO framework. <|MaskedSetence|> While these studies reveal how the degree of estimation errors affects the MVO framework, they do not go further into how the shape of estimation errors affects the MVO framework. In practice, machine learning has become very useful in the estimation of parameters and decisions are made through optimization based on the machine learning estimates as inputs (Lee et al., 2023). Hence, the so-called Predict-then-Optimize method can be seen as a two-stage method. The prediction and optimization stages are separated, and thus, the prediction stage is solely concerned with enhancing prediction accuracy measures such as the mean squared error (MSE). Recent studies have argued that a prediction model that minimizes traditional prediction losses, such as MSE, may not be optimal for decision-making in the subsequent optimization stage. <|MaskedSetence|> DFL has been studied in various fields, and portfolio optimization is no exception. A couple of studies (e.g., (Butler and Kwon, 2023; Costa and Iyengar, 2023)) have shown that DFL can be implemented for portfolio optimization and it can enhance investment performance. However, they have not analyzed the detailed characteristics of the DFL prediction model.
**A**: In addition, some researchers have analyzed how the MVO optimal portfolio or the distribution of all possible portfolios change as the input parameters change (e.g., (Chopra and Ziemba, 1993; Kallberg and Ziemba, 1984; Best and Grauer, 1991a, b; Chung et al., 2022)). **B**: To overcome this issue, a framework called Decision-Focused Learning (DFL) has been proposed (e.g., (Donti et al., 2017; Elmachtoub and Grigas, 2022; Wilder et al., 2019; Pogančić et al., 2020; Mandi et al., 2022; Shah et al., 2022)). **C**: In relation to this issue, many studies have been conducted to investigate the impact of estimation errors in input parameters on mean-variance optimization.
Options: A = CAB, B = CBA, C = CAB, D = CAB. Label: Selection 3.
<|MaskedSetence|> <|MaskedSetence|> It generalizes the approach from Pesenti and Jaimungal (2023), in which the authors aim to find an optimal strategy, whose terminal wealth is distributionally close to a benchmark’s according to the 2-Wasserstein distance, minimizing a static distortion risk measure of the terminal P&L in a portfolio allocation application. Wu and Jaimungal (2023) apply the approach to robustify path-dependent option hedging. Then, Coache et al. (2023) design a deep RL algorithm to solve time-consistent RL problems where the agent optimizes dynamic spectral risk measures. It builds upon the work from Coache and Jaimungal (2024) by exploiting the conditional elicitability property of spectral risk measures to improve their estimation, and Marzban et al. <|MaskedSetence|> These ideas are also used in Jaimungal et al. (2023) for risk budgeting allocation with dynamic distortion risk measures. Finally, Bielecki et al. (2023) derive dynamic programming equations for risk-averse control problems with model uncertainty from a Bayesian perspective and partially observed costs. This approach simultaneously accounts for risk and model uncertainty, but requires finite state and action spaces.
**A**: First, Jaimungal et al. **B**: (2022) develop a deep RL approach to solve a wide class of robust risk-aware RL problems, where an agent minimizes the static worst-case rank dependent expected utility measure of risk of all random variables within a certain uncertainty set. **C**: (2023) which focus on dynamic expectile risk measures.
Options: A = ABC, B = ABC, C = ABC, D = ABC. Label: Selection 3.
Our proposed technique offers two main benefits. First, our decomposition pipeline improves the execution time of classical state-of-the-art solvers on portfolio optimization problems. <|MaskedSetence|> At the same time, for problems of sufficient size, even the decomposed subproblems become difficult to solve with classical techniques, but are good targets for quantum optimization. As such, the second benefit of our technique is that our pipeline contributes to the solution of portfolio optimization problems using quantum computers. Specifically, by leveraging the structure present in the data defining PO problems, we may potentially reduce the number of qubits required to implement the quantum optimization algorithm on quantum devices. We propose to utilize the structure present in typical problem instances to effectively reduce the problem size, thereby making large-scale problems compatible with near-term quantum hardware. Thus, our approach paves the way to near-term hardware demonstrations for practically-relevant applications at scale. It is worth mentioning that decomposition techniques have been applied to a broad range of optimization problems [38, 39]. <|MaskedSetence|> <|MaskedSetence|> In the context of near-term quantum computing, problem decomposition has been used to tackle graph clustering and graph partitioning problems on small quantum devices [43, 44, 45, 46].
**A**: With continuous variables, graph theoretic decomposition algorithms have been explored in Refs. [41, 42]. **B**: A widely known partitioning procedure for solving MIP problems is Benders’ decomposition (BD) [40], wherein the problem is decomposed into small subproblems containing only integer variables and other small linear programs containing only continuous variables. **C**: We show that when utilizing the proposed decomposition pipeline and solving each subproblem with a state-of-the-art branch-and-bound based solver, the time-to-solution is significantly reduced (by at least 3× for the largest problems considered in our numerical experiments, with 1500 variables) as compared to directly solving the problem with the same solver.
Options: A = CBA, B = CBA, C = BCA, D = CBA. Label: Selection 1.
Figure 3: UMPU Wilks test results. The null hypothesis for this test is that the data is distributed as a power-law, and its alternative hypothesis is that the data is distributed as log-normal. <|MaskedSetence|> <|MaskedSetence|> After the threshold is chosen, all bitcoin balances above it are sorted in descending order, so that the largest balance has rank one and the second largest has rank two. <|MaskedSetence|> The right panel is the p-value versus the threshold of bitcoin balance.
**A**: The smaller the p-value, the stronger the evidence for rejecting the null hypothesis. **B**: The left panel is the p-value versus the rank of the chosen threshold of bitcoin balance on 2016-01-23. **C**: The ranking proceeds until the chosen threshold.
Options: A = BAC, B = ABC, C = BAC, D = BAC. Label: Selection 4.
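The ranking-and-threshold sweep described in the figure can be approximated with the `powerlaw` package. Note two assumptions: the input file is hypothetical, and the package implements the standard log-likelihood-ratio comparison rather than the UMPU Wilks variant used in the paper.

```python
import numpy as np
import powerlaw  # Alstott et al.'s package; pip install powerlaw

balances = np.loadtxt("bitcoin_balances_2016-01-23.txt")  # hypothetical file

# Rank balances in descending order and sweep candidate thresholds, as in
# the figure: everything at or above the threshold enters the tail sample.
ranked = np.sort(balances)[::-1]
for rank in range(100, len(ranked), 100):
    xmin = ranked[rank]
    fit = powerlaw.Fit(balances, xmin=xmin)
    # Log-likelihood-ratio comparison: R > 0 favors the power law,
    # and a small p rejects whichever distribution R disfavors.
    R, p = fit.distribution_compare("power_law", "lognormal")
    print(rank, xmin, R, p)
```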
<|MaskedSetence|> Avg (DJIA), Deutscher Aktienindex (DAX), Nasdaq Composite (NASDAQ), Milano Indice di Borsa (FTSEMIB) assets. The manuscript is organized as follows. <|MaskedSetence|> Section II.2 describes the construction of the diversity measures in terms of the relative cluster entropy. Section III illustrates the approach on volatility series of tick-by-tick data of the Standard & Poor 500 (SP&500), Dow Jones Ind. Avg (DJIA), Deutscher Aktienindex (DAX), Nasdaq Composite (NASDAQ), Milano Indice di Borsa (FTSEMIB) assets. The Kullback-Leibler cluster entropy and the diversity indexes are estimated over twelve monthly periods covering the year 2018. <|MaskedSetence|> In Section IV the implications of the study are briefly discussed. A numerical example of investment is also provided. Conclusions are drawn in Section V.
**A**: Section II.1 includes the main notions and computational steps underlying the Kullback-Leibler cluster divergence approach. **B**: A comparison with the measures obtained by using the Shannon cluster entropy, the equally-weighted and the Sharpe ratio estimates is also included. **C**: The Kullback-Leibler cluster entropy will be used to define a robust and sound set of diversity measures built upon the coarse grained probability distribution of the realized volatility of tick-by-tick data of the Standard & Poor 500 (SP&500), Dow Jones Ind.
Options: A = ABC, B = CAB, C = CAB, D = CAB. Label: Selection 2.
<|MaskedSetence|> That is, we deduce conditions for when a risk-neutral investor would optimally invest in (or withdraw from) the CPMM as an LP. With this optimal execution, second, we are able to produce a risk-neutral valuation for a liquidity token. <|MaskedSetence|> As far as the authors are aware, a formal discussion of the Greeks of the liquidity token has never been undertaken previously. Notably, as nearly 50% of LPs lose money on Uniswap [14], the introduction of a hedging strategy is of vital importance. Third, we bring our pricing and hedging theory to data in order to understand its performance in practice. <|MaskedSetence|> Bringing the theory to the data, we construct a calibrated arbitrage-free price for the liquidity token.
**A**: As a direct consequence, the Greeks and hedging strategies for this position can be readily constructed. **B**: We find that the prevailing market price for the CPMM liquidity token readily admits arbitrage opportunities that investors can exploit. **C**: Our primary contributions and innovations for the pricing and hedging of the liquidity position in a CPMM are threefold. First, in treating this liquidity position as a derivative on the underlying assets, we find the optimal execution of the position.
Options: A = CAB, B = CAB, C = CAB, D = CAB. Label: Selection 3.
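For background, the constant-product algebra that makes the liquidity position behave like a derivative can be written out. This is standard CPMM material, not the paper's derivation: the concavity of the position value in the price is what gives the token its option-like Greeks.

```latex
% Constant-product pool: reserves x, y with invariant xy = k.
% Arbitrage aligns the pool with the external price P (units of y per x):
\[
x(P) = \sqrt{k/P}, \qquad y(P) = \sqrt{kP},
\]
% so the value of the pooled position, marked in y, is
\[
V(P) = P\,x(P) + y(P) = 2\sqrt{kP}, \qquad
\frac{\partial V}{\partial P} = \sqrt{k/P}, \qquad
\frac{\partial^2 V}{\partial P^2} = -\frac{1}{2}\sqrt{k}\,P^{-3/2} < 0 .
\]
```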
<|MaskedSetence|> This approach helped us concentrate on the vital aspects by filtering out minor details and condensing the news into brief sentences. <|MaskedSetence|> <|MaskedSetence|> The resulting headlines are both informative and useful for making investment decisions. Below is the prompt used to generate these headlines with the GPT-4 model.
**A**: After collecting the daily news, we extracted headlines highlighting the day’s most important information, enabling us to summarize the information effectively. **B**: We also get headlines that don’t contradict each other. **C**: Additionally, it allowed us to gather more information by eliminating unnecessary noise and isolating the key facts in the headlines.
Options: A = ABC, B = BCA, C = ABC, D = ABC. Label: Selection 4.
This section highlights that our findings are generally robust to a range of different specifications, samples and assumptions. We discuss these sensitivity checks in detail in Appendix D, but summarise them graphically in Figure 6. The figure shows six panels for our main outcomes of interest: the two pollution measures and four individual-level outcomes. The dots present the estimates of the impact of exposure during the ‘adjustment period’; the triangles present the estimates for the SCA being in operation. <|MaskedSetence|> <|MaskedSetence|> Rows 2-4 use alternative definitions of the control group (Section D.1). Row 5 is specific to the individual analysis, and uses alternative definitions of the sample depending on individuals’ geolocation within County Boroughs (Section D.2). Rows 6-8 explore the sensitivity to different bandwidths around the event time (Section D.3). Rows 10-17 are specific to the individual analysis and show different restrictions of the relevant birth cohorts (Section D.4). Rows 18-19 show different specifications of the time trend, specifying no time trend, or allowing for a CB-specific annual time trend (Section D.5). The main take-away from Figure 6 is that the estimates are very robust to the use of different specifications, samples, or model assumptions. In almost all specifications, we see a reduction in black smoke concentrations ranging from ~10–30 mcg/m3, followed by an approximately 60g increase in birth weight and 1 cm increase in adult height. <|MaskedSetence|>
**A**: Both are shown with 95% confidence intervals, with opaque colours indicating that they are significantly different from zero. **B**: The first row in each panel replicates the main specification, showing significant impacts on black smoke concentrations, as well as birth weight and height (the latter two only for those exposed after the operation date). Each of the following rows corresponds to a different robustness check, where the row refers to the specific section in Appendix D. **C**: We again find no consistent evidence of impacts on years of education, nor on fluid intelligence.
Options: A = ABC, B = ABC, C = ACB, D = ABC. Label: Selection 1.
The technological leap to conversational search leads to two problems, one easy and one hard. The first is the need to update or introduce new regulations regarding online marketing that take into account the conversational nature of the interaction. <|MaskedSetence|> The second problem is the implicit steering of user preferences, which is considerably more complex to control. In principle, chatbots can make implicit moves to steer the conversation toward certain economic outcomes for a company or advertiser, for example using the chatbot as a shopping assistant. The complexities involved make this form of manipulation harder to control and regulate. While influencing a user’s beliefs and opinions may be an explicit goal of the LLMs, this may also be an implicit outcome. For example, if the model has been disproportionately trained on data reflecting a particular range of products or services, such a bias may unintentionally steer users towards those items. In addition, strategic relationships between developers and outside organizations can introduce biases. <|MaskedSetence|> <|MaskedSetence|> This form of implicit bias may not always be transparent to the end user.
**A**: For example, OpenAI’s partnership with Axel Springer, an online media company, is designed to direct users to their media outlets when searching for news through ChatGPT [8]. **B**: While LLM-based search engine providers, like Microsoft, started flagging the ads in their chatbots, there appears to be no clear consensus on the way ads will be made visible to users and what to mark as an advertisement in conversations [6, 7]. **C**: While this arrangement aims to optimize news discovery and improve the service for users, it could inadvertently bias users toward certain news sources and foster market concentration.
Options: A = BAC, B = BAC, C = CAB, D = BAC. Label: Selection 4.
In an earlier article, J-P Bouchaud and one of the authors (VLC) revisited a model of the FRC based on the fluctuations of a stiff elastic string (henceforth called the BBDL model for Baaquie-Bouchaud Discrete Logarithm model [1, 4]). <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> We will show that it faithfully accounts for the effect of liquidity on the price-volume correlations between the forward rates of different maturities and the order flow [6]. Additionally, within this framework, prices appear to exhibit short-term temporal autocorrelations, consistent with established findings in the literature.
**A**: Specifically, we establish a connection between a non-measurable auxiliary noise field that appears in the construction of the original model and the physically measurable volumes traded across the interest rates curve, thus promoting the string model [1] to a microstructural model capable of predicting the price reaction to traded volumes along the curve. **B**: Compared to previous work, this approach accounts for two important features: (a) the discrete set of traded maturities, and (b) the scale-dependent structure of the correlation matrix across maturities [5]. The objective of this article is to demonstrate that this model can be given a microstructural interpretation, which allows for new predictions. **C**: The resulting model is more parsimonious than other cross-impact models while maintaining comparable, if not superior, performance.
Options: A = BAC, B = BAC, C = BAC, D = BAC. Label: Selection 3.
<|MaskedSetence|> small) changes in asset prices are often followed by large (resp. small) changes, indicating that volatility tends to cluster together over time. <|MaskedSetence|> <|MaskedSetence|> Fat tails are often quantified using kurtosis, a measure of the "tailedness" of the distribution. Distributions with high kurtosis exhibit fat tails, indicating a greater likelihood of observing values far from the mean. While [21] provides microfoundations for the ARCH model, the microfoundations for the GARCH model remain underexplored. Specifically, it is unclear how the parameters of the GARCH model arise from the micro-level decision-making processes or the proportions of different investors. The validity of multi-agent modeling is often evaluated based on its ability to reproduce these stylized facts [4].
**A**: Conversely, attempts to provide microfoundations for price fluctuation models in empirical finance are relatively sparse. For instance, the GARCH (Generalized AutoRegressive Conditional Heteroscedasticity) model, an extension of the ARCH model [8], is extensively used in finance to model conditional variance in various empirical studies [23, 11]. This is because the GARCH model effectively reproduces typical financial statistical properties, known as stylized facts [6], such as volatility clustering. This refers to the phenomenon where large (resp. **B**: This results in periods of high volatility and periods of low volatility within financial markets. **C**: and fat tails. This refers to the statistical property of a probability distribution where the tails (extremes) are fatter than those of a normal distribution.
Options: A = BAC, B = ABC, C = ABC, D = ABC. Label: Selection 3.
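A GARCH(1,1) simulation makes the stylized facts in this passage tangible: the textbook parameters below (an assumption, not values taken from the cited works) produce clustered volatility and excess kurtosis even with Gaussian innovations.

```python
import numpy as np

def simulate_garch(omega=1e-5, alpha=0.1, beta=0.85, n=5000, seed=0):
    """Simulate returns with variance sigma_t^2 = omega + alpha*r_{t-1}^2
    + beta*sigma_{t-1}^2 (illustrative GARCH(1,1) parameters)."""
    rng = np.random.default_rng(seed)
    r = np.zeros(n)
    var = omega / (1.0 - alpha - beta)  # start at the unconditional variance
    for t in range(1, n):
        var = omega + alpha * r[t - 1] ** 2 + beta * var
        r[t] = np.sqrt(var) * rng.standard_normal()
    return r

r = simulate_garch()
kurtosis = ((r - r.mean()) ** 4).mean() / r.var() ** 2
print(f"excess kurtosis: {kurtosis - 3:.2f}")  # > 0: fat tails
```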
<|MaskedSetence|> However, questions have been raised about its effectiveness, the potential for market distortion, and whether contracting authorities genuinely exert control over their in-house companies [5]. The efficacy of in-house companies is further scrutinized in various European contexts, where countries have implemented stringent criteria to ensure alignment with market efficiency and public interest. <|MaskedSetence|> <|MaskedSetence|> The Programme of Prime Minister Petteri Orpo’s Government [7] has put in-house procurement legislation under re-evaluation by proposing a minimum ownership of 10% in one’s in-house company. This means approximately fifty in-house companies in Finland, mainly in the sectors of financial and payroll management and ICT, would encounter rearrangements [18]. This research aims to contribute to practice by exploring in-house and traditional companies through business performance metrics. By analyzing key financial indicators such as hourly rates, labour costs, and operational efficiency, the study exposes inefficiencies tied to resource utilization and cost management. The change in ownership structure, from in-house to private (privatization), serves as the observed phenomenon that reveals these inefficiencies. This shift allows for a deeper understanding of how business performance is affected, providing valuable insights for public sector organizations seeking to enhance efficiency and optimize resource management. The research questions guiding this study are:
**A**: The Confederation of Finnish Industries and the Finnish Competition and Consumer Authority have raised these issues, advocating for stricter oversight and transparency in in-house procurement practices. **B**: In Finland, concerns about market distortion and the effective use of public funds persist. **C**: In-house procurement aims at streamlining processes and at reducing costs for public entities.
Options: A = CBA, B = CBA, C = CBA, D = CBA. Label: Selection 3.
Table 3: Simulation parameters. Now, we will review the strategy simulation from our improved simulation environment alongside the performance of the benchmark environment. <|MaskedSetence|> The left and right Figures show the strategy simulation in the benchmark and improved environments, respectively. Here, the green (blue) lines indicate when the market-maker is posted on the best bid (ask), the filled circles indicate when the market-maker’s LO is filled on the best bid/ask, and the unfilled circles indicate times the market-maker would have been filled if they had an LO posted. One noticeable feature, in the left Figure, is that whenever the agent is posted and price moves through their order, they do not automatically receive a fill. This, as we mentioned earlier, is because MOs are simulated independently from the price process in Cartea et al. (2015), Cartea et al. (2018b) and Jaimungal (2019), and is contrary to what would happen in reality. <|MaskedSetence|> <|MaskedSetence|> Note that this can also alter the posting strategy later on because the inventory process now evolves differently to how it would in the benchmark environment.
**A**: And so, all the fills in this left Figure would be referred to as non-adverse fills. **B**: In the right Figure, one can see where all the adverse fills would have occurred (denoted by AFB and AFA). **C**: In Figure 3, we begin by showing a snapshot of the strategy over a random 120 second path in CL, where one can see when the market-maker is posted on the best bid/ask and when they receive trade order fills.
Options: A = CAB, B = CAB, C = ABC, D = CAB. Label: Selection 2.
5 Strategy Simulations: Acquisition and Liquidation. In this section, we simulate the performance of the above acquisition and liquidation problems over 10,000 different price paths, which is the total number of simulations we ran. In section 4, the optimal solution takes a static view of the markets, whereas in reality, market conditions are constantly evolving. This should be reflected in the agent’s strategy and is throughout our strategy simulations, where the strategy is updated at each time step to reflect this. Here, we will, again, specifically focus on how an increasing σ̄ or ς can significantly change how the acquisition/liquidation strategy would evolve and how certain processes we defined in section 3 are affected. To set up these simulations, one must discretize the continuous-time processes introduced in section 3. In other words, the continuous-time processes in equations (14), (17), (18), (19), (21), (32)-(34) and (46)-(48) are discretized. <|MaskedSetence|> As we increment forward in time discretely, each simulation looks at the current price at every time step and then selects the matching value from the h solution matrix, which guides the agent on what the new optimal trading speed is. After a trade is made, the cash, inventory and execution price processes are updated. <|MaskedSetence|> <|MaskedSetence|> Recall that there, the top left subplot is for the Cartea et al. (2015) case, the top right subplot is for the Cartea et al. (2015) case combined with the cases in Roldan Contreras and Swishchuk (2022) and Roldan Contreras (2023), and the bottom two subplots are for our new more general price processes as defined in equation (17).
**A**: In this section we will again present subplots in the same format as in Figure 1 and 2. **B**: Since both the acquisition and liquidation trading problems have already been solved numerically, as shown in section 4, this part is already given to us in a discrete form, and we can use the values in these solution matrices to guide our trading decisions in the strategy simulation. **C**: If either the terminal or boundary condition is breached, the simulation ends and the agent pays the terminal penalty to acquire/liquidate any remaining units of the asset.
Options: A = BCA, B = CBA, C = BCA, D = BCA. Label: Selection 1.
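The discrete simulation loop this passage describes (look up the optimal speed in the solution matrix h at the current price, update cash and inventory, and apply a terminal penalty at the boundary) can be sketched as below. The arithmetic price step, the placeholder policy, and all parameters are hypothetical stand-ins for the numerical solution of section 4.

```python
import numpy as np

def simulate_path(h, price_grid, S0, dt, sigma, penalty, q_target, seed=0):
    """One simulated acquisition path driven by a solution matrix h,
    indexed by (time step, price bin). Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    S, q, cash = S0, 0.0, 0.0
    for t in range(h.shape[0]):
        bin_idx = np.abs(price_grid - S).argmin()   # nearest price bin
        nu = h[t, bin_idx]          # optimal trading speed from the matrix
        q += nu * dt                # inventory update
        cash -= nu * S * dt         # pay for shares acquired this step
        S += sigma * np.sqrt(dt) * rng.standard_normal()  # assumed price step
        if q >= q_target:           # boundary condition: target acquired
            return cash, q
    # Terminal penalty: acquire the remainder at a premium (illustrative).
    cash -= (q_target - q) * (S + penalty)
    return cash, q_target

h = np.full((200, 101), 0.5)        # constant-speed placeholder policy
grid = np.linspace(80, 120, 101)
print(simulate_path(h, grid, S0=100.0, dt=0.005, sigma=2.0,
                    penalty=0.1, q_target=1.0))
```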
The task of expressing a chosen indicator under the RW measure can be very tedious, especially with non-additive noise. <|MaskedSetence|> Our approach is based on expressing the transformed indicator under the RW measure as a function of (i) the expression of the transformed indicator under RN and (ii) a parametric function that needs to be calibrated. <|MaskedSetence|> Once this is obtained, through numerical simulations involving the calibration of the parametric function and the simulations of the indicator under the RN measure, we can simulate the chosen indicator under RW, fitting any given value. Furthermore, as we will see in the applications, especially in Section 3, transitioning from a model defined in RN to a model in RW will be sufficient to justify the first property. The second property requires numerical simulations, which will be performed in Section 4. The aforementioned parametric function will be calibrated to allow the real-world diffusion of the indicator to fit any given values for the indicator. The indicator we want to fit to given values is very rarely the initial quantity that is diffused by the model being considered. For instance, when working with interest rate models, the model’s initial quantity is the instantaneous rate. <|MaskedSetence|> In the application we will see in Section 3, the CIR++ intensity model describes the diffusion of the default intensity, but our indicators of interest will be credit spreads and cumulative hazard rates.
**A**: Once these relations are found, we can revert the transform to express the indicator under RW as a function of (i) the indicator under RN, (ii) a parametric function, and (iii) other terms that might arise from reverting the transformations. **B**: However, the quantity that practitioners and researchers are mostly interested in will often be zero-coupon bond rates, zero-coupon bond prices, or swaption prices. **C**: This is why, in the theoretical framework below, we start by eliminating non-additive noise through a Lamperti transformation.
Options: A = CAB, B = CAB, C = CAB, D = CAB. Label: Selection 3.
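The Lamperti transformation invoked in this passage has a standard form, stated here for a scalar SDE with state-dependent (non-additive) noise.

```latex
% Lamperti transform: for dX_t = \mu(X_t)\,dt + \sigma(X_t)\,dW_t with \sigma > 0,
% define
\[
Y_t = \psi(X_t), \qquad \psi(x) = \int^{x} \frac{du}{\sigma(u)} .
\]
% Ito's formula then yields a process with additive noise:
\[
dY_t = \left( \frac{\mu(X_t)}{\sigma(X_t)} - \frac{1}{2}\,\sigma'(X_t) \right) dt + dW_t .
\]
```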
Table 2 presents summary statistics across three types of powerlifting equipment: raw, single-ply, and multi-ply. Each category includes attributes such as personal best, first, second, and third attempt weights, successful attempts, best attempt, age, and body weight. The raw category has the largest number of participants, with 175,829 individuals, an average personal best of 74.73 kg, and a mean bodyweight of 85.35 kg. <|MaskedSetence|> In the multi-ply category, there are 14,628 participants, with an average personal best of 79.40 kg and a mean bodyweight of 94.65 kg. Across all equipment types, the mean weights lifted increase from the first to the third attempts, and the percentage of successful attempts decreases with each successive attempt. <|MaskedSetence|> Approximately 80% of participants are male. <|MaskedSetence|> These statistics highlight key performance trends and demographic characteristics in competitive powerlifting, emphasizing differences in performance and success rates among different equipment types and attempts.
**A**: The single-ply category includes 76,605 participants, with an average personal best of 75.12 kg and a mean bodyweight of 84.90 kg. **B**: The average age of participants is consistently around the early 30s across all categories. **C**: This indicates that while lifters attempt heavier weights in successive attempts, the success rate declines.
Options: A = CAB, B = ACB, C = ACB, D = ACB. Label: Selection 4.
<|MaskedSetence|> <|MaskedSetence|> On the other hand, lotteries pose the risk of participants losing the amount spent on tickets, as winning is entirely based on chance, and not all participants can be winners [36]. For options and sports betting, these two products also share certain similarities, particularly in terms of speculation [20] and duration [77]. In options trading, participants speculate on the future price movements of an underlying asset. They can enter into options contracts to buy or sell the asset at a predetermined price within a specified period, based on their anticipation of how the asset’s value will change [26]. Similarly, sports betting involves participants speculating on the outcome of an event or the performance of a particular team, player, or scenario in a sporting event. <|MaskedSetence|> Both markets involve time-sensitive products, with short-term options and sports bets having brief durations, making short-term price movements more predictable [77].
**A**: Bettors place wagers based on their predictions of the event’s outcome, with the potential for winnings or losses depending on the accuracy of their speculation [97]. **B**: In the case of bonds, there exists the risk of issuer default on interest payments or failure to repay the principal amount upon maturity [71]. **C**: Secondly, both bonds and lotteries entail risks for participants.
Options: A = CBA, B = CBA, C = CAB, D = CBA. Label: Selection 1.
The current paper seeks to explore the potential benefits of gamma-hedging using deep learning. <|MaskedSetence|> Classical analytical approaches can only examine such issues asymptotically. <|MaskedSetence|> <|MaskedSetence|> Hence, this paper seeks to use deep learning primarily as a tool to develop an explanatory model for the known market practice of gamma hedging.
**A**: In particular we will be able to compare the relative importance of addressing transaction costs and model robustness as motivations for gamma hedging. **B**: By examining the strategies found using deep-learning in the face of such complexities we can determine under what conditions gamma-hedging will emerge as a close to optimal strategy. **C**: Using deep-learning we can calculate good approximations of optimal trading strategies in the face of complexities such as transaction costs.
Options: A = BAC, B = CBA, C = CBA, D = CBA. Label: Selection 3.
<|MaskedSetence|> We conduct extensive experiments on 6 benchmark data sets from Kenneth R. French’s Data Library (http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html), a standard and widely-used data library for long-term PO, named FF25, FF25EU, FF32, FF48, FF100 and FF100MEOP. FF25 contains 25 portfolios (they can also be considered as “assets” in our experiments) formed on BE/ME (book equity to market equity) and investment from the US market. <|MaskedSetence|> <|MaskedSetence|> FF48 contains 48 industry portfolios from the US market. FF100 contains 100 portfolios formed on ME and BE/ME, while FF100MEOP contains 100 portfolios formed on ME and operating profitability, all from the US market. All these data sets are monthly price relative sequences, which is a conventional frequency setting for long-term PO. Their profiles are shown in Table 1.
**A**: FF25EU contains 25 portfolios formed on ME and prior return from the European market. **B**: FF32 contains 32 portfolios developed by BE/ME and investment from the US market. **C**: 6 Experimental Results. In this section, we present the performance of the proposed algorithm.
Options: A = CAB, B = BAC, C = CAB, D = CAB. Label: Selection 4.
Unlike previous works on the Sarafu token network, in this paper only the transactions among users are considered (i.e. group accounts are excluded). Instead of considering the velocity of circulation [29], we define and calculate the recirculation time (Section 2.2). <|MaskedSetence|> Moreover, in this work, three aspects are analysed for each topological component to identify different usage strategies (Section 2.1): users who made only one operation (Section 4.2), three-node motifs in acyclic components (Section 4.3), and recirculation time (Section 4.4). <|MaskedSetence|> <|MaskedSetence|> As explained before, each cyclic component is defined here as a strongly connected component, where every node can be involved in one or more cycles of different lengths.
**A**: This is the main difference from previous works, which focused on the activity of user and group accounts at a network level [23, 31, 32]. **B**: Furthermore, like the other aforementioned works [23, 31, 32], the analysis on circulation is made on a temporally aggregated network of the whole period. **C**: Finally, instead of focusing on cycle motifs (of length 2, 3, 4, and 5) as previous work on Sarafu data [23], in this work cyclic components are considered.
BAC
BAC
ABC
BAC
Selection 1
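As a concrete reading of the last sentence above, the toy snippet below (our own example graph, not Sarafu data) extracts cyclic components as strongly connected components of a directed transaction network using networkx; every node in a non-trivial component lies on at least one directed cycle.

```python
# Cyclic components as strongly connected components of a directed network.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("a", "b"), ("b", "c"), ("c", "a"),   # a 3-cycle: one cyclic component
    ("c", "d"), ("d", "e"),               # acyclic tail: no recirculation
])
sccs = [c for c in nx.strongly_connected_components(G) if len(c) > 1]
print(sccs)   # [{'a', 'b', 'c'}] -> tokens recirculate within this set
```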
In this phase, we experiment with both instruction modelling and instruction tuning. Results by Shi et al. (2024) suggest using instruction modelling for datasets with long prompts and short responses. They argue that by forcing the model to improve at predicting the instruction too, it learns more about the target domain. <|MaskedSetence|> We similarly find that instruction modelling is consistently outperformed by instruction tuning. <|MaskedSetence|> We want to train our models to improve not at predicting such context tokens and question tokens, but instead make use of them to correctly predict answer tokens. <|MaskedSetence|>
**A**: As such, we carefully overwrite labels of the instruction with [-100] when tokenizing training samples, as shown in Figure 2. **B**: However, Huerta-Enochian and Ko (2024) instead find that including loss on instructions when using datasets with short responses degrades performance. **C**: This may be related to the nature of our dataset: The context field per instance comes directly from financial documents, which include elements such as tables, table descriptions and formatting information.
BCA
BCA
CBA
BCA
Selection 1
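The label-overwriting step described above can be illustrated with a short, generic sketch in the Hugging Face style; it is not the authors' code. The value -100 is the default ignore index of PyTorch's cross-entropy loss, so instruction tokens remain visible as context but contribute nothing to the loss.

```python
# Minimal sketch of instruction-token masking for supervised fine-tuning.
# All names here are illustrative, not from the paper.
from typing import Dict, List

IGNORE_INDEX = -100  # PyTorch CrossEntropyLoss default ignore index

def build_labels(prompt_ids: List[int], answer_ids: List[int]) -> Dict[str, List[int]]:
    """Concatenate prompt and answer tokens; compute loss on the answer only."""
    input_ids = prompt_ids + answer_ids
    # Prompt positions get -100 so the model is not trained to predict them.
    labels = [IGNORE_INDEX] * len(prompt_ids) + list(answer_ids)
    return {"input_ids": input_ids, "labels": labels}

# Toy usage with made-up token ids:
example = build_labels(prompt_ids=[101, 7592, 102], answer_ids=[2054, 2003, 102])
assert example["labels"][:3] == [IGNORE_INDEX] * 3
```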
Workplace autonomy can be defined as ‘the ability of the worker to make decisions about the content, methods, scheduling, and performance of work tasks’ (Breaugh, 1985). <|MaskedSetence|> For example, the Teacher Work-Autonomy (TWA) scale developed by Friedman (1999) measures the perception of workplace autonomy in educational contexts, covering areas such as teaching, curriculum development, and staff development. Workplace autonomy has been shown to have a significant impact on employee satisfaction and performance. <|MaskedSetence|> 306). <|MaskedSetence|> Karasek’s (1979) demand-control theory also suggests that the combination of high autonomy and low work pressure can significantly improve employee well-being.
**A**: Additionally, workplace autonomy can promote performance and creativity in complex, knowledge-intensive jobs (Breaugh, 1985). **B**: Lopes, Lagoa, and Calapez (2014) found that ‘workplace autonomy is positively associated with job satisfaction and workers’ well-being’ (p. **C**: Specific scales are used to measure this autonomy, assessing the degree of freedom and control that employees have over their tasks and work-related decisions.
CBA
CBA
CBA
CBA
Selection 1
<|MaskedSetence|> However, since Steam ads, releases, and bundles are globally distributed, these local shocks are less likely to align perfectly with the timing of game purchases. Potential Shortcomings: Despite the theoretical arguments above, there are caveats to this approach. <|MaskedSetence|> This subset is rare, as it consists of players who happen to have second-degree friends who also purchase the game. Consequently, the results may not be generalizable to the broader population of players. <|MaskedSetence|>
**A**: Additionally, there remains the possibility of unobserved, very localized, short-term shocks that might increase game consumption, potentially confounding our results. **B**: The IV estimates only allow us to estimate a Local Average Treatment Effect (LATE), which means that they apply only to the subset of players whose purchase decisions are influenced by their second-degree friends. **C**: Furthermore, we acknowledge that players $i$ and $k$ might reside in the same geographic area, which could expose them to similar localized shocks, such as economic changes or region-specific events, that might influence game consumption.
CBA
ACB
CBA
CBA
Selection 1
Finally, we consider a realistically calibrated 1-factor model for systematic longevity risk based on the Cairns–Blake–Dowd (CBD) model Cairns et al. <|MaskedSetence|> We must solve the resulting HJB equation using PDE methods, but the use of homogeneous preferences at least allows us to reduce the dimensionality of the problem. <|MaskedSetence|> These results confirm our earlier qualitative results. Our results demonstrate that, as expected, large benefits are available through longevity-credit mechanisms and from incorporating investment in risky assets post-retirement. <|MaskedSetence|>
**A**: These benefits can be obtained. **B**: (2006). **C**: This model enables us to study the consumption-investment problem with more realistic mortality risk.
BCA
BCA
CBA
BCA
Selection 2
A limitation of [Chriss(2024)] is that it considers only unconstrained strategies. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> We then show we can approximate the relevant cost functions arbitrarily well by positive-definite quadratic functions and that constraints are linear. With this we are able to translate the minimizations achieved through variational methods to convex quadratic programs in a small number of parameters with linear constraints.
**A**: In real-world applications it is of considerable interest to find strategies constrained by practical considerations such as position and trading limits, including short-selling. **B**: This paper presents computational methods for finding best-response and equilibrium strategies with arbitrary constraints and, in the case of two-trader equilibrium, sheds additional light on the nature of equilibrium by studying the dynamic path to equilibrium. **C**: Therefore, if [Chriss(2024)] is about the theoretical foundations of trading as game theory, this paper concerns methods for efficient computation with real-world constraints. In the immediately following sections we review the definition of trading strategies, best-response strategies and equilibrium.
ABC
ABC
ABC
ABC
Selection 2
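The reduction described above ends in a convex quadratic program with linear constraints, whose generic shape can be sketched as follows. The cost matrix, linear term, and limits below are made-up placeholders rather than quantities from [Chriss(2024)].

```python
# Generic convex QP over a trading schedule with linear constraints.
import cvxpy as cp
import numpy as np

n = 10                                   # number of trading periods
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
Q = A @ A.T + n * np.eye(n)              # positive-definite quadratic cost
c = rng.standard_normal(n)

x = cp.Variable(n)                       # holdings over time
constraints = [
    x >= 0,                              # no short selling
    x <= 1.0,                            # position limit
    cp.abs(cp.diff(x)) <= 0.25,          # per-period trading limit (linear)
]
problem = cp.Problem(cp.Minimize(0.5 * cp.quad_form(x, Q) + c @ x), constraints)
problem.solve()
print(x.value)
```

Since the objective is positive definite and the feasible set is a polyhedron, any convex QP solver returns the global minimizer, which is the practical payoff of this reformulation.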
<|MaskedSetence|> Although several RNA 3D structure prediction methods have been proposed, their accuracy is still limited [miao2017rna, 14, 15]. Predicting RNA structural information at another level, such as distance maps, remains highly valuable. Distance maps provide detailed spatial constraints between nucleotides, capturing essential relationships without requiring a full 3D model. <|MaskedSetence|> Intuitively, a distance matrix takes a step closer than a contact map to representing the 3D structure, providing detailed spatial constraints that are essential for accurate 3D modeling. However, RNA distance matrices have been seldom considered due to the limited availability of comprehensive RNA structural data. In protein research, distance matrices are regarded as simplified representations of protein structures and have been widely used in ab initio protein structure prediction [16]. This work proposes a new method that predicts the RNA distance matrix directly from its sequence by leveraging pre-trained RNA language models. Unlike traditional convolution-based algorithms commonly used in protein research, we argue that the attention mechanism inherent in transformers naturally aligns with the task of predicting nucleotide pair distances. The attention mechanism can capture long-range dependencies and complex interactions between nucleotides, which are crucial for accurate distance predictions. Therefore, we adopt the vanilla transformer architecture and name our framework the Distance Transformer (DiT). <|MaskedSetence|> By predicting the RNA distance matrix from sequence data, we can enhance the understanding of RNA structures and their functions, facilitating advancements in both basic research and therapeutic applications.
**A**: Our proposed method represents another way forward in RNA structure prediction. **B**: Despite these advances, predicting the precise spatial relationships between nucleotides remains a significant challenge. **C**: This intermediate level of structural information can guide more accurate 3D modeling and is computationally less intensive, making it a useful tool for improving structural predictions. Here, we propose a novel approach to tackle the RNA distance prediction problem by examining the Euclidean distances between arbitrary bases in the RNA primary sequence to assess RNA 3D structure.
BAC
BCA
BCA
BCA
Selection 2
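To make the architectural claim above concrete, here is a minimal toy sketch of predicting a symmetric nucleotide-distance matrix from per-position transformer embeddings. The actual DiT leverages pre-trained RNA language-model features; the encoder, vocabulary, and hyperparameters below are illustrative assumptions of ours.

```python
# Toy sequence-to-distance-matrix model with a transformer encoder.
import torch
import torch.nn as nn

class ToyDistanceHead(nn.Module):
    def __init__(self, vocab_size: int = 6, d_model: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(2 * d_model, 1)   # pairwise features -> distance

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        h = self.encoder(self.embed(tokens))                  # (B, L, d)
        hi = h.unsqueeze(2).expand(-1, -1, h.size(1), -1)     # (B, L, L, d)
        hj = h.unsqueeze(1).expand(-1, h.size(1), -1, -1)     # (B, L, L, d)
        d = self.head(torch.cat([hi, hj], dim=-1)).squeeze(-1)
        return 0.5 * (d + d.transpose(1, 2))                  # enforce symmetry

seq = torch.randint(0, 6, (1, 30))       # toy RNA token ids
dist = ToyDistanceHead()(seq)            # predicted (1, 30, 30) distance matrix
```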
<|MaskedSetence|> <|MaskedSetence|> First, continuous inputs to the network are scaled by subtracting the median of the data and then dividing by the inter-quartile range; this provides inputs that are more robust to outliers. Second, we allow for learned feature selection by scaling each embedding by a constant in the range $(0,1]$ before these enter the Transformer. <|MaskedSetence|> Finally, we set $\beta_2 = 0.95$ in the optimizer, which is a best practice to stabilize optimization with Transformer architectures that use large batches and versions of the adam optimizer, see Zhai et al. [34].
**A**: Two of these are inspired by Holzmüller et al. [18]. **B**: Here, we briefly mention some of the other less complex modifications made to the Credibility Transformer. **C**: We initialize all FNN layers that are followed by the GeLU activation function using the scheme of He et al. [17] and use the adamW optimizer of Loshchilov–Hutter [23], with weight decay set to $0.02$.
BAC
BAC
BAC
CAB
Selection 2
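Two of the modifications above are easy to sketch: median/IQR input scaling and a learned multiplicative gate per embedding, together with the AdamW settings mentioned. This is our own illustration; note that a sigmoid keeps the gate in the open interval (0, 1) rather than exactly (0, 1], so the paper's parameterization may differ, and all names are placeholders.

```python
# Robust scaling and a learned per-feature gate, plus the optimizer settings.
import numpy as np
import torch

def robust_scale(x: np.ndarray) -> np.ndarray:
    """Scale columns by median and inter-quartile range (outlier-robust)."""
    med = np.median(x, axis=0)
    iqr = np.quantile(x, 0.75, axis=0) - np.quantile(x, 0.25, axis=0)
    return (x - med) / np.where(iqr > 0, iqr, 1.0)

class FeatureGate(torch.nn.Module):
    """Learned feature selection: one multiplicative gate per embedding."""
    def __init__(self, num_features: int):
        super().__init__()
        self.logits = torch.nn.Parameter(torch.zeros(num_features))

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        # emb: (batch, features, dim); gate in (0, 1), broadcast over batch/dim
        return emb * torch.sigmoid(self.logits).view(1, -1, 1)

# AdamW with the beta_2 and weight decay quoted in the row above:
model = FeatureGate(num_features=8)
opt = torch.optim.AdamW(model.parameters(), betas=(0.9, 0.95), weight_decay=0.02)
```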
In this work, we explore the heterogeneous impact of cloud technologies on the size growth rate of firms based on a unique combination of four sources of micro data for French firms between 2005 and 2018 – French ICT surveys (2016 and 2018), administrative data from French firms’ balance sheets (2005–2019), French matched employer-employee data (2005–2019) and the French business register (2005–2019). <|MaskedSetence|> 2018, Acemoglu & Restrepo 2020). We find that cloud has a positive relationship with the growth rates of firms, which is less pronounced for large firms. We address potential endogeneity issues by adopting a causal identification strategy based on an endogenous treatment model (ET henceforth, see Heckman 1976, 1978, Maddala 1983, Vella & Verbeek 1999), where the purchase of cloud services is our endogenous treatment variable. This latent variable model is widely used in research (e.g., Shaver 1998, King & Tucci 2002, Campa & Kedia 2002) as it addresses the issue of self-selection of firms into treatment (Hamilton & Nickerson 2003, Clougherty et al. 2016). We employ lightning strikes per capita at the municipality (French commune) level, a source of spatial exogenous variation associated with investments in IT infrastructure (also see Andersen et al. 2012, Guriev et al. <|MaskedSetence|> 2023), as the exclusion restriction variable. In order to adopt cloud technologies, firms need to have access to a reliable, fast, and state-of-the-art internet connection (Nicoletti et al. 2020, Garrison et al. 2015, Ohnemus & Niebel 2016, DeStefano et al. <|MaskedSetence|> However, by causing energy spikes and dips, lightning strikes may increase the maintenance costs of IT infrastructure, slowing down their diffusion (Andersen et al. 2012). Furthermore, lightning strikes lower the quality associated with broadband internet services, producing a four times larger frequency of broadband network failures during thunderstorms, if not adequately mitigated (Schulman & Spring 2011). Overall, our measure of lightning strikes per capita reflects the trade-off faced by internet providers who will have to balance the costs of expanding the broadband network and the potential benefits that can be harvested by expanding the network, given by the number of potential customers in each geographical area.
**A**: 2023). **B**: 2021, Caldarola et al. **C**: We focus on long run growth rates, in line with the idea that the effects of digital technologies may take time to materialise due to large and complex organisational changes characterised by uncertainty and implementation lags (Brynjolfsson & Hitt 2003, Brynjolfsson et al.
BCA
CBA
CBA
CBA
Selection 4
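For intuition about the estimator named above, here is a stylized control-function version of an endogenous treatment model in the spirit of Heckman (1976) and Maddala (1983): a probit first stage for cloud adoption on the instrument, then a growth regression augmented with the generalized residual. The data-generating process and all numbers are synthetic stand-ins, not the authors' specification.

```python
# Stylized endogenous-treatment (control function) estimation on fake data.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000
strikes = rng.gamma(2.0, 1.0, n)                      # toy instrument
u = rng.normal(size=n)                                # unobserved confounder
cloud = (0.5 - 0.3 * strikes + u > 0).astype(float)   # endogenous treatment
growth = 0.02 + 0.05 * cloud + 0.4 * u + rng.normal(scale=0.5, size=n)

Z = sm.add_constant(strikes)
first = sm.Probit(cloud, Z).fit(disp=0)               # first stage: adoption
xb = Z @ first.params                                 # first-stage index
# generalized residual: inverse-Mills-type term for each treatment arm
gr = np.where(cloud == 1,
              norm.pdf(xb) / norm.cdf(xb),
              -norm.pdf(xb) / (1.0 - norm.cdf(xb)))
X = sm.add_constant(np.column_stack([cloud, gr]))
second = sm.OLS(growth, X).fit()
print(second.params)   # coefficient on `cloud` is the corrected effect
```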
(3.2) It can be viewed as a double Stieltjes transform where each pole is associated with a different overlap. Similar functions have been previously used in [11, 32] and [27]. <|MaskedSetence|> By summing the contributions of the different overlaps, the intuition is that this object is self-averaging in the large $N$ limit, meaning it converges to a deterministic function that is its expectation. <|MaskedSetence|> <|MaskedSetence|> More specifically, in Appendix D, we show that $S$, the limit of $S^{(N)}$, almost surely verifies.
**A**: This is a key result as it demonstrates that our method can be applied to a broad range of problems, even without a clean equation for the mean squared overlaps. **B**: We are going to show that this intuition is correct, as this function converges almost surely to the solution of a deterministic differential equation that we are able to solve. **C**: However, in [11] and [32], the authors use its mean, which could not work in our setup due to the issue mentioned above, forcing us to use its random counterpart.
CBA
CBA
ACB
CBA
Selection 2
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Second, while supervised models can be powerful predictors, their rigidity in the face of rapidly shifting feature significance, such as earnings data, can be a limitation. As model performances vary over time, it is evident how critical fresh earnings data is to predictions. As this data ages, models’ strengths and weaknesses become apparent, highlighting differences in their design and learning methods.
**A**: This experiment highlighted two key findings. **B**: Their performance trajectory indicated not just learning but a nuanced understanding of changing data dynamics. **C**: First, the ability of self-supervised models, particularly CET, to recalibrate and learn from evolving patterns in financial datasets is remarkable.
ACB
ACB
ABC
ACB
Selection 2
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> In the previous section, the SCRM and the adjusted ES admit large values in times of a volatile market. Hence, we now test target risk profiles based on data from different time periods. These time periods are chosen from the first ten years of our underlying data: (1).
**A**: But we have to clarify the underlying data used to calculate the target risk profile. **B**: For instance, the target risk profile of the AERM at level $p$ is given by the expectile of the S&P 500 at level $p$. **C**: In this section, we calculate the adjusted risk measures for the three stocks and use target risk profiles based on the underlying monetary risk measures calculated for the S&P 500.
CBA
CBA
ACB
CBA
Selection 1
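Because the AERM's target risk profile above is an expectile of index returns, a short sketch may be useful. The sample expectile at level $p$ solves the asymmetric first-order condition $p\,\mathbb{E}[(X-t)_{+}] = (1-p)\,\mathbb{E}[(t-X)_{+}]$, whose left side minus right side is decreasing in $t$, so bisection finds it. The return series below is synthetic; only the routine itself is the point.

```python
# Sample expectile at level p via bisection on the first-order condition.
import numpy as np

def expectile(x: np.ndarray, p: float, tol: float = 1e-10) -> float:
    lo, hi = float(np.min(x)), float(np.max(x))
    while hi - lo > tol:
        t = 0.5 * (lo + hi)
        # positive gap means the expectile lies above t
        gap = p * np.mean(np.maximum(x - t, 0.0)) \
            - (1 - p) * np.mean(np.maximum(t - x, 0.0))
        lo, hi = (t, hi) if gap > 0 else (lo, t)
    return 0.5 * (lo + hi)

returns = np.random.default_rng(1).normal(0.0005, 0.01, 2500)  # toy index returns
print(expectile(returns, p=0.05))   # low-level expectile as a risk target
```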